• [Tech Share] Technical Insights into YOLO Training and Inference on the Ascend Platform
    1  Introduction to YOLO

    YOLO (You Only Look Once) is a popular real-time object detection algorithm known for its speed and accuracy. Unlike traditional detection approaches such as the R-CNN family, YOLO treats detection as a single regression problem: it predicts bounding boxes and class probabilities directly from the image pixels, giving an end-to-end detector.

    YOLO divides the input image into an S×S grid (for example 7×7), and each grid cell predicts several bounding boxes together with a confidence score and class probabilities:
    - Bounding box: the center coordinates, width, and height of the box.
    - Confidence score: whether the box contains an object and how accurate the prediction is.
    - Class probabilities: the category of the object in the box, predicted with Softmax.

    Traditional methods such as sliding windows scan the image many times, whereas YOLO only "looks once": a single pass through the convolutional network produces all detections, which is why it is so fast.

    Running YOLO on the Ascend platform has significant technical and commercial value, especially in AI-accelerated computing. Ascend is Huawei's line of high-performance AI processors (e.g. Ascend 910/310); combined with the Ascend AI software stack (CANN, MindSpore, and so on), it can markedly improve YOLO training and inference efficiency. The key benefits are:
    1. High-performance acceleration that meets real-time requirements;
    2. Flexible deployment from the edge to the cloud;
    3. Hardware/software co-optimization.
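    To make the grid-based prediction concrete, here is a minimal, self-contained sketch of the classic YOLOv1-style decoding step. It is purely illustrative: the tensor layout, shapes, and threshold are assumptions for this example and are not the YOLOv11 implementation used later in this post.

        # Minimal sketch (not the repository's code): decode an S x S grid prediction
        # with B boxes per cell and C classes into candidate boxes before NMS.
        import numpy as np

        S, B, C = 7, 2, 80                      # grid size, boxes per cell, classes (assumed)
        pred = np.random.rand(S, S, B * 5 + C)  # stand-in for a network output tensor

        boxes, scores, classes = [], [], []
        for row in range(S):
            for col in range(S):
                cell = pred[row, col]
                cls_prob = cell[B * 5:]                     # class probabilities shared by the cell
                for b in range(B):
                    x, y, w, h, conf = cell[b * 5:b * 5 + 5]
                    cx = (col + x) / S                      # cell-relative center -> image-relative (0..1)
                    cy = (row + y) / S
                    score = conf * cls_prob.max()           # confidence * best class probability
                    if score > 0.25:                        # illustrative threshold
                        boxes.append([cx, cy, w, h])
                        scores.append(float(score))
                        classes.append(int(cls_prob.argmax()))

        print(len(boxes), "candidate boxes before NMS")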
    2  System environment setup

    Running YOLO on the Ascend platform requires the following components:
    1. Ascend-cann-toolkit_8.0.RC3_linux-aarch64
    2. Ascend-cann-kernels-910b_8.0.RC3_linux-aarch64
    3. mindspore 2.5.0
    4. python 3.9

    Create the Python 3.9 environment with the commands below; Python 3.10 or 3.11 will cause errors:

        conda create -n yolo20250705python3d9d8 python=3.9
        conda activate yolo20250705python3d9d8

    Download the CANN 8.0 packages from https://www.hiascend.com/developer/download/community/result?module=cann&cann=8.0.RC3.beta1, upload them to the server, and install them. Running YOLO with CANN 8.1 or 8.2 may fail. Install CANN as follows:

        /tmp/Ascend-cann-toolkit_8.0.RC3_linux-aarch64.run --install
        /tmp/Ascend-cann-kernels-910b_8.0.RC3_linux-aarch64.run --devel

    MindSpore download page: cid:link_0. Install MindSpore with:

        pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.5.0/MindSpore/unified/aarch64/mindspore-2.5.0-cp39-cp39-linux_aarch64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple

    Code directory: /apply/yolo20250811/. Enter the code directory and install the Python dependencies:

        cd /apply/yolo20250811
        pip install -r requirements.txt

    The system also needs the mesa-libGL package; without it you may see runtime errors:

        sudo yum install mesa-libGL

    The Python environment needs the following additional packages; missing them can cause errors, and albumentations >= 2.0 will fail:

        pip install sympy
        pip install te
        pip install albumentations==1.4.24
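    After installation, a quick sanity check can confirm that MindSpore reaches the Ascend backend before launching a long training job. The snippet below is my own addition, not part of the repository; it only uses the public mindspore.set_context and mindspore.run_check APIs.

        # Verify the MindSpore 2.5.0 + Ascend installation (optional sanity check).
        import mindspore as ms

        ms.set_context(device_target="Ascend")  # target the Ascend NPU backend
        ms.run_check()                          # prints the installed version and runs a tiny op on the device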
    3  YOLO training and inference on the Ascend platform

    Training command (omitting "--ms_mode 1" causes an error):

        python train.py --epochs 600 --config ./configs/yolov11/yolov11-n.yaml --data_dir ./cache/data/coco --keep_checkpoint_max 1 --auto_accumulate True --per_batch_size 25 --weight ./cache/pretrain_ckpt/yolov11n.ckpt --ms_mode 1

    Inference command:

        python ./demo/predict.py --config ./configs/yolov11/yolov11-n.yaml --weight ./cache/pretrain_ckpt/yolov11n.ckpt --image_path ./cache/data/coco/images/val2017/000000550691.jpg
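    One detail worth noting before reading the log: with --auto_accumulate True the trainer derives a gradient-accumulation factor from the nominal batch size (nbs 64 in the log) and --per_batch_size 25. The sketch below shows the usual YOLO-style formula that would yield the "accumulate: 3.0" seen in the log; the exact expression used by this repository may differ.

        # Assumed formula (standard YOLO convention); the repository's actual code may differ.
        nbs = 64              # nominal batch size reported in the training log
        per_batch_size = 25   # value passed on the command line
        accumulate = max(round(nbs / per_batch_size), 1)
        print(accumulate)     # 3 -> gradients are accumulated over 3 steps before each update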
    YOLO training log:

    (yolo20250705python3d9d8) [root@bms-jp ascendyolo_run_for_v811_20250417a1]# python train.py --epochs 600 --config ./configs/yolov11/yolov11-n.yaml --data_dir ./cache/data/coco --keep_checkpoint_max 1 --auto_accumulate True --per_batch_size 25 --weight ./cache/pretrain_ckpt/yolov11n.ckpt --ms_mode 1
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
      setattr(self, word, getattr(machar, word).flat[0])
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
      return self._float_to_str(self.smallest_subnormal)
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
      setattr(self, word, getattr(machar, word).flat[0])
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
      return self._float_to_str(self.smallest_subnormal)
    2025-07-05 22:45:27,447 [INFO] parse_args:
    2025-07-05 22:45:27,447 [INFO] task                                    detect
    2025-07-05 22:45:27,447 [INFO] device_target                           Ascend
    2025-07-05 22:45:27,447 [INFO] save_dir                                ./runs/2025.07.05-22.45.27
    2025-07-05 22:45:27,447 [INFO] log_level                               INFO
    2025-07-05 22:45:27,447 [INFO] is_parallel                             False
    2025-07-05 22:45:27,447 [INFO] ms_mode                                 1
    2025-07-05 22:45:27,447 [INFO] max_call_depth                          2000
    2025-07-05 22:45:27,447 [INFO] ms_amp_level                            O2
    2025-07-05 22:45:27,447 [INFO] keep_loss_fp32                          True
    2025-07-05 22:45:27,447 [INFO] anchor_base                             False
    2025-07-05 22:45:27,447 [INFO] ms_loss_scaler                          dynamic
    2025-07-05 22:45:27,447 [INFO] ms_loss_scaler_value                    65536.0
    2025-07-05 22:45:27,447 [INFO] ms_jit                                  True
    2025-07-05 22:45:27,447 [INFO] ms_enable_graph_kernel                  False
    2025-07-05 22:45:27,447 [INFO] ms_datasink                             False
    2025-07-05 22:45:27,447 [INFO] overflow_still_update                   False
    2025-07-05 22:45:27,447 [INFO] clip_grad                               True
    2025-07-05 22:45:27,447 [INFO] clip_grad_value                         10.0
    2025-07-05 22:45:27,447 [INFO] ema                                     True
    2025-07-05 22:45:27,447 [INFO] weight                                  ./cache/pretrain_ckpt/yolov11n.ckpt
    2025-07-05 22:45:27,447 [INFO] ema_weight
    2025-07-05 22:45:27,447 [INFO] freeze                                  []
    2025-07-05 22:45:27,447 [INFO] epochs                                  600
    2025-07-05 22:45:27,447 [INFO] per_batch_size                          25
    2025-07-05 22:45:27,447 [INFO] img_size                                640
    2025-07-05 22:45:27,447 [INFO] nbs                                     64
    2025-07-05 22:45:27,447 [INFO] accumulate                              3.0
    2025-07-05 22:45:27,447 [INFO] auto_accumulate                         True
    2025-07-05 22:45:27,447 [INFO] log_interval                            100
    2025-07-05 22:45:27,447 [INFO] single_cls                              False
    2025-07-05 22:45:27,447 [INFO] sync_bn                                 False
    2025-07-05 22:45:27,447 [INFO] keep_checkpoint_max                     1
    2025-07-05 22:45:27,447 [INFO] run_eval                                False
    2025-07-05 22:45:27,447 [INFO] conf_thres                              0.001
    2025-07-05 22:45:27,447 [INFO] iou_thres                               0.7
    2025-07-05 22:45:27,447 [INFO] conf_free                               True
    2025-07-05 22:45:27,447 [INFO] rect                                    False
    2025-07-05 22:45:27,447 [INFO] nms_time_limit                          20.0
    2025-07-05 22:45:27,447 [INFO] recompute                               False
    2025-07-05 22:45:27,447 [INFO] recompute_layers                        0
    2025-07-05 22:45:27,447 [INFO] seed                                    2
    2025-07-05 22:45:27,447 [INFO] summary                                 True
    2025-07-05 22:45:27,447 [INFO] profiler                                False
    2025-07-05 22:45:27,447 [INFO] profiler_step_num                       1
    2025-07-05 22:45:27,447 [INFO] opencv_threads_num                      0
    2025-07-05 22:45:27,447 [INFO] strict_load                             True
    2025-07-05 22:45:27,447 [INFO] enable_modelarts                        False
    2025-07-05 22:45:27,447 [INFO] data_url
    2025-07-05 22:45:27,447 [INFO] ckpt_url
    2025-07-05 22:45:27,447 [INFO] multi_data_url
    2025-07-05 22:45:27,447 [INFO] pretrain_url
    2025-07-05 22:45:27,447 [INFO] train_url
    2025-07-05 22:45:27,447 [INFO] data_dir                                ./cache/data/coco
    2025-07-05 22:45:27,447 [INFO] ckpt_dir                                /cache/pretrain_ckpt/
    2025-07-05 22:45:27,447 [INFO] data.dataset_name                       coco
    2025-07-05 22:45:27,447 [INFO] data.train_set                          /apply/yolo20250811/cache/data/coco/train2017.txt
    2025-07-05 22:45:27,447 [INFO] data.val_set                            /apply/yolo20250811/cache/data/coco/val2017.txt
    2025-07-05 22:45:27,447 [INFO] data.test_set                           /apply/yolo20250811/cache/data/coco/test-dev2017.txt
    2025-07-05 22:45:27,447 [INFO] data.nc                                 80
    2025-07-05 22:45:27,447 [INFO] data.names                              ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']
    2025-07-05 22:45:27,447 [INFO] train_transforms.stage_epochs           [590, 10]
    2025-07-05 22:45:27,447 [INFO] train_transforms.trans_list             [[{'func_name': 'mosaic', 'prob': 1.0}, {'func_name': 'copy_paste', 'prob': 0.1, 'sorted': True}, {'func_name': 'resample_segments'}, {'func_name': 'random_perspective', 'prob': 1.0, 'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0}, {'func_name': 'albumentations'}, {'func_name': 'hsv_augment', 'prob': 1.0, 'hgain': 0.015, 'sgain': 0.7, 'vgain': 0.4}, {'func_name': 'fliplr', 'prob': 0.5}, {'func_name': 'label_norm', 'xyxy2xywh_': True}, {'func_name': 'label_pad', 'padding_size': 160, 'padding_value': -1}, {'func_name': 'image_norm', 'scale': 255.0}, {'func_name': 'image_transpose', 'bgr2rgb': True, 'hwc2chw': True}], [{'func_name': 'letterbox', 'scaleup': True}, {'func_name': 'resample_segments'}, {'func_name': 'random_perspective', 'prob': 1.0, 'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0}, {'func_name': 'albumentations'}, {'func_name': 'hsv_augment', 'prob': 1.0, 'hgain': 0.015, 'sgain': 0.7, 'vgain': 0.4}, {'func_name': 'fliplr', 'prob': 0.5}, {'func_name': 'label_norm', 'xyxy2xywh_': True}, {'func_name': 'label_pad', 'padding_size': 160, 'padding_value': -1}, {'func_name': 'image_norm', 'scale': 255.0}, {'func_name': 'image_transpose', 'bgr2rgb': True, 'hwc2chw': True}]]
    2025-07-05 22:45:27,447 [INFO] data.test_transforms                    [{'func_name': 'letterbox', 'scaleup': False, 'only_image': True}, {'func_name': 'image_norm', 'scale': 255.0}, {'func_name': 'image_transpose', 'bgr2rgb': True, 'hwc2chw': True}]
    ......
    [INFO] albumentations load success
    [WARNING] ME(50295:281473090502688,MainProcess):2025-07-05-22:46:03.231.109 [mindspore/run_check/_check_version.py:305] The version 7.6 used for compiling the custom operator does not match Ascend AI software package version 7.5 in the current environment
    ......
    2025-07-05 22:47:15,474 [WARNING] Epoch 1/600, Step 1/2, accumulate: 3.0, this step grad overflow, drop. Loss scale adjust to 32768.0
    2025-07-05 22:47:15,809 [WARNING] Epoch 1/600, Step 2/2, accumulate: 3.0, this step grad overflow, drop. Loss scale adjust to 16384.0
    2025-07-05 22:47:16,184 [INFO] Epoch 1/600, Step 2/2, imgsize (640, 640), loss: 3.5250, lbox: 1.0629, lcls: 1.3194, dfl: 1.1426, cur_lr: 1.9966999388998374e-05
    2025-07-05 22:47:17,505 [INFO] Epoch 1/600, Step 2/2, step time: 49088.29 ms
    2025-07-05 22:47:18,444 [INFO] Saving model to ./runs/2025.07.05-22.45.27/weights/yolov11-n-1_2.ckpt
    2025-07-05 22:47:18,444 [INFO] Epoch 1/600, epoch time: 1.65 min.
    2025-07-05 22:47:18,710 [WARNING] Epoch 2/600, Step 1/2, accumulate: 3.0, this step grad overflow, drop. Loss scale adjust to 8192.0
    2025-07-05 22:47:19,024 [INFO] Epoch 2/600, Step 2/2, imgsize (640, 640), loss: 3.6963, lbox: 1.0847, lcls: 1.4422, dfl: 1.1694, cur_lr: 3.986799856647849e-05
    2025-07-05 22:47:19,037 [INFO] Epoch 2/600, Step 2/2, step time: 296.27 ms
    2025-07-05 22:47:19,945 [INFO] Saving model to ./runs/2025.07.05-22.45.27/weights/yolov11-n-2_2.ckpt
    2025-07-05 22:47:19,946 [INFO] Epoch 2/600, epoch time: 0.03 min.
    2025-07-05 22:47:20,223 [WARNING] Epoch 3/600, Step 1/2, accumulate: 3.0, this step grad overflow, drop. Loss scale adjust to 4096.0

    YOLO inference log:

    (yolo20250705python3d9d8) [root@bms-jp ascendyolo_run_for_v811_20250417a1]# python ./demo/predict.py --config ./configs/yolov11/yolov11-n.yaml --weight ./cache/pretrain_ckpt/yolov11n.ckpt --image_path ./cache/data/coco/images/val2017/000000550691.jpg
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
      setattr(self, word, getattr(machar, word).flat[0])
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
      return self._float_to_str(self.smallest_subnormal)
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
      setattr(self, word, getattr(machar, word).flat[0])
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
      return self._float_to_str(self.smallest_subnormal)
    2025-07-05 22:50:19,157 [WARNING] Parse Model, args: nearest, keep str type
    2025-07-05 22:50:19,204 [WARNING] Parse Model, args: nearest, keep str type
    2025-07-05 22:50:19,584 [INFO] number of network params, total: 2.639747M, trainable: 2.624064M
    [WARNING] ME(66771:281473260486688,MainProcess):2025-07-05-22:50:26.245.7 [mindspore/train/serialization.py:1956] For 'load_param_into_net', remove parameter prefix name: ema., continue to load.
    2025-07-05 22:50:26,023 [INFO] Load checkpoint from [./cache/pretrain_ckpt/yolov11n.ckpt] success.
    .Warning: tiling offset out of range, index: 32
    .Warning: tiling offset out of range, index: 32
    .Warning: tiling offset out of range, index: 32
    Warning: tiling offset out of range, index: 32
    Warning: tiling offset out of range, index: 32
    Warning: tiling offset out of range, index: 32
    ..2025-07-05 22:51:27,507 [INFO] Predict result is: {'category_id': [6, 3, 3, 6, 6], 'bbox': [[194.125, 54.75, 243.875, 354.25], [115.25, 286.5, 82.25, 68.0], [442.0, 283.0, 24.0, 20.0], [3.25, 215.25, 160.75, 64.0], [3.875, 215.5, 159.875, 96.5]], 'score': [0.93115, 0.90283, 0.70898, 0.58154, 0.45508]}
    2025-07-05 22:51:27,507 [INFO] Speed: 61360.0/11.5/61371.5 ms inference/NMS/total per 640x640 image at batch-size 1;
    2025-07-05 22:51:27,507 [INFO] Detect a image success.
    2025-07-05 22:51:27,516 [INFO] Infer completed.
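    The dictionary printed by predict.py can be consumed directly in Python. The sketch below, using the exact values from the log above, assumes that 'bbox' is [x, y, w, h] in pixels of the input image and that 'category_id' must be mapped to a class name via the dataset's own id-to-name table (the convention, 0-based 80-class index vs. original COCO 91-category id, depends on the repository); check your config before relying on it.

        # Post-processing sketch for the predict.py output shown in the log above.
        result = {
            "category_id": [6, 3, 3, 6, 6],
            "bbox": [[194.125, 54.75, 243.875, 354.25],
                     [115.25, 286.5, 82.25, 68.0],
                     [442.0, 283.0, 24.0, 20.0],
                     [3.25, 215.25, 160.75, 64.0],
                     [3.875, 215.5, 159.875, 96.5]],
            "score": [0.93115, 0.90283, 0.70898, 0.58154, 0.45508],
        }

        conf_thres = 0.5  # illustrative threshold
        for cid, (x, y, w, h), score in zip(result["category_id"], result["bbox"], result["score"]):
            if score < conf_thres:
                continue
            # convert xywh -> xyxy corners for drawing or cropping
            x1, y1, x2, y2 = x, y, x + w, y + h
            print(f"class {cid}: score={score:.3f}, box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")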
    4  Summary

    We successfully ran YOLO training and inference on the Ascend platform. Deep adaptation to the CANN software stack and the MindSpore framework enables efficient operator optimization and hardware acceleration (e.g. on Ascend 910B/310). Key techniques include dynamic tiling, mixed-precision training, and DVPP hardware preprocessing, which significantly improve object detection inference performance. Ascend NPUs are highly competitive for computer vision workloads, and Ascend hardware is of real industrial significance for AI deployment in scenarios such as edge computing and intelligent security.
• [Help Wanted] Algorithm app installation fails signature verification
    Signature verification fails when installing the algorithm app on an M6781-10-GZ40-W5 dome camera.
• [Application Development] SegFormer-B0 OM model takes 1000 ms per inference on MDC300F
    After converting the segformer-b0 model to OM, inference on the MDC300F MINI takes about 1000 ms. The input to the MatMul operator in the ONNX model has a batch dimension; after changing the operator type to BatchMatMul and converting to OM, a trans_TransData operator appears both before and after the operator. Profiling shows that TransposeD, ArgMaxD, TransData, SoftmaxV2, and BatchMatMul account for most of the inference time. How can a MatMul with a batch dimension be prevented from generating TransData operators, and what optimizations are available for these time-consuming operators?
• [Help Wanted] YOLOv5 trained on the 80-class coco128 dataset: the app installs successfully but detects nothing
    The app has been successfully deployed to the camera, but it does not detect anything. Is there some pitfall in the process? What could be the reason?
• [Cloud Lab] A garbage-classification image recognition system based on Huawei Cloud automatic learning
    1. Introduction: This project uses the automatic learning feature of Huawei Cloud EI ModelArts together with OBS (Object Storage Service) to build a simple garbage classification system.
    2. Description: The system is built by creating an image-classification automatic learning project, labeling the data, running automatic training, deploying the model for testing, and finally evaluating the results.
    3. Main workflow
    4. About the image classification task: An automatic learning image classification project in ModelArts classifies images by category. You add images and label them, with each label identifying one type of image. Once labeling is complete, automatic training starts and quickly produces an image classification model. This can be applied to automatic product recognition, vehicle type identification, and defective-product detection. In a quality inspection scenario, for example, you can upload product pictures, label them "qualified" or "unqualified", and then train and deploy a model to perform the inspection.
    5. Building the system:
       (1) Create the project.
       (2) Add images.
       (3) Label the data. To label the "disposable lunch box - other waste" category: set the number at the bottom left to 45, click to select images of the same class (one or more at a time), fill in the label name for the selected images (existing labels can be chosen directly), enter the label name, and click OK.
       (4) Run automatic training.
       (5) Deploy and test. Prediction result: