  • [Technical Insights] YOLO Training and Inference on the Ascend Platform
    1      YOLO Overview
    YOLO (You Only Look Once) is a popular real-time object detection algorithm known for its high speed and accuracy. Unlike traditional detection methods (such as the R-CNN family), YOLO treats object detection as a single regression problem and predicts bounding boxes and class probabilities directly from image pixels, achieving "end-to-end" detection.
    YOLO divides the input image into an S×S grid (for example 7×7); each grid cell is responsible for predicting several bounding boxes together with their confidence scores and class probabilities.
    Bounding box: the center coordinates, width, and height of the box.
    Confidence: reflects whether the box contains an object and how accurate the prediction is.
    Class probabilities: the class of the object inside the box, predicted with Softmax.
    Traditional approaches (such as sliding windows) must scan the image many times, whereas YOLO only needs to "look once": a convolutional neural network outputs all detections in a single pass, which makes it extremely fast.
    Running YOLO on the Ascend platform is significant both technically and commercially, especially in AI-accelerated computing. Ascend is Huawei's family of high-performance AI processors (such as the Ascend 910/310); combined with the Ascend AI software stack (CANN, MindSpore, and so on), it can substantially improve YOLO training and inference efficiency. The key points are: 1. high-performance acceleration that meets real-time requirements; 2. flexible deployment from edge to cloud; 3. joint software and hardware optimization.
    2      System Environment Setup
    Running YOLO on the Ascend platform requires the following components:
    1. Ascend-cann-toolkit_8.0.RC3_linux-aarch64
    2. Ascend-cann-kernels-910b_8.0.RC3_linux-aarch64
    3. mindspore==2.5.0
    4. python==3.9
    Create the Python 3.9 environment with the commands below (Python 3.10 or 3.11 will cause errors):
    conda create -n yolo20250705python3d9d8 python=3.9
    conda activate yolo20250705python3d9d8
    Download the CANN 8.0 packages from https://www.hiascend.com/developer/download/community/result?module=cann&cann=8.0.RC3.beta1, upload them to the server, and install them. Running YOLO with CANN 8.1 or 8.2 may cause errors. Install CANN with:
    /tmp/Ascend-cann-toolkit_8.0.RC3_linux-aarch64.run --install
    /tmp/Ascend-cann-kernels-910b_8.0.RC3_linux-aarch64.run --devel
    Download MindSpore from cid:link_0 and install it with:
    pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.5.0/MindSpore/unified/aarch64/mindspore-2.5.0-cp39-cp39-linux_aarch64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
    Code directory: /apply/yolo20250811/
    Enter the code directory: cd /apply/yolo20250811
    Install the Python dependencies: pip install -r requirements.txt
    The system also needs the mesa-libGL package, otherwise errors may occur: sudo yum install mesa-libGL
    The Python environment additionally needs the following packages, otherwise errors may occur; albumentations >= 2.0 will cause errors:
    pip install sympy
    pip install te
    pip install albumentations==1.4.24
    A minimal sanity check of the installed environment is sketched below.
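    For that check, something like the following may be enough. This is an illustrative sketch, not part of the repository; mindspore.run_check() and mindspore.set_context() are standard MindSpore APIs, and the small matmul is just an arbitrary smoke test.
        # check_env.py -- sanity check for the MindSpore + Ascend installation (illustrative)
        import mindspore as ms

        ms.run_check()                          # prints the MindSpore version and runs a built-in self-test
        ms.set_context(device_target="Ascend")  # target the NPU, as train.py and predict.py do

        x = ms.Tensor([[1.0, 2.0], [3.0, 4.0]])
        print(ms.ops.matmul(x, x))              # a tiny op to confirm kernels execute on the device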
    3      YOLO Training and Inference on the Ascend Platform
    Training command (omitting "--ms_mode 1" causes an error; a note on the flags follows the commands):
    python train.py --epochs 600 --config ./configs/yolov11/yolov11-n.yaml --data_dir ./cache/data/coco --keep_checkpoint_max 1 --auto_accumulate True --per_batch_size 25 --weight ./cache/pretrain_ckpt/yolov11n.ckpt --ms_mode 1
    Inference command:
    python ./demo/predict.py --config ./configs/yolov11/yolov11-n.yaml --weight ./cache/pretrain_ckpt/yolov11n.ckpt --image_path ./cache/data/coco/images/val2017/000000550691.jpg
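    On the flags: "--ms_mode 1" selects the MindSpore execution mode this repository expects (training fails without it), and "--auto_accumulate True" lets the trainer derive the gradient-accumulation factor from the nominal batch size. The training log below reports nbs 64, per_batch_size 25 and accumulate 3.0, which is consistent with accumulating roughly nbs / per_batch_size micro-batches. A small illustrative sketch follows; the exact formula is an assumption inferred from these logged values, not taken from the repository code.
        import math

        # Hypothetical reconstruction: accumulate micro-batches of per_batch_size images
        # until the nominal batch size nbs is reached, then apply the optimizer step.
        def auto_accumulate(nbs: int, per_batch_size: int) -> int:
            return max(math.ceil(nbs / per_batch_size), 1)

        print(auto_accumulate(64, 25))  # -> 3, matching "accumulate 3.0" in the log below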
    YOLO training log:
    (yolo20250705python3d9d8) [root@bms-jp ascendyolo_run_for_v811_20250417a1]# python train.py --epochs 600 --config ./configs/yolov11/yolov11-n.yaml --data_dir ./cache/data/coco --keep_checkpoint_max 1 --auto_accumulate True --per_batch_size 25 --weight ./cache/pretrain_ckpt/yolov11n.ckpt --ms_mode 1
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
      setattr(self, word, getattr(machar, word).flat[0])
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
      return self._float_to_str(self.smallest_subnormal)
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
      setattr(self, word, getattr(machar, word).flat[0])
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
      return self._float_to_str(self.smallest_subnormal)
    2025-07-05 22:45:27,447 [INFO] parse_args:
    2025-07-05 22:45:27,447 [INFO] task    detect
    2025-07-05 22:45:27,447 [INFO] device_target    Ascend
    2025-07-05 22:45:27,447 [INFO] save_dir    ./runs/2025.07.05-22.45.27
    2025-07-05 22:45:27,447 [INFO] log_level    INFO
    2025-07-05 22:45:27,447 [INFO] is_parallel    False
    2025-07-05 22:45:27,447 [INFO] ms_mode    1
    2025-07-05 22:45:27,447 [INFO] max_call_depth    2000
    2025-07-05 22:45:27,447 [INFO] ms_amp_level    O2
    2025-07-05 22:45:27,447 [INFO] keep_loss_fp32    True
    2025-07-05 22:45:27,447 [INFO] anchor_base    False
    2025-07-05 22:45:27,447 [INFO] ms_loss_scaler    dynamic
    2025-07-05 22:45:27,447 [INFO] ms_loss_scaler_value    65536.0
    2025-07-05 22:45:27,447 [INFO] ms_jit    True
    2025-07-05 22:45:27,447 [INFO] ms_enable_graph_kernel    False
    2025-07-05 22:45:27,447 [INFO] ms_datasink    False
    2025-07-05 22:45:27,447 [INFO] overflow_still_update    False
    2025-07-05 22:45:27,447 [INFO] clip_grad    True
    2025-07-05 22:45:27,447 [INFO] clip_grad_value    10.0
    2025-07-05 22:45:27,447 [INFO] ema    True
    2025-07-05 22:45:27,447 [INFO] weight    ./cache/pretrain_ckpt/yolov11n.ckpt
    2025-07-05 22:45:27,447 [INFO] ema_weight
    2025-07-05 22:45:27,447 [INFO] freeze    []
    2025-07-05 22:45:27,447 [INFO] epochs    600
    2025-07-05 22:45:27,447 [INFO] per_batch_size    25
    2025-07-05 22:45:27,447 [INFO] img_size    640
    2025-07-05 22:45:27,447 [INFO] nbs    64
    2025-07-05 22:45:27,447 [INFO] accumulate    3.0
    2025-07-05 22:45:27,447 [INFO] auto_accumulate    True
    2025-07-05 22:45:27,447 [INFO] log_interval    100
    2025-07-05 22:45:27,447 [INFO] single_cls    False
    2025-07-05 22:45:27,447 [INFO] sync_bn    False
    2025-07-05 22:45:27,447 [INFO] keep_checkpoint_max    1
    2025-07-05 22:45:27,447 [INFO] run_eval    False
    2025-07-05 22:45:27,447 [INFO] conf_thres    0.001
    2025-07-05 22:45:27,447 [INFO] iou_thres    0.7
    2025-07-05 22:45:27,447 [INFO] conf_free    True
    2025-07-05 22:45:27,447 [INFO] rect    False
    2025-07-05 22:45:27,447 [INFO] nms_time_limit    20.0
    2025-07-05 22:45:27,447 [INFO] recompute    False
    2025-07-05 22:45:27,447 [INFO] recompute_layers    0
    2025-07-05 22:45:27,447 [INFO] seed    2
    2025-07-05 22:45:27,447 [INFO] summary    True
    2025-07-05 22:45:27,447 [INFO] profiler    False
    2025-07-05 22:45:27,447 [INFO] profiler_step_num    1
    2025-07-05 22:45:27,447 [INFO] opencv_threads_num    0
    2025-07-05 22:45:27,447 [INFO] strict_load    True
    2025-07-05 22:45:27,447 [INFO] enable_modelarts    False
    2025-07-05 22:45:27,447 [INFO] data_url
    2025-07-05 22:45:27,447 [INFO] ckpt_url
    2025-07-05 22:45:27,447 [INFO] multi_data_url
    2025-07-05 22:45:27,447 [INFO] pretrain_url
    2025-07-05 22:45:27,447 [INFO] train_url
    2025-07-05 22:45:27,447 [INFO] data_dir    ./cache/data/coco
    2025-07-05 22:45:27,447 [INFO] ckpt_dir    /cache/pretrain_ckpt/
    2025-07-05 22:45:27,447 [INFO] data.dataset_name    coco
    2025-07-05 22:45:27,447 [INFO] data.train_set    /apply/yolo20250811/cache/data/coco/train2017.txt
    2025-07-05 22:45:27,447 [INFO] data.val_set    /apply/yolo20250811/cache/data/coco/val2017.txt
    2025-07-05 22:45:27,447 [INFO] data.test_set    /apply/yolo20250811/cache/data/coco/test-dev2017.txt
    2025-07-05 22:45:27,447 [INFO] data.nc    80
    2025-07-05 22:45:27,447 [INFO] data.names    ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']
    2025-07-05 22:45:27,447 [INFO] train_transforms.stage_epochs    [590, 10]
    2025-07-05 22:45:27,447 [INFO] train_transforms.trans_list    [[{'func_name': 'mosaic', 'prob': 1.0}, {'func_name': 'copy_paste', 'prob': 0.1, 'sorted': True}, {'func_name': 'resample_segments'}, {'func_name': 'random_perspective', 'prob': 1.0, 'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0}, {'func_name': 'albumentations'}, {'func_name': 'hsv_augment', 'prob': 1.0, 'hgain': 0.015, 'sgain': 0.7, 'vgain': 0.4}, {'func_name': 'fliplr', 'prob': 0.5}, {'func_name': 'label_norm', 'xyxy2xywh_': True}, {'func_name': 'label_pad', 'padding_size': 160, 'padding_value': -1}, {'func_name': 'image_norm', 'scale': 255.0}, {'func_name': 'image_transpose', 'bgr2rgb': True, 'hwc2chw': True}], [{'func_name': 'letterbox', 'scaleup': True}, {'func_name': 'resample_segments'}, {'func_name': 'random_perspective', 'prob': 1.0, 'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0}, {'func_name': 'albumentations'}, {'func_name': 'hsv_augment', 'prob': 1.0, 'hgain': 0.015, 'sgain': 0.7, 'vgain': 0.4}, {'func_name': 'fliplr', 'prob': 0.5}, {'func_name': 'label_norm', 'xyxy2xywh_': True}, {'func_name': 'label_pad', 'padding_size': 160, 'padding_value': -1}, {'func_name': 'image_norm', 'scale': 255.0}, {'func_name': 'image_transpose', 'bgr2rgb': True, 'hwc2chw': True}]]
    2025-07-05 22:45:27,447 [INFO] data.test_transforms    [{'func_name': 'letterbox', 'scaleup': False, 'only_image': True}, {'func_name': 'image_norm', 'scale': 255.0}, {'func_name': 'image_transpose', 'bgr2rgb': True, 'hwc2chw': True}]
    ……
    [INFO] albumentations load success
    [WARNING] ME(50295:281473090502688,MainProcess):2025-07-05-22:46:03.231.109 [mindspore/run_check/_check_version.py:305] The version 7.6 used for compiling the custom operator does not match Ascend AI software package version 7.5 in the current environment......
    2025-07-05 22:47:15,474 [WARNING] Epoch 1/600, Step 1/2, accumulate: 3.0, this step grad overflow, drop. Loss scale adjust to 32768.0
    2025-07-05 22:47:15,809 [WARNING] Epoch 1/600, Step 2/2, accumulate: 3.0, this step grad overflow, drop. Loss scale adjust to 16384.0
    2025-07-05 22:47:16,184 [INFO] Epoch 1/600, Step 2/2, imgsize (640, 640), loss: 3.5250, lbox: 1.0629, lcls: 1.3194, dfl: 1.1426, cur_lr: 1.9966999388998374e-05
    2025-07-05 22:47:17,505 [INFO] Epoch 1/600, Step 2/2, step time: 49088.29 ms
    2025-07-05 22:47:18,444 [INFO] Saving model to ./runs/2025.07.05-22.45.27/weights/yolov11-n-1_2.ckpt
    2025-07-05 22:47:18,444 [INFO] Epoch 1/600, epoch time: 1.65 min.
    2025-07-05 22:47:18,710 [WARNING] Epoch 2/600, Step 1/2, accumulate: 3.0, this step grad overflow, drop. Loss scale adjust to 8192.0
    2025-07-05 22:47:19,024 [INFO] Epoch 2/600, Step 2/2, imgsize (640, 640), loss: 3.6963, lbox: 1.0847, lcls: 1.4422, dfl: 1.1694, cur_lr: 3.986799856647849e-05
    2025-07-05 22:47:19,037 [INFO] Epoch 2/600, Step 2/2, step time: 296.27 ms
    2025-07-05 22:47:19,945 [INFO] Saving model to ./runs/2025.07.05-22.45.27/weights/yolov11-n-2_2.ckpt
    2025-07-05 22:47:19,946 [INFO] Epoch 2/600, epoch time: 0.03 min.
    2025-07-05 22:47:20,223 [WARNING] Epoch 3/600, Step 1/2, accumulate: 3.0, this step grad overflow, drop. Loss scale adjust to 4096.0
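    The per-epoch loss lines in the training log above follow a regular pattern, so they can be scraped for a quick loss curve. A minimal sketch; the regular expression is written against the exact log format shown above and the file name train.log is a hypothetical path for a captured log:
        import re

        # Matches lines such as:
        # 2025-07-05 22:47:16,184 [INFO] Epoch 1/600, Step 2/2, imgsize (640, 640), loss: 3.5250, lbox: 1.0629, lcls: 1.3194, dfl: 1.1426, cur_lr: ...
        PATTERN = re.compile(
            r"Epoch (\d+)/\d+.*?loss: ([\d.]+), lbox: ([\d.]+), lcls: ([\d.]+), dfl: ([\d.]+)"
        )

        def parse_losses(log_path: str):
            """Return a list of (epoch, loss, lbox, lcls, dfl) tuples from a training log file."""
            records = []
            with open(log_path, encoding="utf-8") as f:
                for line in f:
                    m = PATTERN.search(line)
                    if m:
                        epoch = int(m.group(1))
                        records.append((epoch, *(float(g) for g in m.groups()[1:])))
            return records

        if __name__ == "__main__":
            for rec in parse_losses("train.log"):  # point this at the log captured from train.py
                print(rec)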
    YOLO inference log:
    (yolo20250705python3d9d8) [root@bms-jp ascendyolo_run_for_v811_20250417a1]# python ./demo/predict.py --config ./configs/yolov11/yolov11-n.yaml --weight ./cache/pretrain_ckpt/yolov11n.ckpt --image_path ./cache/data/coco/images/val2017/000000550691.jpg
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
      setattr(self, word, getattr(machar, word).flat[0])
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
      return self._float_to_str(self.smallest_subnormal)
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:549: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
      setattr(self, word, getattr(machar, word).flat[0])
    /root/miniconda3/envs/yolo20250705python3d9d8/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
      return self._float_to_str(self.smallest_subnormal)
    2025-07-05 22:50:19,157 [WARNING] Parse Model, args: nearest, keep str type
    2025-07-05 22:50:19,204 [WARNING] Parse Model, args: nearest, keep str type
    2025-07-05 22:50:19,584 [INFO] number of network params, total: 2.639747M, trainable: 2.624064M
    [WARNING] ME(66771:281473260486688,MainProcess):2025-07-05-22:50:26.245.7 [mindspore/train/serialization.py:1956] For 'load_param_into_net', remove parameter prefix name: ema., continue to load.
    2025-07-05 22:50:26,023 [INFO] Load checkpoint from [./cache/pretrain_ckpt/yolov11n.ckpt] success.
    .Warning: tiling offset out of range, index: 32
    .Warning: tiling offset out of range, index: 32
    .Warning: tiling offset out of range, index: 32
    Warning: tiling offset out of range, index: 32
    Warning: tiling offset out of range, index: 32
    Warning: tiling offset out of range, index: 32
    ..2025-07-05 22:51:27,507 [INFO] Predict result is: {'category_id': [6, 3, 3, 6, 6], 'bbox': [[194.125, 54.75, 243.875, 354.25], [115.25, 286.5, 82.25, 68.0], [442.0, 283.0, 24.0, 20.0], [3.25, 215.25, 160.75, 64.0], [3.875, 215.5, 159.875, 96.5]], 'score': [0.93115, 0.90283, 0.70898, 0.58154, 0.45508]}
    2025-07-05 22:51:27,507 [INFO] Speed: 61360.0/11.5/61371.5 ms inference/NMS/total per 640x640 image at batch-size 1;
    2025-07-05 22:51:27,507 [INFO] Detect a image success.
    2025-07-05 22:51:27,516 [INFO] Infer completed.
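    The dictionary printed by predict.py above (category_id / bbox / score) can be drawn back onto the image for a quick visual check. A minimal sketch, assuming the bbox values are COCO-style [x, y, w, h] with (x, y) the top-left corner (which is how the numbers above read; adjust if the repository emits a different convention) and using OpenCV, which the pipeline appears to depend on already (opencv_threads_num in the training config); otherwise install opencv-python first:
        import cv2  # opencv-python

        # Output copied from the inference log above.
        result = {
            "category_id": [6, 3, 3, 6, 6],
            "bbox": [[194.125, 54.75, 243.875, 354.25], [115.25, 286.5, 82.25, 68.0],
                     [442.0, 283.0, 24.0, 20.0], [3.25, 215.25, 160.75, 64.0],
                     [3.875, 215.5, 159.875, 96.5]],
            "score": [0.93115, 0.90283, 0.70898, 0.58154, 0.45508],
        }

        img = cv2.imread("./cache/data/coco/images/val2017/000000550691.jpg")
        for cat, (x, y, w, h), score in zip(result["category_id"], result["bbox"], result["score"]):
            # Assumed format: top-left (x, y) plus width/height, as in COCO-style boxes.
            p1, p2 = (int(x), int(y)), (int(x + w), int(y + h))
            cv2.rectangle(img, p1, p2, (0, 255, 0), 2)
            cv2.putText(img, f"{cat}:{score:.2f}", (p1[0], max(p1[1] - 5, 0)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imwrite("predict_vis.jpg", img)  # writes the annotated image next to the script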
    4      Summary
    YOLO training and inference have been run successfully on the Ascend platform. Through the deep adaptation of the CANN software stack and the MindSpore framework, efficient operator optimization and hardware acceleration are achieved on devices such as the Ascend 910B/310. Key techniques include dynamic tiling, mixed-precision training, and DVPP hardware preprocessing, which significantly improve object-detection inference performance. Ascend NPUs are strongly competitive in computer-vision tasks, and Ascend hardware is of real industrial significance for AI deployment in scenarios such as edge computing and intelligent security.
  • [Help Wanted] Secondary Development on the C2120-10-SIU
    I would like to pull image frame data from a C2120-10-SIU camera and then run detection with a model I trained myself, developing a standalone application in Python. Is this feasible? I was able to work this way with Hikvision industrial cameras before.
  • List of Published Third-Party Algorithms
    The attachment of this post is the list of published third-party algorithms, covering the scenarios each algorithm applies to and the matching software and hardware versions.
  • Steps for Updating a Third-Party Algorithm Version
    1. Go to the HoloSens Mall, open Product Management > Uploaded Products > Intelligent Algorithm Management, find the algorithm whose version needs to be updated, then choose Version Management.
    2. Choose Upgrade.
    3. Enter the version description name as required. In the version upgrade description, describe what the new version changes compared with the previous one, upload the algorithm package (in rpm format), and submit the new user manual and test material at the same time. If the cloud service configuration has changed, the algorithm configuration parameters also need to be filled in again; if not, simply choose to reuse those of the previous version.
  • [Help Wanted] NVR800 Exceeds the Login Limit
    1. How many concurrent logins does the NVR800 support at most? 2. Adding a user on the web page under Configuration -> Multi-User Mgmt does not seem to increase the number of simultaneous logins; how can the login limit be increased?
  • Overseas English Documentation for Industry Perception Integration
    English integration materials are provided for overseas projects. Newly added documents: "Huawei SDC RESTful Interconnection One-Stop Development Guide", "SDC Algorithm Onboarding One-Stop Development Guide", "IVS1800 RESTful Interconnection One-Stop Development Guide", "Live Video & Recording Service Flow Document", and "IVS1800 Third-Party Algorithm Onboarding One-Stop Development Guide". Document list (download links are in the original post):
    SDC - SDK: HoloSens SDC SDK Development Guide (LINK)
    SDC - Third-party platform integration (HOLOWITS camera TLV data): HoloSens SDC TLV Data for Third-Party Platform Connection (LINK)
    SDC - RESTful: SDC RESTful Interconnection One-Stop Development Guide (LINK)
    SDC - Southbound algorithm: HOLOWITS Camera App Development Guide (LINK)
    SDC - Southbound algorithm: One-Stop Algorithm Development Guide (LINK)
    Intelligent micro edge (IVS1800) - RESTful: IVS1800 RESTful Interconnection One-Stop Development Guide (LINK)
    Intelligent micro edge (IVS1800) - RESTful: Obtaining URL Video Streams for Live Video and Recording Playback (LINK)
    Intelligent micro edge (IVS1800) - RESTful: HWT-IVS1800 11.1.0 Interface Reference (RESTful) (LINK)
    Intelligent micro edge (IVS1800) - Southbound algorithm: HWT-IVS1800E 11.1 Algorithm Development Guide (LINK)
  • [Development Resources] Troubleshooting Guide for Camera Flash Wear Caused by Third-Party Algorithms
    See the attachment. To protect the service life of the camera's flash memory, control the erase/write frequency when developing algorithms.
  • [Help Wanted] YOLOv5 algorithm app installed successfully but produces no detection results; is there a model known to embed and detect correctly that I can use to troubleshoot?
    There is no error message at all; the detection boxes simply never appear.
  • [Help Wanted] YOLOv5 trained on the 80-class coco128 dataset: the app installed successfully but detects nothing
    The model has been successfully embedded into the camera, but it just does not produce detection results. Is there some pitfall in the process? What could be the reason?
  • [Help Wanted] HWT-D2152-10-SIU intelligent event: license plate capture returns no image
    I wrote a demo for license-plate capture based on the development guide. In testing, I cannot obtain the panoramic image or related information. When analyzing the third-layer TLV data, the only data types present are the timestamp (0x09000001), the processed image width/height (0x07000100/1), and the target type (0x07000023). I would like to know the cause of the problem and how to fix it, i.e. how to obtain the panoramic image, license-plate characters, and other data types. The demo uses IVS_User_GetMetaDataAll to fetch the metadata. The original post includes two screenshots of the demo's runtime output: p1 shows what is printed when parsing the third-layer data by data type, and p2 shows the snapshot parameters obtained after login.
  • [Technical Insights] IVM Remote Upgrade / Third-Party Algorithm Installation / Algorithm Resource Synchronization Process
    IVM remote upgrade / third-party algorithm installation / algorithm resource synchronization process. The following applies only to remotely upgrading or installing algorithms via the IVM enterprise management platform. IVM portal: IVM login entry. HoloSens Mall: mall login entry.
    Step 1: In the HoloSens Mall, go to Product Management > Quota Sharing Resource Pool > Shared by Me > Edit Shared Resource Pool > Share Quota > Shared Account = account ID; only then will the algorithm resources be synchronized to the IVM enterprise. This step is performed by the account that purchased the algorithm (logged in to the HoloSens Mall). The account ID can be obtained from the IVM enterprise information (click the avatar) > Huawei Cloud API enterprise credentials.
    Step 2: Log in to the HoloSens Mall with the Huawei account whose name matches the account name shown under the IVM enterprise information (click the avatar) > Huawei Cloud API enterprise credentials. The purchasing account and the account bound to the enterprise information may be the same; in that case simply log in to that single account.
    Step 3: After logging in, the available algorithm quota is displayed. Click Allocate; in the "Allocate License" dialog, select the hardware input type and enter the device ID to which the algorithm is to be allocated.
    Step 4: After the allocation succeeds, return to the IVM enterprise management platform > Algorithms > Algorithm Management and find the device that was just allocated; any administrator of the enterprise can then upgrade or install the algorithm for that device in Algorithm Management.
  • [Development Resources] IVS1800-E Third-Party Algorithm Onboarding One-Stop Development Guide
    IVS1800-E third-party algorithm onboarding one-stop development guide: cid:link_0
  • [Help Wanted] Flow development with VXML: can the flow path use a hostname when adding a flow?
    [Problem source] Company development environment
    [Problem summary] In flow development with VXML, when adding a flow, can the IP address in the flow path be replaced with the server's hostname?
    [Problem category] CTI
    [AICC solution version] AICC version: AICC 22.200; UAP version: UAP9600 V100R005C00SPC113; CTI version: ICD V300R008C25spc012
    [Expected resolution time] As soon as possible
    [Problem description] In flow development with VXML, when adding a flow, the flow path is configured as http://99.85.165.70/WLB.IVR/VDN/37117333_CallFlow.vxml. Can the IP address 99.85.165.70 be replaced with the server's hostname, for example by mapping it in /etc/hosts and changing the path to http://ivr/WLB.IVR/VDN/37117333_CallFlow.vxml? (The original post includes screenshots of the /etc/hosts configuration and a WAS configuration example.)
  • [Help Wanted] Can the D6550-10-Z33 provide interfaces for secondary development?
    For the D6550-10-Z33, a video camera with built-in AI algorithms: can it provide interfaces for loading a "human-shape recognition" algorithm and training set, can it receive alarm-linkage commands and perform actions such as camera rotation, positioning, and image zoom, and does it have a video-review linkage alarm output interface?
  • Secondary development with an M2141-10-EL camera: ran into several problems and hoping for help.
    1. When the platform side is on the intranet: after checking NAT and filling in the URL and the authentication account/password, the subscription succeeds and the reported metadata can be received. When the platform side is on the public network: after the same steps, the camera side cannot be reached normally, and the logs show that receiving data timed out. 2. We trigger alarms through a wired input, so alarm input 1 was configured on the camera's web page, and the alarm is printed in the alarm log, but there is no snapshot information. Does a linkage action need to be configured? I could not find a setting that triggers a snapshot from an alarm.