• [Help Wanted] Registering a device from VS Code fails with: "The term 'xxx' is not recognized as the name of a cmdlet, function, script file, or operable program"
    I followed this tutorial, https://blog.csdn.net/aa2528877987/article/details/129701716?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522169293387216800213012751%2522%252C%2522scm%2522%253A%252220140713.130102334..%2522%257D&request_id=169293387216800213012751&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduend~default-2-129701716-null-null.142 ("[Yugong Series] Huawei Cloud: an AI car-finding system built on ModelBox"), up to step 3), clicking through as shown in the screenshots, and ModelBox device registration stalls. The problem appeared after I changed the execution policy in PowerShell: at first running scripts was disabled on the device, so I set the CurrentUser policy to RemoteSigned. After that change a new error appeared: "Running the agent failed. Reason: The term 'xxx' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, ..." Here 'xxx' refers to the .modelbox path, i.e. xxx/.modelbox. Do I need to add an environment variable? I tried the fixes found online (changing the PowerShell execution policy again), but it still fails. The agent log shows: Get-Content : A positional parameter cannot be found that accepts argument "hilens\log\hda\hdad.log".
  • [Help Wanted] RK3568 question
    How do I fix this error? YOLOX inference returns an empty buffer.
  • [Help Wanted] The ModelBox mnist-mindspore example fails to run on an Atlas 500 Pro (with a 300I Pro card)
    Following the official docs at https://modelbox-ai.com/modelbox-book/environment/container-usage.html, I used the development-image approach and pulled both:
      modelbox/modelbox-develop-mindspore_1.9.0-cann_6.0.1-d310p-ubuntu-aarch64
      modelbox/modelbox-develop-mindspore_1.6.1-cann_5.0.4-ubuntu-aarch64
    Both show the same problem. After installing ModelBox per the documentation, the editor opens correctly in the browser and VS Code remote access works. But when I create a project from the mnist-mindspore template and run it, it fails with:
      request invalid, job config is invalid, Not found, build graph failed, please check graph config. -> create flowunit 'mnist_infer' failed. -> current environment does not support the inference type: 'mindspore:cpu'
    Checking in Python suggests MindSpore is installed:
      >>> import mindspore
      >>> mindspore.run_check()
      MindSpore version:  1.9.0
      MindSpore running check failed.
      Ascend kernel runtime initialization failed.
      ---------------------------------------------------
      - Ascend Error Message:
      ---------------------------------------------------
      EE8888: Inner Error!
          Unsupport flags, flags=4 [FUNC:StreamCreate][FILE:api_error.cc][LINE:300]
          rtStreamCreateWithFlags execute failed, reason=[feature not support][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:49]
          Solution: Please contact support engineer.
      ---------------------------------------------------
      - Framework Error Message: (For framework developers)
      ---------------------------------------------------
      Create stream failed, ret:207000
      ---------------------------------------------------
      - C++ Call Stack: (For framework developers)
      ---------------------------------------------------
      mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_kernel_runtime.cc:425 Init
      mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_stream_manager.cc:96 CreateStreamWithFlags
      >>> mindspore.set_context(device_target='Ascend')
      >>> mindspore.run_check()
      MindSpore version:  1.9.0
      MindSpore running check failed.
      (same EE8888 Ascend error as above)
      >>> mindspore.set_context(device_target='CPU')
      >>> mindspore.run_check()
      MindSpore version:  1.9.0
      The result of multiplication calculation is correct, MindSpore has been installed successfully!
    Running npu-smi inside the container works normally:
      [root@e45619086c67 runtime]$ npu-smi info
      +--------------------------------------------------------------------------------------------------------+
      | npu-smi 22.0.3                                   Version: 22.0.3.2.b030                                |
      +-------------------------------+-----------------+------------------------------------------------------+
      | NPU     Name                  | Health          | Power(W)     Temp(C)           Hugepages-Usage(page) |
      | Chip    Device                | Bus-Id          | AICore(%)    Memory-Usage(MB)                        |
      +===============================+=================+======================================================+
      | 0       310P3                 | OK              | NA           39                15   / 15             |
      | 0       0                     | 0000:01:00.0    | 0            3363 / 21527                            |
      +===============================+=================+======================================================+
    What is causing this, and how can it be fixed?
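The target-by-target checks above can be collected in one pass. A minimal diagnostic sketch, not ModelBox- or CANN-specific, and assuming `run_check()` may either raise or merely print on failure depending on the MindSpore version (so only hard exceptions are recorded as failures):

```python
def probe_device_targets(targets=("Ascend", "GPU", "CPU")):
    """Report which MindSpore device targets initialize on this machine.

    Returns an empty dict when mindspore itself is not importable.
    run_check() may print rather than raise on failure in some versions,
    so a False here is only recorded for hard exceptions.
    """
    try:
        import mindspore
    except ImportError:
        return {}
    results = {}
    for target in targets:
        try:
            mindspore.set_context(device_target=target)
            mindspore.run_check()
            results[target] = True
        except Exception:
            results[target] = False
    return results

print(probe_device_targets())
```

On the poster's box this would be expected to show CPU initializing while Ascend fails, matching the REPL session above; the 'mindspore:cpu' flowunit error from ModelBox is a separate capability check and needs to be confirmed against the ModelBox docs.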
  • [Help Wanted] Help connecting an RK3568 to the HiLens platform
    When connecting an RK3568 to the HiLens platform, which protocol is used for transmission?
  • [Help Wanted] How do I make certain flowunits in ModelBox execute only once?
    In ModelBox, how can I make some flowunits execute only once?
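In ModelBox, one-time setup usually belongs in a flowunit's open/initialization hook, which the framework calls once when the graph is built (worth confirming against the ModelBox flowunit docs for your SDK version). As a framework-agnostic fallback, any callable can be wrapped in a run-once guard; a minimal sketch in plain Python:

```python
import threading

class RunOnce:
    """Call the wrapped function at most once; later calls return the cached result."""

    def __init__(self, fn):
        self._fn = fn
        self._lock = threading.Lock()   # safe even if flowunits run on several threads
        self._done = False
        self._result = None

    def __call__(self, *args, **kwargs):
        with self._lock:
            if not self._done:
                self._result = self._fn(*args, **kwargs)
                self._done = True
        return self._result

calls = []

@RunOnce
def init_model():
    """Hypothetical expensive setup that must happen exactly once."""
    calls.append(1)
    return "model-handle"

init_model()
init_model()
print(calls)  # -> [1]
```

The per-call work of the flowunit stays in its normal process path; only the guarded function body is skipped after the first call.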
  • [Help Wanted] Combining ModelBox with the Huawei Cloud IoT platform
    Can ModelBox be combined with Huawei Cloud IoT, reporting data properties over MQTT and triggering alarms? Is this feasible?
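In principle yes: the device side publishes a property report over MQTT, and a rule configured on the IoT platform turns the reported value into an alarm. A minimal payload-building sketch, assuming the IoTDA report topic `$oc/devices/{device_id}/sys/properties/report` and the services/properties JSON shape (both should be verified against the current IoTDA documentation); the device id and service id below are hypothetical:

```python
import json
import datetime

def build_property_report(service_id, properties):
    """Build the JSON body for an IoTDA-style device property report."""
    return json.dumps({
        "services": [{
            "service_id": service_id,
            "properties": properties,
            # IoTDA event_time format: yyyyMMdd'T'HHmmss'Z' (UTC)
            "event_time": datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ"),
        }]
    })

# Hypothetical identifiers for illustration only.
topic = "$oc/devices/{}/sys/properties/report".format("my-device-id")
payload = build_property_report("Alarm", {"alarm_level": 2})

# Publishing is then one call with any MQTT client, e.g. paho-mqtt:
#   client.publish(topic, payload, qos=1)
print(topic)
print(payload)
```

The alarm itself is not sent by the device: a platform-side rule matching, say, `alarm_level >= 2` raises it, so the device code only has to report properties.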
  • [Help Wanted] ModelBox RTSP streaming question
    For ModelBox's final RTSP output, can the stream only be pushed to my own PC, or can it also be pushed to an Android device, assuming the Android side has a stream-pulling app?
  • [Help Wanted] ModelBox error
    How should I fix this error?
  • [Help Wanted] ModelBox question
    How do I resolve this error?
  • [Help Wanted] ModelBox graph orchestration
    How do I solve this problem?
  • [Help Wanted] ModelBox on RK3568 help
    Following this tutorial, I tried to install the libatomic.so.1 library, but it keeps failing; the error is shown above.
  • [Help Wanted] ModelBox help
    ModelBox outputs in WHC format, but a typical PyTorch-trained model expects CWH input. Is there a way to resolve this?
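One common fix is to transpose the axes between the ModelBox output and the model input; a minimal NumPy sketch, with the axis names assumed to match the question (WHC in, CWH out):

```python
import numpy as np

# The pipeline hands over a (W, H, C) array; the model wants (C, W, H).
whc = np.arange(2 * 3 * 4, dtype=np.float32).reshape(2, 3, 4)  # W=2, H=3, C=4

# np.transpose only rearranges strides, so no pixel data is copied here.
cwh = np.transpose(whc, (2, 0, 1))

print(cwh.shape)  # -> (4, 2, 3)
```

Alternatively the same transpose can live in a custom preprocessing flowunit, or be baked into the exported model's first layer so the ModelBox graph stays unchanged.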
  • [Help Wanted] Help needed
    Does anyone know what's going on here?
  • [Help Wanted] Help
    Does anyone know what's going on here?
  • [Help Wanted] TensorFlow-to-RKNN conversion fails
    Converting a tf_pb model to rknn fails with: google.protobuf.message.DecodeError: Error parsing message. Conversion script: https://developer.huaweicloud.com/develop/aigallery/notebook/detail?id=233aefd9-e6ec-4cbd-b66d-7053ecbbdbfb. Reference: training environment is tensorflow-gpu 1.15.