• [Help Request] SDK not found on an Ascend GPU server
    Environment: Docker image swr.cn-central-221.ovaijisuan.com/mindformers/mindformers1.0_mindspore2.2.11:aarch_20240125, started with:
        docker run -it -u root \
            --ipc=host \
            --network host \
            --device=/dev/davinci0 \
            --device=/dev/davinci_manager \
            --device=/dev/devmm_svm \
            --device=/dev/hisi_hdc \
            -v /etc/localtime:/etc/localtime \
            -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
            -v /var/log/npu/:/usr/slog \
            -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
            --name bc2 \
            swr.cn-central-221.ovaijisuan.com/mindformers/mindformers1.0_mindspore2.2.11:aarch_20240125 \
            /bin/bash
    Conda environment: mindspore2.2.11_py39, with the latest mindformers (https://gitee.com/mindspore/mindformers) installed via build.sh.
    Problem 1: hccn_tool cannot be found when running:
        python ./mindformers/tools/hccl_tools.py --device_num "[0,1)"
    Problem 2: weight conversion fails:
        python ./research/baichuan/convert_weight.py --torch_ckpt_path TORCH_CKPT_PATH --mindspore_ckpt_path MS_CKPT_NAME
    Problem 3: inference fails. Command:
        python run_baichuan2.py \
            --config run_baichuan2_7b.yaml \
            --run_mode predict \
            --use_parallel False \
            --load_checkpoint '../../models/Baichuan2-7B-Chat/Baichuan2_7B_Chat.ckpt' \
            --auto_trans_ckpt False \
            --predict_data "<reserved_106>你是谁?<reserved_107>"
    Error output:
        2024-03-19 09:02:31,847 - mindformers[mindformers/models/llama/llama_config.py:199] - WARNING - Argument `compute_in_2d` is deprecated.
        2024-03-19 09:02:31,847 - mindformers[mindformers/version_control.py:62] - INFO - The Cell Reuse compilation acceleration feature is not supported when the environment variable ENABLE_CELL_REUSE is 0 or MindSpore version is earlier than 2.1.0 or stand_alone mode or pipeline_stages <= 1
        2024-03-19 09:02:31,847 - mindformers[mindformers/version_control.py:66] - INFO - The current ENABLE_CELL_REUSE=0, please set the environment variable as follows: export ENABLE_CELL_REUSE=1 to enable the Cell Reuse compilation acceleration feature.
        2024-03-19 09:02:31,847 - mindformers[mindformers/version_control.py:72] - INFO - The Cell Reuse compilation acceleration feature does not support single-card mode. This feature is disabled by default. ENABLE_CELL_REUSE=1 does not take effect.
        2024-03-19 09:02:31,848 - mindformers[mindformers/version_control.py:75] - INFO - The Cell Reuse compilation acceleration feature only works in pipeline parallel mode(pipeline_stage>1). Current pipeline stage=1, the feature is disabled by default.
        [WARNING] ME(304:281465918746208,MainProcess):2024-03-19-09:02:35.841.256 [mindspore/ops/primitive.py:228] The in_strategy of the operator in your network will not take effect in stand_alone mode. This means the shard function called in the network is ignored. If you want to enable it, please use semi auto or auto parallel mode by context.set_auto_parallel_context(parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL) or context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL)
        Traceback (most recent call last):
          File "/opt/mindformers/research/baichuan2/run_baichuan2_chat.py", line 237, in <module>
            main(config=args.config,
          File "/opt/mindformers/research/baichuan2/run_baichuan2_chat.py", line 109, in main
            network = model_dict[model_name](model_config)
          File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindformers/version_control.py", line 78, in decorator
            func(*args, **kwargs)
          File "/opt/mindformers/research/baichuan2/baichuan2_7b.py", line 363, in __init__
            self.model = Baichuan7BV2Model(config=config)
          File "/opt/mindformers/research/baichuan2/baichuan2_7b.py", line 114, in __init__
            self.casual_mask = LowerTriangularMaskWithDynamic(seq_length=config.seq_length,
          File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindformers/tools/logger.py", line 575, in wrapper
            res = func(*args, **kwargs)
          File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindformers/modules/transformer/transformer.py", line 888, in __init__
            self.lower_triangle_mask = ops.cast(Tensor(np.tril(np.ones(shape=(seq_length, seq_length))), mstype.float32),
          File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/ops/primitive.py", line 314, in __call__
            return _run_op(self, self.name, args)
          File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/ops/primitive.py", line 913, in _run_op
            stub = _pynative_executor.run_op_async(obj, op_name, args)
          File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1186, in run_op_async
            return self._executor.run_op_async(*args)
        RuntimeError: The pointer[res_manager_] is null.
        ----------------------------------------------------
        Framework Unexpected Exception Raised:
        ----------------------------------------------------
        This exception is caused by framework's unexpected error. Please create an issue at https://gitee.com/mindspore/mindspore/issues to get help.
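    A hedged first step for this kind of failure: the RuntimeError fires on the very first operator launch (a Tensor cast in the mask setup), which may point at device/runtime initialization rather than the model itself. Before debugging Baichuan2, it is worth confirming from inside the container that the NPU is reachable; the run_check call below is the same sanity check used in a later thread in this digest.
        # Sanity-check the Ascend runtime inside the container (hedged suggestion):
        npu-smi info
        python -c "import mindspore; mindspore.set_context(device_target='Ascend'); mindspore.run_check()"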
  • How to deploy a BERT model on the Huawei Ascend 910B, and is there a mature Docker image?
    I fine-tuned a BERT model with the transformers library for a simple text-classification task, and now need to deploy it on a Huawei Ascend 910B. Two questions: 1. Does this forum have a detailed tutorial for deploying a BERT model on the Ascend 910B, the kind that starts from zero? 2. Is there an official image into which I could simply swap my model for deployment?
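    For context, a minimal sketch of the usual offline-inference route, not an official tutorial: export the fine-tuned model to ONNX, then convert it to .om with ATC from the CANN toolkit. The file names, input shape, and --soc_version string below are assumptions to adapt.
        # 5 = ONNX input; names, shapes, and SoC string are placeholders
        # (check `npu-smi info` for the exact chip on the target machine).
        atc --framework=5 \
            --model=bert_cls.onnx \
            --output=bert_cls \
            --input_shape="input_ids:1,128;attention_mask:1,128;token_type_ids:1,128" \
            --soc_version=Ascend910B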
  • [Help Request] Converting the sample model Transformer-SSL fails: EA0000: Compile operator failed, cause: Tensor temp_iou_ub appiles buffer size(156160B) more than
    ATC run failed, Please check the detail log, Try 'atc --help' for more information
    EA0000: Compile operator failed, cause: Tensor temp_iou_ub appiles buffer size(156160B) more than available buffer size(14528B).
    File path: /usr/local/Ascend/ascend-toolkit/6.3.RC1/opp/built-in/op_impl/ai_core/tbe/impl/non_max_suppression_v7.py, line 1014
    The context code cause the exception is:
        1011        temp_area_ub = tik_instance.Tensor("float16", [BURST_PROPOSAL_NUM],
        1012                                           name="temp_area_ub", scope=tik.scope_ubuf)
        1013        temp_iou_ub = \
        1014 ->         tik_instance.Tensor("float16", [ceil_div(total_output_proposal_num, RPN_PROPOSAL_NUM) * RPN_PROPOSAL_NUM + 128,
        1015                                            16],
        1016                                name="temp_iou_ub", scope=tik.scope_ubuf)
        1017        temp_join_ub = \
    Traceback:
        File /usr/local/Ascend/ascend-toolkit/6.3.RC1/opp/built-in/op_impl/ai_core/tbe/impl/non_max_suppression_v7.py, line 1014, in do_nms_compute
            tik_instance.Tensor("float16", [ceil_div(total_output_proposal_num, RPN_PROPOSAL_NUM) * RPN_PROPOSAL_NUM + 128,
    TraceBack (most recent call last):
        Failed to compile Op [PartitionedCall_NonMaxSuppression_8285_NonMaxSuppressionV6_117,[NonMaxSuppression_8285,NonMaxSuppression_8285]]. (oppath: [Compile /usr/local/Ascend/ascend-toolkit/6.3.RC1/opp/built-in/op_impl/ai_core/tbe/impl/non_max_suppression_v7.py failed with errormsg/stack:
            File "/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/tik/tik_lib/tik_check_util.py", line 313, in print_error_msg
                raise RuntimeError(dict_arg)
            RuntimeError: {'errCode': 'EA0000', 'message': '[same EA0000 message and code context as above]', 'traceback': '[same traceback as above]'}
        ], optype: [NonMaxSuppressionV7])
        Compile op[PartitionedCall_NonMaxSuppression_8285_NonMaxSuppressionV6_117,[NonMaxSuppression_8285,NonMaxSuppression_8285]] failed, oppath[/usr/local/Ascend/ascend-toolkit/6.3.RC1/opp/built-in/op_impl/ai_core/tbe/impl/non_max_suppression_v7.py], optype[NonMaxSuppressionV7], taskID[1385].
        Please check op's compilation error message.[FUNC:ReportBuildErrMessage][FILE:fusion_op.cc][LINE:858]
        [SubGraphOpt][Compile][ProcFailedCompTask] Thread[281466487410816] recompile single op[PartitionedCall_NonMaxSuppression_8285_NonMaxSuppressionV6_117] failed[FUNC:ProcessAllFailedCompileTasks][FILE:tbe_op_store_adapter.cc][LINE:910]
        [SubGraphOpt][Compile][ParalCompOp] Thread[281466487410816] process fail task failed[FUNC:ParallelCompileOp][FILE:tbe_op_store_adapter.cc][LINE:950]
        [SubGraphOpt][Compile][CompOpOnly] CompileOp failed.[FUNC:CompileOpOnly][FILE:op_compiler.cc][LINE:988]
        [GraphOpt][FusedGraph][RunCompile] Failed to compile graph with compiler Normal mode Op Compiler[FUNC:SubGraphCompile][FILE:fe_graph_optimizer.cc][LINE:1245]
        Call OptimizeFusedGraph failed, ret:-1, engine_name:AIcoreEngine, graph_name:partition0_rank490_new_sub_graph936[FUNC:OptimizeSubGraph][FILE:graph_optimize.cc][LINE:131]
        subgraph 489 optimize failed[FUNC:OptimizeSubGraphWithMultiThreads][FILE:graph_manager.cc][LINE:748]
        build graph failed, graph id:0, ret:-1[FUNC:BuildModel][FILE:ge_generator.cc][LINE:1443]
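    The two byte counts in the EA0000 message make the cause quantifiable. temp_iou_ub is a float16 (2-byte) tensor of shape [ceil_div(N,16)*16 + 128, 16], where N is total_output_proposal_num, so inverting the sizes gives a back-of-envelope bound (hedged arithmetic only, derived from the log, not from the model):
        # rows = bytes / (16 columns * 2 bytes per float16); N is roughly rows - 128
        echo $(( 156160 / 32 - 128 ))   # ~4752 output proposals requested by the model
        echo $(( 14528  / 32 - 128 ))   # ~326 proposals would fit in the available UB
    In other words, the exported model asks the NMS operator for roughly 4752 output proposals while only about 326 fit in the unified buffer. Capping the NMS output count in the exported ONNX (for example via its max_output_boxes_per_class input) may let the operator compile, though that is a guess, not a verified fix.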
  • [AI] Converting the sample model Transformer-SSL fails: EA0000: Compile operator failed, cause: Tensor temp_iou_ub appiles buffer size(156160B) more than
    The model at https://gitee.com/ascend/ModelZoo-PyTorch/tree/master/ACL_PyTorch/contrib/cv/segmentation/Transformer-SSL fails during conversion to om, with exactly the same ATC error log as the previous thread: "EA0000: Compile operator failed, cause: Tensor temp_iou_ub appiles buffer size(156160B) more than available buffer size(14528B)" in non_max_suppression_v7.py, failing op NonMaxSuppressionV7, ending in "build graph failed, graph id:0, ret:-1[FUNC:BuildModel][FILE:ge_generator.cc][LINE:1443]".
  • [Help Request] Intermittent SVP_NPU inference error, error code: 200005
    Problem description: during repeated power-on/off testing of our device, we occasionally hit an svp_npu inference error. The flow and error log are as follows; any help reading the log is appreciated:
        [Func]:svp_npu_runtime_impl_get_device_and_stream_node_id [Line]:466 [Info]:Error, please set device or create context first
        [Func]:svp_npu_runtime_impl_execute_model_async [Line]:1519 [Info]:Error, get device and stream id failed when execute model async
        [Func]:svp_npu_model_execute_async [Line]:794 [Info]:Error, runtime execute model async failed
        failed at InferModelAsync: LINE: 82 with 0x30d45!
    Hardware model: SS928
    SDK version: [SVP_NPU] Version: [SS928V100V2.0.2.1 B050 Release], Build Time[Apr 27 2022, 16:54:46]
    Program flow after boot: on every boot the program first creates thread 1; after thread 1 finishes, it creates thread 2. The two threads run different models. Occasionally the very first inference call in thread 2 fails with the error above.
  • [Tech Share] February 2024 AI issues roundup
    Summary of February's issues:
    [1] Deploying Llama2 on an ECS Windows instance and trying to run it with MLC fails with an error; asking for help: cid:link_0
    [2] Atlas 300 Pro (atlas300P3): accessing an RTSP stream URL from inside a container reports "No route to host": cid:link_1
    [3] On ECS, the recommended machine-learning instances only come with NVIDIA cards; Huawei's own accelerators can be used on ModelArts, so why aren't they available on ECS yet? cid:link_2
    [4] Can the Ascend 310 be used to run Stable Diffusion? cid:link_3
    [5] acl init failed, errorCode = 100039: cid:link_4
  • [Help Request] acl init failed, errorCode = 100039
    We run two of our own projects on Atlas 300I hardware. The first project initially failed with "acl init failed, errorCode = 100039", which was resolved by running source ~/.bashrc. The second project, however, still reports "acl init failed, errorCode = 100039" and nothing resolves it. What could the problem be? npu-smi info executes correctly.
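    Given that the first project's error cleared after reloading the shell environment, a hedged guess is that the second project is launched from a shell or service where the CANN environment variables are not loaded. Something along these lines, with the toolkit path assumed from a default install:
        # Reload the user environment, as worked for the first project:
        source ~/.bashrc
        # Or load the toolkit's own environment script directly (install path assumed):
        source /usr/local/Ascend/ascend-toolkit/set_env.sh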
  • [Help Request] Can a MindIR model be used on an Ascend 310 device directly, without conversion?
    Can a MindIR model be used directly on an Ascend 310 device without converting it? If so, what are the concrete steps?
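    For reference, a hedged pointer rather than a confirmed answer: MindIR is the input format of MindSpore Lite, so one route that avoids ATC entirely is to run the .mindir through the Lite converter and runtime on the 310. The flag names below are from the MindSpore Lite converter and may differ across versions; verify against the docs for your release.
        # Hypothetical model names; --optimize=ascend_oriented targets Ascend backends.
        converter_lite --fmk=MINDIR --modelFile=model.mindir --outputFile=model --optimize=ascend_oriented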
  • [FAQ Roundup] In my own Ascend 910 environment, how do I convert MindIR to om, and what are the concrete steps?
    In my own Ascend 910 environment, how do I convert MindIR to om? What are the concrete steps?
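    A hedged sketch of the usual split (check the CANN documentation for your exact version): ATC does not consume MindIR directly. From a MindSpore training environment one typically exports the network in AIR format, e.g. mindspore.export(net, inputs, file_name='model', file_format='AIR'), and feeds the .air file to ATC; file names and the target SoC below are placeholders.
        # --framework=1 selects MindSpore/AIR input; soc_version is the deployment target.
        atc --framework=1 --model=model.air --output=model --soc_version=Ascend310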
  • [Help Request] Training fails on a Huawei Atlas 300-3010
    1. Successfully installed the Atlas 300-3010 driver and related packages, plus Ascend-cann-toolkit_5.1.RC2_linux-x86_64.run and mindspore_ascend-1.8.0-cp37-cp37m-linux_x86_64.whl.
    2. Ran:
        python3 -c "import mindspore;mindspore.set_context(device_target='Ascend');mindspore.run_check()"
       which printed:
        MindSpore version: 1.8.0
        MindSpore running check failed.
        Internal Error: Get Device HBM memory size failed, ret = 0, total HBM size :0
        ----------------------------------------------------
        - C++ Call Stack: (For framework developers)
        ----------------------------------------------------
        mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:53 Initialize
    3. Reused the classification sample from https://www.mindspore.cn/tutorials/zh-CN/r1.8/beginner/infer.html in a script named ResNet-1.8.py. The device is selected like this:
        parser = argparse.ArgumentParser(description='MindSpore Example')
        parser.add_argument('--device_target', type=str, default="Ascend", choices=['Ascend', 'GPU', 'CPU'])
        args = parser.parse_args()
        context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target)
       Running python3 ResNet-1.8.py produced:
        Delete parameter from checkpoint:  head.classifier.weight
        Delete parameter from checkpoint:  head.classifier.bias
        Delete parameter from checkpoint:  moments.head.classifier.weight
        Delete parameter from checkpoint:  moments.head.classifier.bias
        [WARNING] ME(46532:140370453165888,MainProcess):2024-01-26-13:57:07.963.642 [mindspore/train/serialization.py:712] For 'load_param_into_net', 2 parameters in the 'net' are not loaded, because they are not in the 'parameter_dict', please check whether the network structure is consistent when training and loading checkpoint.
        [WARNING] ME(46532:140370453165888,MainProcess):2024-01-26-13:57:07.963.854 [mindspore/train/serialization.py:714] head.classifier.weight is not loaded.
        [WARNING] ME(46532:140370453165888,MainProcess):2024-01-26-13:57:07.963.911 [mindspore/train/serialization.py:714] head.classifier.bias is not loaded.
        Traceback (most recent call last):
          File "ResNet-1.8.py", line 122, in <module>
            callbacks=None)
          File "/usr/local/lib/python3.7/site-packages/mindspore/train/model.py", line 1069, in train
            initial_epoch=initial_epoch)
          File "/usr/local/lib/python3.7/site-packages/mindspore/train/model.py", line 96, in wrapper
            func(self, *args, **kwargs)
          File "/usr/local/lib/python3.7/site-packages/mindspore/train/model.py", line 622, in _train
            cb_params, sink_size, initial_epoch, valid_infos)
          File "/usr/local/lib/python3.7/site-packages/mindspore/train/model.py", line 681, in _train_dataset_sink_process
            dataset_helper=dataset_helper)
          File "/usr/local/lib/python3.7/site-packages/mindspore/train/model.py", line 437, in _exec_preprocess
            dataset_helper = DatasetHelper(dataset, dataset_sink_mode, sink_size, epoch_num)
          File "/usr/local/lib/python3.7/site-packages/mindspore/train/dataset_helper.py", line 338, in __init__
            self.iter = iterclass(dataset, sink_size, epoch_num)
          File "/usr/local/lib/python3.7/site-packages/mindspore/train/dataset_helper.py", line 557, in __init__
            super().__init__(dataset, sink_size, epoch_num)
          File "/usr/local/lib/python3.7/site-packages/mindspore/train/dataset_helper.py", line 455, in __init__
            is_dynamic_shape=self.dynamic_shape)
          File "/usr/local/lib/python3.7/site-packages/mindspore/train/_utils.py", line 77, in _exec_datagraph
            need_run=need_run)
          File "/usr/local/lib/python3.7/site-packages/mindspore/common/api.py", line 1009, in init_dataset
            need_run=need_run):
        RuntimeError: Internal Error: Get Device HBM memory size failed, ret = 0, total HBM size :0
        ----------------------------------------------------
        - C++ Call Stack: (For framework developers)
        ----------------------------------------------------
        mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:53 Initialize
    4. npu-smi info reports:
        +--------------------------------------------------------------------------------------------------------+
        | npu-smi 22.0.4                                   Version: 22.0.4                                        |
        +-------------------------------+-----------------+------------------------------------------------------+
        | NPU     Name                  | Health          | Power(W)     Temp(C)           Hugepages-Usage(page) |
        | Chip    Device                | Bus-Id          | AICore(%)    Memory-Usage(MB)                        |
        +===============================+=================+======================================================+
        | 1       310                   | OK              | 12.8         59                0    / 969            |
        | 0       0                     | 0000:3D:00.0    | 0            666  / 7759                             |
        +-------------------------------+-----------------+------------------------------------------------------+
        | 1       310                   | OK              | 12.8         60                0    / 969            |
        | 1       1                     | 0000:3E:00.0    | 0            626  / 7759                             |
        +-------------------------------+-----------------+------------------------------------------------------+
        | 1       310                   | OK              | 12.8         59                0    / 969            |
        | 2       2                     | 0000:3F:00.0    | 0            628  / 7759                             |
        +-------------------------------+-----------------+------------------------------------------------------+
        | 1       310                   | OK              | 12.8         59                0    / 969            |
        | 3       3                     | 0000:40:00.0    | 0            627  / 7759                             |
        +-------------------------------+-----------------+------------------------------------------------------+
  • [Dev Environment] Preinstalled TensorFlow notebook image fails to use the Ascend 910
    Image: tensorflow1.15-cann5.1.0-py3.7-euler2.8.3. Flavor: Ascend: 1*Ascend910 | ARM: 24 cores, 96 GB. Following the ModelArts doc "Step 1: Copy the model package in Notebook" (AI development platform ModelArts / image management / creating an AI application from a custom image (inference deployment) / debugging directly in the dev environment and saving the image for inference, huaweicloud.com), I am debugging my own TensorFlow SavedModel. After executing run.sh the model is loaded into memory, but npu-smi info shows no usage at all: HBM sits at 0% and the 910 is not being used. Where is this going wrong, and what directions should I take for debugging?
  • [Hot Event] [Expert Q&A with prizes] DTSE Tech Talk year-end finale live stream: ask questions during the stream and win a Huawei Cloud custom long-sleeve sweatshirt!!
    Winner announcement: thank you to everyone who took part; the winner list is as follows. Winners, please click here before January 14 to fill in your shipping address; addresses not submitted in time are treated as forfeiting the prize. Thanks again, and stay tuned for more HUAWEI CLOUD DTSE Tech Talk live streams.
    Live stream overview:
    [Topic] HUAWEI CLOUD "DTSE Tech Talk" year-end finale: AI creates unlimited possibilities
    [Time] January 5, 2023, 15:00-17:30
    [Speakers] Xia Fei, HUAWEI CLOUD EI DTSE technology evangelist; Zuo Wen, HUAWEI CLOUD media DTSE technology evangelist; Xiao Fei, HUAWEI CLOUD DTSE technology evangelist; Xu Weizhao, co-founder & VP of 今日人才; Zhou Rulin, Huawei MindSpore student evangelist; Xu Yi, HUAWEI CLOUD DTSE technology evangelist
    [About] As technology keeps accelerating, artificial intelligence has become an indispensable part of our lives. AI has brought unprecedented possibilities, changing how we work, how we live, and even how we understand the world. The DTT year-end finale will explore AI application areas and technology trends, and how enterprises and individuals can make the technical leap in the era of intelligence.
    Stream link: cid:link_2
    Event details: from now until January 8, post questions related to the live stream in this thread; the experts on duty will select the best questions, whose authors win a Huawei Cloud custom long-sleeve sweatshirt. Click here to learn about more events.
    Notes:
    1. Any submitted question found to be plagiarized, or to duplicate content covered during the stream, is disqualified.
    2. To ensure smooth prize delivery, send your shipping information by private message within 2 working days after the winners are announced; otherwise you are treated as having waived the prize.
    3. The winner announcement period ends on January 8, 2023; if no shipping information is provided by then, the prize is forfeited. Prizes ship within 30 working days after the announcement; please be patient.
    4. During the event, each ID (same name/phone/shipping address) may win each type of sub-activity only once; a duplicate win passes to the next eligible developer, with a single pass-down.
    5. If a prize goes out of stock, HUAWEI CLOUD staff will substitute a prize of equal value; a winner who does not accept this rule is treated as waiving the prize.
    6. For anything else, see the HUAWEI CLOUD community standard event rules.
  • [Hot Event] DTSE Tech Talk year-end finale live stream: share your personal DTT memories and win a full set of Huawei Cloud Yunbao mascot plushies, yes, the full set!!
    Winner announcement: thank you to everyone who took part; the winner list is as follows. Winners, please click here before January 14 to fill in your shipping address; addresses not submitted in time are treated as forfeiting the prize. Thanks again, and stay tuned for more HUAWEI CLOUD DTSE Tech Talk live streams.
    Live stream details are the same as in the expert Q&A thread above: HUAWEI CLOUD "DTSE Tech Talk" year-end finale "AI creates unlimited possibilities", January 5, 2023, 15:00-17:30, same speaker lineup. Stream link: cid:link_2
    Event details: from now until January 7, 23:59, share in this thread, in 300+ words, how the 2023 DTT live streams influenced you and what they brought you; the best entries win a set of Huawei Cloud Yunbao mascot plushies. Click here to learn about more events.
    Notes: rules 1-6 are the same as in the thread above, except note 1 reads: entries found to be plagiarized or to be low-effort filler are disqualified.
  • [Help Request] File in GST_PLUGIN_PATH is invalid
    E20231226 03:38:29.322641 314620 FileUtils.cpp:471] Check Owner group permission failed: Current permission is 5, but required no greater than 4. (Code = 1001, Message = "General Failed")
    E20231226 03:38:29.325861 314620 MxStreamManagerDptr.cpp:364] File in GST_PLUGIN_PATH is invalid. (Code = 1001, Message = "General Failed")
    E20231226 03:38:29.325950 314620 MxStreamManagerDptr.cpp:384] Check directories in GST_PLUGIN_PATH failed. (Code = 1001, Message = "General Failed")
    E20231226 03:38:29.326009 314620 MxStreamManagerDptr.cpp:465] Check GST_PLUGIN_PATH failed. (Code = 1001, Message = "General Failed")
    E20231226 03:38:29.326079 314620 MxStreamManagerDptr.cpp:503] Handle environment: GST_PLUGIN_PATH failed. (Code = 1001, Message = "General Failed")
    E20231226 03:38:29.326140 314620 MxStreamManager.cpp:89] Before creating a pipeline, please set related environment variables. The following two methods are available: (1) Permanent method: set the environment variable in the ~/.bashrc file of the current user, and run the "source ~/.bashrc" command manually in the current window. (2) Temporary method: run the export command to import the required environment variables in the current window. (Code = 1001, Message = "General Failed")
    E20231226 03:38:29.326210 314620 main.cpp:62] Failed to init Stream manager, ret = 1001.
    This is the Ascend DeepSort sample; the development environment is the Ascend AI 200I DK.
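    Reading the first log line literally, the group-permission digit on a file under GST_PLUGIN_PATH is 5 (r-x) but must be at most 4 (r--), so a hedged fix is to strip the group (and, to be safe, other) write/execute bits on the plugin files and then re-export the environment, as the message itself suggests:
        # Drop group/other write+execute on plugin files (hedged; adjust the path as needed):
        find "$GST_PLUGIN_PATH" -type f -exec chmod g-wx,o-wx {} +
        source ~/.bashrc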
  • [Hot Event] [Watch the stream, win big!!] DTSE Tech Talk year-end finale live stream: join the live interactions to win Huawei Freelace Pro wireless earphones and assorted Huawei goodies!!
    Live stream overview: the same stream as the two threads above. [Topic] HUAWEI CLOUD "DTSE Tech Talk" year-end finale: AI creates unlimited possibilities. [Time] January 5, 2023, 15:00-17:30. [Speakers] Xia Fei, Zuo Wen, Xiao Fei, Xu Weizhao, Zhou Rulin, and Xu Yi, as listed above. [About] As technology keeps accelerating, AI has become an indispensable part of our lives; the DTT year-end finale explores AI application areas and technology trends, and how enterprises and individuals can make the technical leap in the era of intelligence. Stream link: cid:link_2
    Event details:
    - Sign-up and share draw: from now until January 5, 18:00, register for the stream and share the poster (long-press the image on this page to save it) to your WeChat Moments for a random draw of HUAWEI CLOUD custom polo shirts.
    - My DTT memories: from now until January 7, 23:59, share in a forum post of 300+ words how the 2023 DTT streams influenced and helped you; the best entries win a set of Huawei Cloud Yunbao mascot plushies.
    - Survey with prizes: from now until January 5, 18:00, fill in the live-stream survey for a draw of HUAWEI CLOUD custom wireless mice.
    - Passphrase draw: January 5, 15:00-18:00, post the passphrase "DTT圆满收官" in the official stream's comment area for a draw of HUAWEI CLOUD custom umbrellas.
    - Q&A with prizes: January 5, 15:00-18:00, ask stream-related questions during the stream; the best questions win HUAWEI CLOUD custom backpacks.
    - I love DTT: January 5, 15:00-18:00, watch the stream for more than 45 minutes to enter a draw for Huawei Freelace Pro wireless earphones.
    - Expert Q&A with prizes: from now until January 8, post stream-related questions in the designated forum thread; the experts on duty pick the best questions, which win HUAWEI CLOUD custom long-sleeve sweatshirts.
    - Bonus material: from now until January 7, join the live-stream chat group to get the e-book "DTT Tech Open Course 2023 Highlights".
    - Replay: after 18:00 on January 5, return to the original stream page to review the stream.
    Notes: rules 1-6 are the same as in the two event threads above (plagiarized entries disqualified; shipping info due by private message within 2 working days of the winner announcement; announcement period ends January 8, 2023, with prizes shipped within 30 working days; one win per ID per sub-activity type, with a single pass-down on duplicates; out-of-stock prizes replaced with equal value; see the HUAWEI CLOUD community standard event rules for the rest).