-
[AI] Converting the sample model Transformer-SSL fails with: EA0000: Compile operator failed, cause: Tensor temp_iou_ub applies buffer size (156160B) more than available buffer size (14528B)

The model at https://gitee.com/ascend/ModelZoo-PyTorch/tree/master/ACL_PyTorch/contrib/cv/segmentation/Transformer-SSL fails during conversion to om:

ATC run failed, Please check the detail log, Try 'atc --help' for more information
EA0000: Compile operator failed, cause: Tensor temp_iou_ub appiles buffer size(156160B) more than available buffer size(14528B).
File path: /usr/local/Ascend/ascend-toolkit/6.3.RC1/opp/built-in/op_impl/ai_core/tbe/impl/non_max_suppression_v7.py, line 1014
The context code that causes the exception is:
1011     temp_area_ub = tik_instance.Tensor("float16", [BURST_PROPOSAL_NUM],
1012                                        name="temp_area_ub", scope=tik.scope_ubuf)
1013     temp_iou_ub = \
1014 ->      tik_instance.Tensor("float16", [ceil_div(total_output_proposal_num, RPN_PROPOSAL_NUM) * RPN_PROPOSAL_NUM + 128,
1015                             16],
1016                             name="temp_iou_ub", scope=tik.scope_ubuf)
1017     temp_join_ub = \

Traceback: File /usr/local/Ascend/ascend-toolkit/6.3.RC1/opp/built-in/op_impl/ai_core/tbe/impl/non_max_suppression_v7.py, line 1014, in do_nms_compute
TraceBack (most recent call last):
Failed to compile Op [PartitionedCall_NonMaxSuppression_8285_NonMaxSuppressionV6_117,[NonMaxSuppression_8285,NonMaxSuppression_8285]].
(oppath: Compile /usr/local/Ascend/ascend-toolkit/6.3.RC1/opp/built-in/op_impl/ai_core/tbe/impl/non_max_suppression_v7.py failed; the TIK check in /usr/local/Ascend/ascend-toolkit/latest/python/site-packages/tbe/tik/tik_lib/tik_check_util.py, line 313, in print_error_msg, raises RuntimeError with errCode 'EA0000' and the same temp_iou_ub message as above; optype: [NonMaxSuppressionV7])
Compile op[PartitionedCall_NonMaxSuppression_8285_NonMaxSuppressionV6_117,[NonMaxSuppression_8285,NonMaxSuppression_8285]] failed, oppath[/usr/local/Ascend/ascend-toolkit/6.3.RC1/opp/built-in/op_impl/ai_core/tbe/impl/non_max_suppression_v7.py], optype[NonMaxSuppressionV7], taskID[1385].
Please check op's compilation error message. [FUNC:ReportBuildErrMessage][FILE:fusion_op.cc][LINE:858]
[SubGraphOpt][Compile][ProcFailedCompTask] Thread[281466487410816] recompile single op[PartitionedCall_NonMaxSuppression_8285_NonMaxSuppressionV6_117] failed [FUNC:ProcessAllFailedCompileTasks][FILE:tbe_op_store_adapter.cc][LINE:910]
[SubGraphOpt][Compile][ParalCompOp] Thread[281466487410816] process fail task failed [FUNC:ParallelCompileOp][FILE:tbe_op_store_adapter.cc][LINE:950]
[SubGraphOpt][Compile][CompOpOnly] CompileOp failed. [FUNC:CompileOpOnly][FILE:op_compiler.cc][LINE:988]
[GraphOpt][FusedGraph][RunCompile] Failed to compile graph with compiler Normal mode Op Compiler [FUNC:SubGraphCompile][FILE:fe_graph_optimizer.cc][LINE:1245]
Call OptimizeFusedGraph failed, ret:-1, engine_name:AIcoreEngine, graph_name:partition0_rank490_new_sub_graph936 [FUNC:OptimizeSubGraph][FILE:graph_optimize.cc][LINE:131]
subgraph 489 optimize failed [FUNC:OptimizeSubGraphWithMultiThreads][FILE:graph_manager.cc][LINE:748]
build graph failed, graph id:0, ret:-1 [FUNC:BuildModel][FILE:ge_generator.cc][LINE:1443]
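For reference, the requested buffer size in the error can be reproduced from the shape expression on line 1014: temp_iou_ub is float16 (2 bytes) with shape [ceil_div(N, RPN_PROPOSAL_NUM) * RPN_PROPOSAL_NUM + 128, 16]. A small sketch (assuming RPN_PROPOSAL_NUM is 16, which the reported 156160B is consistent with) shows the model is requesting far more NMS output proposals than fit in the 14528B of unified buffer left for this tensor, which is why reducing the NMS output box count (for example the max_output_boxes_per_class input of the ONNX NonMaxSuppression node) is a common workaround:

```python
def ceil_div(a: int, b: int) -> int:
    return (a + b - 1) // b

RPN_PROPOSAL_NUM = 16   # assumption: the proposal alignment used by the op
FP16_BYTES = 2

def temp_iou_ub_bytes(total_output_proposal_num: int) -> int:
    """Byte size of temp_iou_ub as allocated on line 1014 of non_max_suppression_v7.py."""
    rows = ceil_div(total_output_proposal_num, RPN_PROPOSAL_NUM) * RPN_PROPOSAL_NUM + 128
    return rows * 16 * FP16_BYTES

# The failing model requests 156160B, which corresponds to 4752 output proposals:
print(temp_iou_ub_bytes(4752))   # -> 156160

# The largest proposal count that still fits in the reported 14528B budget:
n = 0
while temp_iou_ub_bytes(n + RPN_PROPOSAL_NUM) <= 14528:
    n += RPN_PROPOSAL_NUM
print(n)                         # -> 320
```

Under these assumptions only a few hundred output boxes fit in the unified buffer, so trimming the model's NMS output limit before re-running atc is worth trying.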
-
Problem description: during repeated power-on/off testing of our machines, we occasionally see svp_npu inference failures. The flow and logs are as follows; could someone passing by take a look at the error log?

[Func]:svp_npu_runtime_impl_get_device_and_stream_node_id [Line]:466 [Info]:Error, please set device or create context first
[Func]:svp_npu_runtime_impl_execute_model_async [Line]:1519 [Info]:Error, get device and stream id failed when execute model async
[Func]:svp_npu_model_execute_async [Line]:794 [Info]:Error, runtime execute model async failed
failed at InferModelAsync: LINE: 82 with 0x30d45!

Hardware: SS928. SDK version: [SVP_NPU] Version: [SS928V100V2.0.2.1 B050 Release], Build Time[Apr 27 2022, 16:54:46]
Program flow after boot: on each boot, the program first creates thread 1; after thread 1 finishes, it creates thread 2. The two threads run different models. Occasionally the very first inference call in thread 2 fails with the error above.
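The first log line ("please set device or create context first") suggests the runtime's device/context binding is per thread, so thread 2 cannot rely on whatever thread 1 set up. A minimal sketch of that pattern, with placeholder set_device / create_context / execute_model functions standing in for the real SVP NPU runtime calls (the actual API names are not confirmed here):

```python
import threading

# Thread-local stand-in for the runtime's per-thread device/context binding.
# The three functions below are illustrative placeholders, not the real API.
_bound = threading.local()

def set_device(dev_id):
    _bound.device = dev_id

def create_context():
    if getattr(_bound, "device", None) is None:
        raise RuntimeError("please set device or create context first")
    _bound.context = object()

def execute_model(name):
    if getattr(_bound, "context", None) is None:
        raise RuntimeError("please set device or create context first")
    return f"{name}: ok"

results = {}

def worker(model_name):
    # Every inference thread must bind the device and create (or attach to)
    # a context itself -- a binding made in another thread is not inherited.
    set_device(0)
    create_context()
    results[model_name] = execute_model(model_name)

t1 = threading.Thread(target=worker, args=("model_1",)); t1.start(); t1.join()
t2 = threading.Thread(target=worker, args=("model_2",)); t2.start(); t2.join()
print(results)   # both threads succeed because each bound its own context
```

If thread 2 skipped the set_device/create_context step in this sketch, it would fail with exactly the "please set device or create context first" message, so checking that thread 2 performs its own binding before its first inference is a reasonable first debugging step.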
-
February issue summary:
[1] Deploying Llama2 on an ECS Windows instance and trying to run it with MLC fails with the following error, help wanted: cid:link_0
[2] Atlas 300P3: accessing an RTSP stream URL from inside a container fails with "No route to host": cid:link_1
[3] On ECS, the machine learning recommendations list only NVIDIA cards; Huawei's own accelerators are usable on ModelArts, so why are they not yet offered on ECS? cid:link_2
[4] Can the Ascend 310 be used to run Stable Diffusion? cid:link_3
[5] acl init failed, errorCode = 100039: cid:link_4
-
I ran two of my own projects on Atlas 300I hardware. The first project initially failed with "acl init failed, errorCode = 100039", which I resolved with `source ~/.bashrc`. The second project still fails with the same "acl init failed, errorCode = 100039" error, and I cannot resolve it. What could be the problem? `npu-smi info` executes correctly.
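Since `npu-smi info` works and `source ~/.bashrc` fixed the first project, an environment problem in the shell or service that launches the second project is a plausible cause. A first thing to check, assuming a default toolkit install path, is that the CANN environment script is sourced there too:

```shell
# Load the CANN environment for the current shell; adjust the install
# path if the toolkit lives elsewhere (the path here is an assumption).
source /usr/local/Ascend/ascend-toolkit/set_env.sh

# Verify that the libraries the ACL runtime needs are now on the path.
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -i ascend
```

If the second project is started from systemd, cron, or another non-login context, ~/.bashrc is not read there, which would explain why only that project still fails.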
-
Can a MindIR model be used directly on an Ascend 310 device without conversion? If so, what are the concrete steps?
-
In my own Ascend 910 environment, how do I convert MindIR to om? What are the concrete steps?
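For context on the two questions above: ATC does not consume MindIR directly. One commonly used route (sketched below; the ResNet-style file names are placeholders) is to export the network from MindSpore in AIR format and feed that to ATC, whose `--framework=1` selects MindSpore models; for running MindIR without ATC, MindSpore Lite's converter accepts `--fmk=MINDIR`:

```shell
# Step 1: on the 910, export the trained network to AIR from MindSpore
# (in Python: ms.export(net, dummy_input, file_name="resnet", file_format="AIR")).

# Step 2: convert the exported AIR file to om with ATC (framework 1 = MindSpore).
atc --framework=1 --model=resnet.air --output=resnet --soc_version=Ascend910

# Alternative for MindIR on 310-class inference devices: MindSpore Lite's
# converter tool, which takes MindIR as input directly.
converter_lite --fmk=MINDIR --modelFile=resnet.mindir --outputFile=resnet
```

This is a sketch of the general flow rather than a verified recipe; the exact tool versions and soc_version value depend on the installed CANN and hardware.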
-
1. Installed the Atlas 300-3010 drivers successfully, then installed Ascend-cann-toolkit_5.1.RC2_linux-x86_64.run and mindspore_ascend-1.8.0-cp37-cp37m-linux_x86_64.whl.

2. Running
python3 -c "import mindspore;mindspore.set_context(device_target='Ascend');mindspore.run_check()"
prints:
MindSpore version: 1.8.0
MindSpore running check failed.
Internal Error: Get Device HBM memory size failed, ret = 0, total HBM size :0
----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:53 Initialize

3. Reusing the classification code from https://www.mindspore.cn/tutorials/zh-CN/r1.8/beginner/infer.html, I wrote a Python file named ResNet-1.8.py. It selects the device as follows:

parser = argparse.ArgumentParser(description='MindSpore Example')
parser.add_argument('--device_target', type=str, default="Ascend", choices=['Ascend', 'GPU', 'CPU'])
args = parser.parse_args()
context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target)

Running python3 ResNet-1.8.py gives:
Delete parameter from checkpoint: head.classifier.weight
Delete parameter from checkpoint: head.classifier.bias
Delete parameter from checkpoint: moments.head.classifier.weight
Delete parameter from checkpoint: moments.head.classifier.bias
[WARNING] ME(46532:140370453165888,MainProcess):2024-01-26-13:57:07.963.642 [mindspore/train/serialization.py:712] For 'load_param_into_net', 2 parameters in the 'net' are not loaded, because they are not in the 'parameter_dict', please check whether the network structure is consistent when training and loading checkpoint.
[WARNING] ME(46532:140370453165888,MainProcess):2024-01-26-13:57:07.963.854 [mindspore/train/serialization.py:714] head.classifier.weight is not loaded.
[WARNING] ME(46532:140370453165888,MainProcess):2024-01-26-13:57:07.963.911 [mindspore/train/serialization.py:714] head.classifier.bias is not loaded.
Traceback (most recent call last):
  File "ResNet-1.8.py", line 122, in <module>
    callbacks=None)
  File "/usr/local/lib/python3.7/site-packages/mindspore/train/model.py", line 1069, in train
    initial_epoch=initial_epoch)
  File "/usr/local/lib/python3.7/site-packages/mindspore/train/model.py", line 96, in wrapper
    func(self, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/mindspore/train/model.py", line 622, in _train
    cb_params, sink_size, initial_epoch, valid_infos)
  File "/usr/local/lib/python3.7/site-packages/mindspore/train/model.py", line 681, in _train_dataset_sink_process
    dataset_helper=dataset_helper)
  File "/usr/local/lib/python3.7/site-packages/mindspore/train/model.py", line 437, in _exec_preprocess
    dataset_helper = DatasetHelper(dataset, dataset_sink_mode, sink_size, epoch_num)
  File "/usr/local/lib/python3.7/site-packages/mindspore/train/dataset_helper.py", line 338, in __init__
    self.iter = iterclass(dataset, sink_size, epoch_num)
  File "/usr/local/lib/python3.7/site-packages/mindspore/train/dataset_helper.py", line 557, in __init__
    super().__init__(dataset, sink_size, epoch_num)
  File "/usr/local/lib/python3.7/site-packages/mindspore/train/dataset_helper.py", line 455, in __init__
    is_dynamic_shape=self.dynamic_shape)
  File "/usr/local/lib/python3.7/site-packages/mindspore/train/_utils.py", line 77, in _exec_datagraph
    need_run=need_run)
  File "/usr/local/lib/python3.7/site-packages/mindspore/common/api.py", line 1009, in init_dataset
    need_run=need_run):
RuntimeError: Internal Error: Get Device HBM memory size failed, ret = 0, total HBM size :0
----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:53 Initialize

4. npu-smi info reports:
+--------------------------------------------------------------------------------------------------------+
| npu-smi 22.0.4                             Version: 22.0.4                                              |
+-------------------------------+-----------------+------------------------------------------------------+
| NPU   Name                    | Health          | Power(W)   Temp(C)          Hugepages-Usage(page)    |
| Chip  Device                  | Bus-Id          | AICore(%)  Memory-Usage(MB)                          |
+===============================+=================+======================================================+
| 1     310                     | OK              | 12.8       59               0    / 969               |
| 0     0                       | 0000:3D:00.0    | 0          666  / 7759                               |
+-------------------------------+-----------------+------------------------------------------------------+
| 1     310                     | OK              | 12.8       60               0    / 969               |
| 1     1                       | 0000:3E:00.0    | 0          626  / 7759                               |
+-------------------------------+-----------------+------------------------------------------------------+
| 1     310                     | OK              | 12.8       59               0    / 969               |
| 2     2                       | 0000:3F:00.0    | 0          628  / 7759                               |
+-------------------------------+-----------------+------------------------------------------------------+
| 1     310                     | OK              | 12.8       59               0    / 969               |
| 3     3                       | 0000:40:00.0    | 0          627  / 7759                               |
+===============================+=================+======================================================+
-
Image: tensorflow1.15-cann5.1.0-py3.7-euler2.8.3. Flavor: Ascend: 1*Ascend910 | ARM: 24 cores, 96 GB. Following "Step 1: Copy the model package in Notebook" from the ModelArts documentation (image management / creating an AI application from a custom image for inference deployment / debugging in the development environment and saving the image, huaweicloud.com), I am debugging a personal TensorFlow SavedModel. After executing run.sh, the model is loaded into memory, but `npu-smi info` shows no usage at all: HBM stays at 0% and the 910 is not used. Where did this go wrong, and what directions should I take to debug it?
-
Winner announcement: thank you all for taking part in this event; the list of winners is as follows. Winners, please click here before January 14 to fill in your shipping address; if not filled in by then, the prize is treated as forfeited. Thanks again, and follow the Huawei Cloud DTSE Tech Talk live streams for more events.

About the stream: [Topic] Huawei Cloud DTSE Tech Talk year-end finale: AI creates unlimited possibilities. [Time] January 5, 2023, 15:00-17:30. [Experts] 夏飞, Huawei Cloud EI DTSE technical evangelist; 左雯, Huawei Cloud Media DTSE technical evangelist; 肖斐, Huawei Cloud DTSE technical evangelist; 徐伟招, co-founder & VP of 今日人才; 周汝霖, Huawei MindSpore student evangelist; 徐毅, Huawei Cloud DTSE technical evangelist. [About] As technology keeps accelerating, AI has become an indispensable part of our lives, opening unprecedented possibilities and changing how we work, live, and understand the world. The DTT year-end finale explores AI application areas and technology trends, and how companies and individuals can make the technical leap in the age of intelligence. Stream link: cid:link_2

Activity: from now until January 8, ask stream-related questions in this thread; the on-duty experts will pick the best questions, whose authors win a Huawei Cloud long-sleeve hoodie. Click here for more activities.

Notes:
1. Entries found to be plagiarized or duplicated from the stream content are disqualified.
2. To receive your prize, send your shipping details by private message within 2 working days of the winner announcement; otherwise the prize is treated as forfeited.
3. The winner list is published until January 8, 2023; unconfirmed shipping details count as forfeiture. Prizes ship within 30 working days of the announcement; please be patient.
4. During the event, each ID (same name/phone/shipping address) can win a given sub-activity only once; duplicate wins pass to the next eligible developer, one pass only.
5. If a prize is out of stock, Huawei Cloud staff will substitute one of equal value; winners who decline this rule forfeit the prize.
6. For anything else, see the Huawei Cloud community general activity rules.
-
Winner announcement: thank you all for taking part in this event; the list of winners is as follows. Winners, please click here before January 14 to fill in your shipping address; if not filled in by then, the prize is treated as forfeited. Thanks again, and follow the Huawei Cloud DTSE Tech Talk live streams for more events.

[Topic] Huawei Cloud DTSE Tech Talk year-end finale: AI creates unlimited possibilities. [Time] January 5, 2023, 15:00-17:30. [Experts] 夏飞, Huawei Cloud EI DTSE technical evangelist; 左雯, Huawei Cloud Media DTSE technical evangelist; 肖斐, Huawei Cloud DTSE technical evangelist; 徐伟招, co-founder & VP of 今日人才; 周汝霖, Huawei MindSpore student evangelist; 徐毅, Huawei Cloud DTSE technical evangelist. [About] As technology keeps accelerating, AI has become an indispensable part of our lives, opening unprecedented possibilities and changing how we work, live, and understand the world. The DTT year-end finale explores AI application areas and technology trends, and how companies and individuals can make the technical leap in the age of intelligence. Stream link: cid:link_2

[Activity] From now until January 7, 23:59, share in this thread (300+ words) the influence, gains, and help that the 2023 DTT streams brought you; the best posts win a set of Huawei Cloud 云宝 mascot plush toys. Click here for more activities.

Notes:
1. Entries found to be plagiarized or low-effort filler are disqualified.
2. To receive your prize, send your shipping details by private message within 2 working days of the winner announcement; otherwise the prize is treated as forfeited.
3. The winner list is published until January 8, 2023; unconfirmed shipping details count as forfeiture. Prizes ship within 30 working days of the announcement; please be patient.
4. During the event, each ID (same name/phone/shipping address) can win a given sub-activity only once; duplicate wins pass to the next eligible developer, one pass only.
5. If a prize is out of stock, Huawei Cloud staff will substitute one of equal value; winners who decline this rule forfeit the prize.
6. For anything else, see the Huawei Cloud community general activity rules.
-
E20231226 03:38:29.322641 314620 FileUtils.cpp:471] Check Owner group permission failed: Current permission is 5, but required no greater than 4. (Code = 1001, Message = "General Failed")
E20231226 03:38:29.325861 314620 MxStreamManagerDptr.cpp:364] File in GST_PLUGIN_PATH is invalid. (Code = 1001, Message = "General Failed")
E20231226 03:38:29.325950 314620 MxStreamManagerDptr.cpp:384] Check directories in GST_PLUGIN_PATH failed. (Code = 1001, Message = "General Failed")
E20231226 03:38:29.326009 314620 MxStreamManagerDptr.cpp:465] Check GST_PLUGIN_PATH failed. (Code = 1001, Message = "General Failed")
E20231226 03:38:29.326079 314620 MxStreamManagerDptr.cpp:503] Handle environment: GST_PLUGIN_PATH failed. (Code = 1001, Message = "General Failed")
E20231226 03:38:29.326140 314620 MxStreamManager.cpp:89] Before creating a pipeline, please set related environment variables. The following two methods are available: (1) Permanent method: set the environment variable in the ~/.bashrc file of the current user, and run the "source ~/.bashrc" command manually in the current window. (2) Temporary method: run the export command to import the required environment variables in the current window. (Code = 1001, Message = "General Failed")
E20231226 03:38:29.326210 314620 main.cpp:62] Failed to init Stream manager, ret = 1001.

This is the Ascend DeepSort sample; the development environment is an Ascend AI 200I DK.
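The first log line is the actual root cause: a file under GST_PLUGIN_PATH has group permission 5 (r-x), while the MxStream security check requires group permission no greater than 4 (read-only), and the later GST_PLUGIN_PATH errors cascade from that. A sketch of a fix, assuming GST_PLUGIN_PATH already points at the SDK plugin directories:

```shell
# Show which entries GST_PLUGIN_PATH contains.
echo "$GST_PLUGIN_PATH" | tr ':' '\n'

# Drop group write/execute on the plugin files so group permission is <= 4.
# Only files are tightened here; directories usually still need g+x to be
# traversable. Adjust the paths to your own GST_PLUGIN_PATH entries.
find ${GST_PLUGIN_PATH//:/ } -type f -exec chmod g-wx {} +
```

After tightening permissions, rerunning the sample should get past the FileUtils check; if a different file fails next, the log names it the same way.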
-
Stream overview: [Topic] Huawei Cloud DTSE Tech Talk year-end finale: AI creates unlimited possibilities. [Time] January 5, 2023, 15:00-17:30. [Experts] 夏飞, Huawei Cloud EI DTSE technical evangelist; 左雯, Huawei Cloud Media DTSE technical evangelist; 肖斐, Huawei Cloud DTSE technical evangelist; 徐伟招, co-founder & VP of 今日人才; 周汝霖, Huawei MindSpore student evangelist; 徐毅, Huawei Cloud DTSE technical evangelist. [About] As technology keeps accelerating, AI has become an indispensable part of our lives, opening unprecedented possibilities and changing how we work, live, and understand the world. The DTT year-end finale explores AI application areas and technology trends, and how companies and individuals can make the technical leap in the age of intelligence. Stream link: cid:link_2

Activities:
Sign-up & share draw: from now until January 5, 18:00, register for the stream and share the poster to your WeChat Moments for a chance to win a Huawei Cloud Polo shirt (long-press the poster on this page to save it).
My DTT memories: from now until January 7, 23:59, share in the forum (300+ words) the influence, gains, and help that the 2023 DTT streams brought you; the best posts win a set of Huawei Cloud 云宝 mascot plush toys.
Survey draw: from now until January 5, 18:00, fill in the stream survey for a chance to win a Huawei Cloud wireless mouse.
Passphrase draw: January 5, 15:00-18:00, post the passphrase "DTT圆满收官" in the official stream's comment area for a chance to win a Huawei Cloud umbrella.
Q&A prizes: January 5, 15:00-18:00, ask stream-related questions during the stream; the best questions win a Huawei Cloud backpack.
I love DTT: January 5, 15:00-18:00, watch the stream for more than 45 minutes for a chance to win Huawei FreeLace Pro earphones.
Expert Q&A: from now until January 8, ask stream-related questions in the designated forum thread; the on-duty experts pick the best questions, which win a Huawei Cloud long-sleeve hoodie.
Bonus material: from now until January 7, join the stream chat group to receive the "DTT Tech Open Course 2023 highlights" e-book.
Replay: after 18:00 on January 5, return to the stream page to watch the replay.

Notes:
1. Entries found to be plagiarized are disqualified.
2. To receive your prize, send your shipping details by private message within 2 working days of the winner announcement; otherwise the prize is treated as forfeited.
3. The winner list is published until January 8, 2023; unconfirmed shipping details count as forfeiture. Prizes ship within 30 working days of the announcement; please be patient.
4. During the event, each ID (same name/phone/shipping address) can win a given sub-activity only once; duplicate wins pass to the next eligible developer, one pass only.
5. If a prize is out of stock, Huawei Cloud staff will substitute one of equal value; winners who decline this rule forfeit the prize.
6. For anything else, see the Huawei Cloud community general activity rules.
-
export LD_LIBRARY_PATH=/usr/local/python3.7.5/lib:$LD_LIBRARY_PATH
export PATH=/usr/local/python3.7.5/bin:$PATH
export ASCEND_HOME=/home/w/Ascend/ascend-toolkit/latest
export LD_LIBRARY_PATH=${ASCEND_HOME}/atc/lib64:${ASCEND_HOME}/compiler/lib64/plugin/opskernel:${ASCEND_HOME}/compiler/lib64/plugin/opskernel/nnengine:/home/zt/Ascend/ascend-toolkit/5.0.4/arm64-linux/runtime/lib64/stub:/home/zt/Ascend/ascend-toolkit/5.0.4/arm64-linux/aarch64-linux/devlib:/home/zt/Ascend/ascend-toolkit/5.0.4/x86_64-linux/x86_64-linux/devlib:/home/zt/Ascend/ascend-toolkit/5.0.4/x86_64-linux/runtime/lib64/stub:$LD_LIBRARY_PATH
export FFMPEG_PATH=/home/wpf/Euler_compile_env_cross/arm/cross_compile/install/sysroot/usr/local
export ASCEND_HOME_PATH==${ASCEND_HOME}:$ASCEND_HOME_PATH
export PATH=${ASCEND_HOME}/atc/ccec_compiler/bin:${ASCEND_HOME}/atc/bin:${ASCEND_HOME}/compiler/ccec_compiler/bin:$PATH
export PYTHONPATH=${ASCEND_HOME}/atc/python/site-packages:${ASCEND_HOME}/python/site-packages:${ASCEND_HOME}/opp/op_impl/built-in/ai_core/tbe:$PYTHONPATH
export LD_LIBRARY_PATH=${ASCEND_HOME}/atc/lib64:${ASCEND_HOME}/acllib/lib64:$LD_LIBRARY_PATH
export ASCEND_OPP_PATH=${ASCEND_HOME}/opp
export ASCEND_AICPU_PATH=$ASCEND_HOME
export KERNEL_PERF_COMM_PATH=${ASCEND_HOME}/atc/bin
export CROSS_COMPILER_HOME=/home/w/Euler_compile_env_cross/arm/cross_compile/install

With the environment variables above, running
atc --framework=5 --model=best_sim_dianyun_231206_89L.onnx --output=best --log=error --soc_version=Ascend310
to convert the model produces the error shown in the attached image.
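Two things stand out in the environment above before even looking at the atc error: `export ASCEND_HOME_PATH==${ASCEND_HOME}` has a doubled `=`, so the variable's value begins with a literal `=` character, and the exports mix toolkit installs under /home/w, /home/zt, and /home/wpf. A simpler setup worth trying (assuming the toolkit's bundled environment script exists at the usual location inside the install) replaces most of the manual exports:

```shell
# Let the toolkit set its own paths (ATC, compiler, opp, Python packages, ...)
# instead of hand-maintained exports; adjust the path to your install.
source /home/w/Ascend/ascend-toolkit/set_env.sh

# Keep only the genuinely custom entries on top of that.
export FFMPEG_PATH=/home/wpf/Euler_compile_env_cross/arm/cross_compile/install/sysroot/usr/local
export CROSS_COMPILER_HOME=/home/w/Euler_compile_env_cross/arm/cross_compile/install
```

If atc still fails after a clean environment, the actual error text (rather than the screenshot) would be needed to go further.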
-
Intelligent flower recognition with ModelArts

Overview. Advantages of using ModelArts for flower recognition: fast recognition, high accuracy, and simple service deployment. Advantages of ModelArts for AI work in general: AI sharing (helps developers reuse AI assets), fast and effective management (full-pipeline management), training acceleration (greatly reduced training time), and auto-learning (using AI to speed up AI development).

Development flow:

Resource preparation. Obtain an access key: 1. Click "My Credentials" in the top-right corner. 2. Click "Access Keys". 3. Create a new key. Once done, ModelArts can be authorized to access other services.

Create a bucket and folder for the sample dataset and model: in the console, choose to create a resource and create an OBS bucket; then open the bucket by clicking its name and create a folder (note down the bucket name and folder name).

Environment setup: 1. Configure a notebook. 2. Load the dataset into OBS: back on the ModelArts page, subscribe to the dataset and load it into OBS, after which training can start. After a few minutes the notebook starts successfully; choose TensorFlow and run:

import moxing as mox
mox.file.copy_parallel('s3://sandbox-experiment-resource-north-4/flowers-data/flowers-100', 's3://your_bucket_name/your_folder_name')

where your_bucket_name is the OBS bucket you created and your_folder_name is the folder created inside it. After training finishes, you can upload images for recognition.