-
[Feature Module] TSM network training fails with no specific error location; the error content is attached.
[Steps & Issue]
1. On Ascend 910, multi-card training in graph mode (static graph) fails.
2. On Ascend 910, PyNative (dynamic graph) training works on both single and multiple cards, and graph-mode single-card training works; only graph-mode multi-card training fails, as shown below.
3. Training fails with both 4 cards and 8 cards.
[Screenshots]
[Logs] (optional; upload log content or attachment)
-
[Feature Module] When classifying the cifar10 dataset with a Vision Transformer (ViT) model, training is slow on Ascend 910: 178 ms/step with batch_size set to 32. I therefore used MindSpore Profiler on the Ascend AI processor for performance tuning, but after adding the MindSpore Profiler interfaces to the training script, a model that previously trained normally now fails to train.
[Steps & Issue]
1. Following the performance-tuning guide on the official site, I modified the training script to add the MindSpore Profiler interfaces, then ran the modified script on a single Ascend 910 card (card 0) in graph mode; training fails with an error.
2. Replacing the ViT_base model file with a CNN model such as ResNet50, with the rest of the training files almost unchanged, Profiler works: training succeeds and the visualization UI is accessible.
3. Without the MindSpore Profiler interfaces, ViT_base trains normally on a single Ascend 910 card (card 0, graph mode) and the loss decreases as expected.
[Logs] The Profiler error log is in profiler.txt. Partial logs from a successful 4-card parallel ViT_base training run on Ascend 910 are in train_4p.txt.
-
This article introduces the MindSpore version of GOMO (Generalized Operator Modelling of the Ocean), an ocean model jointly developed by the MindSpore team and Prof. Xiaomeng Huang of Tsinghua University.

Background

GOMO is a regional ocean model, originally developed by Prof. Huang on top of the OpenArray framework. An ocean model describes the ocean's climate behavior through a set of physical equations; it can characterize the distribution of sea-surface temperature and elevation well, and can also predict phenomena such as typhoons and tsunamis in real time. Ocean modeling has developed rapidly since its birth in 1967, and more than 40 model versions exist today. Representative examples include global models such as MOM (Modular Ocean Model) and POP, and regional models such as ROMS (Regional Ocean Model System) and POM. The basic equations and algorithms in GOMO come from the POM model.

Traditional numerical methods such as finite differences discretize a specific ocean region into grid points, and physical quantities such as flow velocity, water temperature, and salinity are computed at each grid point. Over the past few decades, researchers have developed many models to improve simulation results (Bonan and Doney; Collins et al.; Taylor et al.). These models have become increasingly complex, growing from a few thousand lines of code to tens of thousands or even millions. On the software-engineering side, this growth makes the models much harder to develop and maintain. To address this, Prof. Huang designed the OpenArray parallel computing library, which decouples complex parallel computing from ocean-model research.

GOMO on OpenArray

OpenArray abstracts the commonly used difference operators, so researchers can quickly and conveniently translate discrete PDEs into the corresponding operator expressions. Underneath, these operators parallelize automatically: the serial code a user writes at the top level is identical to the parallel code, freeing researchers from complex parallel programming. The figure below shows how the sea-surface elevation is solved in OpenArray:

(Figure: OpenArray code mirrors the equation, shown for the sea-surface elevation equation)

Nevertheless, OpenArray + GOMO still has several problems.

The first is computational efficiency. Once a variable has been loaded into a processor's registers or cache, it should be reused as much as possible before being replaced; frequent variable loads and the unavoidable cache misses consume a great deal of memory bandwidth and degrade compute performance. GOMO's current efficiency and scalability are close to those of sbPOM (a variant of POM) thanks to a series of optimizations such as memory pooling, graph computation, JIT compilation, and vectorization, all aimed at reducing memory-bandwidth demand. However, OpenArray has not yet fully resolved the memory-bandwidth bottleneck.

The second is that the current OpenArray version does not support user-defined operators. When a user tries, say, a higher-order advection scheme or any other numerical equation, the 12 basic operators provided by OpenArray may not fully express the solution process.

The third is hardware support: OpenArray currently supports only conventional CPU clusters and the Sunway TaihuLight, not platforms such as GPU or Ascend.

The fourth is that automatic differentiation is unavailable for model-parameter optimization and data assimilation.

Accelerating GOMO with MindSpore

Given these problems with OpenArray + GOMO, the deep-learning framework MindSpore combined with GPUs can be used to further accelerate the GOMO solve.

Operator abstraction

Borrowing OpenArray's idea, we perform a similar operator abstraction in MindSpore. Take the DYF operator as an example, implemented by combining the Pad and Slice operators, as shown in the figure below. First, the input A(x, y) is extended by one element along the y axis, zero-filled with Pad; then the Slice operator removes the first element along the y axis, so that each element of the resulting A'(x, y) corresponds to A(i, j+1) of the original input. Finally, subtracting A(x, y) from A'(x, y) gives the result of the DYF operator.

(Figure: difference-operator abstraction in MindSpore)

Twelve operators for averaging and differencing have now been implemented in MindSpore, enough for the vast majority of PDE solves. If users need additional difference operations, they can follow the same operator abstraction to define operators flexibly.

Graph-kernel fusion

Graph-kernel fusion is a distinctive performance-optimization technique in MindSpore. By automatically analyzing and optimizing the existing computational-graph logic in combination with the target hardware's capabilities, it applies optimizations such as computation simplification and substitution, operator splitting and fusion, and operator-specialized compilation, improving network performance as a whole. Compared with traditional optimization techniques, graph-kernel fusion offers unique advantages: joint optimization across multiple operators' boundaries, cross-layer co-optimization with operator compilation, and Polyhedral-based just-in-time operator compilation. Moreover, the entire optimization process runs automatically once the user enables the corresponding option, requiring no additional awareness from the network developer, so users can focus on the network algorithm itself.

The figure below shows the graph-kernel fusion process for solving the sea-surface elevation equation of the barotropic (external) mode in the ocean model. First, MindSpore converts the user's implementation code into the corresponding computational graph, where the user's inputs and computation steps map to graph nodes. The original, not-yet-fused graph is then fed into the AKG (Auto Kernel Generator) module, which scans the input graph and automatically generates the corresponding fused kernels, reducing the creation of intermediate variables, increasing instruction-issue length, and improving compute efficiency. MindSpore currently fuses element-wise operators such as add, sub, mul, and div automatically, with no extra work from the user. Our ultimate goal is to fuse the custom difference operators with the basic operators over a wider scope, eventually into one complete kernel. Each equation being solved then becomes a single fused kernel with no extra intermediate results, yielding a large performance gain.

(Figure: graph-kernel fusion in MindSpore)

We benchmarked the single-node MindSpore implementation of the regional ocean model GOMO with graph-kernel fusion on and off; the comparison is shown below. The results show that enabling graph-kernel fusion roughly doubles GOMO's per-step iteration speed, and the gain holds across different resolutions.

(Figure: performance comparison before and after graph-kernel fusion)

Walkthrough

The following briefly introduces how to use the MindSpore GOMO model. Before starting, make sure MindSpore is correctly installed; if not, install it via the MindSpore installation page. Then install netCDF4:

pip install netCDF4

1. Prepare the data

This case uses the Seamount file in netCDF format. The Seamount problem proposed by Beckmann and Haidvogel is a widely used idealized test case for regional ocean models (Beckmann and Haidvogel, 1993).

2. Load the data

Load the Seamount data file and read the initial values of the variables from it. The data in the Seamount file is double-precision float64 and must be converted to float32 before entering MindSpore computation. The loading script is src/read_var.py in the source:

```python
import numpy as np
import netCDF4 as nc

# variable name list
params_name = ['z', 'zz', 'dz', 'dzz', 'dx', 'dy', 'cor', 'h', 'fsm', 'dum', 'dvm',
               'art', 'aru', 'arv', 'rfe', 'rfw', 'rfn', 'rfs', 'east_e', 'north_e',
               'east_c', 'north_c', 'east_u', 'north_u', 'east_v', 'north_v', 'tb',
               'sb', 'tclim', 'sclim', 'rot', 'vfluxf', 'wusurf', 'wvsurf', 'e_atmos',
               'ub', 'vb', 'uab', 'vab', 'elb', 'etb', 'dt', 'uabw', 'uabe', 'vabs',
               'vabn', 'els', 'eln', 'ele', 'elw', 'ssurf', 'tsurf', 'tbe', 'sbe',
               'sbw', 'tbw', 'tbn', 'tbs', 'sbn', 'sbs', 'wtsurf', 'swrad']

def load_var(file_obj, name):
    """load variable from nc data file"""
    data = file_obj.variables[name]
    data = data[:]
    data = np.float32(np.transpose(data, (2, 1, 0)))
    return data

def read_nc(file_path):
    """put the loaded variables into a dict"""
    variable = {}
    file_obj = nc.Dataset(file_path)
    for name in params_name:
        variable[name] = load_var(file_obj, name)
    return variable
```

3. Define the GOMO network

Based on the conservation laws of momentum, energy, and mass, GOMO derives its system of differential equations and boundary conditions, yielding the seven equation groups to be solved; see the paper for the detailed derivation. Figure 1 shows GOMO's overall execution flow. First, data is loaded from the Seamount file to initialize the model variables. After loading the initial values and model parameters, the computation splits into an internal-mode loop and an external-mode loop. The external-mode loop mainly computes the two-dimensional sea-surface elevation el and the two-dimensional depth-averaged velocities ua and va. In the internal-mode loop, the loop count iend is the total number of training time steps (set by the user), and three-dimensional arrays dominate the computation: the turbulence kinetic energy q2 and the turbulence length scale q2l that produces it, the temperature t and salinity s, and the x- and y-direction velocities u and v are computed in turn. When the computation finishes, the required variable results are saved and training ends.

(Figure 1: GOMO model flowchart)

Initialize the variables:

```python
...
from src.GOMO import GOMO_init
...
if __name__ == "__main__":
    ...
    # define grid and init variable update
    net_init = GOMO_init(im, jm, kb, stencil_width)
    ...
```

Define the GOMO model:

```python
def construct(self, etf, ua, uab, va, vab, el, elb, d, u, v, w, kq, km, kh, q2, q2l,
              tb, t, sb, s, rho, wubot, wvbot, ub, vb, egb, etb, dt, dhb, utb, vtb,
              vfluxb, et):
    """construct"""
    x_d, y_d, z_d = self.x_d, self.y_d, self.z_d
    q2b, q2lb = self.q2b, self.q2lb
    dx, dy = self.dx, self.dy
    # surface forcing
    w = w * (1 - self.z_h) + self.z_h * self.vfluxf
    # lateral_viscosity
    advx, advy, drhox, drhoy, aam = self.lateral_viscosity(
        dx, dy, u, v, dt, self.aam, ub, vb, x_d, y_d, z_d, rho, self.rmean)
    # mode_interaction
    adx2d, ady2d, drx2d, dry2d, aam2d, advua, advva, egf, utf, vtf = self.mode_interaction(
        advx, advy, drhox, drhoy, aam, x_d, y_d, d, uab, vab, ua, va, el)
    # ===========external mode===========
    vamax = 0
    elf = 0
    for iext in range(1, 31):
        # external_el
        elf = self.external_el(x_d, y_d, d, ua, va, elb)
        # external_ua
        advua, uaf = self.external_ua(iext, x_d, y_d, elf, d, ua, va, uab, vab,
                                      el, elb, advua, aam2d, adx2d, drx2d, wubot)
        # external_va
        advva, vaf = self.external_va(iext, x_d, y_d, elf, d, ua, va, uab, vab,
                                      el, elb, advva, aam2d, ady2d, dry2d, wvbot)
        # external_update
        etf, uab, ua, vab, va, elb, el, d, egf, utf, vtf, vamax = self.external_update(
            iext, etf, ua, uab, va, vab, el, elb, elf, uaf, vaf, egf, utf, vtf, d)
    # ===========internal mode===========
    if self.global_step != 0:
        # adjust_uv
        u, v = self.adjust_uv(u, v, utb, vtb, utf, vtf, dt)
        # internal_w
        w = self.internal_w(x_d, y_d, dt, u, v, etf, etb, vfluxb)
        # internal_q
        dhf, a, c, gg, ee, kq, km, kh, q2b_, q2, q2lb_, q2l = self.internal_q(
            x_d, y_d, z_d, etf, aam, q2b, q2lb, q2, q2l, kq, km, kh,
            u, v, w, dt, dhb, rho, wubot, wvbot, t, s)
        q2b = ops.Assign()(self.q2b, q2b_)
        q2lb = ops.Assign()(self.q2lb, q2lb_)
        # internal_t_t
        a, c, ee, gg, tb, t = self.internal_t_(
            t, tb, self.wtsurf, self.tsurf, self.swrad, self.tclim,
            self.tbe, self.tbw, self.tbn, self.tbs, x_d, y_d, z_d, dt, u,
            aam, self.h, self.dum, v, self.dvm, w, dhf, etf, a, kh, self.dzz,
            c, self.dzz1, ee, gg, dx, self.dz, dy, self.fsm, dhb)
        # internal_t_s
        a, c, ee, gg, sb, s = self.internal_t_(
            s, sb, self.wssurf, self.ssurf, self.swrad0, self.sclim,
            self.sbe, self.sbw, self.sbn, self.sbs, x_d, y_d, z_d, dt, u,
            aam, self.h, self.dum, v, self.dvm, w, dhf, etf, a, kh, self.dzz,
            c, self.dzz1, ee, gg, dx, self.dz, dy, self.fsm, dhb)
        # dens
        rho = self.dens(s, t, self.zz, self.h, self.fsm)
        # internal_u
        uf, a, c, gg, ee, wubot = self.internal_u(
            x_d, z_d, dhf, u, v, w, ub, vb, egf, egb, ee, gg,
            self.cbc, km, advx, drhox, dt, dhb)
        # internal_v
        vf, a, c, gg, ee, wvbot = self.internal_v(
            y_d, z_d, dhf, u, v, w, ub, vb, egf, egb, ee, gg,
            self.cbc, km, advy, drhoy, dt, dhb)
        # adjust_ufvf
        u, v, ub, vb = self.adjust_ufvf(u, v, uf, vf, ub, vb)
        # internal_update
        egb, etb, dt, dhb, utb, vtb, vfluxb, et = self.internal_update(
            egf, etb, utf, vtf, etf, et)
    steps = ops.AssignAdd()(self.global_step, 1)
    return elf, etf, ua, uab, va, vab, el, elb, d, u, v, w, kq, km, kh, q2, q2l, \
        tb, t, sb, s, rho, wubot, wvbot, ub, vb, egb, etb, dt, dhb, utb, vtb, \
        vfluxb, et, steps, vamax, q2b, q2lb
```

Call the defined GOMO model in the __main__ function:

```python
...
from src.GOMO import GOMO
...
if __name__ == "__main__":
    ...
    # define GOMO model
    Model = GOMO(im=im, jm=jm, kb=kb, stencil_width=stencil_width, variable=variable,
                 x_d=x_d, y_d=y_d, z_d=z_d, q2b=q2b, q2lb=q2lb,
                 aam=aam, cbc=cbc, rmean=rmean)
    ...
```

4. Train the network

Once the training script is defined, call the shell script under the scripts directory to launch the training process, using the following command:

sh run_distribute_train.sh <im> <jm> <kb> <step> <DATASET_PATH>

The script takes the arguments im, jm, kb, step, and DATASET_PATH, where:
· im, jm, kb: resolution of the simulated ocean region, tied to the data used;
· step: number of training time steps (corresponding to iend in Figure 1);
· DATASET_PATH: path to the training data.

After training, the evolving variable values are saved under the train/outputs directory. Data is saved every 5 time steps, and four variables are stored: eastward velocity and northward velocity (in m/s), potential temperature (in K), and sea-surface elevation (in m).

└─outputs
    ├─u_5.npy
    ├─v_5.npy
    ├─t_5.npy
    ├─et_5.npy
    ├─u_10.npy
    ├─v_10.npy
    ├─t_10.npy
    ├─et_10.npy

Each *.npy file holds one saved variable; the file name follows the pattern <variable name>_<step>.npy.

Outlook

The MindSpore version of GOMO abstracts the key difference operators through the Python front end, improving usability, and accelerates the model by combining the graph-kernel fusion feature with GPU hardware. Beyond that, users can leverage MindSpore's automatic differentiation for model-parameter tuning and data assimilation. We welcome scientific-computing enthusiasts and researchers to join us in extending and maintaining the MindSpore version of GOMO.

References

1. Huang X, Huang X, Wang D, et al. OpenArray v1.0: a simple operator library for the decoupling of ocean modeling and parallel computing. Geoscientific Model Development, 2019, 12(11).
2. Blumberg A F, Mellor G L. A description of a three-dimensional coastal ocean circulation model. Three-Dimensional Coastal Ocean Models, 1987, 4: 1-16.
3. Beckmann A, Haidvogel D B. Numerical simulation of flow around a tall isolated seamount. Part I: Problem formulation and model accuracy. Journal of Physical Oceanography, 1993, 23(8): 1736-1753.

Reposted from: https://zhuanlan.zhihu.com/p/404511374. Thanks to the author for the effort and for sharing; this repost will be removed immediately upon request.
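The Pad + Slice construction of the DYF operator described above can be sketched in a few lines of NumPy (NumPy's pad and slicing stand in for MindSpore's Pad and Slice operators; the function name dyf is illustrative and not part of the GOMO source):

```python
import numpy as np

def dyf(a):
    """Forward difference along the y axis, mirroring the Pad + Slice
    construction: DYF(A)(i, j) = A(i, j+1) - A(i, j), with the
    out-of-range column treated as zero padding."""
    # Step 1: pad one zero column at the end of the y axis (MindSpore Pad).
    padded = np.pad(a, ((0, 0), (0, 1)), mode="constant")
    # Step 2: drop the first y column (MindSpore Slice), so that
    # shifted[i, j] == a[i, j+1], with zero beyond the boundary.
    shifted = padded[:, 1:]
    # Step 3: subtract the original input to get the forward difference.
    return shifted - a

a = np.array([[1.0, 2.0, 4.0],
              [0.0, 3.0, 9.0]], dtype=np.float32)
print(dyf(a))
```

The same composition pattern (pad, shift by slicing, combine) yields the other averaging and difference operators mentioned above.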
-
[Feature Module] The official DeepFM code from ModelZoo on GitHub keeps failing when run on ModelArts.
[Steps & Issue] I train with the official DeepFM code from GitHub, and running it on ModelArts fails every time, even though I followed the steps in README_CN.
This is my code layout: (screenshot)
This is the config file: (screenshot)
The dataset downloaded is the simplified version with only 600k records.
Error message:

File "/home/ma-user/modelarts/user-job-dir/deepfm/src/dataset.py", line 208, in _get_mindrecord_dataset
    shuffle=shuffle, num_parallel_workers=8)
File "/usr/local/ma/python3.7/lib/python3.7/site-packages/mindspore/dataset/engine/validators.py", line 351, in new_method
    check_file(dataset_file)
File "/usr/local/ma/python3.7/lib/python3.7/site-packages/mindspore/dataset/core/validator_helpers.py", line 277, in check_file
    raise ValueError("The file {} does not exist or permission denied!".format(dataset_file))
ValueError: The file r3://deepfm-rec/data/train_input_part.mindrecord00 does not exist or permission denied!

[Logs] (optional; upload log content or attachment)

===save flag===
Finish sync data from /home/ma-user/modelarts/outputs/train_url_0/ to r3://deepfm-rec/output.
Workspace downloaded: []
Traceback (most recent call last):
  File "/home/ma-user/modelarts/user-job-dir/deepfm/train.py", line 126, in <module>
    train_deepfm()
  File "/home/ma-user/modelarts/user-job-dir/deepfm/src/model_utils/moxing_adapter.py", line 108, in wrapped_func
    run_func(*args, **kwargs)
  File "/home/ma-user/modelarts/user-job-dir/deepfm/train.py", line 85, in train_deepfm
    rank_id=rank_id)
  File "/home/ma-user/modelarts/user-job-dir/deepfm/src/dataset.py", line 290, in create_dataset
    rank_size, rank_id)
  File "/home/ma-user/modelarts/user-job-dir/deepfm/src/dataset.py", line 208, in _get_mindrecord_dataset
    shuffle=shuffle, num_parallel_workers=8)
  File "/usr/local/ma/python3.7/lib/python3.7/site-packages/mindspore/dataset/engine/validators.py", line 351, in new_method
    check_file(dataset_file)
  File "/usr/local/ma/python3.7/lib/python3.7/site-packages/mindspore/dataset/core/validator_helpers.py", line 277, in check_file
    raise ValueError("The file {} does not exist or permission denied!".format(dataset_file))
ValueError: The file r3://deepfm-rec/data/train_input_part.mindrecord00 does not exist or permission denied!
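For context on the error above: MindSpore's dataset loaders validate dataset_file as a local filesystem path, so a remote object-storage URI such as r3://deepfm-rec/... fails check_file even when the object exists. On ModelArts the data is normally copied into local cache first (typically with moxing's mox.file.copy_parallel) and the local path is what gets passed to the dataset. Below is a minimal sketch of the path handling only; the helper name resolve_dataset_path and the scheme list are assumptions for illustration, not part of the DeepFM code:

```python
import os
from urllib.parse import urlparse

# Remote object-store schemes that the dataset loader cannot open directly.
REMOTE_SCHEMES = {"obs", "s3", "r3"}

def resolve_dataset_path(path, local_cache="/cache/data"):
    """Illustrative helper: map a remote dataset URI to the local cache path
    it should be copied to before being handed to MindDataset.
    Returns (local_path, needs_copy)."""
    scheme = urlparse(path).scheme
    if scheme in REMOTE_SCHEMES:
        # e.g. r3://deepfm-rec/data/train_input_part.mindrecord00
        #  ->  /cache/data/train_input_part.mindrecord00
        local_path = os.path.join(local_cache, os.path.basename(path))
        return local_path, True
    return path, False

local_path, needs_copy = resolve_dataset_path(
    "r3://deepfm-rec/data/train_input_part.mindrecord00")
print(local_path, needs_copy)
```

When needs_copy is true, the remote directory would first be synced into /cache/data (on ModelArts, via moxing) and local_path passed on; handing the r3:// URI straight to the dataset is what triggers the "does not exist or permission denied" error above.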
-
[Feature Module] Using the fasterrcnn model from modelzoo. Training was done first and ran fine, but the eval step has problems: the trained ckpt cannot be evaluated normally and no mAP is produced.
[Steps & Issue]
1. Training command: python train.py --config_path=/home/CaoY/faster_rcnn/default_config_101.yaml --coco_root=/home/CaoY/COCO2017 --mindrecord_dir=/home/CaoY/COCO2017/MindRecord_train --backbone=resnet_v1_101
2. Evaluation command: bash run_eval_ascend.sh /home/CaoY/COCO2017/annotations/instances_val2017.json /home/CaoY/faster_rcnn/ckpt_0/faster_rcnn-6_59143.ckpt resnet_v1_101 /home/CaoY/COCO2017
The eval log is produced normally, but after all the data is processed no mAP metric is reported. On inspection, the generated results.pkl.bbox.json file is empty; the screenshot below shows the json file.
[Screenshots] Training generates ckpt files normally.
[Logs] (optional; upload log content or attachment) Log uploaded.
-
Running python train.py gives the following error; details:

PS F:\0828\mindspore\model_zoo\official\cv\lenet> python train.py
{'enable_modelarts': 'Whether training on modelarts, default: False', 'data_url': 'Dataset url for obs', 'train_url': 'Training output url for obs', 'data_path': 'Dataset path for local', 'output_path': 'Training output path for local', 'device_target': 'Target device type', 'enable_profiling': 'Whether enable profiling while training, default: False', 'file_name': 'output file name.', 'file_format': 'file format', 'result_path': 'result files path.', 'img_path': 'image file path.'}
{'air_name': 'lenet', 'batch_size': 32, 'buffer_size': 1000, 'checkpoint_url': '', 'ckpt_file': '/cache/train/checkpoint_lenet-10_1875.ckpt', 'ckpt_path': '/cache/train/', 'data_path': '/cache/data', 'data_url': '', 'dataset_name': 'mnist', 'dataset_sink_mode': True, 'device_id': 0, 'device_target': 'Ascend', 'enable_modelarts': False, 'enable_profiling': False, 'epoch_size': 10, 'file_format': 'MINDIR', 'file_name': 'lenet', 'image_height': 32, 'image_width': 32, 'img_path': '', 'keep_checkpoint_max': 10, 'learning_rate': 0.002, 'load_path': '/cache/checkpoint_path', 'lr': 0.01, 'model_name': 'lenet', 'momentum': 0.9, 'num_classes': 10, 'output_path': '/cache/train', 'result_path': '', 'save_checkpoint': True, 'save_checkpoint_epochs': 2, 'save_checkpoint_steps': 1875, 'sink_size': -1, 'train_url': ''}
Traceback (most recent call last):
  File "train.py", line 23, in <module>
    from src.model_utils.moxing_adapter import moxing_wrapper
  File "F:\0828\mindspore\model_zoo\official\cv\lenet\src\model_utils\moxing_adapter.py", line 21, in <module>
    from mindspore.profiler import Profiler
  File "C:\Users\zhang\AppData\Local\Programs\Python\Python37\lib\site-packages\mindspore\profiler\__init__.py", line 25, in <module>
    from mindspore.profiler.profiling import Profiler
  File "C:\Users\zhang\AppData\Local\Programs\Python\Python37\lib\site-packages\mindspore\profiler\profiling.py", line 26, in <module>
    from mindspore.dataset.core.config import _stop_dataset_profiler
  File "C:\Users\zhang\AppData\Local\Programs\Python\Python37\lib\site-packages\mindspore\dataset\__init__.py", line 26, in <module>
    from .core import config
  File "C:\Users\zhang\AppData\Local\Programs\Python\Python37\lib\site-packages\mindspore\dataset\core\config.py", line 23, in <module>
    import mindspore._c_dataengine as cde
ImportError: DLL load failed: The specified module could not be found.
PS F:\0828\mindspore\model_zoo\official\cv\lenet>
-
[Feature Module] mindspore train.py
[Steps & Issue]
1. Execution fails with "Generator worker process timeout."
2. PARSER(150373,ffffa7704710,python):2021-08-28-16:49:49.047.441 [mindspore/ccsrc/pipeline/jit/parse/function_block.cc:177] MakeResolveSymbol] The name 'operator' is not defined.
[Screenshots]
[Logs] (optional; upload log content or attachment)
-
[Feature Module]
[Steps & Issue]
1. The code runs in PyNative mode, but fails in Graph mode.
[Screenshots]
[Logs] (optional; upload log content or attachment)

[ERROR] CORE(888,7fe7ca090740,python):2021-08-30-14:01:16.483.196 [mindspore/core/abstract/abstract_value.cc:48] AbstractTypeJoinLogging] Type Join Failed: abstract type AbstractTensor cannot not join with AbstractTuple. For more details, please refer to the FAQ at https://www.mindspore.cn. this: AbstractTensor(shape: (), element: AbstractScalar(Type: Float32, Value: AnyValue, Shape: NoShape), value_ptr: 0x55ce5dd4a220, value: Tensor(shape=[], dtype=Float32, value= 1)), other: AbstractTuple(element[0]: AbstractTensor(shape: (), element: AbstractScalar(Type: Float32, Value: AnyValue, Shape: NoShape), value_ptr: 0x55ce0a71db00, value: AnyValue),element[1]: AbstractTensor(shape: (), element: AbstractScalar(Type: Float32, Value: AnyValue, Shape: NoShape), value_ptr: 0x55ce0a71db00, value: AnyValue),element[2]: AbstractTensor(shape: (), element: AbstractScalar(Type: Float32, Value: AnyValue, Shape: NoShape), value_ptr: 0x55ce0a71db00, value: AnyValue),). Please check the node construct.75:construct{[0]: construct, [1]: construct}.
trace:
In file /tmp/pycharm_project_467/cnn3d_model.py(158)/ def construct(self,flair_t2_input,t1_t1ce_input, pipe1Label, pipe2Label):/
In file /tmp/pycharm_project_467/cnn3d_model.py(251)/ loss, acc_flair_t2, acc_t1_t1ce = self.network(flair_t2_input, t1_t1ce_input, flair_t2_gt_node, t1_t1ce_gt_node)/
[ERROR] DEBUG(888,7fe7ca090740,python):2021-08-30-14:01:16.483.241 [mindspore/ccsrc/debug/trace.cc:118] TraceGraphEval] *******************************graph evaluate stack**********************************
[ERROR] DEBUG(888,7fe7ca090740,python):2021-08-30-14:01:16.484.730 [mindspore/ccsrc/debug/trace.cc:122] TraceGraphEval] #0 graph:construct_wrapper.0 with args[flair_t2_input:<Tensor(F32)[2, 38, 38, 38, 2]>,t1_t1ce_input:<Tensor(F32)[2, 38, 38, 38, 2]>,flair_t2_gt_node:<Tensor(F32)[2, 12, 12, 12, 2]>,t1_t1ce_gt_node:<Tensor(F32)[2, 12, 12, 12, 5]>,opt.network.flair_t2_line.stage1.unit1.batchNormalization.bn2d.gamma:<Ref[Tensor(F32)][24]>,opt.network.flair_t2_line.stage1.unit1.batchNormalization.bn2d.beta:<Ref[Tensor(F32)][24]>,opt.network.flair_t2_line.stage1.unit1.conv3D.weight:<Ref[Tensor(F32)][12, 24, 3, 3, 3]>,opt.network.flair_t2_line.stage1.unit2.batchNormalization.bn2d.gamma:<Ref[Tensor(F32)][36]>,opt.network.flair_t2_line.stage1.unit2.batchNormalization.bn2d.beta:<Ref[Tensor(F32)][36]>,opt.network.flair_t2_line.stage1.unit2.conv3D.weight:<Ref[Tensor(F32)][12, 36, 3, 3, 3]>,opt.network.flair_t2_line.stage1.unit3.batchNormalization.bn2d.gamma:<Ref[Tensor(F32)][48]>,opt.network.flair_t2_line.stage1.unit3.batchNormalization.bn2d.beta:<Ref[Tensor(F32)][48]>,opt.network.flair_t2_line.stage1.unit3.conv3D.weight:<Ref[Tensor(F32)][12, 48, 3, 3, 3]>,opt.network.flair_t2_line.stage1.unit4.batchNormalization.bn2d.gamma:<Ref[Tensor(F32)][60]>,opt.network.flair_t2_line.stage1.unit4.batchNormalization.bn2d.beta:<Ref[Tensor(F32)][60]>,opt.network.flair_t2_line.stage1.unit4.conv3D.weight:<Ref[Tensor(F32)][12, 60, 3, 3, 3]>,opt.network.flair_t2_line.stage1.unit5.batchNormalization.bn2d.gamma:<Ref[Tensor(F32)][72]>,opt.network.flair_t2_line.stage1.unit5.batchNormalization.bn2d.beta:<Ref[Tensor(F32)][72]>,opt.network.flair_t2_line.stage1.unit5.conv3D.weight:<Ref[Tensor(F32)][12, 72, 3, 3, 3]>,opt.network.flair_t2_line.stage1.unit6.batchNormalization.bn2d.gamma:<Ref[Tensor(F32)][84]>,opt.network.flair_t2_line.stage1.unit6.batchNormalization.bn2d.beta:<Ref[Tensor(F32)][84]>,opt.network.flair_t2_line.stage1.unit6.conv3D.weight:<Ref[Tensor(F32)][12, 84, 3, 3, 3]>,opt.network.flair_t2_line.batchNormalization1.bn2d.gamma:<Ref[Tensor(F32)][96]>,opt.network.flair_t2_line.batchNormalization1.bn2d.beta:<Ref[Tensor(F32)][96]>,opt.network.flair_t2_line.conv3DWithBN1.conV3D.weight:<Ref[Tensor(F32)][96, 96, 1, 1,
[ERROR] DEBUG(888,7fe7ca090740,python):2021-08-30-14:01:16.484.982 [mindspore/ccsrc/debug/trace.cc:123] TraceGraphEval] *************************************************************************************
[WARNING] ANALYZER(888,7fe7ca090740,python):2021-08-30-14:01:16.497.925 [mindspore/ccsrc/pipeline/jit/static_analysis/static_analysis.cc:148] Run] Eval construct_wrapper.0 threw exception.
[ERROR] ANALYZER(888,7fe7ca090740,python):2021-08-30-14:01:16.497.941 [mindspore/ccsrc/pipeline/jit/static_analysis/async_eval_result.cc:39] HandleException] Exception happened, check the information as below.
Traceback (most recent call last):
  File "/tmp/pycharm_project_467/train.py", line 155, in <module>
    train()
  File "/tmp/pycharm_project_467/train.py", line 143, in train
    loss, acc_flair_t2, acc_t1_t1ce, grads = trainOneStepCell(flair_t2_input, t1_t1ce_input, flair_t2_gt_node, t1_t1ce_gt_node)
  File "/usr/local/miniconda3/envs/mindspore1.3/lib/python3.7/site-packages/mindspore/nn/cell.py", line 386, in __call__
    out = self.compile_and_run(*inputs)
  File "/usr/local/miniconda3/envs/mindspore1.3/lib/python3.7/site-packages/mindspore/nn/cell.py", line 644, in compile_and_run
    self.compile(*inputs)
  File "/usr/local/miniconda3/envs/mindspore1.3/lib/python3.7/site-packages/mindspore/nn/cell.py", line 631, in compile
    _executor.compile(self, *inputs, phase=self.phase, auto_parallel_mode=self._auto_parallel_mode)
  File "/usr/local/miniconda3/envs/mindspore1.3/lib/python3.7/site-packages/mindspore/common/api.py", line 531, in compile
    result = self._executor.compile(obj, args_list, phase, use_vm, self.queue_name)
TypeError: mindspore/core/abstract/abstract_value.cc:48 AbstractTypeJoinLogging] Type Join Failed: abstract type AbstractTensor cannot not join with AbstractTuple. For more details, please refer to the FAQ at https://www.mindspore.cn. this: AbstractTensor(shape: (), element: AbstractScalar(Type: Float32, Value: AnyValue, Shape: NoShape), value_ptr: 0x55ce5dd4a220, value: Tensor(shape=[], dtype=Float32, value= 1)), other: AbstractTuple(element[0]: AbstractTensor(shape: (), element: AbstractScalar(Type: Float32, Value: AnyValue, Shape: NoShape), value_ptr: 0x55ce0a71db00, value: AnyValue),element[1]: AbstractTensor(shape: (), element: AbstractScalar(Type: Float32, Value: AnyValue, Shape: NoShape), value_ptr: 0x55ce0a71db00, value: AnyValue),element[2]: AbstractTensor(shape: (), element: AbstractScalar(Type: Float32, Value: AnyValue, Shape: NoShape), value_ptr: 0x55ce0a71db00, value: AnyValue),). Please check the node construct.75:construct{[0]: construct, [1]: construct}.
trace:
In file /tmp/pycharm_project_467/cnn3d_model.py(158)/ def construct(self,flair_t2_input,t1_t1ce_input, pipe1Label, pipe2Label):/
In file /tmp/pycharm_project_467/cnn3d_model.py(251)/ loss, acc_flair_t2, acc_t1_t1ce = self.network(flair_t2_input, t1_t1ce_input, flair_t2_gt_node, t1_t1ce_gt_node)/
The function call stack (See file 'analyze_fail_0.dat' for more details):
# 0 In file /tmp/pycharm_project_467/cnn3d_model.py(254)
        return loss, acc_flair_t2, acc_t1_t1ce, self.opt(grads)
               ^
# 1 In file /tmp/pycharm_project_467/cnn3d_model.py(253)
        grads = self.grad(self.network,self.weights)(flair_t2_input, t1_t1ce_input, flair_t2_gt_node, t1_t1ce_gt_node,sens)
                ^
# 2 In file /tmp/pycharm_project_467/cnn3d_model.py(158)
    def construct(self,flair_t2_input,t1_t1ce_input, pipe1Label, pipe2Label):
    ^
Process finished with exit code 1
-
[Feature Module] mindspore train.py
[Steps & Issue]
1. Running train.py fails with an error asking me to replace the operator.
2. Using np.ogrid fails; replacing it with mindspore.numpy.ogrid still fails.
[Screenshots] Code screenshot
[Logs] (optional; upload log content or attachment)
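A likely cause, stated as an assumption since the screenshot is not available: NumPy calls such as np.ogrid cannot be traced inside a graph-mode construct, so the index grids should be built with NumPy once, outside the network (e.g. in __init__), and fed in as constant tensors. The sketch below shows only the NumPy side of that precomputation:

```python
import numpy as np

# np.ogrid builds open (broadcastable) index grids. Graph mode cannot trace
# this call, so compute it once outside construct() and pass the result in
# as a constant Tensor instead of calling np.ogrid inside the network.
rows, cols = np.ogrid[0:3, 0:4]
print(rows.shape, cols.shape)  # (3, 1) (1, 4)

# Broadcasting the two grids reproduces what the in-graph code typically
# needs from ogrid, e.g. a squared-distance-from-origin map:
dist2 = rows ** 2 + cols ** 2
print(dist2)
```

In a MindSpore Cell, one would wrap such precomputed arrays with mindspore.Tensor in __init__ and use them as attributes inside construct, keeping all NumPy work out of the traced graph.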
-
[Feature Module] System: Ubuntu 18.04, CUDA 10.1; MindSpore 1.3.0 is installed in a conda virtual environment, installed correctly and importable.
[Steps & Issue]
1. Downloaded the YOLOV3_DARKNET53 example code from model_zoo via git.
2. Following the steps in the readme, downloaded darknet53.conv.74 and converted it to a MindSpore ckpt.
3. Downloaded the coco2014 dataset, then ran:
python train.py --data_dir=/media/yzu/0FEA19C70FEA19C7/dataset/coco2014 --pretrained_backbone=/home/yzu/lucas/yolov3_darknet53/darknet53_backbone.ckpt --is_distributed=0 --lr=0.001 --loss_scale=1024 --weight_decay=0.016 --T_max=320 --max_epoch=320 --warmup_epochs=4 --training_shape=416 --device_target=GPU --lr_scheduler=cosine_annealing > log.txt 2>&1 &
4. log.txt shows an error warning that the weights could not be loaded.
[Screenshots]
[Logs] (optional; upload log content or attachment)
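A hedged note on the warning: a "cannot load weights" message during checkpoint loading usually means some parameter names in the ckpt do not match the network's parameters (e.g. a prefix difference from the conversion step). The sketch below illustrates, with plain dicts standing in for MindSpore's parameter dict, how a checkpoint can be filtered down to the matching names before loading; the helper name and example keys are illustrative, not from the YOLOv3 scripts:

```python
def filter_checkpoint(ckpt_params, net_param_names):
    """Keep only checkpoint entries whose names match a parameter of the
    target network, and report what was dropped. Plain dicts stand in for
    MindSpore's parameter dict here."""
    matched = {k: v for k, v in ckpt_params.items() if k in net_param_names}
    dropped = sorted(set(ckpt_params) - set(net_param_names))
    return matched, dropped

# Hypothetical checkpoint and network parameter names:
ckpt = {"backbone.conv1.weight": "w1", "head.fc.weight": "w2"}
net_names = {"backbone.conv1.weight", "backbone.conv2.weight"}
matched, dropped = filter_checkpoint(ckpt, net_names)
print(matched, dropped)
```

With MindSpore, the matched dict would then be passed to load_param_into_net; printing the dropped names shows exactly which keys from the converted backbone failed to match.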
-
After installing MindSpore 1.3 on the Ascend 910, import mindspore.dataset as ds raises an error.
-
How can the functionality of torch.autograd be implemented in MindSpore?
-
[Feature Module] Running the add_model.py example from mindspore_serving (https://www.mindspore.cn/serving/docs/zh-CN/master/serving_example.html) produces the error shown in the screenshot below.
[Steps & Issue]
1. MindSpore 1.3
2. Ascend 310
3. CANN 5.0.2
[Screenshots]
[Logs] (optional; upload log content or attachment)
-
[Feature Module]
[Steps & Issue]
1. Installed MindSpore.
2. Verifying the MindSpore installation fails; I don't know how to resolve it. Could an expert please help analyze it?
[Screenshots]
[Logs] (optional; upload log content or attachment)