-
MindSpore 2.3.0 + Ascend 910A, image swr.cn-north-4.myhuaweicloud.com/atelier/mindspore_2_3_ascend:mindspore_2.3.0-cann_8.0.rc2-py_3.9-euler_2.10.7-aarch64-snt9b-20240727152329-0f2c29a. Running the test sample fails with RuntimeError: Call aclnnSub failed, detail: EZ9999: Inner Error! — the binary operator fails because the kernel packages are not fully installed.

/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
[ERROR] RUNTIME_FRAMEWORK(3361,ffff93dd11e0,python):2024-10-31-20:00:24.542.957 [mindspore/ccsrc/runtime/graph_scheduler/actor/actor_common.cc:327] WaitRuntimePipelineFinish] Wait runtime pipeline finish and an error occurred: Call aclnnSub failed, detail:EZ9999: Inner Error!
EZ9999: 2024-10-31-20:00:24.531.850 Parse dynamic kernel config fail.[THREAD:3973]
        TraceBack (most recent call last):
        AclOpKernelInit failed opType[THREAD:3973]
        Op Sub does not has any binary.[THREAD:3973]
        Kernel Run failed. opType: 3, Sub[THREAD:3973]
        launch failed for Sub, errno:561000.[THREAD:3973]
----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/kernel/opapi/aclnn/sub_aclnn_kernel.h:36 RunOp
Traceback (most recent call last):
  File "/home/ma-user/work/Test/test.py", line 36, in <module>
    out = net(x, y)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py", line 703, in __call__
    out = self.compile_and_run(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/nn/cell.py", line 1074, in compile_and_run
    return _cell_graph_executor(self, *new_args, phase=self.phase)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/common/api.py", line 1860, in __call__
    return self.run(obj, *args, phase=phase)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/common/api.py", line 1911, in run
    return self._exec_pip(obj, *args, phase=phase_real)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/common/api.py", line 185, in wrapper
    results = fn(*arg, **kwargs)
  File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.9/site-packages/mindspore/common/api.py", line 1891, in _exec_pip
    return self._graph_executor(args, phase)
RuntimeError: Call aclnnSub failed, detail:EZ9999: Inner Error!
EZ9999: 2024-10-31-20:00:24.531.850 Parse dynamic kernel config fail.[THREAD:3973]
        TraceBack (most recent call last):
        AclOpKernelInit failed opType[THREAD:3973]
        Op Sub does not has any binary.[THREAD:3973]
        Kernel Run failed. opType: 3, Sub[THREAD:3973]
        launch failed for Sub, errno:561000.[THREAD:3973]
----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/kernel/opapi/aclnn/sub_aclnn_kernel.h:36 RunOp
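For reference, a minimal sketch (not the original test.py, shapes and dtypes are assumptions) of the kind of script that exercises the Sub binary kernel on Ascend and reproduces this failure when the CANN kernel packages are incomplete:

import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")

class SubNet(nn.Cell):
    def construct(self, x, y):
        # x - y lowers to the Sub operator, which is what fails with
        # "Op Sub does not has any binary" when the binary kernels are missing.
        return x - y

net = SubNet()
x = Tensor(np.ones((2, 3), np.float32))
y = Tensor(np.ones((2, 3), np.float32))
out = net(x, y)
print(out)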
-
The awards ceremony of the national finals of the 2024 "Data Element ×" competition was held at the Zhongguancun International Innovation Center in Beijing, where Yunnan Baiyao Group's "Leigong large model for the traditional Chinese medicine (TCM) industry" won second prize in the national finals. Liu Liehong, Party Secretary and Director of the National Data Administration, and Xia Linmao, member of the Standing Committee of the Beijing Municipal Party Committee and Executive Vice Mayor, attended the ceremony.

Themed "Data empowerment, multiplying value", the "Data Element ×" competition is organized by the National Data Administration, the Beijing Municipal People's Government, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and national authorities from twelve sectors. It is the first nationwide competition focused on the development and application of data elements. The competition set up twelve tracks, including industrial manufacturing, healthcare, financial services, scientific and technological innovation, green and low-carbon development, and urban governance. More than 19,000 teams entered nationwide, and over 660 teams competed in the national finals. Its goal is to select solutions with outstanding results, strong innovation, and good demonstration effects, and thereby drive the development of related technology industries.

Since signing a strategic cooperation agreement in February 2024, Huawei Cloud and Yunnan Baiyao Group have cooperated across large models, smart business travel, and lighthouse-factory industrial IoT, fully integrating data, technology, platform, and compute resources. Building on the strengths of Huawei Cloud's Pangu large model, they established a new system of intelligent productivity and comprehensively upgraded Yunnan Baiyao's digital capabilities. The early phase of the cooperation has delivered notable results: it boosted sales, reduced return and exchange costs for medicinal herbs, solved the problem of applying multi-source heterogeneous data, built high-quality datasets, and advanced the digital transformation of the TCM industry.

Pangu large model powering intelligent production; cloud data driving innovation. The "Leigong large model for the TCM industry", jointly created by Huawei Cloud and Yunnan Baiyao Group together with authoritative data providers, applies Huawei Cloud's AI and large-model technology to improve efficiency and quality across the whole TCM industry chain. The project responds to national policy on combining and advancing traditional Chinese medicine with modern science, and aims to connect data effectively across the whole industry, the whole industry chain, and the whole process. By building high-quality TCM datasets, the project not only promotes the deep integration of AI with data elements across the TCM industry chain, but also participates in the National Data Administration's work on high-quality data curation and trading, unlocking the industry value of TCM data. Huawei Cloud's technical support provides strong momentum for the digital transformation and efficient development of the TCM industry and demonstrates its role in driving industrial innovation.

The award recognizes Yunnan Baiyao Group's achievements in modernizing TCM and its innovative practice of empowering a traditional industry with big data, AI, and other frontier technologies, showing its influence and leadership in the digital transformation of the TCM sector. Huawei Cloud will continue to work closely with Yunnan Baiyao Group to explore and apply large models, AI, and other cutting-edge technologies, and to keep injecting new vitality into the development of Yunnan's digital economy. Reposted from the Huawei Cloud WeChat official account.
-
The Atlas 200I A2 is used on the EP side. The driver is installed and I can log into the Atlas 200I A2 system over the serial port, but every time the system reboots, the scripts I wrote inside it are reverted.
-
The wave of large models has given AI development and AI applications a completely new look. Feel free to share your thoughts.
-
One picture to show AI Gallery's current positioning. My personal feeling is that it is gradually finding its place...
-
How many users can be online at the same time on the free GPU compute in AI Gallery?
-
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 503 Service Unavailable'))': /simple/gradio/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 503 Service Unavailable'))': /simple/gradio/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 503 Service Unavailable'))': /simple/gradio/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 503 Service Unavailable'))': /simple/gradio/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 503 Service Unavailable'))': /simple/gradio/
ERROR: Could not find a version that satisfies the requirement gradio==4.16.0
ERROR: No matching distribution found for gradio==4.16.0
-
/home/ma-user/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for <class 'gradio.mix.Parallel'>: No known documentation group for module 'gradio.mix'
  warnings.warn(f"Could not get documentation group for {cls}: {exc}")
/home/ma-user/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for <class 'gradio.mix.Series'>: No known documentation group for module 'gradio.mix'
  warnings.warn(f"Could not get documentation group for {cls}: {exc}")
Running on local URL: http://127.0.0.1:7860
IMPORTANT: You are using gradio version 3.39.0, however version 4.29.0 is available, please upgrade.
--------
Running on public URL: https://4bf68ac779bd610ca7.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
---------------------------------------------------------------------------
TimeoutError                              Traceback (most recent call last)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/urllib3/connectionpool.py:537, in HTTPConnectionPool._make_request(...)
    response = conn.getresponse()
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/urllib3/connection.py:466, in HTTPConnection.getresponse(self)
    httplib_response = super().getresponse()
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/http/client.py:1374, in HTTPConnection.getresponse(self)
    response.begin()
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/http/client.py:318, in HTTPResponse.begin(self)
    version, status, reason = self._read_status()
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/http/client.py:279, in HTTPResponse._read_status(self)
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/socket.py:705, in SocketIO.readinto(self, b)
    return self._sock.recv_into(b)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/ssl.py:1274, in SSLSocket.recv_into(self, buffer, nbytes, flags)
    return self.read(nbytes, buffer)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/ssl.py:1130, in SSLSocket.read(self, len, buffer)
    return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out

The above exception was the direct cause of the following exception:
ReadTimeoutError                          Traceback (most recent call last)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/requests/adapters.py:589, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
    resp = conn.urlopen(method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/urllib3/connectionpool.py:847, in HTTPConnectionPool.urlopen(...)
    retries = retries.increment(method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2])
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/urllib3/util/retry.py:470, in Retry.increment(...)
    raise reraise(type(error), error, _stacktrace)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/urllib3/util/util.py:39, in reraise(tp, value, tb)
    raise value
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/urllib3/connectionpool.py:793, in HTTPConnectionPool.urlopen(...)
    response = self._make_request(conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/urllib3/connectionpool.py:539, in HTTPConnectionPool._make_request(...)
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/urllib3/connectionpool.py:370, in HTTPConnectionPool._raise_timeout(self, err, url, timeout_value)
    raise ReadTimeoutError(self, url, f"Read timed out. (read timeout={timeout_value})") from err
ReadTimeoutError: HTTPSConnectionPool(host='4bf68ac779bd610ca7.gradio.live', port=443): Read timed out. (read timeout=3)

During handling of the above exception, another exception occurred:
ReadTimeout                               Traceback (most recent call last)
Cell In[4], line 29
     25 submitBtn.click(reset_user_input, [], [user_input])
     27 emptyBtn.click(reset_state, outputs=[chatbot, history, past_key_values], show_progress=True)
---> 29 demo.queue().launch(share=True, inbrowser=True)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/gradio/blocks.py:1974, in Blocks.launch(...)
    while not networking.url_ok(self.share_url):
        time.sleep(0.25)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/gradio/networking.py:202, in url_ok(url)
    r = requests.head(url, timeout=3, verify=False)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/requests/api.py:100, in head(url, **kwargs)
    return request("head", url, **kwargs)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/requests/api.py:59, in request(method, url, **kwargs)
    return session.request(method=method, url=url, **kwargs)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/requests/sessions.py:589, in Session.request(...)
    resp = self.send(prep, **send_kwargs)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/requests/sessions.py:703, in Session.send(self, request, **kwargs)
    r = adapter.send(request, **kwargs)
File ~/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/requests/adapters.py:635, in HTTPAdapter.send(...)
    raise ReadTimeout(e, request=request)
ReadTimeout: HTTPSConnectionPool(host='4bf68ac779bd610ca7.gradio.live', port=443): Read timed out. (read timeout=3)
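For reference, a hedged workaround sketch: the timeout happens while gradio polls the *.gradio.live share URL (networking.url_ok, read timeout of 3 seconds), so if the share tunnel is unreachable from the notebook network, launching without the public share link avoids that check. The keyword arguments below are standard gradio launch parameters, not taken from the original notebook:

# Sketch, assuming the Blocks object is named demo as in the traceback above.
# Skipping the public share tunnel avoids the url_ok() polling that times out here.
demo.queue().launch(share=False, server_name="127.0.0.1", server_port=7860)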
-
After training my own dataset with YOLOv5 and creating the AI application, deploying it as an online service fails at startup with:

File "/home/mind/model/customize_service.py", line 2, in <module>
    import cv2
File "/home/modelarts/.local/lib/python3.7/site-packages/cv2/__init__.py", line 5, in <module>
    from .cv2 import *
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
[2024-05-27 13:25:47 +0000] [44] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/home/mind/model_service/model_service.py", line 167, in load_service
    spec.loader.exec_module(module)
  File "", line 728, in exec_module
  File "", line 219, in _call_with_frames_removed
  File "/home/mind/model/customize_service.py", line 2, in <module>
    import cv2
  File "/home/modelarts/.local/lib/python3.7/site-packages/cv2/__init__.py", line 5, in <module>
    from .cv2 import *
ImportError: libGL.so.1: cannot open shared object file: No such file or directory

It looks like a problem importing OpenCV.
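For reference, a hedged sketch of a defensive import for customize_service.py. It assumes the model package's pip requirements list opencv-python-headless instead of opencv-python, since the headless build does not link against libGL at import time; the error message text below is illustrative, not from the original service:

# Sketch: assumes opencv-python-headless is packaged with the model
# (or that libgl1 is installed in the base image).
try:
    import cv2
except ImportError as err:
    # Surface a clearer hint than the raw "libGL.so.1: cannot open shared object file"
    raise ImportError(
        "cv2 failed to import: {}. If the error mentions libGL.so.1, package "
        "opencv-python-headless instead of opencv-python, or install libgl1 "
        "in the inference image.".format(err)
    ) from err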
-
Dreambooth: generate the portraits you want with one click

Dreambooth is a technique released by Google for fine-tuning a diffusion model by injecting a custom subject into it, so that it can generate images of that subject in different scenes. This article demonstrates fine-tuning Stable Diffusion with a custom dataset in AI Gallery to generate portraits of the person you want with one click!

1. Preparation
First prepare 3~10 portrait photos, ideally of the same person from different angles. Here I collected 5 photos of Zhuang Dafei from the internet as input.

2. Run the case
This case requires a Pytorch-2.0.1 GPU-V100 flavor or higher. Click Run in ModelArts to try it in a Notebook with one click.

3. Model training
First download the code and model and set up the runtime environment, then download the original dataset archive wh11e.zip, replace the images with your own, and upload the archive. Keep the training configuration and parameters unchanged, then start training; it usually takes about 10 minutes:

# --pretrained_model_name_or_path: model path; here the offline SD1.5 weights I downloaded
# --pretrained_vae_name_or_path: VAE path; here the offline weights I downloaded
# --output_dir: output path
# --resolution: resolution
# --save_sample_prompt: prompt used for saved samples
# --concepts_list: path to the concepts JSON config
!python3 ./tools/train_dreambooth.py \
  --pretrained_model_name_or_path=$model_sd \
  --pretrained_vae_name_or_path="vae-ft-mse" \
  --output_dir=$output_dir \
  --revision="fp16" \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --seed=777 \
  --resolution=512 \
  --train_batch_size=1 \
  --train_text_encoder \
  --mixed_precision="fp16" \
  --use_8bit_adam \
  --gradient_accumulation_steps=1 \
  --learning_rate=$learning_rate \
  --lr_scheduler="constant" \
  --lr_warmup_steps=80 \
  --num_class_images=$num_class_images \
  --sample_batch_size=4 \
  --max_train_steps=$max_num_steps \
  --save_interval=10000 \
  --save_sample_prompt="a photo of wh11e person" \
  --concepts_list="./training_data/concepts_list.json"

Check the weights the training run produced:

from natsort import natsorted
from glob import glob

# Locate the most recent checkpoint directory
saved_weights_dir = natsorted(glob(output_dir + os.sep + '*'))[-1]
saved_weights_dir
'dreambooth_wh11e/500'

4. Model inference
Run the Gradio application and change the input prompt to generate portraits in different scenes:

import torch
import numpy as np
import gradio as gr
from diffusers import StableDiffusionPipeline

# Load the fine-tuned model
pipe = StableDiffusionPipeline.from_pretrained(saved_weights_dir, torch_dtype=torch.float16)
# Move to GPU
pipe = pipe.to('cuda')
pipe.enable_attention_slicing()  # attention slicing to save GPU memory
pipe.enable_xformers_memory_efficient_attention()  # xformers memory-efficient attention to save GPU memory

# Switch the scheduler
from diffusers import DDIMScheduler
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

negative_prompt = "bad anatomy, ugly, deformed, desfigured, distorted face, poorly drawn hands, poorly drawn face, poorly drawn feet, blurry, low quality, low definition, lowres, out of frame, out of image, cropped, cut off, signature, watermark"
num_samples = 1
guidance_scale = 7.5
num_inference_steps = 30
height = 512
width = 512

def generate_image(prompt, steps):
    image = pipe(prompt,
                 output_type='numpy',
                 negative_prompt=negative_prompt,
                 height=height, width=width,
                 num_images_per_prompt=num_samples,
                 num_inference_steps=steps,
                 guidance_scale=guidance_scale
                 ).images
    image = np.uint8(image[0] * 255)
    return image

with gr.Blocks() as demo:
    gr.HTML("""<h1 align="center">Dreambooth</h1>""")
    with gr.Tab("Generate Image"):
        with gr.Row():
            with gr.Column():
                text_input = gr.Textbox(value="a photo of wh11e person", label="prompts", lines=4)
                steps = gr.Slider(30, 50, step=1, label="steps")
                gr.Examples(
                    examples=[
                        ["face portrait of wh11e in the snow, realistic, hd, vivid, sunset"],
                        ["photo of wh11e person, closeup, mountain fuji in the background, natural lighting"],
                        ["photo of wh11e person in the desert, closeup, pyramids in the background, natural lighting, frontal face"]
                    ],
                    inputs=[text_input]
                )
            image_output = gr.Image(height=400, width=400)
        image_button = gr.Button("submit")
        image_button.click(generate_image, [text_input, steps], [image_output])

demo.launch(share=True)

Loading pipeline components...: 100%|██████████| 6/6 [00:01<00:00, 4.09it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Running on local URL: http://127.0.0.1:7860
IMPORTANT: You are using gradio version 4.0.2, however version 4.29.0 is available, please upgrade.
--------
Running on public URL: https://0706d8a2cf7260863f.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
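The training command above passes --concepts_list="./training_data/concepts_list.json" but the file itself is not shown. As a hedged sketch of what it might contain (the exact field names depend on the train_dreambooth.py version bundled with this case and are assumptions here):

# Sketch of a possible ./training_data/concepts_list.json; field names are assumptions.
import json

concepts_list = [
    {
        "instance_prompt": "a photo of wh11e person",   # prompt tied to the uploaded photos
        "class_prompt": "a photo of a person",          # generic prior-preservation prompt
        "instance_data_dir": "./training_data/wh11e",   # the 3~10 portrait images
        "class_data_dir": "./training_data/person"      # auto-generated class images
    }
]

with open("./training_data/concepts_list.json", "w") as f:
    json.dump(concepts_list, f, indent=4)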
-
CNN-VIT video dynamic gesture recognition

Artificial intelligence is advancing rapidly and is deeply shaping human-computer interaction. Gestures, as a natural and fast way to interact, are widely used in smart driving, virtual reality, and other fields. The gesture recognition task is: when the operator makes a gesture, the computer should quickly and accurately determine its type. This article uses ModelArts to develop and train a video dynamic gesture recognition model that detects gestures such as swipe up, swipe down, swipe left, swipe right, open, and close, implementing functionality similar to the air-gesture feature on Huawei phones.

Algorithm overview
The CNN-VIT video dynamic gesture recognition algorithm first uses the pretrained network InceptionResNetV2 to extract features from the video clip frame by frame, and then feeds them into a Transformer Encoder for classification. We test the algorithm on a sample dynamic gesture recognition dataset containing 108 videos covering 7 gesture classes: invalid gesture, swipe up, swipe down, swipe left, swipe right, open, and close. The workflow is as follows.

First we decode the captured video files and extract key frames, keeping one frame every 4 frames, then centre-crop and preprocess each image:

def load_video(file_name):
    cap = cv2.VideoCapture(file_name)
    # Sample one frame every frame_interval frames
    frame_interval = 4
    frames = []
    count = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # Keep one frame every frame_interval frames
        if count % frame_interval == 0:
            # Centre crop
            frame = crop_center_square(frame)
            # Resize
            frame = cv2.resize(frame, (IMG_SIZE, IMG_SIZE))
            # BGR -> RGB  [0,1,2] -> [2,1,0]
            frame = frame[:, :, [2, 1, 0]]
            frames.append(frame)
        count += 1
    return np.array(frames)

Then we create the image feature extractor, using the pretrained InceptionResNetV2 to extract image features:

def get_feature_extractor():
    feature_extractor = keras.applications.inception_resnet_v2.InceptionResNetV2(
        weights='imagenet',
        include_top=False,
        pooling='avg',
        input_shape=(IMG_SIZE, IMG_SIZE, 3)
    )
    preprocess_input = keras.applications.inception_resnet_v2.preprocess_input
    inputs = keras.Input((IMG_SIZE, IMG_SIZE, 3))
    preprocessed = preprocess_input(inputs)
    outputs = feature_extractor(preprocessed)
    model = keras.Model(inputs, outputs, name='feature_extractor')
    return model

Next, extract the video feature vectors; if a video has fewer than 40 frames, pad it with all-zero arrays:

def load_data(videos, labels):
    video_features = []
    for video in tqdm(videos):
        frames = load_video(video)
        counts = len(frames)
        # If the frame count is less than MAX_SEQUENCE_LENGTH
        if counts < MAX_SEQUENCE_LENGTH:
            # Pad
            diff = MAX_SEQUENCE_LENGTH - counts
            # Create an all-zero numpy array
            padding = np.zeros((diff, IMG_SIZE, IMG_SIZE, 3))
            # Concatenate
            frames = np.concatenate((frames, padding))
        # Keep the first MAX_SEQUENCE_LENGTH frames
        frames = frames[:MAX_SEQUENCE_LENGTH, :]
        # Extract features in batch
        video_feature = feature_extractor.predict(frames)
        video_features.append(video_feature)
    return np.array(video_features), np.array(labels)

Finally, create the VIT model:

# Positional encoding
class PositionalEmbedding(layers.Layer):
    def __init__(self, seq_length, output_dim):
        super().__init__()
        # Positions 0 ~ MAX_SEQUENCE_LENGTH
        self.positions = tf.range(0, limit=MAX_SEQUENCE_LENGTH)
        self.positional_embedding = layers.Embedding(input_dim=seq_length, output_dim=output_dim)

    def call(self, x):
        # Positional encoding
        positions_embedding = self.positional_embedding(self.positions)
        # Add to the input
        return x + positions_embedding

# Encoder
class TransformerEncoder(layers.Layer):
    def __init__(self, num_heads, embed_dim):
        super().__init__()
        self.p_embedding = PositionalEmbedding(MAX_SEQUENCE_LENGTH, NUM_FEATURES)
        self.attention = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim, dropout=0.1)
        self.layernorm = layers.LayerNormalization()

    def call(self, x):
        # positional embedding
        positional_embedding = self.p_embedding(x)
        # self attention
        attention_out = self.attention(
            query=positional_embedding,
            value=positional_embedding,
            key=positional_embedding,
            attention_mask=None
        )
        # layer norm with residual connection
        output = self.layernorm(positional_embedding + attention_out)
        return output

def video_cls_model(class_vocab):
    # Number of classes
    classes_num = len(class_vocab)
    # Define the model
    model = keras.Sequential([
        layers.InputLayer(input_shape=(MAX_SEQUENCE_LENGTH, NUM_FEATURES)),
        TransformerEncoder(2, NUM_FEATURES),
        layers.GlobalMaxPooling1D(),
        layers.Dropout(0.1),
        layers.Dense(classes_num, activation="softmax")
    ])
    # Compile the model
    model.compile(optimizer=keras.optimizers.Adam(1e-5),
                  loss=keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                  metrics=['accuracy'])
    return model

Model training
For the full experience, click Run in ModelArts to run the Notebook I published with one click. The final model reaches 87% accuracy on the whole dataset, a fairly good result for training on a small dataset.

Video inference
First load the VIT model and the video class index labels:

import random

# Load the model
model = tf.keras.models.load_model('saved_model')
# Class labels
label_to_name = {0: 'invalid gesture', 1: 'swipe up', 2: 'swipe down', 3: 'swipe left', 4: 'swipe right',
                 5: 'open', 6: 'close', 7: 'zoom in', 8: 'zoom out'}

Then use the InceptionResNetV2 image feature extractor to extract the video features:

# Get video features
def getVideoFeat(frames):
    frames_count = len(frames)
    # If the frame count is less than MAX_SEQUENCE_LENGTH
    if frames_count < MAX_SEQUENCE_LENGTH:
        # Pad
        diff = MAX_SEQUENCE_LENGTH - frames_count
        # Create an all-zero numpy array
        padding = np.zeros((diff, IMG_SIZE, IMG_SIZE, 3))
        # Concatenate
        frames = np.concatenate((frames, padding))
    # Keep the first MAX_SEQUENCE_LENGTH frames
    frames = frames[:MAX_SEQUENCE_LENGTH, :]
    # Compute the video features, shape (N, 1536)
    video_feat = feature_extractor.predict(frames)
    return video_feat

Finally, feed the feature vectors of the video sequence into the Transformer Encoder for prediction:

# Video prediction
def testVideo():
    test_file = random.sample(videos, 1)[0]
    label = test_file.split('_')[-2]

    print('File name: {}'.format(test_file))
    print('Ground-truth class: {}'.format(label_to_name.get(int(label))))

    # Read every frame of the video
    frames = load_video(test_file)
    # Keep the first MAX_SEQUENCE_LENGTH frames for display
    frames = frames[:MAX_SEQUENCE_LENGTH].astype(np.uint8)
    # Save as a GIF
    imageio.mimsave('animation.gif', frames, duration=10)
    # Get the features
    feat = getVideoFeat(frames)
    # Model inference
    prob = model.predict(tf.expand_dims(feat, axis=0))[0]

    print('Predicted classes:')
    for i in np.argsort(prob)[::-1][:5]:
        print('{}: {}%'.format(label_to_name[i], round(prob[i]*100, 2)))

    return display(Image(open('animation.gif', 'rb').read()))

Model prediction result:
File name: hand_gesture/woman_014_0_7.mp4
Ground-truth class: invalid gesture
Predicted classes:
invalid gesture: 99.82%
swipe down: 0.12%
close: 0.04%
swipe left: 0.01%
open: 0.01%
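The helper crop_center_square used in load_video above is not shown in the article; a minimal sketch consistent with how it is called (centre-crops a frame to a square) might look like this:

# Sketch: centre-crop a frame (H, W, 3) to a square of side min(H, W).
import numpy as np

def crop_center_square(frame):
    h, w = frame.shape[0], frame.shape[1]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return frame[top:top + side, left:left + side]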
-
In 2024, in which area are you most looking forward to an AI revolution?
-
I subscribed to the following algorithm in the AI Gallery of ModelArts, trained it according to the training instructions provided with the algorithm, and finally deployed it as an online service (the various problems along the way were all resolved through support tickets 👍🏻), but the output is always wrong, as shown below (the algorithm is: Llama-7B-pretraining-full-parameter-fine-tuning). I'd like to ask: how should I resolve this? How accurate are the algorithms provided in AI Gallery? For now I just want to run through the training workflow of one open-source large-model algorithm and get correct output. Is there an algorithm you would recommend?
-
1. AI Gallery
1.1 Introduction to AI Gallery
AI Gallery is a developer community built on top of ModelArts. It provides sharing of AI digital assets such as Notebook code samples, datasets, algorithms, models, and Workflows, and offers universities and research institutes, AI application vendors, solution integrators, and enterprise and individual developers a secure, open place to share and trade these assets. This accelerates the development and adoption of AI assets and helps every participant in the AI development ecosystem realize its commercial value.
AI Gallery documentation: cid:link_0
AI Gallery code sample platform: cid:link_1

1.2 Preparation
1.2.1 Search the samples with the keyword 中文电影 (Chinese movies)
1.2.2 Click Run in ModelArts
1.2.3 Choose the environment

1.3 Using the AI Gallery sample for online movie-review sentiment analysis
1.3.1 Prepare the code and data
The code, data, and model are all stored in OBS; run the following cell to copy them into the Notebook:

import os
import moxing as mox

if not os.path.exists("/home/ma-user/work/xx"):
    mox.file.copy_parallel('obs://modelarts-labs-bj4-v2/case_zoo/bert_ch_movie_reviews/bert_movie_ch.zip',
                           "/home/ma-user/work/bert_movie_ch.zip")
    os.system("cd /home/ma-user/work;unzip bert_movie_ch.zip;rm bert_movie_ch.zip")
    if os.path.exists("/home/ma-user/work/bert_movie_ch"):
        print('Download success')
    else:
        raise Exception('Download Failed')
else:
    print("Project already exists")

When it finishes you should see "Download success".

1.3.2 Install the required Python modules
1.3.3 Imports and hyperparameters
Imports:

import numpy as np
import random
import torch
import matplotlib.pyplot as plt
from torch.nn.utils import clip_grad_norm_
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from transformers import BertTokenizer, BertForSequenceClassification, AdamW
from transformers import get_linear_schedule_with_warmup
import warnings
warnings.filterwarnings('ignore')

Change into the project directory:

# Enter the project path
%cd /home/ma-user/work/bert_movie_ch

Hyperparameter settings:

# Hyperparameters
SEED = 123
BATCH_SIZE = 16
LEARNING_RATE = 2e-5
WEIGHT_DECAY = 1e-2
EPSILON = 1e-8

random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)

1.3.4 Data processing
1.3.4.1 Read the text content

# Read a file and return its contents
def readfile(filename):
    with open(filename, encoding="utf-8") as f:
        content = f.readlines()
    return content

# Positive and negative sentiment corpora
pos_text = readfile('./data/pos.txt')
neg_text = readfile('./data/neg.txt')
sentences = pos_text + neg_text

pos_text
len(pos_text)

1.3.4.2 Build the label arrays

# Label encoding
pos_targets = np.ones((len(pos_text)))   # --> 1
neg_targets = np.zeros((len(neg_text)))  # --> 0
targets = np.concatenate((pos_targets, neg_targets), axis=0).reshape(-1, 1)
targets.shape
pos_targets
neg_targets

# Convert to a tensor
total_targets = torch.tensor(targets)
total_targets.shape

1.3.4.3 Load the model and tokenize

# Load bert-base-chinese from the pretrained models
# [UNK] unknown token, [CLS] start of sequence, [SEP] end of sequence
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese', cache_dir="./transformer_file/")
tokenizer

print(pos_text[1])
# Tokenize
print(tokenizer.tokenize(pos_text[1]))
# BERT encoding adds the start token [CLS] (id 101) and the end token [SEP] (id 102)
print(tokenizer.encode(pos_text[1]))
# Convert the BERT ids back to tokens
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(pos_text[1])))
tokenizer.encode("我")

1.3.4.4 Encode the sentences as ids

# Convert each sentence to ids (truncate above 126, pad below 126; with the two
# special tokens the total length is 128)
def convert_text_to_token(tokenizer, sentence, limit_size=126):
    tokens = tokenizer.encode(sentence[:limit_size])  # simple truncation
    if len(tokens) < limit_size + 2:
        # Pad (the pad index is 0)
        tokens.extend([0] * (limit_size + 2 - len(tokens)))
    return tokens

# Encode every sentence
input_ids = [convert_text_to_token(tokenizer, x) for x in sentences]
# Put into a tensor
input_tokens = torch.tensor(input_ids)
print(input_tokens.shape)
input_tokens[1]

1.3.4.5 Build the attention masks

# Build the masks
def attention_masks(input_ids):
    atten_masks = []
    for seq in input_ids:
        # 1 where there is a token id (> 0), 0 where it is padding
        seq_mask = [float(x > 0) for x in seq]
        atten_masks.append(seq_mask)
    return atten_masks

# Generate the attention masks
atten_masks = attention_masks(input_ids)
# Put them into a tensor
attention_tokens = torch.tensor(atten_masks)
print(attention_tokens)
print(attention_tokens.size())

print('input_tokens:\n', input_tokens)          # shape=[7360, 128]
print('total_targets:\n', total_targets)        # shape=[7360, 1]
print('attention_tokens:\n', attention_tokens)  # shape=[7360, 128]

1.3.4.6 Split into train and test sets

from sklearn.model_selection import train_test_split

# Fix the split with random_state
train_inputs, test_inputs, train_labels, test_labels = train_test_split(input_tokens, total_targets,
                                                                        random_state=2022, test_size=0.2)
train_masks, test_masks, _, _ = train_test_split(attention_tokens, input_tokens,
                                                 random_state=2022, test_size=0.2)
print(train_inputs.shape, test_inputs.shape)  # torch.Size([8000, 128]) torch.Size([2000, 128])
print(train_masks.shape, test_masks.shape)    # torch.Size([8000, 128]), same shape as train_inputs
print(train_inputs[0])
print(train_masks[0])

1.3.4.7 Wrap into DataLoaders

# Pack the tensors with TensorDataset
train_data = TensorDataset(train_inputs, train_masks, train_labels)
# Random sampling without replacement
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=BATCH_SIZE)

test_data = TensorDataset(test_inputs, test_masks, test_labels)
test_sampler = SequentialSampler(test_data)
test_dataloader = DataLoader(test_data, sampler=test_sampler, batch_size=BATCH_SIZE)

# Inspect the dataloader contents
for i, (train, mask, label) in enumerate(train_dataloader):
    # torch.Size([16, 128]) torch.Size([16, 128]) torch.Size([16, 1])
    print(train)
    print(mask)
    print(label)
    print(train.shape, mask.shape, label.shape)
    break

print('len(train_dataloader)=', len(train_dataloader))  # 368

# Binary classification accuracy
def binary_acc(preds, labels):
    # preds.shape=(16, 2), labels.shape=torch.Size([16, 1])
    # both arguments of eq have shape torch.Size([16])
    correct = torch.eq(torch.max(preds, dim=1)[1], labels.flatten()).float()
    if 0:
        print('binary acc ********')
        print('preds = ', preds)
        print('labels = ', labels)
        print('correct = ', correct)
    acc = correct.sum().item() / len(correct)
    return acc

import time
import datetime

# Format elapsed time
def format_time(elapsed):
    elapsed_rounded = int(round((elapsed)))
    return str(datetime.timedelta(seconds=elapsed_rounded))  # hh:mm:ss

1.3.5 Training and evaluation
1.3.5.1 Load the pretrained model

# Load the pretrained model; num_labels=2 for the two classes (positive / negative)
model = BertForSequenceClassification.from_pretrained("bert-base-chinese", num_labels=2)
# Use the GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

1.3.5.2 Define the optimizer

# AdamW optimizer; eps defaults to 1e-8 (added to the denominator for numerical stability)
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
    {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
     'weight_decay': WEIGHT_DECAY},
    {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
     'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=LEARNING_RATE, eps=EPSILON)

epochs = 2  # number of epochs
# Number of training steps: [number of batches] x [number of epochs]
total_steps = len(train_dataloader) * epochs
# Learning rate scheduler
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=total_steps)

1.3.5.3 Define the training loop

# Training loop
def train(model, optimizer):
    t0 = time.time()  # record the start time
    avg_loss, avg_acc = [], []
    # Switch to training mode
    model.train()
    for step, batch in enumerate(train_dataloader):
        # Report elapsed time every 40 batches
        if step % 40 == 0 and not step == 0:
            elapsed = format_time(time.time() - t0)
            print('  Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))

        # Take the data from the batch and move it to the GPU
        b_input_ids, b_input_mask, b_labels = batch[0].long().to(device), batch[1].long().to(device), batch[2].long().to(device)

        output = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
        loss, logits = output[0], output[1]

        avg_loss.append(loss.item())
        acc = binary_acc(logits, b_labels)
        avg_acc.append(acc)

        optimizer.zero_grad()
        loss.backward()
        clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        scheduler.step()

    avg_loss = np.array(avg_loss).mean()
    avg_acc = np.array(avg_acc).mean()
    return avg_loss, avg_acc

1.3.5.4 Train & evaluate

# Train & evaluate
for epoch in range(epochs):
    # Train
    train_loss, train_acc = train(model, optimizer)
    print('epoch={},train_acc={},loss={}'.format(epoch, train_acc, train_loss))
    # Evaluate
    test_acc = evaluate(model)
    print("epoch={},test_acc={}".format(epoch, test_acc))

1.3.6 Prediction

def predict(sen):
    # Convert sen to ids
    input_id = convert_text_to_token(tokenizer, sen)
    # print(input_id)

    # Put into a tensor
    input_token = torch.tensor(input_id).long().to(device)  # torch.Size([128])

    # 1 (mask) wherever there is a token id, converted to float
    atten_mask = [float(i > 0) for i in input_id]
    # Put the mask into a tensor
    attention_token = torch.tensor(atten_mask).long().to(device)  # torch.Size([128])
    # Reshape to [1, 128]: torch.Size([128]) -> torch.Size([1, 128]), otherwise it errors
    attention_mask = attention_token.view(1, -1)

    output = model(input_token.view(1, -1), token_type_ids=None, attention_mask=attention_mask)
    return torch.max(output[0], dim=1)[1]

label = predict('掏钱看这部电影,才是真正被杀猪盘了。。。')
print('好评' if label == 1 else '差评')  # prints 好评 (positive) or 差评 (negative)

label = predict('我觉得挺不错的。在中国看来算是最好的科幻大片了.毕竟国产。支持一下!')
print('好评' if label == 1 else '差评')

label = predict('但是影片制作“略显”粗糙。包括但不限于演员演技拉胯,剧情尴尬,情节设计不合理。期待值最高的王千源完全像没睡醒,全片无论扮演什么身份或处于什么场景,表情就没变过。身体力行的告诉了我们,演员最重要不是演什么像什么,而是演什么就换什么衣服。除了人物身份的切换是靠换衣服外和张光北演对手戏完全就像是在看着提词器念台词,俩下对比尴尬到都能扣出个三室一厅。女配“陈茜”和“刘美美”一个没看到“孤身入虎穴”的作用在哪里,一个则是完全没必要出现。随着故事的递进加情节设计的不合理导致整片完全垮掉。或者你把俩女配的情节递进处理好了也行,但是很显然,导演完全不具备这种能力。不仅宏观叙事的能力差,主题把控的能力也欠缺。看似这个电影是宣传反诈,实则是披着反诈的外衣,上演了正派和反派间弱智般的“强强”对决。就以反诈题材来说做的都不如b站up主录制的几分缅北诈骗集团的小视频更有警示意义。我们搞反诈宣传的目的不就是为了避免群众上当受骗,同时告诉大家警惕国外高薪工作,不去从事诈骗活动吗?当然我要吐槽的包括但不限于以上这些,麻烦各位导演在拍偏主旋律电影的时候不要用类似于本片抓捕大boos时说的:“现在中国太强大了,怎么怎么地类似的台词了,把这份荣誉留给吴京吧!最后,本片唯一的亮点就是王迅和前三分之一节奏感还不错。哎,特价买的票以为我赚了,没想到是我被“杀猪盘了”。')
print('好评' if label == 1 else '差评')

Reposted; original article: 【云驻共创】华为云AI之《情感专家》在线分析影评情感基调_云汉ai顾问 情感板块-CSDN博客
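The train-and-evaluate loop in section 1.3.5.4 calls evaluate(model), but the function is not shown in this excerpt. A minimal sketch consistent with the rest of the code (the function body is an assumption, reusing test_dataloader, binary_acc, and device defined above):

# Sketch of an evaluate() compatible with the loop in 1.3.5.4; not from the original article.
def evaluate(model):
    avg_acc = []
    model.eval()  # switch off dropout etc.
    with torch.no_grad():
        for batch in test_dataloader:
            b_input_ids, b_input_mask, b_labels = (
                batch[0].long().to(device),
                batch[1].long().to(device),
                batch[2].long().to(device),
            )
            # Without labels, the first element of the output is the logits
            output = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
            acc = binary_acc(output[0], b_labels)
            avg_acc.append(acc)
    return np.array(avg_acc).mean()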
-
When creating a notebook in ModelArts, I can pick images from the AI Gallery marketplace, but they all seem to be official ones provided by the platform. My question: can an image created by an individual be uploaded to AI Gallery for others to use? From what I can see this is currently not possible, because I cannot find any option to share a custom image to AI Gallery.