-
How to call the C++ version of the ByteTrack tracking algorithm from Python

This project provides a Python plugin based on ByteTrack-TensorRT and extends the original algorithm with class information for each tracked target. Building on the earlier YOLOv5 plugin, it reaches real-time detection and tracking at up to 83 FPS on a Jetson Orin Nano.

⚡ High performance: optimized with TensorRT to take full advantage of hardware acceleration
📦 Out of the box: a simple build process so you can deploy your tracking application quickly
🐍 Python friendly: a clean Python interface exposed through pybind11
📱 Edge-device optimized: adapted specifically for Jetson edge devices

Build plugin

First install the required packages, then clone the repository and build the project. Note that JetPack 5.x is required for it to run correctly:

```bash
sudo apt update
sudo apt install ffmpeg
sudo apt install pybind11-dev
sudo apt install libeigen3-dev
git clone https://github.com/HouYanSong/bytetrack_pybind11.git
cd bytetrack_pybind11
pip install pybind11
rm -fr build
cmake -S . -B build
cmake --build build
```

```text
[ 12%] Building CXX object CMakeFiles/bytetrack.dir/bytetrack/src/BYTETracker.cpp.o
[ 25%] Building CXX object CMakeFiles/bytetrack.dir/bytetrack/src/STrack.cpp.o
[ 37%] Building CXX object CMakeFiles/bytetrack.dir/bytetrack/src/kalmanFilter.cpp.o
[ 50%] Building CXX object CMakeFiles/bytetrack.dir/bytetrack/src/lapjv.cpp.o
[ 62%] Building CXX object CMakeFiles/bytetrack.dir/bytetrack/src/utils.cpp.o
[ 75%] Linking CXX shared library libbytetrack.so
[ 75%] Built target bytetrack
[ 87%] Building CXX object CMakeFiles/bytetrack_trt.dir/bytetrack_trt.cpp.o
[100%] Linking CXX shared module bytetrack_trt.cpython-38-aarch64-linux-gnu.so
[100%] Built target bytetrack_trt
```

Run demo

We provide a simple Python example. Import the shared library built from the C++ sources and you can call the ByteTrack tracker directly; it returns the target positions, track IDs, and class labels.

```python
import cv2
import time
import ctypes

# Load the dependent shared libraries
ctypes.CDLL("./yolov5_trt_plugin/libyolo_plugin.so", mode=ctypes.RTLD_GLOBAL)
ctypes.CDLL("./yolov5_trt_plugin/libyolo_utils.so", mode=ctypes.RTLD_GLOBAL)
ctypes.CDLL("./build/libbytetrack.so", mode=ctypes.RTLD_GLOBAL)

# Import the YOLOv5 detector and the ByteTrack tracker
from yolov5_trt_plugin import yolov5_trt
from build import bytetrack_trt


def draw_image(image, detections, tracks, fps):
    for track in tracks:
        x, y, w, h = track.tlwh
        track_id = track.track_id
        class_id = track.label
        x1, y1, x2, y2 = int(x), int(y), int(x + w), int(y + h)
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(image, f"C:{class_id} T:{track_id}", (x1, y1 - 10),
                    cv2.FONT_HERSHEY_PLAIN, 1.2, (0, 0, 255), 2)
    cv2.putText(image, f"FPS: {fps:.2f}", (10, 30),
                cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 0, 255), 2)
    return image


def main(input_path, output_path):
    cap = cv2.VideoCapture(input_path)
    fps_value = int(cap.get(cv2.CAP_PROP_FPS))
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*'MJPG'),
                             fps_value, (width, height))

    detector = yolov5_trt.YOLOv5Detector("./yolov5_trt_plugin/yolov5s.engine", width, height)
    tracker = bytetrack_trt.BYTETracker(frame_rate=fps_value, track_buffer=30)

    fps_list = []
    frame_count = 0
    total_time = 0.0

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        start_time = time.time()

        # Object detection
        detections = detector.detect(input_image=frame, input_w=640, input_h=640,
                                     conf_thresh=0.45, nms_thresh=0.55)
        objects = []
        for det in detections:
            x1, y1, x2, y2 = det['bbox']
            rect = bytetrack_trt.RectFloat(x1, y1, x2 - x1, y2 - y1)  # x, y, width, height
            obj = bytetrack_trt.Object()
            obj.rect = rect
            obj.label = det['class_id']
            obj.prob = det['confidence']
            objects.append(obj)

        # Object tracking
        tracks = tracker.update(objects)

        process_time = time.time() - start_time
        current_fps = 1.0 / process_time if process_time > 0 else 0
        frame_count += 1
        total_time += process_time
        fps_list.append(current_fps)

        # Draw and write the annotated frame
        image = draw_image(frame, detections, tracks, current_fps)
        writer.write(image)

    cap.release()
    writer.release()

    if frame_count > 0:
        avg_fps = frame_count / total_time if total_time > 0 else 0
        print(f"Processed {frame_count} frames")
        print(f"Average FPS: {avg_fps:.2f}")
        print(f"Min FPS: {min(fps_list):.2f}")
        print(f"Max FPS: {max(fps_list):.2f}")


if __name__ == "__main__":
    input_video = "./media/sample_720p.mp4"
    output_video = "./result.avi"
    main(input_video, output_video)
```

Then just run the yolov5_bytetrack.py script from a terminal:

```bash
python yolov5_bytetrack.py
```

```text
[11/07/2025-17:13:10] [I] [TRT] Loaded engine size: 8 MiB
Deserialize yoloLayer plugin: YoloLayer
[11/07/2025-17:13:12] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +536, GPU +702, now: CPU 841, GPU 3927 (MiB)
[11/07/2025-17:13:12] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +83, GPU +94, now: CPU 924, GPU 4021 (MiB)
[11/07/2025-17:13:12] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +7, now: CPU 0, GPU 7 (MiB)
[11/07/2025-17:13:12] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 924, GPU 4021 (MiB)
[11/07/2025-17:13:12] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +1, now: CPU 924, GPU 4022 (MiB)
[11/07/2025-17:13:12] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +11, now: CPU 0, GPU 18 (MiB)
Init ByteTrack!
Processed 1442 frames
Average FPS: 83.78
Min FPS: 68.31
Max FPS: 113.35
```

Conclusion Remarks

This article implements a Python plugin for the ByteTrack-TensorRT tracking algorithm and adds class information for tracked targets on top of the original algorithm. On a Jetson Orin Nano (8 GB), real-time YOLOv5 object detection and tracking runs at over 80 FPS, which is fast enough to follow fast-moving targets.
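For readers who want to adapt or rebuild the plugin, the binding layer is just a thin pybind11 wrapper around the C++ tracker. The snippet below is not the project's actual bytetrack_trt.cpp; it is a minimal sketch of what such a binding can look like, assuming the usual ByteTrack C++ types (an Object struct with rect, label and prob fields, STrack with public tlwh and track_id members, and std::vector<STrack> BYTETracker::update(const std::vector<Object>&)) plus the label field this fork adds to each track.

```cpp
// Minimal pybind11 binding sketch (illustrative only, not the project's actual bytetrack_trt.cpp).
// Assumes the upstream ByteTrack C++ headers: Object {cv::Rect_<float> rect; int label; float prob;},
// STrack with public tlwh / track_id members (plus the label member added by this fork), and
// std::vector<STrack> BYTETracker::update(const std::vector<Object>&).
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>   // automatic std::vector <-> Python list conversion
#include "BYTETracker.h"    // pulls in Object, STrack, BYTETracker

namespace py = pybind11;

PYBIND11_MODULE(bytetrack_trt, m) {
    // cv::Rect_<float> exposed to Python as RectFloat(x, y, width, height)
    py::class_<cv::Rect_<float>>(m, "RectFloat")
        .def(py::init<float, float, float, float>())
        .def_readwrite("x", &cv::Rect_<float>::x)
        .def_readwrite("y", &cv::Rect_<float>::y)
        .def_readwrite("width", &cv::Rect_<float>::width)
        .def_readwrite("height", &cv::Rect_<float>::height);

    // Detection fed into the tracker
    py::class_<Object>(m, "Object")
        .def(py::init<>())
        .def_readwrite("rect", &Object::rect)
        .def_readwrite("label", &Object::label)
        .def_readwrite("prob", &Object::prob);

    // Track returned by update(): position, track ID and class label
    py::class_<STrack>(m, "STrack")
        .def_readonly("tlwh", &STrack::tlwh)
        .def_readonly("track_id", &STrack::track_id)
        .def_readonly("label", &STrack::label);

    py::class_<BYTETracker>(m, "BYTETracker")
        .def(py::init<int, int>(), py::arg("frame_rate") = 30, py::arg("track_buffer") = 30)
        .def("update", &BYTETracker::update, py::arg("objects"));
}
```

Nothing extra is needed on the Python side: the demo's `from build import bytetrack_trt` works because CMake places the compiled module (bytetrack_trt.cpython-38-aarch64-linux-gnu.so) in the build directory, which Python can import as a namespace package when the script is run from the repository root.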
-
1/1.8" 400 万像素CMOS 传感器支持24 倍电动变焦、自动聚焦支持AIISP 图像增强,增强低噪效果支持标准ONVIF、GB28181 协议内嵌智能深度学习算力2.0Tops开放AI算法部署二次开发
Todd_Wong2010
Posted on 2025-08-07 11:24:06
Last reply
Todd_Wong2010
2025-11-04 10:00:14
-
```cpp
#include <cstdio>
#include <cassert>
#include <cstdlib>
#include <string>
#include <vector>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <iostream>
```
An error is reported when building this in the cloud environment. Is multithreading not supported there, or is one of these libraries unavailable?
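A quick way to narrow this down is to build and run, in the same cloud environment, a minimal program that uses only the standard headers listed above and nothing else. The sketch below is a generic producer/consumer smoke test, not tied to any SDK (remember to link with -pthread on Linux): if it compiles and runs, std::thread, std::mutex and std::condition_variable are available and the error most likely comes from some other dependency.

```cpp
// thread_smoke_test.cpp -- checks that std::thread / std::mutex / std::condition_variable
// work in the target build environment. Build: g++ -std=c++11 thread_smoke_test.cpp -pthread
#include <cstdio>
#include <cassert>
#include <cstdlib>
#include <string>
#include <vector>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <iostream>

int main() {
    std::mutex m;
    std::condition_variable cv;
    std::vector<std::string> queue;
    bool done = false;

    // Producer: pushes a few messages, then signals completion.
    std::thread producer([&] {
        for (int i = 0; i < 3; ++i) {
            {
                std::lock_guard<std::mutex> lock(m);
                queue.push_back("message " + std::to_string(i));
            }
            cv.notify_one();
        }
        {
            std::lock_guard<std::mutex> lock(m);
            done = true;
        }
        cv.notify_one();
    });

    // Consumer: waits for messages and prints them until the producer is done.
    std::thread consumer([&] {
        std::unique_lock<std::mutex> lock(m);
        for (;;) {
            cv.wait(lock, [&] { return done || !queue.empty(); });
            while (!queue.empty()) {
                std::cout << queue.front() << std::endl;
                queue.erase(queue.begin());
            }
            if (done) break;
        }
    });

    producer.join();
    consumer.join();
    assert(queue.empty());
    std::printf("threading OK\n");
    return EXIT_SUCCESS;
}
```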
-
English documentation is now provided for overseas projects. Newly added guides: SDC RESTful Interconnection One-Stop Development Guide, One-Stop Algorithm Development Guide (SDC), IVS1800 RESTful Interconnection One-Stop Development Guide, Obtaining URL Video Streams for Live Video and Recording Playback, and the IVS1800 third-party algorithm development guide.

| Product | Category | English document name | Download |
| --- | --- | --- | --- |
| SDC | SDK | HoloSens SDC SDK Development Guide | LINK |
| SDC | Third-party platform interconnection | HoloSens SDC TLV Data for Third-Party Platform Connection | LINK |
| SDC | RESTful | SDC RESTful Interconnection One-Stop Development Guide | LINK |
| SDC | Southbound algorithms | HOLOWITS Camera App Development Guide | LINK |
| SDC | Southbound algorithms | One-Stop Algorithm Development Guide | LINK |
| Intelligent Micro Edge | RESTful | IVS1800 RESTful Interconnection One-Stop Development Guide | LINK |
| Intelligent Micro Edge | RESTful | Obtaining URL Video Streams for Live Video and Recording Playback | LINK |
| Intelligent Micro Edge | RESTful | HWT-IVS1800 11.1.0 Interface Reference (RESTful) | LINK |
| Intelligent Micro Edge | Southbound algorithms | HWT-IVS1800E 11.1 Algorithm Development Guide | LINK |
-
Httpserver test tool
-
1. IVS1800 & ITS800 RESTful Interconnection One-Stop Development Guide
2. FAQ
-
| Category | Resource | Description | Link |
| --- | --- | --- | --- |
| Product introduction | Micro Edge product bookshelf | Device datasheets and product promotion videos | Resource Center (huawei.com) |
| Documentation | Product documentation | Product documents, RESTful interface documents, software versions, etc. | Product documentation |
| Documentation | RESTful Interconnection One-Stop Development Guide | Northbound interconnection guide for Micro Edge devices, covering business flows, scenario-based development guidance, and FAQs | Development guide |
| Documentation | RESTful demo | Demo of the northbound interconnection flows for Micro Edge devices | Demo |
| Documentation | IVS1800-E Algorithm One-Stop Development Guide | Introduces the third-party algorithm development process for Micro Edge devices | Algorithm development guide |
| Common tools | Postman test collection | Interface test collections for the IVS1800 RESTful and NVR800 APIs | API collection |
| Common tools | Httpserver test tool | Used to test alarm and metadata push | Callback tool |
| Common tools | iClient S100 client | Client for device management, video, and alarm operations on cameras connected to IVS1800 and NVR800 | iClient S100, Huawei HoloSens Mall (huaweicloud.com) |
| Common tools | iClient ME client | Client for managing cameras and radars connected to IVS1800 and ITS800 and their video services; supports alarm management, holographic intersection calibration, radar-video trajectory fusion, and intelligent person/vehicle analysis and retrieval | iClient ME, Huawei HoloSens Mall (huaweicloud.com) |
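For the "Httpserver test tool" entry above (used to verify alarm and metadata push), the idea is simply to run an HTTP listener at the callback URL configured on the device and dump whatever arrives. The sketch below is not Huawei's tool, only a generic stand-in under the assumption that notifications are delivered as HTTP POST requests; the /callback path and port 9000 are placeholders.

```cpp
// callback_dump.cpp -- minimal HTTP listener that prints whatever the device pushes.
// Assumption: notifications arrive as HTTP POST requests to a configurable callback URL.
// Uses the single-header cpp-httplib library (https://github.com/yhirose/cpp-httplib).
// Build: g++ -std=c++11 callback_dump.cpp -o callback_dump -pthread
#include "httplib.h"
#include <iostream>

int main() {
    httplib::Server server;

    // Print the raw body of each pushed notification so the payload format can be inspected.
    server.Post("/callback", [](const httplib::Request& req, httplib::Response& res) {
        std::cout << "Received " << req.body.size() << " bytes:\n" << req.body << std::endl;
        res.set_content("OK", "text/plain");
    });

    // Point the device's push/callback URL at http://<this-host>:9000/callback
    std::cout << "Listening on 0.0.0.0:9000 ..." << std::endl;
    server.listen("0.0.0.0", 9000);
    return 0;
}
```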
Recommended live streams
-
HDC In-Depth Series: Serverless and MCP converged innovation, building a new intelligent hub for AI applications. 2025/08/20, Wednesday, 16:30-18:00
Zhang Kunpeng, representative of the HCDG Beijing core group
During HDC 2025, Huawei Cloud showcased its solution for converging Serverless and MCP. In this interview live stream, hosted by Huawei Cloud Developer Expert (HCDE) and HCDG Beijing core group representative Zhang Kunpeng, Ewen, Serverless Director of Huawei Cloud's PaaS Service Product Department, explains in depth how Huawei Cloud Serverless and MCP can be combined to build a new intelligent hub for AI applications.
Replay available -
Thoughts on the development of the RISC-V ecosystem. 2025/09/02, Tuesday, 17:00-18:00
Professor Bao Yungang, Deputy Director of the Institute of Computing Technology, Chinese Academy of Sciences
In this live stream, Professor Bao Yungang will discuss the key elements of a processor ecosystem and how they relate to each other, and share the lessons learned from several years of driving the RISC-V ecosystem forward.
Replay available -
Manage tens of thousands of Huawei Cloud resources with one click and control enterprise costs in 3 steps. 2025/09/09, Tuesday, 15:00-16:00
A Yan, Huawei Cloud transaction product manager
This live stream focuses on renewing tens of thousands of resources with one click and managing costs in three easy steps, helping to improve day-to-day management efficiency.
Replay available