Notebook

MobileNetV2 Face Keypoint Detection: Model Training and Conversion
TensorFlow → ONNX → TFLite → RKNN

HouYanSong · 19 months ago · 7 MB

  • License type: Unknown
  • Tags: image classification, keypoint detection, TensorFlow-2.x
  • Asset ID: 4dd988cb-faf9-4636-a6c4-3400b7149c7e

Description

MobileNetV2 Face Keypoint Detection: Model Training and Conversion

This case uses MobileNetV2 as the backbone and trains it on the Face Keypoint Detection Dataset V2 (人脸关键点检测数据集V2).

Note: this case must be run on a GPU. See the "ModelArts JupyterLab Hardware Specification Guide" for how to switch hardware flavors.
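
Before starting, you can confirm that TensorFlow actually sees a GPU; a minimal check (this is also the call that the deprecation warning later in this notebook recommends):

import tensorflow as tf

# Should print a non-empty list on a GPU flavor
print(tf.config.list_physical_devices('GPU'))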

Pull the data and model weight files

import os
import moxing as mox

if not os.path.exists("datasets.zip"):
    mox.file.copy("obs://houyansong/datasets.zip", "datasets.zip")

if not os.path.exists("datasets"):
    os.system("unzip -q datasets.zip")
INFO:root:Using MoXing-v2.1.0.5d9c87c8-5d9c87c8
INFO:root:Using OBS-Python-SDK-3.20.9.1
if not os.path.exists("mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_224_no_top.h5"):
    mox.file.copy("obs://houyansong/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_224_no_top.h5", "mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_224_no_top.h5")
if not os.path.exists("/home/ma-user/.keras/models"):
    os.makedirs("/home/ma-user/.keras/models") 
!cp ~/work/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_224_no_top.h5 ~/.keras/models/
!pip install h5py==2.10.0
!pip install protobuf==3.20.2
Looking in indexes: http://repo.myhuaweicloud.com/repository/pypi/simple
Requirement already satisfied: protobuf==3.20.2 in /home/ma-user/anaconda3/envs/TensorFlow-2.1/lib/python3.7/site-packages (3.20.2)
WARNING: You are using pip version 21.0.1; however, version 23.0.1 is available.
You should consider upgrading via the '/home/ma-user/anaconda3/envs/TensorFlow-2.1/bin/python3.7 -m pip install --upgrade pip' command.

Import packages

import cv2
import glob
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Create the input pipeline

imgs = glob.glob('datasets/img/*.jpg')
txts = glob.glob('datasets/txt/*.txt')
len(imgs), len(txts)
(24594, 24594)
imgs.sort(key=lambda x: x.split('/')[-1])
txts.sort(key=lambda x: x.split('/')[-1])
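
Both lists are sorted by file name so that imgs[i] and txts[i] refer to the same sample. A quick sanity check of that pairing (a sketch, assuming each annotation file shares its base name with its image):

import os

# Every annotation's base name should match its image's base name
for img_path, txt_path in zip(imgs, txts):
    assert os.path.splitext(os.path.basename(img_path))[0] == \
           os.path.splitext(os.path.basename(txt_path))[0], (img_path, txt_path)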

Inspect the dataset

plt.figure(figsize=(18, 10))
for i in range(15):
    idx = random.randint(0, len(imgs) - 1)
    img = cv2.imread(imgs[idx])
    label = imgs[idx].split('_')[-2]  # the class name is embedded in the file name
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    with open(txts[idx], 'r') as f:
        points_list = [t.strip() for t in f.readlines()]
        keypoint_features = np.array(points_list).reshape(-1, 2)
        for x, y in keypoint_features:
            cv2.circle(img, (int(x), int(y)), 5, (255, 0, 255), -1)
    
    plt.subplot(3,5,i+1)
    plt.title(label)
    plt.axis('off')
    plt.imshow(img)
plt.show()

TensorFlow

from tqdm import *
import tensorflow as tf
from matplotlib.patches import Rectangle
2023-07-21 04:18:32.811589: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2023-07-21 04:18:32.813745: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
/home/ma-user/anaconda3/envs/TensorFlow-2.1/lib/python3.7/site-packages/requests/__init__.py:104: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (5.0.0)/charset_normalizer (2.0.12) doesn't match a supported version!
  RequestsDependencyWarning)
tf.__version__
'2.1.0'
tf.test.is_gpu_available()
WARNING:tensorflow:From /tmp/ipykernel_10744/337460670.py:1: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-07-21 04:18:33.735621: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
True

Data preprocessing

scal = 224  # model input resolution (224x224)

def to_labels(img, txt):
    """Return (class name, keypoint coordinates normalized to [0, 1]) for one sample."""
    label = []
    clss = img.split('_')[-2]
    img = cv2.imread(img)
    h, w = img.shape[0], img.shape[1]
    with open(txt, 'r') as f:
        points_list = [t.strip() for t in f.readlines()]
        keypoint_features = np.array(points_list).reshape(-1,2)
        for x, y in keypoint_features:
            label.append(int(x)/w)
            label.append(int(y)/h)
    return clss, label
classes = []
labels = []
clss_to_index = {'others': 0, 'normal': 1}
index_to_clss = {0: 'others', 1: 'normal'}
for i in tqdm(range(len(imgs)), desc='Processing'):
    clss, label = to_labels(imgs[i], txts[i])
    index = clss_to_index.get(clss)
    classes.append(index)
    labels.append(label)
Processing: 100%|██████████| 24594/24594 [02:23<00:00, 171.56it/s]
# Shuffle once up front so the skip/take split below yields a random train/test split
index = np.random.permutation(len(imgs))
images = np.array(imgs)[index]
classes = np.array(classes)[index]
labels = np.array(labels)[index]

Create the dataset

image_dataset = tf.data.Dataset.from_tensor_slices(images)
label_dataset = tf.data.Dataset.from_tensor_slices((classes, labels))
def read_jpg(path):
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)
    return img
def normalize(input_image):
    input_image = tf.image.resize(input_image, [scal, scal])
    input_image = tf.cast(input_image, tf.float32)/255.0
    return input_image
@tf.function
def load_image(input_image_path):
    input_image = read_jpg(input_image_path)
    input_image = normalize(input_image)
    return input_image
image_dataset = image_dataset.map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = tf.data.Dataset.zip((image_dataset, label_dataset))
dataset
<ZipDataset shapes: ((224, 224, 3), ((), (36,))), types: (tf.float32, (tf.int64, tf.float64))>
test_count = int(len(images)*0.2)
train_count = len(images) - test_count
train_count, test_count
(19676, 4918)
dataset_train = dataset.skip(test_count)
dataset_test = dataset.take(test_count)
BATCH_SIZE = 32
BUFFER_SIZE = 400
STEPS_PER_EPOCH = train_count // BATCH_SIZE
VALIDATION_STEPS = test_count // BATCH_SIZE
train_dataset = dataset_train.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
train_dataset = train_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
test_dataset = dataset_test.batch(BATCH_SIZE)
train_dataset, test_dataset
(<PrefetchDataset shapes: ((None, 224, 224, 3), ((None,), (None, 36))), types: (tf.float32, (tf.int64, tf.float64))>,
 <BatchDataset shapes: ((None, 224, 224, 3), ((None,), (None, 36))), types: (tf.float32, (tf.int64, tf.float64))>)

Inspect the training or test set

for img, label in train_dataset.take(10):
    plt.imshow(tf.keras.preprocessing.image.array_to_img(img[0]))
    clss, points_list = label
    index = clss[0].numpy()
    points = points_list[0].numpy()*scal
    x_l = []
    y_l = []
    for x, y in points.reshape(-1, 2):
        x_l.append(x)
        y_l.append(y)
    plt.xlabel(index_to_clss.get(index))
    plt.scatter(x_l, y_l, s=10, c='yellow')
    plt.show()

Build the multi-output model

conv_base = tf.keras.applications.MobileNetV2(input_shape=(224,224,3), include_top=False, weights='imagenet')
inputs = tf.keras.Input(shape=(224, 224, 3))
x = conv_base(inputs)
x.get_shape()
TensorShape([None, 7, 7, 1280])
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x.get_shape()
TensorShape([None, 1280])
x1 = tf.keras.layers.Dense(1024, activation='relu')(x)
# Classification head: a single sigmoid unit for the binary class (normal vs. others)
out_class = tf.keras.layers.Dense(1, activation='sigmoid', name='out_class')(x1)
x2 = tf.keras.layers.Dense(1024, activation='relu')(x)
# Regression head: 18 keypoints as 36 normalized (x, y) coordinates
out_label = tf.keras.layers.Dense(36, name='out_label')(x2)
model = tf.keras.Model(inputs=inputs,
                       outputs=[out_class, out_label])
model.summary()
model.compile(optimizer='adam',
              loss={'out_class': 'binary_crossentropy',
                    'out_label': 'mean_squared_error'},
              metrics={'out_class': 'acc', 'out_label': 'mae'})
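
With a dict of losses and no explicit weights, Keras sums the two losses with equal weight. If one head dominates training, they can be rebalanced via loss_weights; a sketch, where the 1.0/1.0 values are illustrative assumptions rather than tuned settings:

# Optional: rebalance the two losses (weights here are illustrative, not tuned)
model.compile(optimizer='adam',
              loss={'out_class': 'binary_crossentropy',
                    'out_label': 'mean_squared_error'},
              loss_weights={'out_class': 1.0, 'out_label': 1.0},
              metrics={'out_class': 'acc', 'out_label': 'mae'})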
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

if not os.path.exists("data"):
    os.mkdir('data')

# Periodically save the best model so far (by validation keypoint MAE)
checkpointer = ModelCheckpoint(filepath='./data/best_model_resnet_224.hdf5', monitor='val_out_label_mae', verbose=1, save_best_only=True, mode='min')
# Stop early once the keypoint loss stops improving
earlyStopping = EarlyStopping(monitor='out_label_loss', patience=30, mode='min', baseline=None)
# Reduce the learning rate when the validation keypoint loss plateaus
rlp = ReduceLROnPlateau(monitor='val_out_label_loss', factor=0.7, patience=5, min_lr=1e-15, mode='min', verbose=1)

Model training

EPOCHS = 100
history = model.fit(train_dataset, 
                    epochs=EPOCHS,
                    steps_per_epoch=STEPS_PER_EPOCH,
                    validation_steps=VALIDATION_STEPS,
                    validation_data=test_dataset,
                    callbacks=[checkpointer,earlyStopping, rlp])
Train for 614 steps, validate for 153 steps
Epoch 1/100
2023-07-21 04:21:14.070790: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2023-07-21 04:21:18.094338: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7

Print metrics

model.evaluate(test_dataset)
154/154 [==============================] - 5s 35ms/step - loss: 3.6775e-05 - out_class_loss: 6.8879e-09 - out_label_loss: 3.6768e-05 - out_class_acc: 1.0000 - out_label_mae: 0.0034
[3.677519639082179e-05, 6.887883e-09, 3.6768313e-05, 1.0, 0.00343829]

Plot the results

linestyle_tuple = [
    ('loosely dotted',        (0, (1, 10))),
    ('dotted',                (0, (1, 1))),
    ('densely dotted',        (0, (1, 2))),
    ('loosely dashed',        (0, (5, 10))),
    ('dashed',                (0, (5, 5))),
    ('densely dashed',        (0, (5, 1))),
    ('loosely dashdotted',    (0, (3, 10, 1, 10))),
    ('dashdotted',            (0, (3, 5, 1, 5))),
    ('densely dashdotted',    (0, (3, 1, 1, 1))),
    ('dashdotdotted',         (0, (3, 5, 1, 5, 1, 5))),
    ('loosely dashdotdotted', (0, (3, 10, 1, 10, 1, 10))),
    ('densely dashdotdotted', (0, (3, 1, 1, 1, 1, 1)))]
plt.plot(history.epoch, history.history.get('out_class_loss'), label='out_class_loss', linestyle=linestyle_tuple[3][1])
plt.plot(history.epoch, history.history.get('val_out_class_loss'), label='val_out_class_loss', linestyle=linestyle_tuple[1][1])
plt.xlabel('Training rounds/epoch')
plt.ylabel('Cross entropy loss')
plt.title('Image classification')
plt.legend()
<matplotlib.legend.Legend at 0x7fe6bacc28d0>
plt.plot(history.epoch, history.history.get('out_label_mae'), label='out_label_mae', linestyle=linestyle_tuple[3][1])
plt.plot(history.epoch, history.history.get('val_out_label_mae'), label='val_out_label_mae', linestyle=linestyle_tuple[1][1])
plt.xlabel('Training rounds/epoch')
plt.ylabel('Mean Absolute Error')
plt.title('Image localization')
plt.legend()
<matplotlib.legend.Legend at 0x7fe6b8562a50>
plt.plot(history.epoch, history.history.get('out_class_acc'), label='out_class_acc', linestyle=linestyle_tuple[3][1])
plt.plot(history.epoch, history.history.get('val_out_class_acc'), label='val_out_class_acc', linestyle=linestyle_tuple[1][1])
plt.xlabel('Training rounds/epoch')
plt.ylabel('Accuracy')
plt.title('Model precision')
plt.legend()
<matplotlib.legend.Legend at 0x7fe6c26e2810>

Save the model

model.save("./" + model.name)  # functional models are named "model" by default, so this writes a SavedModel directory ./model
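
As a quick round-trip check, the SavedModel can be reloaded and run on a dummy input (a sketch; the zero tensor is only a shape placeholder):

import tensorflow as tf

# Reload the SavedModel and run a dummy forward pass through both heads
reloaded = tf.keras.models.load_model('./' + model.name)
outputs = reloaded(tf.zeros((1, 224, 224, 3)))
print([t.shape for t in outputs])  # expect (1, 1) and (1, 36)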

Export to ONNX

!pip install keras2onnx
import keras2onnx
output_model_path = 'mbv2_224x224.onnx'
onnx_model = keras2onnx.convert_keras(model, model.name, target_opset=12)
keras2onnx.save_model(onnx_model, output_model_path) 
mox.file.copy('mbv2_224x224.onnx', 'obs://houyansong/mbv2_224x224.onnx')
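
As a sanity check on the exported graph, the onnx package (pulled in with keras2onnx) can validate the model file; a minimal sketch:

import onnx

# Structural validity check of the exported graph
m = onnx.load('mbv2_224x224.onnx')
onnx.checker.check_model(m)
print([i.name for i in m.graph.input], [o.name for o in m.graph.output])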

ONNX model inference

!pip install onnxruntime
import onnxruntime
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
onnx_model = onnxruntime.InferenceSession('mbv2_224x224.onnx')
def load_and_preprocess_image(path):
    image = cv2.imread(path)                        # BGR, original resolution
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # RGB copy kept for display
    hwc = image.shape
    img = cv2.resize(image, (224, 224))
    img = img[..., ::-1] / 255.                     # flip back to BGR; converted to RGB again before inference
    img = img.astype(np.float32)
    return image, img, hwc
index_to_clss = {0: 'look_ground', 1: 'normal'}
plt.figure(figsize=(18,10))
for i in range(15):
    idx = random.randint(0, len(imgs) - 1)
    image, img, hwc = load_and_preprocess_image(imgs[idx])
    img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
    h, w = hwc[0], hwc[1]
    data = np.expand_dims(img, axis=0)
    onnx_input ={onnx_model.get_inputs()[0].name: data}
    score, label = onnx_model.run(None, onnx_input)
    points = label[0]
    for x, y in points.reshape(-1, 2):
        x = x * w
        y = y * h
        cv2.circle(image,(int(x),int(y)),5,(0,255,255),-1)
    plt.subplot(3, 5, i + 1)
    pred = int(score[0][0] > 0.5)  # threshold the sigmoid score instead of truncating it
    plt.title('score:' + str(score[0][0] * 100) + '% clss:' + index_to_clss.get(pred))
    plt.axis('off')
    plt.imshow(image)
plt.show()

Model predictions in white, original image labels in green

plt.figure(figsize=(18,10))
for i in range(15):
    c = random.randint(0, len(imgs)-1)
    image, img, hwc = load_and_preprocess_image(imgs[c])        
    img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
    h, w = hwc[0], hwc[1]
    data = np.expand_dims(img, axis=0)
    onnx_input ={onnx_model.get_inputs()[0].name: data}
    
    score, label = onnx_model.run(None, onnx_input)
    points = label[0]
    for x, y in points.reshape(-1, 2):
        x = x * w
        y = y * h
        cv2.circle(image,(int(x),int(y)),5,(0,255,255),-1)
        
    with open(txts[c],'r') as f:
        points_list = [ t.strip() for t in f.readlines() ] 
        keypoint_features = np.array(points_list).reshape(-1,2)
        for x,y in keypoint_features:
            cv2.circle(image,(int(x),int(y)),5,(0,255,0),-1)
        
    plt.subplot(3, 5, i + 1)
    pred = int(score[0][0] > 0.5)  # same thresholding fix as above
    plt.title('score:' + str(score[0][0] * 100) + '% clss:' + index_to_clss.get(pred))
    plt.axis('off')
    plt.imshow(image)
plt.show()
mox.file.copy_parallel('model', 'obs://houyansong/model')

Export to TFLite

import tensorflow as tf

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model('model') # path to the SavedModel directory
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
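
Before converting further, it is worth verifying that the .tflite file loads and reports the expected tensor shapes; a minimal sketch using TensorFlow's built-in interpreter:

# Load the converted model and inspect its input/output tensors
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
print([d['shape'] for d in interpreter.get_input_details()])   # expect [1, 224, 224, 3]
print([d['shape'] for d in interpreter.get_output_details()])  # the two heads: [1, 1] and [1, 36] (order may vary)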

模型转换 TensorFlow->RKNN

Inference with the ModelBox Windows SDK on Windows devices requires an ONNX model, while inference on the ModelBox development board requires an RKNN model, so once we have the trained model we need to convert it.

names = ["rknn_toolkit2-1.4.0_22dcfef4-cp36-cp36m-linux_x86_64.whl", "tqdm-4.64.0-py2.py3-none-any.whl"]
for name in names:
    if not os.path.exists(name):
        mox.file.copy(f'obs://modelarts-labs-bj4-v2/course/ModelBox/datasets/{name}', name)
# rknn_toolkit2 1.4.0 ships only cp36 wheels, so create a dedicated Python 3.6 environment
!/home/ma-user/anaconda3/bin/conda create -n py36 python=3.6.10 -y
!/home/ma-user/anaconda3/envs/py36/bin/pip install ipykernel
import json
import os

data = {
   "display_name": "Python36",
   "env": {
      "PATH": "/home/ma-user/anaconda3/envs/py36/bin:/home/ma-user/anaconda3/envs/python-3.7.10/bin:/modelarts/authoring/notebook-conda/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/ma-user/modelarts/ma-cli/bin:/home/ma-user/modelarts/ma-cli/bin:/home/ma-user/anaconda3/envs/PyTorch-1.4/bin"
   },
   "language": "python",
   "argv": [
      "/home/ma-user/anaconda3/envs/py36/bin/python",
      "-m",
      "ipykernel",
      "-f",
      "{connection_file}"
   ]
}

if not os.path.exists("/home/ma-user/anaconda3/share/jupyter/kernels/py36/"):
    os.mkdir("/home/ma-user/anaconda3/share/jupyter/kernels/py36/")

with open('/home/ma-user/anaconda3/share/jupyter/kernels/py36/kernel.json', 'w') as f:
    json.dump(data, f, indent=4)
After the kernel is registered, refresh JupyterLab and switch this notebook's kernel to Python36, then confirm the interpreter:

!conda env list
!python -V
Python 3.6.10 :: Anaconda, Inc.
!pip -V
pip 21.2.2 from /home/ma-user/anaconda3/envs/py36/lib/python3.6/site-packages/pip (python 3.6)
!pip install tqdm-4.64.0-py2.py3-none-any.whl
!pip install numpy
!pip install rknn_toolkit2-1.4.0_22dcfef4-cp36-cp36m-linux_x86_64.whl
!pip show rknn-toolkit2
import glob
import numpy as np
images = glob.glob('datasets/img/*.jpg')
np.random.seed(2023)
index = np.random.permutation(len(images))
images[:5]
['datasets/img/night_woman_005_41_6_normal_178.jpg',
 'datasets/img/day_man_002_00_2_normal_172.jpg',
 'datasets/img/night_man_002_40_6_normal_210.jpg',
 'datasets/img/night_woman_005_41_3_normal_132.jpg',
 'datasets/img/petal_20230718_042714_others_68.jpg']
# Apply the permutation so the 100 quantization-calibration images are a random sample
images = list(np.array(images)[index])

# Write 100 image paths (one per line) as the RKNN quantization dataset
f = open('dataset.txt', 'w')
for i in range(100):
    out_path = images[i]  # image path
    print(out_path)
    f.write(out_path + '\n')
f.close()
%%python

from rknn.api import RKNN

rknn = RKNN(verbose=False)

print('--> Config model')
rknn.config(mean_values=[[0., 0., 0.]], std_values=[[255., 255., 255.]], target_platform="rk3568")
print('done')

print('--> Loading model')
ret = rknn.load_tflite(model = './model.tflite')
if ret != 0:
    print('Load failed!')
    exit(ret)
print('done')

print('--> Building model')
ret = rknn.build(do_quantization=True, dataset="./dataset.txt")
if ret != 0:
    print('Build failed!')
    exit(ret)
print('done')

print('--> Accuracy analysis')
ret = rknn.accuracy_analysis(inputs=['datasets/img/day_man_002_30_1_normal_278.jpg'])
if ret!=0:
    print('Accuracy analysis failed!')
    exit(ret)
print('done')

print('--> Export RKNN model')
ret = rknn.export_rknn("mbv2_224x224.rknn")
if ret != 0:
    print('Export failed!')
    exit(ret)
print('done')

rknn.release()
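
If you want to spot-check the quantized model's outputs before deploying, rknn-toolkit2 can run inference on the PC simulator in the same session as the build. A sketch that would slot in between rknn.build and rknn.export_rknn above (preprocessing mirrors the training pipeline; mean/std from rknn.config are applied inside the model, so raw RGB pixels are passed in):

# Sketch: spot-check the quantized model in the PC simulator
# (place this before rknn.release(), in the same session as rknn.build)
import cv2

ret = rknn.init_runtime()  # no target -> PC simulator
if ret != 0:
    print('Init runtime failed!')
    exit(ret)

img = cv2.imread('datasets/img/day_man_002_30_1_normal_278.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224))

outputs = rknn.inference(inputs=[img])
print([o.shape for o in outputs])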

The RKNN model has been converted and verified, and you can now proceed to application development.

To download a converted model, just right-click it and choose download.

Delivery

Huawei Cloud ModelArts (region: CN North-Beijing4)

Restrictions

Public

Versions

Version  Version ID  Published         Status     Notes
2.0.0    2.0.0       2023-07-21 00:33  Completed  --

Related assets

  • 人脸关键点检测数据集V2 (Face Keypoint Detection Dataset V2)

    Dataset used to train the MobileNetV2 face keypoint detection model. Published 19 months ago.
