This article introduces Paddle.Hub, a new API in Paddle 2.1.0 for quickly loading models from external repositories. Using a PaddleClas pretrained model to classify 12 cat breeds as the example, it walks through syncing the PaddleClas code, listing and loading models, preprocessing the data, training, and prediction, and also notes a few limitations of this release.

Official documentation: direct link
API overview:
| API | Function |
|---|---|
| list | List the models supported by a repo |
| help | Show the documentation for a given model |
| load | Load a given model |
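The walkthrough below clones PaddleClas and uses `source='local'` throughout. As a side note (a minimal sketch, not from the original post), `paddle.hub` can also point directly at a remote repo when `source` is `'github'` or `'gitee'`; the exact `owner/repo:branch` string and whether the remote branch exposes a `hubconf.py` are assumptions worth checking against the official docs:

```python
import paddle

# Sketch only: load straight from a remote repo instead of a local clone.
# The 'PaddlePaddle/PaddleClas:develop' spec and the gitee source are assumptions;
# verify them against the paddle.hub documentation for your Paddle version.
model = paddle.hub.load(
    'PaddlePaddle/PaddleClas:develop',  # remote repo spec (assumed)
    'mobilenetv3_large_x1_25',
    source='gitee',
    force_reload=False,
)
```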
```python
# Sync the PaddleClas code (shallow clone of the develop branch)
!git clone https://gitee.com/PaddlePaddle/PaddleClas -b develop --depth 1
```

```python
import paddle

# List the models available in the repo
model_list = paddle.hub.list('PaddleClas', source='local', force_reload=False)
print(model_list)

# Show the documentation for a specific model
model_help = paddle.hub.help('PaddleClas', 'mobilenetv3_large_x1_25', source='local', force_reload=False)
print(model_help)

# Load the model
model = paddle.hub.load('PaddleClas', 'mobilenetv3_large_x1_25', source='local', force_reload=False)

# Quick forward-pass test
data = paddle.rand((1, 3, 224, 224))
out = model(data)
print(out.shape)  # [1, 1000]
```

Output:

```
['alexnet', 'densenet121', 'densenet161', 'densenet169', 'densenet201', 'densenet264', 'googlenet', 'inceptionv3', 'inceptionv4', 'mobilenetv1', 'mobilenetv1_x0_25', 'mobilenetv1_x0_5', 'mobilenetv1_x0_75', 'mobilenetv2_x0_25', 'mobilenetv2_x0_5', 'mobilenetv2_x0_75', 'mobilenetv2_x1_5', 'mobilenetv2_x2_0', 'mobilenetv3_large_x0_35', 'mobilenetv3_large_x0_5', 'mobilenetv3_large_x0_75', 'mobilenetv3_large_x1_0', 'mobilenetv3_large_x1_25', 'mobilenetv3_small_x0_35', 'mobilenetv3_small_x0_5', 'mobilenetv3_small_x0_75', 'mobilenetv3_small_x1_0', 'mobilenetv3_small_x1_25', 'resnet101', 'resnet152', 'resnet18', 'resnet34', 'resnet50', 'resnext101_32x4d', 'resnext101_64x4d', 'resnext152_32x4d', 'resnext152_64x4d', 'resnext50_32x4d', 'resnext50_64x4d', 'shufflenetv2_x0_25', 'squeezenet1_0', 'squeezenet1_1', 'vgg11', 'vgg13', 'vgg16', 'vgg19']
```
```
MobileNetV3_large_x1_25
Args:
    pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
    class_dim: int=1000. Output dim of last fc layer.
Returns:
    model: nn.Layer. Specific `MobileNetV3_large_x1_25` model depends on args.

[1, 1000]
```

```python
# Extract the training and test image archives
!unzip -q -d /home/aistudio/data/data10954 /home/aistudio/data/data10954/cat_12_train.zip
!unzip -q -d /home/aistudio/data/data10954 /home/aistudio/data/data10954/cat_12_test.zip
```
For any dataset, the first step is to understand how the data is organized.
To measure model performance properly, a training set and a test set alone are not enough, so part of the training set is usually split off to serve as a validation set (here, the 5% held out shows up later as the 108 "Eval samples" in the training log).
With that in mind, we can start preprocessing the dataset in code.
```python
import os
import paddle
import random

total = []

# Read the data labels
with open('/home/aistudio/data/data10954/train_list.txt', 'r', encoding='UTF-8') as f:
    for line in f:
        # Convert the format: tab-separated -> space-separated
        line = line[:-1].split('\t')
        total.append(' '.join(line) + '\n')

# Shuffle the data
random.shuffle(total)

'''
Split the dataset:
95% of the data is used as the training set
5% of the data is used as the validation set
'''
split_num = int(len(total) * 0.95)

# Write the training data list
with open('/home/aistudio/data/data10954/train.txt', 'w', encoding='UTF-8') as f:
    for line in total[:split_num]:
        f.write(line)

# Write the validation data list
with open('/home/aistudio/data/data10954/dev.txt', 'w', encoding='UTF-8') as f:
    for line in total[split_num:]:
        f.write(line)

# Write the test data list
with open('/home/aistudio/data/data10954/test.txt', 'w', encoding='UTF-8') as f:
    for line in ['cat_12_test/%s\n' % img for img in os.listdir('/home/aistudio/data/data10954/cat_12_test')]:
        f.write(line)
```
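Before moving on, a quick check of the three list files can catch an empty or malformed split early. This is a minimal sketch, not part of the original notebook; it only re-reads the files written above:

```python
# Sketch: count the entries in each generated list file and show a sample line.
data_dir = '/home/aistudio/data/data10954'
for name in ['train.txt', 'dev.txt', 'test.txt']:
    with open('%s/%s' % (data_dir, name), 'r', encoding='UTF-8') as f:
        lines = [l for l in f if l.strip()]
    print(name, len(lines), lines[:1])
```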
The general steps for model training are as follows:
Note: restart the Notebook kernel before starting training.
Note: at the moment, only a CPU environment can run the code below correctly.
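Given the CPU-only note above, it can help to pin the device explicitly before building the model. A minimal sketch using `paddle.set_device`, the standard device switch in Paddle 2.x:

```python
import paddle

# Pin execution to CPU so the run matches the note above; switch to 'gpu:0'
# on an environment where the GPU path works for this model.
paddle.set_device('cpu')
```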
```python
import os
import paddle
import random
import paddle.nn as nn
import paddle.vision.transforms as T

# Build the dataset
class CatDataset(paddle.io.Dataset):
    def __init__(self, transforms, dataset_path='/home/aistudio/data/data10954', mode='train'):
        self.mode = mode
        self.dataset_path = dataset_path
        self.transforms = transforms
        self.num_classes = 12  # 12 cat categories
        if self.mode == 'train':
            self.file = 'train.txt'
        elif self.mode == 'dev':
            self.file = 'dev.txt'
        else:
            self.file = 'test.txt'
        self.file = os.path.join(dataset_path, self.file)
        with open(self.file, 'r') as file:
            self.data = file.read()[:-1].split('\n')

    def __getitem__(self, idx):
        if self.mode in ['train', 'dev']:
            img_path, grt = self.data[idx].split(' ')
            img_path = os.path.join(self.dataset_path, img_path)
            im = paddle.vision.image_load(img_path)
            im = im.convert("RGB")
            im = self.transforms(im)
            return im, int(grt)
        else:
            img_path = self.data[idx]
            img_path = os.path.join(self.dataset_path, img_path)
            im = paddle.vision.image_load(img_path)
            im = im.convert("RGB")
            im = self.transforms(im)
            return im

    def __len__(self):
        return len(self.data)

# Load the datasets
train_transforms = T.Compose([
    T.Resize(256),
    T.RandomCrop(224),
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.ToTensor(),
    T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])
test_transforms = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])
train_dataset = CatDataset(train_transforms, mode='train')
dev_dataset = CatDataset(test_transforms, mode='dev')
test_dataset = CatDataset(test_transforms, mode='test')

# Load the model with 12 output classes and ImageNet pretrained weights
model = paddle.hub.load('PaddleClas', 'mobilenetv3_large_x0_5', source='local',
                        force_reload=False, class_dim=12, pretrained=True)
model = paddle.Model(model)

# Define the optimizer
opt = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())

# Configure the model
model.prepare(optimizer=opt, loss=nn.CrossEntropyLoss(), metrics=paddle.metric.Accuracy(topk=(1, 5)))

# Train the model
model.fit(
    train_data=train_dataset,
    eval_data=dev_dataset,
    batch_size=32,
    epochs=2,
    eval_freq=1,
    log_freq=1,
    save_dir='save_models',
    save_freq=1,
    verbose=1,
    drop_last=False,
    shuffle=True,
    num_workers=0)
```

Output:

```
2021-05-18 12:43:59 INFO: unique_endpoints {''}
2021-05-18 12:43:59 INFO: Downloading MobileNetV3_large_x0_5_pretrained.pdparams from https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_5_pretrained.pdparams
100%|██████████| 15875/15875 [00:00<00:00, 18983.36it/s]
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1297: UserWarning: Skip loading for out.weight. out.weight receives a shape [1280, 1000], but the expected shape is [1280, 12].
warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1297: UserWarning: Skip loading for out.bias. out.bias receives a shape [1000], but the expected shape is [12].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/2
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  return (isinstance(seq, collections.Sequence) and
step 65/65 [==============================] - loss: 2.7684 - acc_top1: 0.6628 - acc_top5: 0.9464 - 3s/step
save checkpoint at /home/aistudio/save_models/0
Eval begin...
step 4/4 [==============================] - loss: 0.8948 - acc_top1: 0.7685 - acc_top5: 0.9907 - 732ms/step
Eval samples: 108
Epoch 2/2
step 65/65 [==============================] - loss: 0.5738 - acc_top1: 0.8397 - acc_top5: 0.9942 - 3s/step
save checkpoint at /home/aistudio/save_models/1
Eval begin...
step 4/4 [==============================] - loss: 0.5484 - acc_top1: 0.8611 - acc_top5: 0.9907 - 779ms/step
Eval samples: 108
save checkpoint at /home/aistudio/save_models/final
```
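If the notebook is restarted between training and prediction, the trained weights do not have to be retrained: `paddle.Model.load` can restore them from the checkpoints written under `save_dir`. A minimal sketch, assuming the PaddleClas clone and the `save_models` directory from the steps above are still in place:

```python
import paddle
import paddle.nn as nn

# Sketch: rebuild the network and restore the final checkpoint written by fit().
net = paddle.hub.load('PaddleClas', 'mobilenetv3_large_x0_5', source='local',
                      force_reload=False, class_dim=12, pretrained=False)
model = paddle.Model(net)
model.prepare(loss=nn.CrossEntropyLoss(), metrics=paddle.metric.Accuracy(topk=(1, 5)))
model.load('save_models/final')  # reads save_models/final.pdparams
```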
The general steps for model prediction are as follows:
```python
import numpy as np

# Run prediction on the test set
results = model.predict(test_dataset, batch_size=32, num_workers=0, stack_outputs=True, callbacks=None)

# Post-process the predictions
total = []
for img, result in zip(test_dataset.data, np.argmax(results[0], 1)):
    total.append('%s,%s\n' % (img.split('/')[-1], result))

# Generate the result file
with open('result.csv', 'w') as f:
    for line in total:
        f.write(line)
```

Output:

```
Predict begin...
step 8/8 [==============================] - 805ms/step
Predict samples: 240
```
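`model.predict` returns raw logits. If per-image confidences are also of interest, a softmax over the 12 outputs gives them directly. A small sketch built on the `results` variable above, not part of the original post:

```python
# Sketch: turn the stacked logits into class probabilities and confidences.
logits = results[0]                                       # shape: [240, 12]
exp = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable softmax
probs = exp / exp.sum(axis=1, keepdims=True)
print(probs.argmax(axis=1)[:5], probs.max(axis=1)[:5])    # top-1 class and its confidence
```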
That concludes this first look at Paddle.Hub: quickly building a 12-class cat classifier on top of a pretrained model.