
In deep-learning practice, we often need to train a neural network to fit a specific mathematical function. The goal of this tutorial is to build a PyTorch neural network that takes a three-dimensional vector [x, y, 1] as input (where x and y are 2-D coordinates) and outputs the sum of squares of those coordinates, i.e. x^2 + y^2. Although this function is mathematically simple, training can still run into trouble: without careful data preprocessing and hyperparameter tuning, the model may fail to converge and the loss may stay stubbornly high.
Below is the initial attempt at building this network. The implementation uses a fully connected network with a single hidden layer and follows a standard training procedure.
import torch
import torch.nn as nn
import torch.optim
from torch.utils.data import TensorDataset, DataLoader
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Raw feature data: coordinates with x and y values in roughly the [-15, 15] range
features = torch.tensor([[8.3572,-11.3008,1],[6.2795,-12.5886,1],[4.0056,-13.4958,1]
,[1.6219,-13.9933,1],[-0.8157,-14.0706,1],[-3.2280,-13.7250,1]
,[-5.5392,-12.9598,1],[-7.6952,-11.8073,1],[-9.6076,-10.3035,1],
[-11.2532,-8.4668,1],[-12.5568,-6.3425,1],[-13.4558,-4.0691,1],
[-13.9484,-1.7293,1],[-14.0218,0.7224,1],[-13.6791,3.1211,1],
[-12.9064,5.4561,1],[-11.7489,7.6081,1],[-10.2251,9.5447,1],
[5.4804,12.8044,1],[7.6332,11.6543,1],[9.5543,10.1454,1],
[11.1890,8.3117,1],[12.4705,6.2460,1],[13.3815,3.9556,1],
[13.8733,1.5884,1],[13.9509,-0.8663,1],[13.6014,-3.2793,1],
[12.8572,-5.5526,1],[11.7042,-7.7191,1],[10.1761,-9.6745,1],
[-8.4301,11.1605,1],[-6.3228,12.4433,1],[-4.0701,13.3401,1],
[-1.6816,13.8352,1],[0.7599,13.9117,1],[3.1672,13.5653,1]]).to(device)
# Compute the labels: x^2 + y^2 for each point
labels = []
for i in range(features.shape[0]):
    label = features[i][0] ** 2 + features[i][1] ** 2
    labels.append(label)
# The list holds 0-dim tensors already on `device`, so stack them instead of
# calling torch.tensor(), which fails for a list of CUDA tensors
labels = torch.stack(labels)
# Define the network: one hidden layer
num_input, num_hidden, num_output = 3, 64, 1
net = nn.Sequential(
    nn.Linear(num_input, num_hidden),
    nn.Linear(num_hidden, num_output)
).to(device)

# Weight initialization (note: the biases are left at their defaults)
def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.xavier_normal_(m.weight)

net.apply(init_weights)
loss = nn.MSELoss()
num_epochs = 10
batch_size = 6
lr = 0.001
trainer = torch.optim.RAdam(net.parameters(), lr=lr)
dataset = TensorDataset(features, labels)
data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

# Training loop
for i in range(num_epochs):
    for X, y in data_loader:
        y_hat = net(X)
        l = loss(y_hat, y.reshape(y_hat.shape))
        trainer.zero_grad()
        l.backward()
        trainer.step()
    # Print the loss of the last mini-batch at the end of each epoch
    print(f"Epoch {i+1}, Loss: {l.item():.4f}")

Running this code, you will find that after 10 epochs the loss is still very high and the model has failed to learn the target function. This is typically caused by the following:

- The raw coordinates span roughly [-15, 15] and the targets x^2 + y^2 reach values around 200, so the MSE loss starts out very large and the gradient scales are poorly conditioned.
- Ten epochs at a learning rate of 0.001 are far too few updates for the parameters to move meaningfully toward a good solution.
To address these issues and improve convergence, we can adopt the following key optimization strategies.
Standardization transforms data to have zero mean and unit standard deviation, and it is a common preprocessing technique in deep learning. It helps by:

- putting all input dimensions on a comparable scale, so that no single feature dominates the gradient updates;
- keeping activations and gradients in a numerically well-behaved range, which typically speeds up and stabilizes convergence.
We can standardize the first two columns of features (the x and y coordinates):
mean = features[:, :2].mean(dim=0)
std = features[:, :2].std(dim=0)
features[:, :2] = (features[:, :2] - mean) / std
Note that only the x and y coordinates are standardized: the third column is the constant 1, which does not enter the computation of x^2 + y^2, and as a bias-like input it normally does not need standardization.
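As a quick sanity check (a small sketch, not in the original article), you can confirm that each standardized column now has mean close to 0 and standard deviation close to 1:

# Verify the standardization: per-column mean ~ 0 and std ~ 1
print(features[:, :2].mean(dim=0))  # expected: values near [0., 0.]
print(features[:, :2].std(dim=0))   # expected: values near [1., 1.]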
Based on experience, we can increase num_epochs to 100 and reduce batch_size to 2:
num_epochs = 100
batch_size = 2
Integrating these optimizations into the original code yields the following improved implementation:
import torch
import torch.nn as nn
import torch.optim
from torch.utils.data import TensorDataset, DataLoader
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
features = torch.tensor([[8.3572,-11.3008,1],[6.2795,-12.5886,1],[4.0056,-13.4958,1]
,[1.6219,-13.9933,1],[-0.8157,-14.0706,1],[-3.2280,-13.7250,1]
,[-5.5392,-12.9598,1],[-7.6952,-11.8073,1],[-9.6076,-10.3035,1],
[-11.2532,-8.4668,1],[-12.5568,-6.3425,1],[-13.4558,-4.0691,1],
[-13.9484,-1.7293,1],[-14.0218,0.7224,1],[-13.6791,3.1211,1],
[-12.9064,5.4561,1],[-11.7489,7.6081,1],[-10.2251,9.5447,1],
[5.4804,12.8044,1],[7.6332,11.6543,1],[9.5543,10.1454,1],
[11.1890,8.3117,1],[12.4705,6.2460,1],[13.3815,3.9556,1],
[13.8733,1.5884,1],[13.9509,-0.8663,1],[13.6014,-3.2793,1],
[12.8572,-5.5526,1],[11.7042,-7.7191,1],[10.1761,-9.6745,1],
[-8.4301,11.1605,1],[-6.3228,12.4433,1],[-4.0701,13.3401,1],
[-1.6816,13.8352,1],[0.7599,13.9117,1],[3.1672,13.5653,1]]).to(device)
# --- Optimization 1: standardize the input data ---
mean = features[:,:2].mean(dim=0)
std = features[:,:2].std(dim=0)
features[:,:2] = (features[:,:2] - mean) / std
# Compute the labels from the standardized coordinates, so the targets
# are also on a small, well-behaved scale
labels = []
for i in range(features.shape[0]):
    label = features[i][0] ** 2 + features[i][1] ** 2
    labels.append(label)
labels = torch.stack(labels)  # stack the 0-dim tensors already on `device`
num_input, num_hidden, num_output = 3, 64, 1
net = nn.Sequential(
    nn.Linear(num_input, num_hidden),
    nn.Linear(num_hidden, num_output)
).to(device)

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.xavier_normal_(m.weight)

net.apply(init_weights)
loss = nn.MSELoss()
# --- Optimization 2: adjust the number of epochs and the batch size ---
num_epochs = 100  # more training epochs
batch_size = 2    # smaller batches, more parameter updates per epoch
lr = 0.001
trainer = torch.optim.RAdam(net.parameters(), lr=lr)
dataset = TensorDataset(features, labels)
data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
for i in range(num_epochs):
    for X, y in data_loader:
        y_hat = net(X)
        l = loss(y_hat, y.reshape(y_hat.shape))
        trainer.zero_grad()
        l.backward()
        trainer.step()
    # Print the loss of the last mini-batch at the end of each epoch
    print(f"Epoch {i+1}, Loss: {l.item():.4f}")

Running the optimized code above, you will find that the model reduces the loss substantially and eventually converges to a low error level.
Beyond the improvements above, the following optimization strategies are also worth considering in practical neural-network training:

- Add a nonlinear activation function between the linear layers: as written, the network chains two nn.Linear layers with nothing in between, which is mathematically equivalent to a single linear layer (see the sketch below this list).
- Tune the learning rate, or use a learning-rate scheduler to decay it over the course of training.
- Experiment with other optimizers (e.g. Adam or SGD with momentum) and with deeper or wider architectures.
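As an example of the first point, here is a minimal sketch (not part of the original article) of the same network with an nn.ReLU inserted between the two linear layers, which makes the model genuinely nonlinear:

# Variant of the network with a ReLU nonlinearity between the layers
net = nn.Sequential(
    nn.Linear(num_input, num_hidden),
    nn.ReLU(),
    nn.Linear(num_hidden, num_output)
).to(device)
net.apply(init_weights)  # reuse the Xavier weight initialization from above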
Through a concrete example, this tutorial has shown how to train a PyTorch neural network to fit the function x^2 + y^2. The core takeaway is that successful training depends not only on the network architecture itself but also on effective data preprocessing and careful hyperparameter tuning. By standardizing the inputs, increasing the number of training epochs, and adjusting the batch size, we significantly improved the model's convergence, enabling it to learn and fit the target function. These lessons carry over to a much wider range of deep-learning problems.