This is a practical record of fixing errors that appear when running PyTorch 0.4-era training code under PyTorch 1.7; hopefully it helps you.

Original code:

for epoch in range(start_epoch, 5):
    for scheduler in schedulers:
        scheduler.step()

    # begin training
    _print('--' * 50)
    net.train()
    for i, data in enumerate(trainloader):
        img, label = data[0].cuda(), data[1].cuda()
(py37) zjy@zjy-System-Product-Name:~/NTS$ python train.py
/home/zjy/anaconda3/envs/py37/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:143: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
Refer to the official example:
How to adjust learning rate
torch.optim.lr_scheduler provides several methods to adjust the learning rate based on the number of epochs. torch.optim.lr_scheduler.ReduceLROnPlateau allows dynamic learning rate reducing based on some validation measurements.
Learning rate scheduling should be applied after optimizer’s update; e.g., you should write your code this way:
Example:
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = ExponentialLR(optimizer, gamma=0.9)
for epoch in range(20):
    for input, target in dataset:
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
    scheduler.step()
Most learning rate schedulers can be called back-to-back (also referred to as chaining schedulers). The result is that each scheduler is applied one after the other on the learning rate obtained by the one preceding it.
Example:
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler1 = ExponentialLR(optimizer, gamma=0.9)
scheduler2 = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)
for epoch in range(20):
    for input, target in dataset:
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
    scheduler1.step()
    scheduler2.step()
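The quoted documentation also mentions torch.optim.lr_scheduler.ReduceLROnPlateau, which, unlike the schedulers above, is stepped with a monitored metric rather than unconditionally. A minimal sketch following the same skeleton as the example above (the validate() helper and val_loader are placeholders, not part of the original code):

from torch.optim.lr_scheduler import ReduceLROnPlateau

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10)

for epoch in range(20):
    for input, target in dataset:
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
    val_loss = validate(model, val_loader)  # hypothetical validation helper
    scheduler.step(val_loss)                # pass the monitored metric to step()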
Modify the original code to:

for epoch in range(start_epoch, 5):
    for optimizer in optimizers:
        optimizer.step()
    for scheduler in schedulers:
        scheduler.step()

    # begin training
    _print('--' * 50)
    net.train()
    for i, data in enumerate(trainloader):
        img, label = data[0].cuda(), data[1].cuda()

After re-running, the warning above no longer appears.