Self-Supervised Learning with Relational Reasoning on Unlabeled Data (Part 4)

At this stage, only the linear model is trained while the backbone is kept frozen. First, let's look at the linear-evaluation results for Conv4:
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
optimizer = torch.optim.Adam(linear_layer.parameters())
CE = torch.nn.CrossEntropyLoss()
linear_layer.to(device)
linear_layer.train()
backbone_lineval.to(device)
backbone_lineval.eval()

print('Linear evaluation')
for epoch in range(20):
    accuracy_list = list()
    for i, (data, target) in enumerate(train_loader_lineval):
        optimizer.zero_grad()
        data = data.to(device)
        target = target.to(device)
        output = backbone_lineval(data).to(device).detach()
        output = linear_layer(output)
        loss = CE(output, target)
        loss.backward()
        optimizer.step()
        # estimate the accuracy
        prediction = output.argmax(-1)
        correct = prediction.eq(target.view_as(prediction)).sum()
        accuracy = (100.0 * correct / len(target))
        accuracy_list.append(accuracy.item())
    print('Epoch [{}] loss: {:.5f}; accuracy: {:.2f}%' \
        .format(epoch+1, loss.item(), sum(accuracy_list)/len(accuracy_list)))

Linear evaluation
Epoch [1] loss: 2.68060; accuracy: 47.79%
Epoch [2] loss: 1.56714; accuracy: 58.34%
Epoch [3] loss: 1.18530; accuracy: 56.50%
Epoch [4] loss: 0.94784; accuracy: 57.91%
Epoch [5] loss: 1.48861; accuracy: 57.56%
Epoch [6] loss: 0.91673; accuracy: 57.87%
Epoch [7] loss: 0.90533; accuracy: 58.96%
Epoch [8] loss: 2.10333; accuracy: 57.40%
Epoch [9] loss: 1.58732; accuracy: 55.57%
Epoch [10] loss: 0.88780; accuracy: 57.79%
Epoch [11] loss: 0.93859; accuracy: 58.44%
Epoch [12] loss: 1.15898; accuracy: 57.32%
Epoch [13] loss: 1.25100; accuracy: 57.79%
Epoch [14] loss: 0.85337; accuracy: 59.06%
Epoch [15] loss: 1.62060; accuracy: 58.91%
Epoch [16] loss: 1.30841; accuracy: 58.95%
Epoch [17] loss: 0.27441; accuracy: 58.11%
Epoch [18] loss: 1.58133; accuracy: 58.73%
Epoch [19] loss: 0.76258; accuracy: 58.81%
Epoch [20] loss: 0.62280; accuracy: 58.50%
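In the loop above, the backbone is never updated: only `linear_layer`'s parameters are handed to the optimizer, and `.detach()` cuts the gradient flow into the backbone. A minimal, self-contained sketch of this linear-probe pattern, with a hypothetical toy backbone and random data standing in for the real `backbone_lineval` and data loader:

```python
import torch

# hypothetical toy stand-ins for backbone_lineval and linear_layer
backbone = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU())  # frozen feature extractor
head = torch.nn.Linear(16, 3)                                            # trainable linear probe

optimizer = torch.optim.Adam(head.parameters())  # only the probe's parameters are optimized
criterion = torch.nn.CrossEntropyLoss()
backbone.eval()

before = [p.clone() for p in backbone.parameters()]  # snapshot backbone weights
data = torch.randn(4, 8)
target = torch.tensor([0, 1, 2, 0])

features = backbone(data).detach()   # detach() blocks gradients into the backbone
logits = head(features)
loss = criterion(logits, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# the backbone weights are untouched; only the head was updated
unchanged = all(torch.equal(a, b) for a, b in zip(before, backbone.parameters()))
print(unchanged)
```

Either mechanism alone (restricting the optimizer's parameter list, or detaching the features) would keep the backbone frozen; the article's code uses both, which also saves the cost of backpropagating through the backbone.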
Then we evaluate on the test set:
accuracy_list = list()
for i, (data, target) in enumerate(test_loader_lineval):
    data = data.to(device)
    target = target.to(device)
    output = backbone_lineval(data).detach()
    output = linear_layer(output)
    # estimate the accuracy
    prediction = output.argmax(-1)
    correct = prediction.eq(target.view_as(prediction)).sum()
    accuracy = (100.0 * correct / len(target))
    accuracy_list.append(accuracy.item())

print('Test accuracy: {:.2f}%'.format(sum(accuracy_list)/len(accuracy_list)))

Test accuracy: 49.98%
Conv4 achieves 49.98% accuracy on the test set, which means the backbone learned useful features from the unlabeled dataset, and fine-tuning only a linear layer for a few epochs is enough to reach good results. Now let's check the performance of the deeper model.
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
optimizer = torch.optim.Adam(linear_layer.parameters())
CE = torch.nn.CrossEntropyLoss()
linear_layer.to(device)
linear_layer.train()
backbone_lineval.to(device)
backbone_lineval.eval()

print('Linear evaluation')
for epoch in range(20):
    accuracy_list = list()
    for i, (data, target) in enumerate(train_loader_lineval):
        optimizer.zero_grad()
        data = data.to(device)
        target = target.to(device)
        output = backbone_lineval(data).to(device).detach()
        output = linear_layer(output)
        loss = CE(output, target)
        loss.backward()
        optimizer.step()
        # estimate the accuracy
        prediction = output.argmax(-1)
        correct = prediction.eq(target.view_as(prediction)).sum()
        accuracy = (100.0 * correct / len(target))
        accuracy_list.append(accuracy.item())
    print('Epoch [{}] loss: {:.5f}; accuracy: {:.2f}%' \
        .format(epoch+1, loss.item(), sum(accuracy_list)/len(accuracy_list)))
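One small detail in both evaluation loops: they average the per-batch accuracy percentages, which weights every batch equally even if the final batch is smaller, so the reported number can differ slightly from the true dataset accuracy. A self-contained sketch of the difference, using hypothetical per-batch predictions in place of the real test loader, aggregates raw counts instead:

```python
import torch

# hypothetical (prediction, target) pairs for two batches of unequal size
batches = [
    (torch.tensor([0, 1, 1]), torch.tensor([0, 1, 0])),  # 2 of 3 correct
    (torch.tensor([2]),       torch.tensor([2])),        # 1 of 1 correct (smaller last batch)
]

# aggregate correct counts and totals, then divide once at the end
correct = sum(int(p.eq(t).sum()) for p, t in batches)
total = sum(len(t) for _, t in batches)
print('Test accuracy: {:.2f}%'.format(100.0 * correct / total))  # 3/4 correct overall
```

Here the count-based accuracy is 75.00%, while averaging the per-batch percentages would give (66.67% + 100%) / 2 ≈ 83.33%. With a fixed batch size and `drop_last=True` the two agree, so the article's numbers are a reasonable approximation either way.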