Self-Supervised Learning with Relational Reasoning: Training Without Labels (Part 4)
At this stage only the linear model is trained, while the backbone is kept frozen. First, let's look at the results with the Conv4 backbone.
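The code below uses linear_layer, backbone_lineval, and train_loader_lineval, which were created in the earlier parts of this series. As a reminder, a minimal sketch of that setup might look like the following; the Conv4 class, the 64-dimensional feature size, the 10 CIFAR-10 classes, and the checkpoint filename are assumptions rather than the exact code from the earlier parts.

import torch

# Minimal sketch of the assumed setup (not this article's exact code):
# Conv4 is the backbone class from an earlier part; it is assumed to output
# 64-dimensional feature vectors, the dataset is assumed to be CIFAR-10
# (10 classes), and the checkpoint filename is a placeholder for the weights
# saved after self-supervised pretraining.
backbone_lineval = Conv4()
backbone_lineval.load_state_dict(torch.load('conv4_backbone.tar'))
linear_layer = torch.nn.Linear(64, 10)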
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
optimizer = torch.optim.Adam(linear_layer.parameters())  # only the linear head is optimized
CE = torch.nn.CrossEntropyLoss()
linear_layer.to(device)
linear_layer.train()
backbone_lineval.to(device)
backbone_lineval.eval()  # the backbone stays frozen

print('Linear evaluation')
for epoch in range(20):
    accuracy_list = list()
    for i, (data, target) in enumerate(train_loader_lineval):
        optimizer.zero_grad()
        data = data.to(device)
        target = target.to(device)
        output = backbone_lineval(data).to(device).detach()  # no gradients flow into the backbone
        output = linear_layer(output)
        loss = CE(output, target)
        loss.backward()
        optimizer.step()
        # estimate the accuracy
        prediction = output.argmax(-1)
        correct = prediction.eq(target.view_as(prediction)).sum()
        accuracy = (100.0 * correct / len(target))
        accuracy_list.append(accuracy.item())
    print('Epoch [{}] loss: {:.5f}; accuracy: {:.2f}%' \
          .format(epoch+1, loss.item(), sum(accuracy_list)/len(accuracy_list)))
Linear evaluation
Epoch [1] loss: 2.68060; accuracy: 47.79%
Epoch [2] loss: 1.56714; accuracy: 58.34%
Epoch [3] loss: 1.18530; accuracy: 56.50%
Epoch [4] loss: 0.94784; accuracy: 57.91%
Epoch [5] loss: 1.48861; accuracy: 57.56%
Epoch [6] loss: 0.91673; accuracy: 57.87%
Epoch [7] loss: 0.90533; accuracy: 58.96%
Epoch [8] loss: 2.10333; accuracy: 57.40%
Epoch [9] loss: 1.58732; accuracy: 55.57%
Epoch [10] loss: 0.88780; accuracy: 57.79%
Epoch [11] loss: 0.93859; accuracy: 58.44%
Epoch [12] loss: 1.15898; accuracy: 57.32%
Epoch [13] loss: 1.25100; accuracy: 57.79%
Epoch [14] loss: 0.85337; accuracy: 59.06%
Epoch [15] loss: 1.62060; accuracy: 58.91%
Epoch [16] loss: 1.30841; accuracy: 58.95%
Epoch [17] loss: 0.27441; accuracy: 58.11%
Epoch [18] loss: 1.58133; accuracy: 58.73%
Epoch [19] loss: 0.76258; accuracy: 58.81%
Epoch [20] loss: 0.62280; accuracy: 58.50%
Next, we evaluate on the test set.
accuracy_list = list()
for i, (data, target) in enumerate(test_loader_lineval):
    data = data.to(device)
    target = target.to(device)
    output = backbone_lineval(data).detach()
    output = linear_layer(output)
    # estimate the accuracy
    prediction = output.argmax(-1)
    correct = prediction.eq(target.view_as(prediction)).sum()
    accuracy = (100.0 * correct / len(target))
    accuracy_list.append(accuracy.item())
print('Test accuracy: {:.2f}%'.format(sum(accuracy_list)/len(accuracy_list)))
Test accuracy: 49.98%
Conv4 reaches 49.98% accuracy on the test set. This means the backbone was able to learn useful features from the unlabeled dataset: training a linear classifier on top of the frozen features for only a few epochs is already enough to reach a reasonable result. Now let's check the performance of the deeper model.
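The evaluation code that follows is the same as for Conv4; only backbone_lineval and linear_layer are re-created for the deeper network before the loop is run again. A hedged sketch of that swap is shown here, assuming the deeper backbone from the earlier parts is a ResNet-style model with a wider feature output; the ResNet34 class name, the 512-dimensional feature size, and the checkpoint filename are assumptions, not the article's exact code.

# Hedged sketch: swap in the deeper pretrained backbone before re-running the
# linear evaluation loop below. ResNet34, the 512-d feature width, and the
# checkpoint filename are assumptions.
backbone_lineval = ResNet34()
backbone_lineval.load_state_dict(torch.load('resnet34_backbone.tar'))
linear_layer = torch.nn.Linear(512, 10)  # fresh linear head sized to the new feature width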
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
optimizer = torch.optim.Adam(linear_layer.parameters())
CE = torch.nn.CrossEntropyLoss()
linear_layer.to(device)
linear_layer.train()
backbone_lineval.to(device)
backbone_lineval.eval()

print('Linear evaluation')
for epoch in range(20):
    accuracy_list = list()
    for i, (data, target) in enumerate(train_loader_lineval):
        optimizer.zero_grad()
        data = data.to(device)
        target = target.to(device)
        output = backbone_lineval(data).to(device).detach()
        output = linear_layer(output)
        loss = CE(output, target)
        loss.backward()
        optimizer.step()
        # estimate the accuracy
        prediction = output.argmax(-1)
        correct = prediction.eq(target.view_as(prediction)).sum()
        accuracy = (100.0 * correct / len(target))
        accuracy_list.append(accuracy.item())
    print('Epoch [{}] loss: {:.5f}; accuracy: {:.2f}%' \
          .format(epoch+1, loss.item(), sum(accuracy_list)/len(accuracy_list)))