Self-Supervised Learning on Unlabeled Data with Relational Reasoning (Part 5)

Linear evaluation
Epoch [1] loss: 2.68060; accuracy: 47.79%
Epoch [2] loss: 1.56714; accuracy: 58.34%
Epoch [3] loss: 1.18530; accuracy: 56.50%
Epoch [4] loss: 0.94784; accuracy: 57.91%
Epoch [5] loss: 1.48861; accuracy: 57.56%
Epoch [6] loss: 0.91673; accuracy: 57.87%
Epoch [7] loss: 0.90533; accuracy: 58.96%
Epoch [8] loss: 2.10333; accuracy: 57.40%
Epoch [9] loss: 1.58732; accuracy: 55.57%
Epoch [10] loss: 0.88780; accuracy: 57.79%
Epoch [11] loss: 0.93859; accuracy: 58.44%
Epoch [12] loss: 1.15898; accuracy: 57.32%
Epoch [13] loss: 1.25100; accuracy: 57.79%
Epoch [14] loss: 0.85337; accuracy: 59.06%
Epoch [15] loss: 1.62060; accuracy: 58.91%
Epoch [16] loss: 1.30841; accuracy: 58.95%
Epoch [17] loss: 0.27441; accuracy: 58.11%
Epoch [18] loss: 1.58133; accuracy: 58.73%
Epoch [19] loss: 0.76258; accuracy: 58.81%
Epoch [20] loss: 0.62280; accuracy: 58.50%
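The log above comes from the linear-evaluation phase: the pretrained backbone is frozen and only a linear classifier on top of it is trained. A minimal sketch of such a loop is shown below; the names `backbone_lineval` and `linear_layer` follow the series, but the toy backbone, the synthetic batch, and the hyperparameters here are stand-ins for illustration, not the article's actual setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
device = "cpu"

# Stand-in backbone and linear head (the real backbone is the pretrained encoder).
backbone_lineval = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64)).to(device)
linear_layer = nn.Linear(64, 10).to(device)

# Freeze the backbone: only the linear head is optimized during linear evaluation.
for param in backbone_lineval.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(linear_layer.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Toy batch standing in for a real train loader.
data = torch.randn(8, 3, 32, 32).to(device)
target = torch.randint(0, 10, (8,)).to(device)

for epoch in range(2):
    # detach() makes sure no gradients flow back into the frozen backbone
    output = linear_layer(backbone_lineval(data).detach())
    loss = criterion(output, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    accuracy = 100.0 * output.argmax(-1).eq(target).sum().item() / len(target)
    print('Epoch [{}] loss: {:.5f}; accuracy: {:.2f}%'.format(
        epoch + 1, loss.item(), accuracy))
```

Because the backbone is frozen, the reported accuracy measures only how linearly separable the learned representations are, which is exactly what linear evaluation is meant to probe.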
Then we evaluate on the test dataset:
accuracy_list = list()
for i, (data, target) in enumerate(test_loader_lineval):
    data = data.to(device)
    target = target.to(device)
    output = backbone_lineval(data).detach()
    output = linear_layer(output)
    # estimate the accuracy
    prediction = output.argmax(-1)
    correct = prediction.eq(target.view_as(prediction)).sum()
    accuracy = (100.0 * correct / len(target))
    accuracy_list.append(accuracy.item())

print('Test accuracy: {:.2f}%'.format(sum(accuracy_list) / len(accuracy_list)))

Test accuracy: 55.38%
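Note that the loop above averages per-batch accuracy percentages, which slightly over-weights a smaller final batch. A common alternative, sketched below under the same assumptions (a frozen backbone and a linear head), accumulates correct counts over the whole test set and divides once at the end; `overall_accuracy` and the toy loader are hypothetical names for illustration.

```python
import torch

def overall_accuracy(backbone, head, loader, device="cpu"):
    """Count correct predictions over the full loader, then divide once."""
    correct, total = 0, 0
    with torch.no_grad():  # no gradients needed at evaluation time
        for data, target in loader:
            data, target = data.to(device), target.to(device)
            prediction = head(backbone(data)).argmax(-1)
            correct += prediction.eq(target).sum().item()
            total += len(target)
    return 100.0 * correct / total

# Tiny demo with stand-in modules and a one-batch loader.
torch.manual_seed(0)
backbone = torch.nn.Flatten()
head = torch.nn.Linear(4, 2)
loader = [(torch.randn(3, 4), torch.randint(0, 2, (3,)))]
acc = overall_accuracy(backbone, head, loader)
print('Test accuracy: {:.2f}%'.format(acc))
```

With equally sized batches the two estimates coincide, so the ~55% figure reported above is not affected in practice.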
This is an improvement: we reach 55.38% accuracy on the test set. The main goal of this article was to reproduce and evaluate the relational reasoning methodology for guiding a model to recognize objects without labels, so these results are quite promising. If you are not satisfied, feel free to experiment with the hyperparameters, for example by increasing the number of epochs or changing the model architecture.
Final thoughts

Self-supervised relational reasoning is effective both quantitatively and qualitatively, and it works with backbones of different sizes, from shallow to deep. The representations it learns by comparison transfer easily from one domain to another; they are fine-grained and compact, likely owing to the correlation between accuracy and the number of augmentations. In relational reasoning, according to the authors' experiments, the number of augmentations has a major impact on the quality of the object clusters [4]. In many respects, self-supervised learning has strong potential to become the future of machine learning.
References

[1] Carl Doersch et al., Unsupervised Visual Representation Learning by Context Prediction, 2015.
[2] Mehdi Noroozi et al., Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles, 2017.
[3] Zhang et al., Colorful Image Colorization, 2016.
[4] Mehdi Noroozi et al., Representation Learning by Learning to Count, 2017.
[5] Ting Chen et al., A Simple Framework for Contrastive Learning of Visual Representations, 2020.
[6] Massimiliano Patacchiola et al., Self-Supervised Relational Reasoning for Representation Learning, 2020.
[7] Adam Santoro et al., Relational recurrent neural networks, 2018.

