Mengya Han, Yibing Zhan, Baosheng Yu, Yong Luo, Han Hu, Bo Du, Yonggang Wen, Dacheng Tao. Region-adaptive Concept Aggregation for Few-shot Visual Recognition. Machine Intelligence Research, vol. 20, no. 4, pp. 554–568, 2023. https://doi.org/10.1007/s11633-022-1358-8

Region-adaptive Concept Aggregation for Few-shot Visual Recognition

doi: 10.1007/s11633-022-1358-8
More Information
  • Author Bios:

    Mengya Han received the B. Sc. degree in computer science from Bengbu University, China in 2017, and the M. Sc. degree in computer science and technology from School of Computer Science and Information Engineering, Hefei University of Technology, China in 2020. She is currently a Ph. D. candidate in computer science and technology at School of Computer Science, Wuhan University, China. Her research interests include computer vision and machine learning. E-mail: myhan1996@whu.edu.cn ORCID iD: 0000-0003-3499-3832

    Yibing Zhan received the B. Eng. and Ph. D. degrees in electronic and communication engineering from Information Science and Technology School, University of Science and Technology of China, China in 2012 and 2018, respectively. From 2018 to 2020, he served as an associate researcher in School of Computer Science, Hangzhou Dianzi University, China. He is currently an algorithm scientist and head of graph neural networks at the JD Explore Academy. He has published many scientific papers in top conferences and journals, including CVPR, ACM MM, AAAI, IJCV, and IEEE TMM. His research interests include graph models and multimodal learning tasks, such as cross-modal retrieval, scene graph generation, and graph neural networks. E-mail: zhanyibing@jd.com

    Baosheng Yu received the B. Eng. degree in computer science from University of Science and Technology of China, China in 2014, and the Ph. D. degree in computer science from The University of Sydney, Australia in 2019. He is currently a research fellow at School of Computer Science, Faculty of Engineering, The University of Sydney, Australia. He has authored/co-authored 20+ publications in top-tier international conferences and journals, including TPAMI, IJCV, CVPR, ICCV and ECCV. His research interests include machine learning, computer vision, and deep learning. E-mail: baosheng.yu@sydney.edu.au

    Yong Luo received the B. Eng. degree in computer science from Northwestern Polytechnical University, China in 2009, and the Ph. D. degree from School of Electronics Engineering and Computer Science, Peking University, China in 2014. He is currently a professor with School of Computer Science, Wuhan University, China. He has authored or co-authored over 50 papers in top journals and prestigious conferences, including IEEE T-PAMI, IEEE T-NNLS, IEEE T-IP, IEEE T-KDE, IEEE T-MM, ICCV, WWW, IJCAI and AAAI. He serves on the editorial board of IEEE T-MM. He received the IEEE Globecom 2016 Best Paper Award, and was nominated for the IJCAI 2017 Distinguished Best Paper Award. He is also a co-recipient of the IEEE ICME 2019 and IEEE VCIP 2019 Best Paper Awards. His research interests include machine learning and data mining with applications to visual information understanding and analysis. E-mail: yluo180@gmail.com (Corresponding author) ORCID iD: 0000-0002-2296-6370

    Han Hu received the B. Eng. and Ph. D. degrees in automation from University of Science and Technology of China, China in 2007 and 2012, respectively. He is currently a professor with School of Information and Electronics, Beijing Institute of Technology, China. He received several academic awards, including the Best Paper Award of the IEEE TCSVT 2019, the Best Paper Award of the IEEE Multimedia Magazine 2015, and the Best Paper Award of the IEEE Globecom 2013. He has served as an associate editor of IEEE TMM and Ad Hoc Networks, and a TPC member of Infocom, ACM MM, AAAI, and IJCAI. His research interests include multimedia networking, edge intelligence, and space-air-ground integrated networks. E-mail: hhu@bit.edu.cn

    Bo Du received the Ph. D. degree in photogrammetry and remote sensing from State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China in 2010. He is currently a professor with School of Computer Science, Wuhan University, China. He has published more than 100 scientific papers in venues such as IEEE Transactions on Geoscience and Remote Sensing, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Image Processing, IEEE Transactions on Cybernetics, AAAI, and IJCAI. His research interests include pattern recognition, hyperspectral image processing, and signal processing. E-mail: dubo@whu.edu.cn

    Yonggang Wen received the Ph. D. degree in electrical engineering and computer science (minor in Western Literature) from Massachusetts Institute of Technology (MIT), USA in 2008. He is a professor of Computer Science and Engineering at Nanyang Technological University (NTU), Singapore. He has also served as the associate dean (Research) of the College of Engineering at NTU Singapore since 2018. He served as the acting director of Nanyang Technopreneurship Centre (NTC) at NTU from 2017 to 2019, and the assistant chair (Innovation) of School of Computer Science and Engineering (SCSE) at NTU from 2016 to 2018. He is a co-recipient of multiple journal best paper awards, including IEEE Transactions on Circuits and Systems for Video Technology (2019) and IEEE Multimedia (2015), and several best paper awards from international conferences, including 2020 IEEE VCIP, 2016 IEEE Globecom, 2016 IEEE Infocom MuSIC Workshop, 2015 EAI/ICST Chinacom, 2014 IEEE WCSP, 2013 IEEE Globecom and 2012 IEEE EUC. He received the 2016 IEEE ComSoc MMTC Distinguished Leadership Award. He is a Fellow of IEEE. His research interests include cloud computing, green data centers, distributed machine learning, blockchain, big data analytics, multimedia networking and mobile computing. E-mail: ygwen@ntu.edu.sg

    Dacheng Tao received the Ph. D. degree in computer science and information system from University of London, UK in 2007. He is currently an advisor and a chief scientist of the Digital Science Institute, Faculty of Engineering, University of Sydney, Australia. He is also the director of the JD Explore Academy and a vice president of JD.com. He mainly applies statistics and mathematics to artificial intelligence and data science, and his research is detailed in one monograph and over 200 publications in prestigious journals and proceedings at leading conferences. He received the 2015 Australian Scopus Eureka Prize, the 2018 IEEE ICDM Research Contributions Award, and the 2021 IEEE Computer Society McCluskey Technical Achievement Award. He is a Fellow of the Australian Academy of Science, AAAS, and ACM. His research interests include computer vision, computational neuroscience, data science, geoinformatics, image processing, machine learning, medical informatics, multimedia, neural networks and video surveillance. E-mail: dacheng.tao@gmail.com

  • Received Date: 2022-03-07
  • Accepted Date: 2022-07-15
  • Publish Online: 2023-03-02
  • Publish Date: 2023-08-01
  • Abstract: Few-shot learning (FSL) aims to learn novel concepts from very limited examples. However, most FSL methods lack robustness in concept learning. Specifically, existing FSL methods usually ignore the diversity of region contents, which may contain concept-irrelevant information such as the background; this introduces bias/noise and degrades conceptual representation learning. To address this issue, we propose a novel metric-based FSL method termed the region-adaptive concept aggregation network (RCA-Net). Specifically, we devise a region-adaptive concept aggregator (RCA) to model the relationships among different regions and capture the conceptual information in each region, which is then integrated in a weighted-average manner to obtain the conceptual representation (a toy sketch of this weighted aggregation is given below). Consequently, robust concept learning can be achieved by focusing more on concept-relevant information and less on concept-irrelevant information. We perform extensive experiments on three popular visual recognition benchmarks to demonstrate the superiority of RCA-Net for robust few-shot learning. In particular, on the Caltech-UCSD Birds-200-2011 (CUB200) dataset, the proposed RCA-Net significantly improves 1-shot accuracy from 74.76% to 78.03% and 5-shot accuracy from 86.84% to 89.83% compared with the most competitive counterpart.
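
The weighted aggregation described in the abstract (scoring each image region and pooling the region features into a single conceptual representation by a weighted average) can be pictured with a minimal sketch. The following is only an illustrative toy, assuming PyTorch; the module name (RegionWeightedAggregator), the linear scoring head, and the feature dimensions are hypothetical and do not reproduce the paper's actual RCA module.

```python
import torch
import torch.nn as nn

class RegionWeightedAggregator(nn.Module):
    """Toy sketch: score each spatial region of a CNN feature map and pool
    the region features by a softmax-weighted average (illustrative only,
    not the RCA module proposed in the paper)."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # Hypothetical scorer: maps each region feature to a scalar relevance score.
        self.scorer = nn.Linear(feat_dim, 1)

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        # feature_map: (batch, channels, height, width) from a backbone such as ResNet.
        b, c, h, w = feature_map.shape
        regions = feature_map.flatten(2).transpose(1, 2)  # (b, h*w, c) region features
        scores = self.scorer(regions)                     # (b, h*w, 1) per-region scores
        weights = torch.softmax(scores, dim=1)            # normalize scores over regions
        return (weights * regions).sum(dim=1)             # (b, c) weighted-average concept

# Usage: pool a 5x5 map of 64-dimensional region features into one concept vector.
aggregator = RegionWeightedAggregator(feat_dim=64)
feature_map = torch.randn(2, 64, 5, 5)
print(aggregator(feature_map).shape)  # torch.Size([2, 64])
```

In RCA-Net itself, the region weights come from modeling the relationships among regions (so that concept-irrelevant regions such as background are down-weighted), whereas this sketch uses a single linear scorer purely to convey the aggregation step.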

     

