《计算机应用研究》|Application Research of Computers

SA-CapsNet: self-attention capsule network

Authors: Liu Linsong (刘林嵩), Tong Minglei (仝明磊), Wu Dongliang (吴东亮)
Affiliation: School of Electronics & Information Engineering, Shanghai University of Electric Power, Shanghai 200090, China
Article ID: 1001-3695(2021)10-020-3005-04
DOI: 10.19734/j.issn.1001-3695.2021.03.0092
Abstract: The capsule network (CapsNet) emphasizes encoding the spatial relationships among image features, but its feature extraction module struggles in complex classification scenarios. To improve the performance of CapsNet, this paper proposes SA-CapsNet, a capsule network with a self-attention (SA) feature extraction module. First, it improves CapsNet by reducing the capsule dimension and adding an intermediate layer; it then maps the SA module onto the feature extraction layer of the capsule network to strengthen its feature extraction capability. Experiments on the MNIST, Fashion MNIST, and CIFAR10 datasets achieve classification accuracies of 99.67%, 92.21%, and 82.51%, respectively. The results verify the effectiveness of the improved network, whose overall performance is substantially better than that of the original CapsNet.
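For readers who want a concrete picture of the architecture the abstract describes, the following is a minimal PyTorch sketch, not the authors' implementation: it places a SAGAN-style self-attention block (SelfAttention2d) between a convolutional front end and a primary-capsule layer (SACapsFeatureExtractor). All layer sizes, the capsule dimension, and the class names are illustrative assumptions; the paper's reduced capsule dimension, intermediate layer, and routing procedure are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Self-attention over convolutional feature maps (SAGAN-style block, used here as an illustrative SA module)."""
    def __init__(self, in_channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                      # (B, C/r, HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)                 # (B, HW, HW) attention over spatial positions
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection

class SACapsFeatureExtractor(nn.Module):
    """Illustrative feature-extraction front end: conv + self-attention feeding a primary-capsule layer.
    The capsule dimension and channel counts are guesses, not values from the paper."""
    def __init__(self, caps_dim=8, num_caps_maps=32):
        super().__init__()
        self.conv = nn.Conv2d(1, 256, kernel_size=9)    # 9x9 conv as in the original CapsNet
        self.attn = SelfAttention2d(256)                 # SA module applied to the feature maps
        self.primary = nn.Conv2d(256, num_caps_maps * caps_dim, kernel_size=9, stride=2)
        self.caps_dim = caps_dim

    def forward(self, x):
        x = F.relu(self.conv(x))
        x = self.attn(x)
        x = self.primary(x)
        u = x.view(x.size(0), -1, self.caps_dim)         # (B, num_capsules, caps_dim)
        # squash non-linearity so capsule vector lengths lie in [0, 1)
        n = (u ** 2).sum(dim=-1, keepdim=True)
        return (n / (1 + n)) * u / torch.sqrt(n + 1e-8)

if __name__ == "__main__":
    caps = SACapsFeatureExtractor()(torch.randn(2, 1, 28, 28))
    print(caps.shape)   # torch.Size([2, 1152, 8]) for 28x28 MNIST-sized inputs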
Keywords: capsule network; image classification; self-attention; feature extraction; deep learning
Article URL: http://www.arocmag.com/article/01-2021-10-020.html
 
Received: 2021/3/30
Revised: 2021/5/13
Pages: 3005-3008, 3039
CLC number: TP391
Document code: A