Application Research of Computers (《计算机应用研究》)


Face gender recognition based on multi-layer feature fusion convolution neural network with adjustable supervisory function

Authors: Shi Xuechao, Zhou Yatong, Chi Yue (石学超, 周亚同, 池越)
Affiliation: Tianjin Key Laboratory of Electronic Materials & Devices, School of Electronics & Information Engineering, Hebei University of Technology, Tianjin 300401, China
Article ID: 1001-3695(2019)03-060-0940-05
DOI: 10.19734/j.issn.1001-3695.2017.10.1018
Abstract: To further improve the accuracy of gender recognition, this paper proposes a convolutional neural network model that combines multi-layer feature fusion with an adjustable supervisory function mechanism (L-MFCNN), and applies it to face gender recognition. Unlike a traditional convolutional neural network (CNN), L-MFCNN combines the feature outputs of multiple shallow intermediate convolutional layers with the feature output of the final convolutional layer. By fusing features across layers, it exploits not only the holistic semantic information of deep convolutions but also the detailed local texture information of shallow convolutions, making gender recognition more accurate. In addition, L-MFCNN adopts the large-margin softmax loss, a target supervisory function with an adjustable margin mechanism, as its output layer; tuning the margin effectively guides the learning of the deep convolutional network so that the intra-class distance within the same gender becomes smaller and the inter-class distance between different genders becomes larger, yielding better gender recognition. Gender recognition experiments on multiple face datasets show that the recognition accuracy of L-MFCNN is higher than that of other traditional convolutional network models. The L-MFCNN model also offers new ideas and directions for future research on face gender recognition.
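The paper's code is not reproduced here, so the fusion idea described in the abstract can only be sketched under assumptions: the snippet below pools each convolutional layer's feature maps (global average pooling is an assumed reduction; the paper may combine maps differently) and concatenates shallow and deep descriptors into a single fused feature vector. The layer shapes are hypothetical, chosen only for illustration.

```python
import numpy as np

def global_avg_pool(fmap):
    """Collapse a (channels, H, W) feature map to a (channels,) descriptor."""
    return fmap.mean(axis=(1, 2))

def fuse_multilayer(feature_maps):
    """Concatenate pooled descriptors from several conv layers into one vector."""
    return np.concatenate([global_avg_pool(f) for f in feature_maps])

# Hypothetical layer outputs: two shallow layers plus the final conv layer.
rng = np.random.default_rng(0)
shallow1 = rng.standard_normal((32, 28, 28))   # fine local texture
shallow2 = rng.standard_normal((64, 14, 14))
deep     = rng.standard_normal((128, 7, 7))    # high-level semantics

fused = fuse_multilayer([shallow1, shallow2, deep])
print(fused.shape)  # (224,) = 32 + 64 + 128 channels
```

The fused vector carries both the shallow layers' texture cues and the deep layer's semantic summary, which is the intuition behind combining intermediate outputs rather than classifying on the last layer alone.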
Keywords: face gender recognition; multi-layer feature fusion; convolutional neural network; deep learning
Funding: National Natural Science Foundation of China (61401307)
Article URL: http://www.arocmag.com/article/01-2019-03-060.html
English title: Face gender recognition based on multi-layer feature fusion convolution neural network with adjustable supervisory function
Authors (English): Shi Xuechao, Zhou Yatong, Chi Yue
Affiliation (English): Tianjin Key Laboratory of Electronic Materials & Devices, School of Electronics & Information Engineering, Hebei University of Technology, Tianjin 300401, China
English abstract: In order to further improve the accuracy of gender recognition, this paper proposes a convolutional neural network model based on multi-layer feature fusion with an adjustable supervisory function (L-MFCNN), and applies it to face gender recognition. Unlike a traditional convolutional neural network, L-MFCNN combines the outputs of multiple shallow convolutional layers with the output of the final convolutional layer. Fusing features across layers exploits not only the high-level semantic information of deep convolutions but also the local texture details of shallow convolutions, making face gender recognition more accurate. In addition, L-MFCNN adopts the large-margin softmax loss, whose adjustable margin mechanism explicitly encourages intra-class compactness within the same gender and inter-class separability between different genders, yielding better recognition. Experiments on multiple face datasets show that the recognition accuracy of L-MFCNN is higher than that of traditional convolutional network models. L-MFCNN also provides new ideas and directions for future research on face gender recognition.
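The adjustable margin mechanism comes from the large-margin softmax loss of Liu et al. (reference [18]). As a rough illustration, not the authors' implementation: the target-class logit ‖W‖‖x‖cos(θ) is replaced by ‖W‖‖x‖ψ(θ), where ψ folds cos(mθ) into a function that is monotonically decreasing on [0, π] and lies below cos(θ), so a larger margin parameter m forces a smaller angle to the true class and tightens intra-class compactness.

```python
import numpy as np

def psi(theta, m):
    """Target-class angle function of the large-margin softmax loss
    (Liu et al., 2016): psi(theta) = (-1)**k * cos(m*theta) - 2*k
    for theta in [k*pi/m, (k+1)*pi/m], k = 0, ..., m-1."""
    k = np.clip(np.floor(theta * m / np.pi).astype(int), 0, m - 1)
    return (-1.0) ** k * np.cos(m * theta) - 2.0 * k

theta = np.linspace(0.0, np.pi, 50)

# With m = 1 this reduces to the ordinary softmax logit cos(theta).
assert np.allclose(psi(theta, 1), np.cos(theta))

# With m > 1 the margin tightens: psi(theta) <= cos(theta) everywhere,
# so the target logit is penalized unless the angle is small.
assert np.all(psi(theta, 4) <= np.cos(theta) + 1e-9)
```

Because ψ(θ) ≤ cos(θ) with equality only at θ = 0, the network must drive the feature vector closer in angle to its gender's weight vector to achieve the same logit, which is how the margin enlarges inter-class separation.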
English keywords: face gender recognition; multi-layer feature fusion; convolutional neural network (CNN); deep learning
References
[1] LeCun Y, Bengio Y, Hinton G. Deep learning[J] . Nature, 2015, 521(7533):436-444.
[2] Golomb B A, Lawrence D T, Sejnowski T J. SexNet:a neural network identifies sex from human faces[C] //Proc of Advances in Neural Information Processing Systems. 1991:572-579.
[3] Brunelli R, Poggio T. HyperBF networks for gender classification[C] //Proc of IEEE International Conference on Acoustics, Speech, and Signal Processing. Piscataway, NJ:IEEE Press, 2007.
[4] Tamura S, Kawai H, Mitsumoto H. Male/female identification from 8×6 very low resolution face images by neural network[J] . Pattern Recognition, 1996, 29(2):331-335.
[5] Osuna E, Freund R, Girosi F. Training support vector machines:an application to face detection[C] //Proc of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE Press, 2002:130-136.
[6] Farfade S S, Saberian M J, Li L J. Multi-view face detection using deep convolutional neural networks[C] //Proc of the 5th ACM International Conference on Multimedia Retrieval. New York:ACM Press, 2015:643-650.
[7] Liao Shengcai, Jain A K, Li S Z. A fast and accurate unconstrained face detector[J] . IEEE Trans on Pattern Analysis and Machine Intelligence, 2016, 38(2):211-223.
[8] Zhang Kaipeng, Zhang Zhanpeng, Li Zhifeng, et al. Joint face detection and alignment using multitask cascaded convolutional networks[J] . IEEE Signal Processing Letters, 2016, 23(10):1499-1503.
[9] Viola P, Jones M J. Robust real-time face detection[J] . International Journal of Computer Vision, 2004, 57(2):137-154.
[10] Sun Yi, Liang Ding, Wang Xiaogang, et al. DeepID3:face recognition with very deep neural networks[EB/OL] . (2015-02-03). https://arxiv.org/abs/1502.00873.
[11] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C] //Proc of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE Press, 2015:3431-3440.
[12] Levi G, Hassner T. Age and gender classification using convolutional neural networks[C] // Proc of IEEE Conference on Computer Vision and Pattern Recognition Workshops. Piscataway, NJ:IEEE Press, 2015:34-42.
[13] Rothe R, Timofte R, Van Gool L. Deep expectation of real and apparent age from a single image without facial landmarks[J] . International Journal of Computer Vision, 2018, 126(2-4):144-157.
[14] Verma A, Vig L. Using convolutional neural networks to discover cognitively validated features for gender classification[C] //Proc of IEEE International Conference on Soft Computing and Machine Intelligence. Piscataway, NJ:IEEE Press, 2014:33-37.
[15] 汪济民, 陆建峰. 基于卷积神经网络的人脸性别识别[J] . 现代电子技术, 2015, 38(7):81-84. (Wang Jimin, Lu Jianfeng. Face gender recognition based on convolutional neural network[J] . Modern Electronics Technique, 2015, 38(7):81-84. )
[16] 董兰芳, 张军挺. 基于深度学习和随机森林的人脸年龄和性别分类研究[J] . 计算机工程, 2018, 44(5):246-251. (Dong Lanfang, Zhang Junting. A study of face age and gender classification using deep learning and random forest[J] . Computer Engineering, 2018, 44(5):246-251. )
[17] 张婷, 李玉鑑, 胡海鹤, 等. 基于跨连卷积神经网络的性别分类模型[J] . 自动化学报, 2016, 42(6):858-865. (Zhang Ting, Li Yujian, Hu Haihe, et al. A gender classification model based on cross-connected convolutional neural networks[J] . Acta Automatica Sinica, 2016, 42(6):858-865. )
[18] Liu Weiyang, Wen Yandong, Yu Zhiding, et al. Large-margin softmax loss for convolutional neural networks[C] //Proc of International Conference on International Conference on Machine Learning. 2016:507-516.
[19] LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition[J] . Proceedings of the IEEE, 1998, 86(11):2278-2324.
[20] Shen Wei, Wang Xinggang, Wang Yan, et al. DeepContour:a deep convolutional feature learned by positive-sharing loss for contour detection[C] //Proc of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE Press, 2015:3982-3991.
[21] Mansanet J, Albiol A, Paredes R. Local deep neural networks for gender recognition[J] . Pattern Recognition Letters, 2016, 70(1):80-86.
[22] Hassner T, Harel S, Paz E, et al. Effective face frontalization in unconstrained images[C] //Proc of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE Press, 2015:4295-4304.
[23] Glorot X, Bordes A, Bengio Y. Deep sparse rectifier neural networks[C] //Proc of the 14th International Conference on Artificial Intelligence and Statistics. 2011:315-323.
[24] Sanguansat P. Face hallucination using bilateral-projection-based two-dimensional principal component analysis[C] //Proc of IEEE International Conference on Computer and Electrical Engineering. Piscataway, NJ:IEEE Press, 2008:876-880.
[25] Hightower J, Borriello G. Location systems for ubiquitous computing[J] . Computer, 2001, 34(8):57-66.
[26] Shen Xiaohui, Lin Zhe, Brandt J, et al. Detecting and aligning faces by image retrieval[C] //Proc of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE Press, 2013:3460-3467.
[27] Phillips P J, Wechsler H, Huang J, et al. The FERET database and evaluation procedure for face-recognition algorithms[J] . Image and Vision Computing, 1998, 16(5):295-306.
[28] Huang G B, Ramesh M, Berg T, et al. Labeled faces in the wild:a database for studying face recognition in unconstrained environments, Technical Report 07-49[R] . Amherst:University of Massachusetts, 2007.
[29] Guo Yandong, Zhang Lei, Hu Yuxiao, et al. MS-Celeb-1M:a dataset and benchmark for large-scale face recognition[C] //Proc of European Conference on Computer Vision. Berlin:Springer International Publishing, 2016:87-102.
[30] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks[EB/OL] . (2013-11-12). https://arxiv.org/abs/1311.2901.
Received: 2017-10-21
Revised: 2017-11-28
Pages: 940-944
CLC number: TP391.41
Document code: A