《计算机应用研究》|Application Research of Computers

Visual saliency detection based on connected graph

Authors: Xiao Yun (肖云), Chen Xinyu (陈新宇), Tang Jin (汤进), Zhang Haitao (张海涛)
Affiliation: School of Computer Science & Technology, Anhui University, Hefei 230601, China
Article ID: 1001-3695(2018)08-2503-03
DOI: 10.3969/j.issn.1001-3695.2018.08.066
Abstract: Salient region detection automatically identifies the most interesting and important regions of an image, and is now widely applied in object recognition, image retrieval, and related fields. The saliency detection algorithm based on graph manifold ranking can locate the salient regions of an image accurately and efficiently, but the K-regular graph it uses to describe the spatial connectivity of the vertices has structural limitations. To overcome them, this work constructs a more general connected graph, which detects salient regions more accurately when the salient object is large or discontinuous. Extensive validation experiments on the four standard datasets CSSD, SOD, ASD, and SED2, against six representative existing methods, show clear improvements on multiple measures including the PR curve, F-measure, and MAE, confirming the effectiveness of the proposed algorithm.
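As background for the abstract above, the following is a minimal sketch (in Python with NumPy) of the graph-based manifold-ranking step that both the baseline and the proposed method rely on: superpixel nodes are scored against query nodes through the closed-form solution f* = (D - alpha*W)^(-1) y. The descriptor, the adjacency structure (a K-regular graph in the baseline, a more general connected graph in this paper), and the parameter values sigma and alpha are illustrative assumptions rather than the paper's exact implementation.

    import numpy as np

    def manifold_ranking(features, adjacency, queries, sigma=0.1, alpha=0.99):
        """Rank graph nodes (e.g. superpixels) with respect to a set of query nodes.

        features  : (n, d) array of per-node descriptors, e.g. mean CIELAB colour
        adjacency : (n, n) boolean matrix, True where two nodes share a graph edge
                    (a K-regular graph in the baseline, a connected graph here)
        queries   : (n,) 0/1 indicator vector y marking the query (seed) nodes
        Returns the ranking scores f* = (D - alpha * W)^(-1) y.
        """
        # Affinity from feature similarity, kept only on existing graph edges.
        diff = features[:, None, :] - features[None, :, :]
        W = np.exp(-np.linalg.norm(diff, axis=-1) / sigma ** 2) * adjacency
        np.fill_diagonal(W, 0.0)

        # Degree matrix; the small constant keeps isolated nodes from making it singular.
        D = np.diag(W.sum(axis=1) + 1e-12)

        # Closed-form solution of the manifold-ranking objective.
        return np.linalg.solve(D - alpha * W, queries.astype(float))

In the two-stage baseline scheme, the superpixels touching each image border first serve as background queries and the complemented scores are fused into a coarse map; thresholding that map then supplies foreground queries for a second ranking pass that produces the final saliency map.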
Keywords: salient object detection; manifold ranking; connected graph
Funding: National Natural Science Foundation of China (61472002); College Students' Innovation and Entrepreneurship Training Program (201610357036, 201610357405); research project of the Collaborative Innovation Center of Information Assurance Technology, Anhui University
Article URL: http://www.arocmag.com/article/01-2018-08-066.html
English abstract: Visual saliency detection automatically identifies the most interesting and important regions of an image and is widely used in applications such as object recognition and image retrieval. The graph-based manifold ranking method detects salient regions effectively, but the K-regular graph it constructs cannot represent the image completely. To overcome this limitation, this paper constructs a new connected graph to model the relationships between image nodes, which yields more accurate results, especially for images whose salient objects are large or discontinuous. Extensive experiments on four benchmarks against six state-of-the-art methods show that the proposed method outperforms the others on the PR curve, MAE, and other measures.
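Both abstracts compare methods by the PR curve, F-measure, and MAE. As a rough guide to how the two scalar scores are usually computed for a predicted saliency map against a binary ground-truth mask, the sketch below follows the conventions common in the saliency literature (an adaptive threshold at twice the mean saliency and beta^2 = 0.3); these choices are assumptions for illustration, not parameters taken from the paper.

    import numpy as np

    def mae(sal_map, gt_mask):
        """Mean absolute error between a normalised saliency map and a binary ground truth."""
        s = sal_map.astype(float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # scale to [0, 1]
        return np.abs(s - (gt_mask > 0)).mean()

    def f_measure(sal_map, gt_mask, beta2=0.3):
        """F-measure after binarising the map at twice its mean value (adaptive threshold)."""
        s = sal_map.astype(float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)
        pred = s >= min(2.0 * s.mean(), 1.0)
        gt = gt_mask > 0
        tp = float(np.logical_and(pred, gt).sum())
        precision = tp / (pred.sum() + 1e-12)
        recall = tp / (gt.sum() + 1e-12)
        return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-12)

A PR curve is obtained in the same way by sweeping the binarisation threshold over the whole [0, 1] range instead of using a single adaptive value.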
English keywords: saliency detection; manifold ranking; connected graph
Received: 2017-04-01
Revised: 2017-05-22
Pages: 2503-2505, 2519
CLC number: TP391
Document code: A