Journal of University of Science and Technology of China ›› 2014, Vol. 44 ›› Issue (1): 67-73.DOI: 10.3969/j.issn.0253-2778.2014.01.008

• Original Paper •

Image annotation by searching semantically related regions

DAI Lican, YU Nenghai   

  1. Department of Electronic Engineering and Information Science, USTC, Hefei 230027, China
  • Received:2013-03-19 Revised:2013-04-28 Accepted:2013-04-28 Online:2013-04-28 Published:2013-04-28
  • Contact: YU Nenghai
  • About author: DAI Lican, male, born in 1985, PhD student. Research field: Information retrieval. E-mail: licand@mail.ustc.edu.cn
  • Supported by:
    National Natural Science Foundation (60933013).

Abstract: Based on the abundance of partially annotated images on the web, a novel framework for image annotation is proposed. Utilizing both the visual and textual knowledge of the publicly available image database ImageNet, the proposed framework first learns a set of weakly labeled visual concept classifiers, and then uses the outputs of these classifiers on image regions as descriptors to conduct a region-based search for a query image in a large-scale image database. Search-result mining and clustering are then applied to generate annotations for the query image. Compared with image-level representation, the proposed region-based semantic representation better captures an image's multiple objects and semantics. The framework combines the advantages of traditional classification-based approaches and large-scale data-driven approaches. Experiments conducted on 24 million web images and a challenging image database demonstrate the effectiveness and efficiency of the proposed approach.
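The pipeline in the abstract — score each region with weak concept classifiers, use the score vector (a classeme-style descriptor) to retrieve similar regions, then mine the neighbors' tags for annotations — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the classifiers, database layout, and the simple tag-voting stand-in for the mining/clustering step are all assumptions.

```python
from collections import Counter
import math

def classeme_descriptor(region_features, classifiers):
    """Score a region against each weak concept classifier; the vector of
    scores itself serves as the region's semantic descriptor (classeme)."""
    return [clf(region_features) for clf in classifiers]

def cosine(a, b):
    """Cosine similarity between two descriptors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def annotate(query_regions, database, classifiers, k=3, top=2):
    """database: list of (descriptor, tags) pairs from partially annotated
    web images. For each query region, retrieve the k nearest database
    regions by cosine similarity and vote over their tags -- a simple
    stand-in for the search-result mining and clustering step."""
    votes = Counter()
    for region in query_regions:
        desc = classeme_descriptor(region, classifiers)
        neighbors = sorted(database, key=lambda item: -cosine(desc, item[0]))[:k]
        for _, tags in neighbors:
            votes.update(tags)
    return [tag for tag, _ in votes.most_common(top)]

# Toy usage: two hypothetical concept classifiers over 2-d region features.
clfs = [lambda f: f[0], lambda f: f[1]]
db = [([1.0, 0.0], ["sky"]), ([0.9, 0.1], ["sky"]), ([0.0, 1.0], ["grass"])]
print(annotate([[1.0, 0.05]], db, clfs, k=2, top=1))  # → ['sky']
```

Because each region is represented by classifier responses rather than raw visual features, regions from different images that depict the same concept land near each other in the search, which is what lets a multi-object image collect annotations from several distinct neighbor clusters.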

Key words: image annotation, region-based search, large scale data-driven, classeme learning
