Automatic learning of geospatial intelligence is challenging due to the complexity of articulating knowledge from visual patterns and to the ever-increasing quantities of image data generated daily. In this setting, human inspection and annotation are subjective and, more importantly, impractical. In this letter, we propose a knowledge-discovery algorithm that uses content-based methods to link low-level image features with high-level visual semantics in an effort to automate the retrieval of semantically similar images. Our algorithm represents geospatial images by a high-dimensional feature vector and generates a set of association rules that correlate semantic terms with visual patterns represented by discrete feature intervals. We also provide a mathematical model that customizes the relevance of feature measurements to semantic assignments, as well as methods for querying by semantics and by example.
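The core idea of the abstract can be sketched in code: continuous image features are discretized into intervals, and association rules of the form (feature, interval) → semantic term are mined with support and confidence thresholds. The feature names, bin edges, and helper functions below are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

def discretize(value, edges):
    """Map a continuous feature value to a discrete interval index
    (hypothetical binning; the paper's interval scheme may differ)."""
    for i, edge in enumerate(edges):
        if value < edge:
            return i
    return len(edges)

def mine_rules(images, min_support=0.3, min_confidence=0.6):
    """Mine single-antecedent rules (feature, bin) -> semantic term.
    `images` is a list of (feature_bins: dict, terms: set) pairs."""
    n = len(images)
    item_counts = defaultdict(int)   # (feature, bin) -> occurrence count
    pair_counts = defaultdict(int)   # ((feature, bin), term) -> co-occurrence count
    for bins, terms in images:
        for item in bins.items():
            item_counts[item] += 1
            for term in terms:
                pair_counts[(item, term)] += 1
    rules = []
    for (item, term), c in pair_counts.items():
        support = c / n                  # fraction of images with both sides
        confidence = c / item_counts[item]  # P(term | feature interval)
        if support >= min_support and confidence >= min_confidence:
            rules.append((item, term, support, confidence))
    return rules

# Toy dataset with hypothetical features ("ndvi", "texture") and terms.
images = [
    ({"ndvi": 2, "texture": 0}, {"forest"}),
    ({"ndvi": 2, "texture": 1}, {"forest"}),
    ({"ndvi": 0, "texture": 0}, {"water"}),
    ({"ndvi": 2, "texture": 0}, {"forest", "park"}),
]
rules = mine_rules(images, min_support=0.5, min_confidence=0.9)
```

With these thresholds, only the rule ("ndvi", bin 2) → "forest" survives (support 0.75, confidence 1.0), illustrating how a strong visual pattern becomes attached to a semantic term.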
Visual-Semantic Modeling in Content-Based Geospatial Information Retrieval Using Associative Mining Techniques (IEEE, 2011)