
KIM Project

Project Title   Knowledge-based Information Mining (TRP project name: "Knowledge Driven Information Mining in Remote Sensing Image Archives")
Project Acronym   KIM
Contractor(s)   ACS, DLR, ETHZ

 


 


Context

In recent years our ability to store large quantities of data has greatly surpassed our ability to access it and meaningfully extract information from it. State-of-the-art operational systems for Remote Sensing data access, in particular for images, allow queries by geographical location, time of acquisition or type of sensor. This information is often less relevant than the content of the scene, i.e. its structures, objects or scattering properties.

Emerging needs from major applications (e.g. change detection, global monitoring, disaster management support) and the continuous increase in archive size and EO sensor variety require new methodologies and tools for information mining and management, supported by shared knowledge. The traditional interpretation of EO images and products relies to a large extent on expert knowledge in each application area.

Through visual inspection, experts are able to extract the information embedded in the images and to classify and interpret it as required, usually for a single application. This manual process is currently too complex and expensive to be applied systematically to even a small subset of the acquired scenes, which limits the full exploitation of the petabytes of archived and new data. The issue may become even more challenging in the future, since more missions - including constellations - are being planned, with a broader sensor variety, higher data rates and increasing complexity. As an example, ENVISAT alone accumulates 400 terabytes of data every year. The problem is also common to other domains, such as medicine and multimedia, and to data from a broad spectrum of other sensors.

This interpretation process is time-consuming and expensive, and cannot be used for the systematic processing and classification of large data volumes. There is therefore a need for automated feature recognition and classification techniques to replace manual interpretation. Eventually these techniques should support the recognition of image features by automatic or semi-automatic scene analysis and classification. Results from current R&D activity may ease access to imagery (today mostly retrieved using spatio-temporal and a few other parameters, referred to in the following as spatio-temporal-parameter access) through its information content as well. The need to access information in large volumes of image data has stimulated research in the field of content-based image retrieval during the last decade.

Many new concepts have been developed and prototyped. However, the dramatic increase in volume, detail, diversity and complexity, and the user demand for simultaneous access to multi-domain data, urgently require new approaches to image information mining, multi-domain information management, and knowledge management and sharing (in support of information mining and training).

 


Objectives

KIM aims at implementing a new information mining technique that differs from traditional feature extraction methods, which analyse pixels in search of a predefined pattern. KIM instead extracts and stores basic characteristics (Primitive Features) of image pixels and areas; users then select and weight one or more of these as representative of the high-level feature they are searching for. The combination of Primitive Features resulting from such training can be associated by the user with a specific semantic meaning, closely linked to their domain and knowledge.


Semantic modelling

Figure 1 - Hierarchical modelling of image content and user semantic
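The training idea above (a user weighting Primitive Features to define a semantic class) can be sketched in a few lines. The tile names, feature values and weights below are hypothetical, and the weighted-distance ranking is only one plausible realisation of such a search, not KIM's actual retrieval algorithm:

```python
import numpy as np

# Hypothetical primitive-feature vectors for four image tiles
# (columns: spectral mean, texture S0, texture S1) - illustrative values only.
tiles = {
    "tile_a": np.array([0.8, 0.2, 0.1]),
    "tile_b": np.array([0.7, 0.3, 0.2]),
    "tile_c": np.array([0.1, 0.9, 0.8]),
    "tile_d": np.array([0.2, 0.8, 0.7]),
}

def rank_tiles(query, weights, tiles):
    """Rank tiles by weighted Euclidean distance to the query vector."""
    w = np.asarray(weights)
    scored = {name: float(np.sqrt(np.sum(w * (vec - query) ** 2)))
              for name, vec in tiles.items()}
    return sorted(scored, key=scored.get)

# A user labels tile_a as the semantic class of interest and
# emphasises the spectral feature over the two texture features.
order = rank_tiles(tiles["tile_a"], weights=[2.0, 0.5, 0.5], tiles=tiles)
print(order[0])  # tile_a itself is closest (distance 0)
```

Because only the feature weights change between searches, re-ranking like this does not require touching the raw imagery again, which is the point of the pre-extracted Primitive Features.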


This method has a number of advantages:

  - There is no need to re-scan the entire image archive when searching for new features
  - The selected feature can be closer to user expectations and perception (the same feature can have different meanings for different users, e.g. a forest for an environmentalist, a forest guard, a geologist or an urban planner)
  - The system can be implemented to learn from expert knowledge


Logical model

Figure 2 - Overall KIM system logical model


KIM is the first prototype of a new generation of advanced tools and systems for:

  - Intelligently and effectively accessing the information content of large EO data repositories
  - Better exploring and understanding Earth structures and processes
  - Decreasing costs and increasing the accessibility and utility of EO data

 


Architecture

KIM components

Figure 3 - KIM components

 


How it Works

KIM permits interactive system learning via image examples, knowledge acquisition and reuse, and semantic querying by image content.

The interactive learning function is a valuable mining tool for exploring the unknown content of image archives. A Graphical User Interface enables the user to select structures of greatest interest by clicking on the image; these then appear in red on a grey-scale visualisation of the relief, representing the Bayesian learning of the structure recognition. Collections can be created for the combinations of sensors, products and primitive features described in Table 1 below.

KIM Interface

Figure 4 - KIM user interface
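A minimal sketch of learning from positive clicks, assuming each pixel carries a quantised primitive-feature label, might look as follows. The 4x4 label map, the labels themselves and the single-step Bayes update are illustrative assumptions, not KIM's actual Bayesian learner:

```python
from collections import Counter

# Hypothetical quantised primitive-feature labels for a tiny 4x4 image
# ("w" = water-like, "f" = forest-like, "g" = grass-like).
feature_map = [
    ["w", "w", "f", "f"],
    ["w", "w", "f", "f"],
    ["g", "g", "f", "f"],
    ["g", "g", "g", "f"],
]

def posterior_from_clicks(feature_map, clicks):
    """P(interest | feature) = P(feature | interest) * P(interest) / P(feature),
    with P(feature) estimated from the whole image and P(feature | interest)
    from the user's clicked pixels."""
    all_feats = Counter(f for row in feature_map for f in row)
    n = sum(all_feats.values())
    clicked = Counter(feature_map[r][c] for r, c in clicks)
    prior = len(clicks) / n
    post = {}
    for f, total in all_feats.items():
        likelihood = clicked.get(f, 0) / len(clicks)
        evidence = total / n
        post[f] = likelihood * prior / evidence
    return post

# The user clicks two pixels of the structure of interest (label "f");
# pixels sharing that label get a non-zero posterior and could be shown in red.
post = posterior_from_clicks(feature_map, clicks=[(0, 2), (1, 3)])
```

Each additional click refines the posterior map, which is the interactive loop the GUI exposes.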

KIM makes it possible to obtain:

  - Identifiers of the images containing the searched feature
  - A feature map or GIS object of the feature in the selected image

 


Output

ESA provides a KIM environment for testing the technology. The client and the manual can be downloaded from http://kes.esrin.esa.int/kes. The server contains a few test collections, created for some of the missions listed in the following table. The table also shows the possible source products and the Primitive Features that can be extracted from them.

Sensor | Product | Primitive Features
Envisat MERIS | RR1 | Spectral, DCT, Texture S0, Texture S1
Landsat 5 TM / Landsat 7 ETM | CEOS Syscorrected | Spectral, DCT, Texture S0, Texture S1
ERS-1/ERS-2 SAR | GEC | Intensity, Texture S0, Texture S1, EMBD
Ikonos Panchromatic | GeoTIFF | Spectral, Texture S0, Texture S1, Area, Compactness, Spectral Mean, Spectral Variance, Hu Moments
Spot 5 HRG | DIMAP | Spectral, Texture S0, Texture S1, Area, Compactness, Spectral Mean, Spectral Variance, Hu Moments
Various | Generic RGB image (jpg, TIFF, GeoTIFF, …) | Spectral, Texture S0, Texture S1, Area, Compactness, Spectral Mean, Spectral Variance, Hu Moments

Table 1 - Permitted combinations of sensors / products / primitive features


The meanings of the Primitive Features are described in Table 2.

Primitive Feature | Description
Spectral | Spectral signature
Texture | Structural information extracted from the images by applying the stochastic auto-binomial model of Gibbs-Markov Random Fields (S0: from full-resolution images; S1: from sub-sampled images)
DCT | Discrete Cosine Transform: transforms signals and images from the spatial domain to the frequency domain
EMBD | Enhanced Model-Based Despeckling: performs high-quality despeckling of SAR images
Area | Area of the objects detected by the segmentation process
Compactness | Compactness of the objects detected by the segmentation process
Spectral Mean | Mean value of the radiometric information of the image inside the closed area detected by the segmenter
Spectral Variance | Variance of the radiometric information of the image inside the closed area detected by the segmenter
Hu Moments | Hu moment invariants: shape information conveyed by the contour points. Hu moments are invariant to scale, rotation and translation (the first four of the seven invariant moments are used as shape descriptors).

Table 2 - Meanings of primitive features
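The segmentation-based features in Table 2 (Area, Compactness, Spectral Mean, Spectral Variance) can be illustrated on a toy binary mask. The 4-neighbour perimeter estimate and the compactness formula 4πA/P² are common choices but are assumptions here, not KIM's exact definitions:

```python
import numpy as np

# Toy 6x6 single-band image and a binary mask standing in for the
# output of a hypothetical segmenter.
image = np.arange(36, dtype=float).reshape(6, 6)
mask = np.zeros((6, 6), dtype=bool)
mask[1:4, 1:4] = True           # a 3x3 square object

area = int(mask.sum())          # Area: pixel count of the segment

# Perimeter: mask pixels with at least one 4-neighbour outside the mask.
padded = np.pad(mask, 1)
interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
            & padded[1:-1, :-2] & padded[1:-1, 2:])
perimeter = int(mask.sum() - interior.sum())

# One common compactness measure: 4*pi*A / P^2 (1.0 for an ideal disc).
compactness = 4 * np.pi * area / perimeter ** 2

# Spectral Mean / Spectral Variance of the radiometry inside the segment.
spectral_mean = float(image[mask].mean())
spectral_var = float(image[mask].var())
```

On this toy square the pixel perimeter is 8 and the area 9; digital perimeters overestimate smooth contours, so compactness values should only be compared between objects measured the same way.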


The capability of KIM to handle, for example, the following data sets and features will be assessed:

  - Landsat, MERIS and NOAA AVHRR quick-looks over Europe, for selecting cloud-free images and images showing snow cover
  - Full-resolution Landsat images and ERS SAR GEC products over Europe, for the identification of land features
  - Quick-look or L1b full- or reduced-resolution MERIS images over seas, for the identification of potential algae blooms
  - ERS SAR or ENVISAT ASAR quick-look and full-resolution images over seas, for the identification of sea features

A description of KIM and of its possible applications is provided in the KIM Executive Summary.




Page last modified on Wednesday, 15 December 2010, 15:56:46 CET.