High-level information fusion in visual sensor networks

  • Juan Gómez-Romero
  • Jesús García
  • Miguel A. Patricio
  • José M. Molina
  • James Llinas

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

5 Scopus citations

Abstract

Information fusion techniques combine data from multiple sensors, along with additional information and knowledge, to obtain better estimates of the observed scenario than could be achieved by using single sensors or information sources alone. According to the JDL fusion process model, high-level information fusion is concerned with computing a scene representation in terms of abstract entities, such as activities and threats, and with estimating the relationships among these entities. Recent experience confirms that context knowledge plays a key role in new-generation high-level fusion systems, especially in complex scenarios where classical statistical techniques fail, as happens in visual sensor networks. In this chapter, we study the architectural and functional issues of applying context information to improve high-level fusion procedures, with a particular focus on visual data applications. The use of formal knowledge representations (e.g. ontologies) is a promising advance in this direction, but some unresolved questions remain that must be researched more extensively.

Original language: English
Title of host publication: Visual Information Processing in Wireless Sensor Networks
Subtitle of host publication: Technology, Trends and Applications
Publisher: IGI Global
Pages: 197-224
Number of pages: 28
ISBN (Print): 9781613501535
DOIs
State: Published - 2011
