SAGE

July 13th, 2016

The emergence of portable 3D mapping systems is revolutionizing the way we generate digital 3D models of environments. These systems are human-centric: the user holds or carries the device while continuously walking through and mapping an environment. In this paper, we exploit this close coupling of human and machine to propose SAGE (Semantic Annotation of Georeferenced Environments). SAGE combines a portable 3D mobile mapping system with a smartphone, enabling the user to assign semantic content to georeferenced 3D point clouds while scanning a scene. The proposed system contains several components, including touchless speech acquisition, background noise adaptation, real-time audio and vibrotactile feedback, automatic speech recognition, distributed clock synchronization, 3D annotation localization, user interaction, and interactive visualization. The most crucial advantage of SAGE is that it can be used to infer dynamic activities within an environment, which are difficult to identify with existing post-processing semantic annotation techniques. This capability leads to many promising applications such as intelligent scene classification, place recognition, and navigational aids. We conduct several experiments to demonstrate the effectiveness of the proposed system.
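To make the clock-synchronization and 3D-annotation-localization components concrete, the sketch below shows one plausible way a spoken annotation could be placed in the map: the phone timestamp is converted into the scanner's clock using a synchronization offset, and the scanner trajectory is interpolated at that instant. All function names, data layouts, and numbers here are illustrative assumptions, not details taken from the SAGE paper.

```python
from bisect import bisect_left

def localize_annotation(trajectory, phone_time, clock_offset):
    """Place an annotation in the map by trajectory interpolation.

    trajectory: sorted list of (scanner_time, (x, y, z)) poses.
    phone_time: when the annotation was spoken, in the phone's clock.
    clock_offset: estimated scanner_clock - phone_clock offset.
    """
    # Distributed clock synchronization: map phone clock -> scanner clock.
    t = phone_time + clock_offset
    times = [ts for ts, _ in trajectory]
    i = bisect_left(times, t)
    # Clamp to the trajectory endpoints outside the recorded interval.
    if i == 0:
        return trajectory[0][1]
    if i == len(trajectory):
        return trajectory[-1][1]
    # Linear interpolation between the two bracketing poses.
    (t0, p0), (t1, p1) = trajectory[i - 1], trajectory[i]
    a = (t - t0) / (t1 - t0)
    return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))

# Toy trajectory: the scanner moves 2 m along x, then 2 m along y.
trajectory = [(0.0, (0.0, 0.0, 0.0)),
              (1.0, (2.0, 0.0, 0.0)),
              (2.0, (2.0, 2.0, 0.0))]
pos = localize_annotation(trajectory, phone_time=0.4, clock_offset=0.1)
# t = 0.5 -> halfway between the first two poses: (1.0, 0.0, 0.0)
```

In a real system the offset would be estimated continuously (e.g. via NTP-style message exchange) and the pose would include orientation, but the timestamp-matching idea is the same.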

Scan of Trees being Mapped

Scan of forest

Voice-Annotation-Phone-App diagram

Diagram of how mapping system works
