AdViS

Adaptive Visual Substitution

Developing a wearable device that converts spatial information into auditory cues to assist visually impaired people with navigation and object-reaching through sensory substitution.
Research
Software Engineering
C++
Statistics
Human-Computer Interaction
Augmented Reality
Computer Vision
Consortium
Began Around

July 1, 2014

Abstract

The AdViS project aims to develop a wearable device that will assist Visually Impaired People (VIP) during spatial interaction tasks (such as indoor navigation and object-reaching), using an audio-based Human-Machine Interface that converts relevant spatial metrics into ergonomic auditory cues.

AdViS explores various ways to provide visuo-spatial information through auditory feedback, based on the Sensory Substitution framework. Since its inception, the project has investigated visuo-auditive substitution possibilities for multiple tasks in which vision plays a crucial role, such as (1) navigating a small maze using a depth-map-to-auditory-cues transcoding, (2) finger-guided image exploration on a touchscreen, (3) eye-movement-guided image exploration on a turned-off computer screen, and (4) pointing towards and reaching a virtual target in 3D space within a motion-capture environment.

The AdViS system is written in C++, uses PureData for complex sound generation, and relies on a VICON motion-capture system to track participants' movements in an augmented-reality environment.
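To give a concrete feel for the depth-map-to-auditory-cues transcoding mentioned above, here is a minimal, illustrative C++ sketch. It is not the actual AdViS code: the function names, the encoding conventions (vertical position → pitch, proximity → loudness), and the parameter ranges are all assumptions. In the real system the resulting parameters would be handed to PureData for synthesis.

```cpp
// Illustrative sketch only -- not the actual AdViS code.
// Maps one column of a depth map to auditory cue parameters, assuming the
// convention: vertical position -> pitch, proximity -> loudness. The
// resulting parameters would then be forwarded to PureData for synthesis.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

struct AuditoryCue {
    double frequencyHz;  // pitch encodes the row (top of the image = high pitch)
    double amplitude;    // loudness encodes proximity (near obstacle = loud)
};

std::vector<AuditoryCue> transcodeDepthColumn(const std::vector<uint16_t>& depthMm,
                                              double minFreqHz = 200.0,
                                              double maxFreqHz = 2000.0,
                                              double maxRangeMm = 4000.0) {
    std::vector<AuditoryCue> cues;
    if (depthMm.size() < 2) return cues;
    cues.reserve(depthMm.size());
    for (std::size_t row = 0; row < depthMm.size(); ++row) {
        // Rows near the top of the column get higher pitches.
        const double t = 1.0 - static_cast<double>(row) / (depthMm.size() - 1);
        const double freq = minFreqHz + t * (maxFreqHz - minFreqHz);
        // Nearer obstacles are rendered louder; clamp to the usable range.
        const double d = std::min<double>(depthMm[row], maxRangeMm);
        const double amp = 1.0 - d / maxRangeMm;
        cues.push_back({freq, amp});
    }
    return cues;
}

int main() {
    // Toy depth column: an obstacle roughly 1 m away in the middle of the field.
    const std::vector<uint16_t> column = {3500, 3200, 1000, 1050, 3300};
    for (const auto& cue : transcodeDepthColumn(column))
        std::cout << cue.frequencyHz << " Hz @ amplitude " << cue.amplitude << '\n';
}
```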


Summary

AdViS system's operating diagram

AdViS - Depth Map navigation

AdViS system's operating diagram with motion capture

AdViS - Motion Capture
Note: My role in this project

1) Proposed a new model for image exploration relying on a touch-mediated visuo-auditive feedback loop, in which a VIP explores an image by moving their finger across a screen and receives audio feedback based on the contents of the explored region.

2) Modified the existing AdViS system to transcode grey-scale images into soundscapes based on finger-motion information captured on a touchscreen (C++/PureData); a minimal sketch of this kind of mapping is shown after the figure below.

3) Organized experimental evaluations in which blindfolded students were tasked with recognizing geometrical shapes on a touchscreen, and analyzed the results (R).

Participant exploring a geometrical shape by moving their finger on a touchscreen
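As a rough illustration of step 2, the sketch below shows how a touch event could be turned into sound parameters. It is an assumption-laden simplification, not the AdViS implementation: the structure names, the grey-level → loudness and vertical-position → pitch conventions, and the frequency range are all hypothetical; in the real system the parameters would be sent to PureData for synthesis.

```cpp
// Illustrative sketch only -- not the actual AdViS code.
// On each touch-move event, sample the grey level under the finger and derive
// simple sound parameters (grey level -> loudness, vertical position -> pitch)
// that PureData would turn into the audio feedback.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

struct GreyImage {
    int width;
    int height;
    std::vector<uint8_t> pixels;  // row-major, 8-bit grey levels
    uint8_t at(int x, int y) const { return pixels[y * width + x]; }
};

struct SoundParams {
    double amplitude;    // bright region under the finger -> louder tone
    double frequencyHz;  // finger higher on the screen -> higher pitch
};

// Called for every touch-move event; coordinates are normalized to [0, 1].
SoundParams onTouchMove(const GreyImage& img, double xNorm, double yNorm) {
    const int x = std::min(img.width - 1, static_cast<int>(xNorm * img.width));
    const int y = std::min(img.height - 1, static_cast<int>(yNorm * img.height));
    const double grey = img.at(x, y) / 255.0;           // 0 = black, 1 = white
    const double freq = 220.0 + (1.0 - yNorm) * 880.0;  // roughly 220-1100 Hz
    return {grey, freq};
}

int main() {
    // Toy 2x2 image: a bright pixel in the top-right corner.
    const GreyImage img{2, 2, {0, 255, 0, 0}};
    const SoundParams p = onTouchMove(img, 0.9, 0.1);  // finger near the top right
    std::cout << p.frequencyHz << " Hz @ amplitude " << p.amplitude << '\n';
}
```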

4) Participated in implementing an ocular-motion-to-audio-feedback loop to evaluate the possibility of exploring images (on a turned-off screen) with eye movements, which remain controllable by most VIP whose impairment is not congenital (C++/OpenCV); a rough sketch of one possible pupil-detection step follows the figure below.

Participant exploring a geometrical shape by moving their eyes across a black computer screen
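For step 4, the following OpenCV sketch illustrates one simple way a pupil centre could be estimated from an eye-camera frame. The fixed threshold and the overall approach are assumptions for illustration, not the actual AdViS gaze-tracking pipeline; the estimated gaze point would then drive the same image-sampling and audio-feedback loop as the touch-based version.

```cpp
// Illustrative sketch only -- not the actual AdViS gaze-tracking pipeline.
// Estimates the pupil centre in an eye-camera frame with OpenCV; the gaze
// point derived from it would then drive the image-sampling / audio-feedback
// loop described above.
#include <iostream>
#include <opencv2/opencv.hpp>

// Returns the pupil centre in image coordinates, or (-1, -1) if nothing dark
// enough was found. The fixed threshold (40) is an assumption for illustration.
cv::Point2f estimatePupilCentre(const cv::Mat& eyeFrameBgr) {
    cv::Mat grey, mask;
    cv::cvtColor(eyeFrameBgr, grey, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(grey, grey, cv::Size(7, 7), 0);
    // The pupil is (roughly) the darkest compact region of the eye image.
    cv::threshold(grey, mask, 40, 255, cv::THRESH_BINARY_INV);
    const cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
    if (m.m00 < 1.0) return {-1.0f, -1.0f};
    return {static_cast<float>(m.m10 / m.m00),
            static_cast<float>(m.m01 / m.m00)};
}

int main() {
    cv::VideoCapture cap(0);  // hypothetical eye camera
    cv::Mat frame;
    while (cap.read(frame)) {
        const cv::Point2f centre = estimatePupilCentre(frame);
        if (centre.x >= 0)
            std::cout << "pupil at " << centre.x << ", " << centre.y << '\n';
    }
}
```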
