ACCESSPACE

Helping Visually Impaired People travel autonomously

Developing a wearable vibrotactile electronic Orientation & Travel Aid for the autonomous navigation of VIP, based on Spatial Cognition models.
Research
Software Engineering
Data Science
Human-Computer Interaction
Augmented Reality
Haptic Interface
Sensory Substitution
Computer Vision
Consortium
Began Around: January 1, 2017

Abstract

The ACCESSPACE project aims to develop an electronic Orientation and Travel Aid that will provide Visually Impaired People (VIP) with a dynamic, specialized representation of their environment’s spatial topography, through egocentric vibrotactile feedback. Information about the user’s location and surroundings will be acquired with their smartphone, processed, and communicated back to them through a waist belt fitted with vibration motors. With some training, this substituted spatial information will allow VIP to intuitively form mental maps of their surroundings and navigate autonomously towards their destination.


Banner illustrating the project

1 Introduction


ACCESSPACE’s high-level goal was to let VIP navigate autonomously, indoors or outdoors, by helping them intuitively perceive where they are and where they want to go, choose how to get there, and avoid obstacles along the way.

ACCESSPACE had three main axes of research:
1) Devise an intuitive and efficient way to provide real-time spatial cues through tactile signals
2) Design a haptic interface that can easily communicate this spatial representation to the user
3) Implement the software required to support the various functions of this interface (e.g. indoor localisation, obstacle detection, mapping, …)

The guiding theoretical principle of this project was to use the brain’s navigation system as a source of inspiration for which information to provide VIP, so that navigation feels as intuitive as possible. The gist of the idea is that our (biological) navigation system combines and distills information from multiple senses into a set of key spatial properties (e.g. where the center of the current room is located). This spatial representation allows us to reason about the structure of our environment, and to navigate it efficiently. For VIP, this process is impaired because our navigation system relies heavily on visual information. To make our assistive device more intuitive, instead of substituting vision in its entirety, we focused on directly providing the information that our navigation system would otherwise extract from vision.

Illustration showing the general theoretical principles behind ACCESSPACE’s transcoding scheme
Figure 1: ACCESSPACE guiding principle

Using these principles, we devised an encoding scheme that provides the user with three types of information through egocentric tactile feedback (a simplified code sketch follows the list):

  • The direction and distance to the journey’s destination (as the crow flies)
  • The available path possibilities around the user (i.e. the various branching streets they could take, the current room’s center), which form a navigation graph and let the VIP mentally visualize the layout of their immediate environment
  • The closest obstacles
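
To make this concrete, here is a minimal Java sketch of how such an egocentric encoding could map the three cue types onto a ring of belt motors. The motor count matches the TactiBelt’s 46 motors, but the single-ring layout, the linear intensity fall-off, and the per-cue ranges are illustrative assumptions, not the project’s actual encoding.

```java
// Illustrative sketch of an egocentric tactile encoding: bearings are
// mapped to motors on a ring around the waist, distances to intensity.
// ASSUMPTIONS: the single-ring layout, linear intensity fall-off, and
// per-cue ranges are invented for this example.
public class EgocentricEncoder {

    static final int MOTOR_COUNT = 46;   // motors on the TactiBelt

    /** Maps a bearing relative to the user's heading (radians,
     *  0 = straight ahead, positive = clockwise) to a motor index. */
    static int motorForBearing(double bearing) {
        double norm = ((bearing % (2 * Math.PI)) + 2 * Math.PI) % (2 * Math.PI);
        return (int) Math.round(norm / (2 * Math.PI) * MOTOR_COUNT) % MOTOR_COUNT;
    }

    /** Closer cues vibrate harder: intensity in [0, 1], fading to 0 at maxRange metres. */
    static double intensityForDistance(double distance, double maxRange) {
        return Math.max(0.0, 1.0 - distance / maxRange);
    }

    /** Fuses one cue (bearing, distance) into the frame, keeping the strongest signal per motor. */
    static void addCue(double[] frame, double bearing, double distance, double maxRange) {
        int m = motorForBearing(bearing);
        frame[m] = Math.max(frame[m], intensityForDistance(distance, maxRange));
    }

    /** Builds one frame of motor intensities from the three cue types. */
    static double[] encodeFrame(double destBearing, double destDistance,
                                double[][] graphNodes,   // {bearing, distance} per node
                                double[][] obstacles) {  // {bearing, distance} per obstacle
        double[] frame = new double[MOTOR_COUNT];
        addCue(frame, destBearing, destDistance, 500.0);               // destination, as the crow flies
        for (double[] n : graphNodes) addCue(frame, n[0], n[1], 30.0); // navigation-graph nodes
        for (double[] o : obstacles)  addCue(frame, o[0], o[1], 3.0);  // closest obstacles
        return frame;
    }

    public static void main(String[] args) {
        // Destination 45° to the right and 120 m away; a street branching
        // 90° to the left at 10 m; an obstacle dead ahead at 1.5 m.
        double[] frame = encodeFrame(Math.toRadians(45), 120.0,
                new double[][] {{Math.toRadians(-90), 10.0}},
                new double[][] {{0.0, 1.5}});
        System.out.println(java.util.Arrays.toString(frame));
    }
}
```

In the actual belt, the three motor layers could additionally keep the three cue types perceptually distinct (e.g. one layer or vibration rhythm per cue), which a single intensity array like this cannot express.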

An illustration showcasing the navigation graph used with the TactiBelt

Illustration of the navigation graph idea

2 Outcomes


2.1 Our interface: the TactiBelt

To deliver the proposed egocentric encoding scheme to the user, we designed a vibrotactile belt, the TactiBelt: it comprises 46 ERM motors arranged in three layers, controlled by an Arduino Mega through dedicated control software written in Java:

A photograph showing the first version of the TactiBelt
Figure 2: First prototype of the TactiBelt
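
On the software side, here is a minimal sketch of what the Java-to-Arduino link could look like, using the jSerialComm library. The port name and the two-byte [motor index, intensity] command protocol are illustrative assumptions, not the TactiBelt’s actual protocol.

```java
import com.fazecast.jSerialComm.SerialPort;

// Sketch of driving the belt from Java over USB serial.
// ASSUMPTIONS: the jSerialComm library, the /dev/ttyACM0 port name, and
// the two-byte [motorIndex, intensity] command format are illustrative;
// the actual TactiBelt firmware protocol is not documented here.
public class TactiBeltLink {
    private final SerialPort port;

    public TactiBeltLink(String portName) {
        port = SerialPort.getCommPort(portName);
        port.setBaudRate(115200);
        if (!port.openPort())
            throw new IllegalStateException("Could not open " + portName);
    }

    /** Sends one motor command: index in [0, 45], intensity in [0, 255]. */
    public void setMotor(int motorIndex, int intensity) {
        byte[] command = { (byte) motorIndex, (byte) intensity };
        port.writeBytes(command, command.length);
    }

    public void close() { port.closePort(); }

    public static void main(String[] args) {
        TactiBeltLink belt = new TactiBeltLink("/dev/ttyACM0");
        belt.setMotor(0, 200);   // buzz the motor facing straight ahead
        belt.setMotor(0, 0);     // then stop it
        belt.close();
    }
}
```

Keeping each command down to two bytes keeps parsing trivial and latency low on the Arduino side; again, this is only one plausible design.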

2.2 Software tools

To capture and extract the information we need from the VIP’s environment, we devised a series of software tools relying mostly on Computer Vision:

1) Obstacle detection and indoor localisation using the ORB-SLAM algorithm:

Image showing the ORB-SLAM algorithm running inside our office
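
ORB-SLAM itself is a C++ system; as a sketch of how its pose estimates could be consumed from Java tooling, the following assumes the TUM-format trajectory files that ORB-SLAM2’s example programs can save (one `timestamp tx ty tz qx qy qz qw` line per pose). The file name is illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Sketch: reading the camera poses estimated by ORB-SLAM2, assuming the
// TUM-format trajectory file its examples save (one line per pose:
// "timestamp tx ty tz qx qy qz qw"). The file name is illustrative.
public class TrajectoryReader {
    /** One pose: timestamp, position (metres), orientation quaternion. */
    public record Pose(double t, double x, double y, double z,
                       double qx, double qy, double qz, double qw) {}

    public static List<Pose> read(Path file) throws IOException {
        List<Pose> poses = new ArrayList<>();
        for (String line : Files.readAllLines(file)) {
            String[] f = line.trim().split("\\s+");
            if (f.length != 8) continue;           // skip malformed lines
            poses.add(new Pose(
                    Double.parseDouble(f[0]), Double.parseDouble(f[1]),
                    Double.parseDouble(f[2]), Double.parseDouble(f[3]),
                    Double.parseDouble(f[4]), Double.parseDouble(f[5]),
                    Double.parseDouble(f[6]), Double.parseDouble(f[7])));
        }
        return poses;
    }

    public static void main(String[] args) throws IOException {
        List<Pose> poses = read(Path.of("KeyFrameTrajectory.txt"));
        Pose last = poses.get(poses.size() - 1);
        System.out.printf("Current position: (%.2f, %.2f, %.2f)%n",
                          last.x(), last.y(), last.z());
    }
}
```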

2) Depth estimation from a monocular RGB camera, using the MonoDepth algorithm:

Image showing the MonoDepth algorithm running just outside our lab
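
MonoDepth outputs a dense per-pixel depth map, whereas the belt ultimately needs the distance to the closest obstacle in each direction. Here is a sketch of that reduction in pure Java, with a toy depth array standing in for the network’s output; the sector count is an arbitrary choice for illustration.

```java
// Sketch: collapsing a dense depth map (e.g. from MonoDepth) into the
// nearest-obstacle distance per horizontal sector, ready for the belt's
// obstacle cue. The depth array stands in for the network's output;
// the sector count is an illustrative assumption.
public class DepthToObstacles {

    /** Returns, for each of `sectors` vertical slices of the image,
     *  the smallest depth value (the closest obstacle in that direction). */
    static double[] nearestPerSector(double[][] depth, int sectors) {
        int height = depth.length, width = depth[0].length;
        double[] nearest = new double[sectors];
        java.util.Arrays.fill(nearest, Double.POSITIVE_INFINITY);
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                int s = x * sectors / width;        // image column -> sector
                nearest[s] = Math.min(nearest[s], depth[y][x]);
            }
        return nearest;
    }

    public static void main(String[] args) {
        // Toy 4x8 depth map (metres); a close obstacle on the right side.
        double[][] depth = {
            {5.0, 5.2, 5.1, 4.9, 5.0, 1.2, 1.1, 1.3},
            {5.1, 5.0, 5.2, 5.0, 4.8, 1.1, 1.0, 1.2},
            {5.0, 5.1, 5.0, 5.1, 5.0, 1.3, 1.2, 1.1},
            {5.2, 5.0, 4.9, 5.0, 5.1, 1.2, 1.1, 1.0},
        };
        System.out.println(java.util.Arrays.toString(nearestPerSector(depth, 4)));
        // -> [5.0, 4.9, 1.1, 1.0]: the right-hand sectors report a close obstacle
    }
}
```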

3) Generating a mobility graph of the environment during movement, using Reinforcement Learning (a simplified sketch follows the examples below):

Applied to an artificial agent exploring a virtual maze, looking for food:

Image showing the navigation graph generated by an artificial agent exploring a virtual maze

Applied to a real agent (a human pushing a cart carrying the camera and the computer running the algorithm around a meeting table):

Image showing the navigation graph generated by moving the camera around a meeting table in a room
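
As a sketch of the graph-building idea (with the Reinforcement Learning component omitted; the plain distance-threshold rule below is a simplifying assumption): a new node is dropped whenever the agent strays far enough from every known node, and an edge records each traversal between nodes.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of incrementally building a mobility graph while moving:
// drop a new node whenever the agent is far from every existing node,
// and link consecutive nodes with an edge. The 2 m threshold is an
// illustrative assumption; the project's RL-based method is not shown.
public class MobilityGraph {
    record Node(double x, double y) {}
    record Edge(int from, int to) {}

    private final List<Node> nodes = new ArrayList<>();
    private final List<Edge> edges = new ArrayList<>();
    private final double threshold;
    private int lastVisited = -1;

    MobilityGraph(double threshold) { this.threshold = threshold; }

    /** Feeds one position sample; updates nodes and edges. */
    void observe(double x, double y) {
        int closest = -1;
        double best = Double.POSITIVE_INFINITY;
        for (int i = 0; i < nodes.size(); i++) {
            double d = Math.hypot(nodes.get(i).x() - x, nodes.get(i).y() - y);
            if (d < best) { best = d; closest = i; }
        }
        int current;
        if (closest == -1 || best > threshold) {       // new place discovered
            nodes.add(new Node(x, y));
            current = nodes.size() - 1;
        } else {
            current = closest;                         // revisiting a known place
        }
        if (lastVisited != -1 && lastVisited != current)
            edges.add(new Edge(lastVisited, current)); // record the traversal
        lastVisited = current;
    }

    public static void main(String[] args) {
        MobilityGraph g = new MobilityGraph(2.0);
        // A walk around a rectangular "meeting table"
        double[][] path = {{0,0},{3,0},{6,0},{6,3},{3,3},{0,3},{0,0}};
        for (double[] p : path) g.observe(p[0], p[1]);
        System.out.println(g.nodes.size() + " nodes, " + g.edges.size() + " edges");
    }
}
```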

4) A virtual environment to test the TactiBelt and our candidate spatial encoding schemes:

A screenshot of our virtual environment where the player has to find a virtual target solely relying on tactile feedback

2.3 Project dissemination

The ACCESSPACE project was promoted in various mainstream and technical media, such as:
- On RTL, a French national radio station
- On PhDTalent, a platform and network for PhD students who wish to transition to industry
- On Guide Néret, a specialized French website on disability
- On Acuité, a specialized French website dedicated to opticians and news around visual impairment
- On Oxytude, a weekly podcast reviewing news related to visual impairment
- On FIRAH, the French foundation for applied research on disability

This project was warmly welcomed by the VIP community and was awarded the CCAH’s “Applied research on disability” prize in 2017 🥇

3 My role in this project


ACCESSPACE was my main PhD project. My contributions included:

1) Managed the literature review for all axes of the project (spatial cognition, assistive devices, sensory substitution, computer vision, …).

2) Designed the TactiBelt and took part in building it (Arduino).

3) Participated in the development of a Java application to control and test the TactiBelt.

4) Handled the preliminary experimental evaluations of the TactiBelt, and analyzed the resulting data.

5) Presented the project through a journal paper, a conference paper, a talk at an international conference, and various outreach events (see the dissemination section above for more details).
