NAV-VIR / TETMOST

Virtual Image exploration for Visually Impaired People

Development of the Force Feedback Tablet (F2T) and multimodal audio-tactile interfaces for virtual exploration of images and maps by Visually Impaired People.
Research
Software Engineering
Java
Python
Human-Computer Interaction
Computer Vision
Consortium
Began Around

September 1, 2018

Abstract

The NAV-VIR and TETMOST projects developed innovative multimodal interfaces to make visual content accessible to Visually Impaired People through virtual exploration. The core innovation is the Force Feedback Tablet (F2T), a novel haptic interface that uses force feedback to translate digital images and maps into tactile representations. The system integrates haptic feedback with HRTF-based 3D audio simulation to create immersive spatial experiences for two key applications: virtual map exploration for journey preparation and Art accessibility.

To accompany the F2T, we developed an application to assist in the creation of simplified tactile representations from images (Java/Arduino), implemented algorithms for real-time conversion of visual content into haptic representations (Python/OpenCV) by combining computer vision techniques (image segmentation, edge detection), and designed experimental protocols to validate the system’s effectiveness. The research was documented through multiple peer-reviewed publications.


Summary

The NAV-VIR and TETMOST projects developed a multimodal interface to help Visually Impaired People virtually explore images. The system combines the Force Feedback Tablet (F2T) for tactile map exploration with HRTF-based 3D audio simulation to create immersive spatial experiences.

The project’s high-level objective was to allow users to build mental representations of images through a neuroscience-inspired combination of haptic feedback and spatially accurate sound cues. A key aspect of the project involved experimenting to find the most intuitive way to transcode an image into haptic sensations whilst retaining its original meaning and peculiarities.

The system was validated through experimental evaluations where participants successfully recognized geometric shapes and navigated virtual apartment layouts, demonstrating the effectiveness of the audio-tactile approach. The project focused on two practical applications: (1) virtual exploration of maps to prepare for a journey, and (2) virtual exploration of images to make Art more accessible to Visually Impaired People (VIP).

Schema illustrating the objectives of the NAV-VIR and TETMOST projects

Note: My role in this project

1) Software Development:
- Development of an application to control the F2T device (Java/Arduino),
- Implementation of algorithms for real-time conversion of digital maps and images into haptic representations (Python/OpenCV).

2) Experimental Design & Validation: Co-designed and implemented the experimental protocol to test the effectiveness of the system, including the development of standardized tasks for geometric shape recognition and spatial layout comprehension.

3) Research Communication:
- Co-authored a conference paper detailing the workings of the F2T (Gay et al., 2018),
- Authored a conference paper showing its use for virtual exploration of maps (Riviere et al., 2019),
- Co-authored a conference paper on the image segmentation technique used to simplify the images for the F2T (Souradi et al., 2020),
- Authored a poster documenting the technical implementation, experimental methodology, and validation results.

Details

The Force Feedback Tablet (F2T)

The Force Feedback Tablet (F2T) is a novel haptic interface designed to make visual content accessible to Visually Impaired People through touch. The F2T allows users to explore digital images and maps by feeling varying levels of resistance and texture through a stylus-based interaction system.

The device provides force feedback at a refresh rate of up to 1000 Hz, combining precise haptic mechanisms with real-time image processing to convert visual information into tactile sensations. Users navigate the tablet surface with an integrated joystick that provides haptic feedback corresponding to different visual elements (e.g. walls feel solid and resistant, pathways offer smooth movement, and key landmarks are marked with distinct tactile signatures). The interface supports natural gesture recognition for common exploration behaviors like wall-following and systematic scanning, with adaptive feedback that adjusts haptic intensity based on user preferences.
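To illustrate how visual elements can be transcoded into forces, here is a minimal Python sketch of a gradient-based mapping: dark pixels (walls) repel the joystick, bright ones (pathways) let it glide. The function, its gain parameter, and the gradient model are hypothetical illustrations, not the F2T's actual control law.

```python
import cv2
import numpy as np

def haptic_force(gray_map: np.ndarray, x: int, y: int, gain: float = 0.05):
    """Return a 2D force vector (fx, fy) for joystick position (x, y).

    Hypothetical mapping: the Sobel gradient points from dark pixels
    (walls) towards bright ones (free space), so a force along it
    pushes the user's hand away from obstacles.
    """
    # Smooth first so the force field is free of pixel-level jitter.
    blurred = cv2.GaussianBlur(gray_map, (9, 9), 0)
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=5)
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=5)
    return gain * gx[y, x], gain * gy[y, x]
```

In such a scheme, the host application would evaluate this force field at the current joystick position on every refresh cycle and stream the result to the actuators.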

Photo of the F2T interface

Schema explaining the F2T interface

From images to tactile representations

Manual

We developed a GUI application to assist in creating simplified tactile representations from images, to be explored with the F2T:

Screenshot of the F2T control interface

Image illustrating the F2T used to read a pre-processed map
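The control application itself was written in Java; to keep all sketches in a single language, the following Python/pyserial snippet illustrates the general host-to-Arduino pattern such an application relies on. The frame format, port name, and baud rate are entirely hypothetical, as the F2T's wire protocol is not described here.

```python
import struct
import serial  # pyserial

# Hypothetical wire format: one little-endian frame per update,
# consisting of a header byte, then fx and fy scaled to int16.
def send_force(port: serial.Serial, fx: float, fy: float) -> None:
    frame = struct.pack("<Bhh", 0xA5, int(fx * 1000), int(fy * 1000))
    port.write(frame)

# Placeholder port name and baud rate.
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as arduino:
    send_force(arduino, 0.25, -0.10)  # push right, slight pull up
```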

Automated

We developed a complete processing pipeline for converting visual content into accessible tactile formats. It reduces visual complexity while preserving essential spatial information, and automatically extracts key features from architectural plans and artwork. The system combines various Computer Vision techniques such as image segmentation and edge detection:

Antique drawing of a horse passed through an edge-detection algorithm to remove unneeded details

Antique painting of a woman carrying a milk jug, passed through another type of edge-detection algorithm to remove unneeded details
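Results like the examples above could be produced by a pipeline along the following lines: an edge-preserving smoothing step standing in for full segmentation, followed by edge detection and dilation so contours stay perceptible under exploration. This is a minimal OpenCV sketch with illustrative parameters, not the multi-approach segmentation published in Souradi et al. (2020).

```python
import cv2
import numpy as np

def simplify_image(img_bgr: np.ndarray) -> np.ndarray:
    """Reduce an image to a thick-edged line drawing for tactile rendering."""
    # Edge-preserving smoothing flattens textures into uniform regions,
    # a cheap stand-in for full segmentation.
    smooth = cv2.pyrMeanShiftFiltering(img_bgr, sp=15, sr=30)
    gray = cv2.cvtColor(smooth, cv2.COLOR_BGR2GRAY)
    # Keep only the strong contours, then thicken them so they remain
    # easy to follow with the F2T.
    edges = cv2.Canny(gray, 50, 150)
    return cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=2)

if __name__ == "__main__":
    out = simplify_image(cv2.imread("horse_drawing.jpg"))  # hypothetical path
    cv2.imwrite("horse_tactile.png", out)
```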

Audio-Tactile Integration

For the NAV-VIR application specifically, the system integrates HRTF-based 3D audio simulation with the haptic feedback to create immersive spatial experiences. The binaural audio provides spatially accurate sound cues that correspond to tactile exploration, with audio mapping that associates spatial locations with environmental sounds and landmarks. This multimodal approach supports multi-scale navigation, allowing both detailed local exploration and broader spatial overview modes.
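At its core, HRTF-based spatialization convolves a mono sound cue with the pair of head-related impulse responses (HRIRs) measured for the source's direction. The sketch below assumes the HRIRs have already been loaded from a measured database and selected for the landmark's azimuth and elevation; that selection step is omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono: np.ndarray, hrir_left: np.ndarray,
               hrir_right: np.ndarray) -> np.ndarray:
    """Render a mono cue as a binaural (N, 2) signal by convolving it
    with the HRIR pair measured for the source's direction."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=1)
```

Played over headphones, the result appears to come from the landmark's direction; as exploration proceeds, the system only needs to re-select the HRIR pair matching each new position.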


References

Gay, S., Rivière, M.-A., & Pissaloux, E. (2018). Towards haptic surface devices with force feedback for visually impaired people. In K. Miesenberger & G. Kouroupetroglou (Eds.), Computers Helping People with Special Needs (Vol. 10897, pp. 258–266). Springer International Publishing. http://link.springer.com/10.1007/978-3-319-94274-2_36
Riviere, M.-A., Gay, S., Romeo, K., Pissaloux, E., Bujacz, M., Skulimowski, P., & Strumillo, P. (2019). NAV-VIR: An audio-tactile virtual environment to assist visually impaired people. 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER), 1038–1041. https://doi.org/10.1109/NER.2019.8717086
Souradi, A., Lecomte, C., Romeo, K., Gay, S., Rivière, M.-A., El Moataz, A., & Pissaloux, E. (2020). Towards the tactile discovery of cultural heritage with multi-approach segmentation. In A. El Moataz, D. Mammass, A. Mansouri, & F. Nouboud (Eds.), Image and Signal Processing (Vol. 12119, pp. 14–23). Springer International Publishing. https://doi.org/10.1007/978-3-030-51935-3_2