Summary
The NAV-VIR and TETMOST projects developed a multimodal interface to help Visually Impaired People virtually explore images. The system combines the Force Feedback Tablet (F2T) for tactile map exploration with HRTF-based 3D audio simulation to create immersive spatial experiences.
The project’s high-level objective was to allow users to build mental representations of images through a neuroscience-inspired combination of haptic feedback and spatially accurate sound cues. A key aspect of the project involved experimenting to find the most intuitive way to transcode an image into haptic sensations whilst retaining its original meaning and peculiarities.
The system was validated through experimental evaluations where participants successfully recognized geometric shapes and navigated virtual apartment layouts, demonstrating the effectiveness of the audio-tactile approach. The project focused on two practical applications: (1) virtual exploration of maps to prepare for a journey, and (2) virtual exploration of images to make Art more accessible to Visually Impaired People (VIP).
1) Software Development:
- Development of an application to control the F2T device (Java/Arduino),
- Implementation of algorithms for real-time conversion of digital maps and images into haptic representations (Python/OpenCV).
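The core idea behind the image-to-haptic conversion can be illustrated with a short sketch. This is not the project code (which used OpenCV); it is a minimal, dependency-free stand-in in which a Sobel-style intensity gradient turns strong edges (e.g. walls on a floor plan) into high stylus resistance. The function name and force scale are illustrative.

```python
def resistance_map(img, max_force=1.0):
    """img: 2D list of grayscale values in [0, 255].
    Returns a 2D list of per-pixel forces in [0, max_force]."""
    h, w = len(img), len(img[0])
    forces = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            mag = (gx * gx + gy * gy) ** 0.5     # edge strength
            forces[y][x] = min(1.0, mag / 255.0) * max_force
    return forces

# A 5x5 image with a bright vertical "wall" in the middle column:
img = [[0, 0, 255, 0, 0] for _ in range(5)]
forces = resistance_map(img)
# the columns adjacent to the wall get maximal resistance
```

In the real pipeline the same principle applies, but with OpenCV operators and device-specific force calibration in place of this toy gradient.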
2) Experimental Design & Validation: Co-designed and implemented the experimental protocol to test the effectiveness of the system, including the development of standardized tasks for geometric shape recognition and spatial layout comprehension.
3) Research Communication:
- Co-authored a conference paper detailing the workings of the F2T (Gay et al., 2018),
- Authored a conference paper showing its use for virtual exploration of maps (Riviere et al., 2019),
- Co-authored a conference paper on the image segmentation technique used to simplify the images for the F2T (Souradi et al., 2020),
- Authored a poster documenting the technical implementation, experimental methodology, and validation results.
Details
The Force Feedback Tablet (F2T)
The Force Feedback Tablet (F2T) is a novel haptic interface designed to make visual content accessible to Visually Impaired People through touch. The F2T allows users to explore digital images and maps by feeling varying levels of resistance and texture through a stylus-based interaction system.
The device provides force feedback with up to 1000Hz refresh rate, combining precise haptic mechanisms with real-time image processing to convert visual information into tactile sensations. Users navigate the tablet surface with an integrated joystick that provides haptic feedback corresponding to different visual elements (e.g. walls feel solid and resistant, pathways offer smooth movement, and key landmarks are marked with distinct tactile signatures). The interface supports natural gesture recognition for common exploration behaviors like wall-following and systematic scanning, with adaptive feedback that adjusts haptic intensity based on user preferences.
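The element-to-feedback mapping described above can be sketched as a simple lookup table scaled by a user preference gain. The class names, force values, and gain model here are assumptions for illustration, not the device’s actual protocol.

```python
# Hypothetical haptic "signatures" per visual element class; values
# are normalised to [0, 1] and purely illustrative.
HAPTIC_PROFILES = {
    "wall":     {"resistance": 1.0, "vibration": 0.0},  # solid, blocks motion
    "pathway":  {"resistance": 0.1, "vibration": 0.0},  # smooth gliding
    "landmark": {"resistance": 0.3, "vibration": 0.8},  # distinct buzz
}

def feedback_for(element, user_gain=1.0):
    """Scale a profile by a user preference gain (adaptive feedback).
    Unknown elements fall back to the neutral pathway profile."""
    profile = HAPTIC_PROFILES.get(element, HAPTIC_PROFILES["pathway"])
    return {k: min(1.0, v * user_gain) for k, v in profile.items()}
```

At a 1000 Hz refresh rate, a loop of this kind would re-evaluate the profile under the stylus position on every tick and stream the result to the actuators.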
From images to tactile representations
Manual
We developed a GUI application to assist in creating simplified tactile representations of images for exploration with the F2T:
Automated
We developed a complete processing pipeline for converting visual content into accessible tactile formats. This includes intelligent reduction of visual complexity while preserving essential spatial information, and automatic extraction of key features from architectural plans and artwork. The system combines various Computer Vision techniques such as image segmentation and edge detection:
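The two-stage idea behind the pipeline (segment, then reduce complexity while keeping coarse spatial layout) can be sketched with simple stand-ins for the OpenCV steps: thresholding as a crude segmentation, and block-majority downsampling as the simplification. Both functions are illustrative assumptions, not the project’s actual operators.

```python
def segment(img, threshold=128):
    """Binarise a 2D grayscale image: 1 = foreground (e.g. walls)."""
    return [[1 if px >= threshold else 0 for px in row] for row in img]

def simplify(mask, block=2):
    """Downsample a binary mask: each block x block cell becomes 1 if
    the majority of its pixels are foreground, preserving layout."""
    h, w = len(mask), len(mask[0])
    out = []
    for y in range(0, h - block + 1, block):
        row = []
        for x in range(0, w - block + 1, block):
            total = sum(mask[y + dy][x + dx]
                        for dy in range(block) for dx in range(block))
            row.append(1 if total * 2 >= block * block else 0)
        out.append(row)
    return out

# A 4x4 "plan" with bright material top-left and bottom-right:
img = [
    [200, 210, 10, 20],
    [190, 220, 15, 25],
    [30,  40, 230, 240],
    [35,  45, 235, 245],
]
coarse = simplify(segment(img))
# coarse keeps the diagonal layout at 2x2 resolution: [[1, 0], [0, 1]]
```

The actual pipeline replaces these stand-ins with proper segmentation and edge detection, but the principle is the same: discard texture detail, keep the spatial structure the F2T can render.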
Audio-Tactile Integration
For the NAV-VIR application specifically, the system integrates HRTF-based 3D audio simulation with the haptic feedback to create immersive spatial experiences. The binaural audio provides spatially accurate sound cues that correspond to tactile exploration, with audio mapping that associates spatial locations with environmental sounds and landmarks. This multimodal approach supports multi-scale navigation, allowing both detailed local exploration and broader spatial overview modes.
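Full HRTF rendering requires measured filter sets, but the basic idea of direction-dependent audio cues can be sketched with constant-power interaural level panning. This is a hedged stand-in for the actual binaural simulation; the function and the azimuth convention are assumptions for illustration.

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power pan: azimuth -90 (full left) .. +90 (full right).
    Returns (left_gain, right_gain), each in [0, 1]; squared gains
    always sum to 1, so perceived loudness stays roughly constant."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)  # map to [0, 90] degrees
    return math.cos(theta), math.sin(theta)

left, right = pan_gains(0.0)   # landmark straight ahead of the listener
# both ears receive equal gain: cos(45 deg) == sin(45 deg)
```

In the NAV-VIR system, such directional gains (plus HRTF filtering for elevation and externalisation) would be updated as the user’s stylus moves, so a landmark’s sound appears to come from its position on the map.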