Sensory Substitution Devices

Definition:

Sensory Substitution Devices (SSDs) are a special type of Human-Machine Interface (HMI) designed to provide additional (or no-longer-accessible) information about the environment, allowing their users to carry out tasks that were previously impossible or notably difficult.

SSDs distinguish themselves from regular wearable devices, such as smartphones, through the way they convey information to the user. By imitating how our senses communicate with our brain, SSDs allow their users to learn and internalize new ways to interact with their environment, or to rehabilitate lost or no-longer-used ones. Indeed, instead of symbolic visual or audio codes (i.e. images or language), SSDs communicate through low-level signals from which the brain, thanks to its plasticity, can learn to extract regularities.

SSDs’ main properties:

SSDs are usually composed of three main elements (a minimal sketch of this pipeline follows the list):

  • Sensors: capture the no-longer-accessible information.
  • Mapping software: transforms the gathered information into a suitable representation. This step can involve more or less processing of the sensors’ data.
  • Actuators: deliver this representation in a code fitted to the properties of the output (substitute) modality, through an ergonomic interface.
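
To make the pipeline concrete, here is a minimal Python sketch of these three elements working together. All names, signal shapes and values are hypothetical, chosen only for illustration (a depth-sensing row mapped to a row of vibration motors):

```python
# Minimal sketch of the sensors -> mapping software -> actuators pipeline.
# Every class, value and signal shape here is hypothetical, for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class DepthSensor:
    """Stands in for the device's sensors, e.g. one row of a depth camera."""
    def read(self) -> List[float]:
        # Distances in metres; hard-coded here to keep the sketch self-contained.
        return [1.2, 0.8, 2.5, 4.0]

def map_to_intensities(distances: List[float], max_range: float = 5.0) -> List[float]:
    """Mapping software: closer obstacles -> stronger output, clipped to [0, 1]."""
    return [max(0.0, min(1.0, 1.0 - d / max_range)) for d in distances]

class VibrationArray:
    """Stands in for the actuators, e.g. a row of vibration motors on the skin."""
    def drive(self, intensities: List[float]) -> None:
        for i, level in enumerate(intensities):
            print(f"motor {i}: intensity {level:.2f}")

sensor, actuators = DepthSensor(), VibrationArray()
actuators.drive(map_to_intensities(sensor.read()))
```

In a real device, the mapping step is where most of the design choices live, which is what the classification below is about.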

Classification of SSDs:

SSDs work by remapping inputs from the device’s sensors to a specific code delivered by the actuators through the substitute modality, which serves as the communication interface with the user. For the same sensors and actuators, this mapping can be done in several ways, depending on what information is conveyed to the user (e.g. distance, colour, etc.), through what modality and channel(s) it is sent, and how it is encoded.

Based on what information they provide:

SSDs can be more or less specialized, meaning that the information they convey is aimed at assisting the user with a specific subset of the tasks normally performed with the substituted modality.

Historically, the first SSDs provided a direct transposition of a camera’s field of view into a 2D tactile or audio image. The field progressively evolved with the rise of more advanced and specialized devices aimed at assisting with specific needs, such as locating objects or navigating for blind people.

Under construction

Name | Task(s) to assist with | Information provided

Other examples include vestibular substitution [1], which provides balance information through tactile feedback.

This specialization implies that different types of processing are applied to the sensors’ data, in order to extract, from the low-level information, what is required for the tasks the device aims to support.
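
As a rough illustration, the sketch below contrasts two hypothetical treatments of the same depth image: one for a general “tactile image” device, one for a specialized object-locating device. The array sizes and the choice of extracted features are assumptions made for the example:

```python
# Two hypothetical ways of processing the same depth data, depending on the task.
import numpy as np

depth = np.random.uniform(0.3, 5.0, size=(48, 64))  # fake depth map, in metres

# 1) General "tactile image" device: keep the low-level picture and simply
#    downsample it to the resolution of a tactile display (here a 12x16 array).
tactile_image = depth.reshape(12, 4, 16, 4).mean(axis=(1, 3))

# 2) Specialized object-locating device: keep only the task-relevant features,
#    here the distance and horizontal direction of the closest point.
row, col = np.unravel_index(np.argmin(depth), depth.shape)
direction = (col - depth.shape[1] / 2) / (depth.shape[1] / 2)  # -1 = left, +1 = right
print(f"tactile image shape: {tactile_image.shape}")
print(f"nearest obstacle: {depth[row, col]:.2f} m, direction {direction:+.2f}")
```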

Based on the substitute modality they use:

SSDs can also differ in the interface and communication channel they use to interact with the user. The first choice to make is the output modality: audio or tactile.

Under construction

        | Pro | Con
Tactile |     |
Audio   |     |

Furthermore, the tactile modality comprises several possible communication channels: vibration, pressure, temperature changes, electrical stimulation, etc. These channels correspond to different types of tactile receptors in the skin, which have different properties, such as adaptation speed, sampling rate, dynamic range of response, and variable spatial density across the body.

These properties affect how much information can be conveyed through each channel in a given time frame, and how much energy is required to do so, which in turn determines the viability of each channel for sensory substitution purposes.

For example: how much pressure does an actuator have to apply in order to be felt? What range of vibration frequencies will be perceived, and within that range, how many discrete frequency steps can users distinguish accurately?
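
As a back-of-the-envelope illustration of that last question, the sketch below estimates the number of distinguishable frequency steps under the (assumed, not measured) hypothesis of a constant Weber fraction over an assumed perceivable range:

```python
# Rough estimate of how many distinct vibration frequencies a user could
# reliably tell apart. The 20-400 Hz range and the 20 % Weber fraction are
# illustrative assumptions, not measured values.
import math

f_min, f_max = 20.0, 400.0   # assumed perceivable vibration frequency range (Hz)
weber_fraction = 0.20        # assumed just-noticeable relative change in frequency

# Each reliably distinguishable step multiplies the frequency by
# (1 + Weber fraction), so the count grows with the log of the range ratio.
steps = math.floor(math.log(f_max / f_min) / math.log(1.0 + weber_fraction))
print(f"~{steps} distinguishable frequency steps between {f_min:.0f} and {f_max:.0f} Hz")
```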

Based on how they encode information:

Information lies in variations: for the same output channel (such as the vibro-tactile channel), a given piece of information can be encoded in different ways, depending on how the SSD maps changes in the feature to be conveyed to changes in the properties of the selected output channel.

Let’s take the example from the thesis of Dr Mandil [2]: a helicopter pilot wearing a vibro-tactile vest designed to help him land his aircraft smoothly despite extremely poor visibility, by providing him with altitude information.

SSDs can encode the information they provide in a more or less abstract (vs. symbolic) manner [3]. In our example, the altitude can be encoded in several ways (a sketch of both options follows the list):

  • Through a succession of vibrations moving downwards on the user’s back, which is symbolically linked to the altitude change.
  • Through changes in the vibration frequency, which is more abstract, since it cannot be directly interpreted without some training and focus from the user.
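
Here is a minimal sketch contrasting these two encodings. The motor layout, altitude range and frequency range are hypothetical values chosen for the example, not those of the actual vest studied in [2]:

```python
# Two ways of mapping the same feature (altitude) onto a vibro-tactile vest.
# All parameter values are hypothetical, for illustration only.
N_ROWS = 8                   # rows of vibration motors down the pilot's back
ALT_MAX = 100.0              # altitude (m) above which the display saturates
F_MIN, F_MAX = 40.0, 250.0   # vibration frequency range of the motors (Hz)

def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def encode_spatial(altitude_m: float) -> int:
    """'Symbolic' coding: as altitude drops, the vibration moves down the back."""
    return round((1.0 - clamp01(altitude_m / ALT_MAX)) * (N_ROWS - 1))  # 0 = top row

def encode_frequency(altitude_m: float) -> float:
    """'Abstract' coding: as altitude drops, every motor vibrates faster."""
    return F_MAX - clamp01(altitude_m / ALT_MAX) * (F_MAX - F_MIN)

for alt in (100.0, 50.0, 10.0, 0.0):
    print(f"{alt:5.1f} m -> row {encode_spatial(alt)}, {encode_frequency(alt):5.1f} Hz")
```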

  [1] Alberts, B. B. G. T., Selen, L. P. J., Verhagen, W. I. M., & Medendorp, W. P. (2015). Sensory substitution in bilateral vestibular a-reflexic patients. Physiological Reports, 3(5), e12385.
  [2] Mandil, C. (2018). Informations vibrotactiles pour l’aide à la navigation et la gestion des contacts avec l’environnement. University of Caen-Normandy.
  [3] MacLean, K. E. (2008). Haptic Interaction Design for Everyday Interfaces. Reviews of Human Factors and Ergonomics, 4(1), 149–194.
