Sensory substitution is the conversion of stimuli that are normally received by one sensory modality into stimuli of another sensory modality.
A sensory substitution system consists of three parts: a sensor, a coupling system, and a stimulator. The sensor records stimuli and relays them to the coupling system, which interprets these signals and transmits them to the stimulator. If the sensor obtains signals of a kind not originally available to the bearer, the system is instead a case of sensory augmentation. Sensory substitution concerns human perception and the plasticity of the human brain, and it therefore allows these aspects of neuroscience to be studied through neuroimaging.
It is hoped that sensory substitution systems can help people by restoring the perception associated with a defective sensory modality, using sensory information from a functioning sensory modality.
The idea of sensory substitution was introduced in the 1960s by Paul Bach-y-Rita as a means of using one sensory modality, mainly taction, to gain environmental information normally used by another sensory modality, mainly vision. Thereafter, the field was discussed by Chaim-Meyer Scheff in "Experimental model for the study of changes in the organization of human sensory information processing through the design and testing of non-invasive prosthetic devices for sensory impaired people". The first sensory substitution system was developed by Bach-y-Rita et al. as a means of demonstrating brain plasticity in congenitally blind individuals. Since this historic invention, sensory substitution has been the basis of many studies in perceptual and cognitive neuroscience and has contributed to the study of brain function, human cognition, and rehabilitation.
When a person becomes blind or deaf, they generally do not lose the ability to see or hear at the level of the brain; rather, they lose the ability to transmit sensory signals from the periphery (the retina for vision and the cochlea for hearing) to the brain. Since the vision-processing pathways remain intact, a person who has lost the ability to retrieve data from the retina can still form subjective images using data gathered from other sensory modalities such as touch or audition.
In a normal visual system, the data collected by the retina is converted into an electrical signal in the optic nerve and relayed to the brain, which re-creates the image and perceives it. Because it is the brain that is responsible for the final perception, sensory substitution is possible: during sensory substitution an intact sensory modality relays information to the visual perception areas of the brain, so that the person can experience something akin to seeing. With sensory substitution, information gained from one sensory modality can reach brain structures physiologically related to other sensory modalities. Touch-to-visual sensory substitution, for example, transfers information from touch receptors to the visual cortex for interpretation and perception. Through fMRI, it is possible to determine which parts of the brain are activated during sensory perception; in blind persons receiving only tactile information, the visual cortex is also activated as they form a percept of the objects. Touch-to-touch sensory substitution is also possible, where information from touch receptors in one region of the body is used to perceive touch in another region; in one experiment, Bach-y-Rita restored touch perception in a patient who had lost peripheral sensation from leprosy.
To achieve sensory substitution and stimulate the brain when the sensory organs that would normally relay the information are not intact, machines can perform the signal transduction instead. In this brain-machine interface, external signals are collected and transduced into electrical signals for the brain to interpret. Generally, a camera or a microphone is used to collect the visual or auditory stimuli that replace the lost sensory information. The visual or auditory data collected from the sensors is transduced into tactile stimuli that are then relayed to the brain for visual or auditory perception. This type of sensory substitution is possible only because of the plasticity of the brain.
Brain plasticity refers to the brain's ability to adapt to a changing environment, for instance to the absence or deterioration of a sense. Cortical remapping or reorganization in response to the loss of one sense may be an evolutionary mechanism that allows people to adapt and compensate by making better use of the remaining senses. Functional imaging of congenitally blind patients has shown cross-modal recruitment of the occipital cortex during perceptual tasks such as Braille reading, tactile perception, tactual object recognition, sound localization, and sound discrimination. This suggests that blind people can use their occipital lobe, generally used for vision, to perceive objects through other sensory modalities. This cross-modal plasticity may explain the often-described tendency of blind people to show enhanced abilities in their other senses.
In discussing the physiological aspects of sensory substitution, it is essential to distinguish between sensing and perceiving. The general question posed by this distinction is: are blind people seeing, or are they assembling a visual-like percept from different sensory data? While sensation occurs in one modality (visual, auditory, tactile, etc.), perception under sensory substitution is not confined to one modality but is the result of cross-modal interactions. Thus, while sensory substitution for vision induces visual-like perception in sighted individuals, it induces auditory or tactile perception in blind individuals. In short, with sensory substitution, blind people come to perceive visual aspects of the world through touch and audition.
Applications are not restricted to people with disabilities; they also include artistic presentations, games, and augmented reality. Examples include the substitution of visual stimuli by audio or tactile stimuli, and of audio stimuli by tactile stimuli. Among the best-known systems are Paul Bach-y-Rita's Tactile Vision Sensory Substitution (TVSS), developed with Carter Collins at the Smith-Kettlewell Institute, and Peter Meijer's seeing-with-sound approach (The vOICe). Technical developments, such as miniaturization and electrical stimulation, continue to advance sensory substitution devices.
In sensory substitution systems, sensors collect data from the external environment. This data is relayed to a coupling system that interprets and transduces the information and then relays it to a stimulator, which ultimately stimulates a functioning sensory modality. After training, people learn to use the information gained from this stimulation to experience a perception of the missing sensation instead of the actually stimulated one. For example, a leprosy patient whose perception of peripheral touch was restored was equipped with a glove containing artificial contact sensors coupled to skin sensory receptors on the forehead (which was stimulated). After training and acclimation, the patient was able to experience the data from the glove as if it originated in the fingertips, while ignoring the sensations in the forehead.
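The sensor, coupling system, and stimulator flow described above can be summarized in code. The following minimal sketch is illustrative only; all class and function names are assumptions, not the API of any real device:

```python
# Minimal sketch of the sensor -> coupling system -> stimulator flow
# described above. All names are illustrative, not a real device API.

class Sensor:
    """Collects raw data from the environment (e.g. a camera frame)."""
    def read(self):
        raise NotImplementedError

class CouplingSystem:
    """Interprets sensor data and transduces it into stimulation commands."""
    def transduce(self, raw_data):
        raise NotImplementedError

class Stimulator:
    """Delivers the transduced signal to an intact sensory modality."""
    def stimulate(self, commands):
        raise NotImplementedError

def run_substitution_step(sensor, coupling, stimulator):
    # One pass of the substitution loop; a real device runs this
    # continuously so the user can learn the mapping through training.
    raw = sensor.read()
    commands = coupling.transduce(raw)
    stimulator.stimulate(commands)
```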
To understand tactile sensory substitution, it is essential to understand some basic physiology of the tactile receptors of the skin. There are five basic types of tactile receptors: Pacinian corpuscles, Meissner's corpuscles, Ruffini endings, Merkel nerve endings, and free nerve endings. These receptors are characterized mainly by the type of stimulus that best activates them and by their rate of adaptation to sustained stimuli. Because some of these receptors adapt rapidly to sustained stimuli, they require rapidly changing tactile stimulation to be optimally activated. Among these mechanoreceptors, the Pacinian corpuscle offers the highest sensitivity to high-frequency vibration, from a few tens of hertz to a few kilohertz, thanks to its specialized mechanotransduction mechanism.
There are two types of stimulators: electrotactile and vibrotactile. Electrotactile stimulators use direct electrical stimulation of nerve endings in the skin to initiate action potentials; the sensation triggered (burn, itch, pain, pressure, etc.) depends on the stimulating voltage. Vibrotactile stimulators use pressure and the properties of the mechanoreceptors of the skin to initiate action potentials. Both stimulation systems have advantages and disadvantages. With electrotactile stimulating systems, many factors affect the sensation triggered: stimulating voltage, current, waveform, electrode size, material, contact force, skin location, skin thickness, and hydration. Electrotactile stimulation may involve direct stimulation of the nerves (percutaneous) or stimulation through the skin (transcutaneous). Percutaneous application causes additional distress to the patient, which is a major disadvantage of this approach. Furthermore, stimulation of the skin without insertion requires high-voltage stimulation because of the high impedance of dry skin, unless the tongue is used as the receptor site, which requires only about 3% as much voltage. The latter technique is undergoing clinical trials for various applications and has been approved for assistance to the blind in the UK. The roof of the mouth has also been proposed as another area where low currents can be felt.
Electrostatic arrays are being explored as human-computer interaction devices for touch screens. These are based on a phenomenon called electrovibration, which allows microampere-level currents to be felt as roughness on a surface.
Vibrotactile systems use the properties of mechanoreceptors in the skin so they have fewer parameters that need to be monitored as compared to electrotactile stimulation. However, vibrotactile stimulation systems need to account for the rapid adaptation of the tactile sense.
Another important aspect of tactile sensory substitution systems is the location of the tactile stimulation. Tactile receptors are abundant on the fingertips, face, and tongue but sparse on the back, legs, and arms. It is essential to take into account the spatial resolution of the receptors, as it has a major effect on the resolution of the sensory substitution. A high-resolution pin-array display can present spatial information via tactile symbols, such as city maps and obstacle maps.
Descriptions of several current tactile substitution systems follow.
One of the earliest and best-known sensory substitution devices was Paul Bach-y-Rita's TVSS, which converted the image from a video camera into a tactile image and coupled it to the tactile receptors on the back of his blind subjects. Several newer systems have since been developed that interface the tactile image with tactile receptors on other areas of the body, such as the chest, brow, fingertip, abdomen, and forehead. The tactile image is produced by hundreds of activators placed on the person; the activators are solenoids one millimeter in diameter. In experiments, blind (or blindfolded) subjects equipped with the TVSS can learn to detect shapes and to orient themselves. In the case of simple geometric shapes, it took around 50 trials to achieve 100 percent correct recognition; identifying objects in different orientations requires several hours of learning.
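The core conversion in such a system is a downsampling of the camera image to the coarse actuator grid. The sketch below shows one plausible way to do this; the grid size and threshold are illustrative assumptions, not TVSS specifications:

```python
import numpy as np

def frame_to_tactile(frame, rows=20, cols=20, threshold=128):
    """Downsample a grayscale frame (2-D uint8 array) to a coarse
    actuator grid and threshold it to on/off solenoid states.
    Grid size and threshold are illustrative, not TVSS parameters."""
    h, w = frame.shape
    grid = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            # Average the image patch covered by this actuator.
            cell = frame[i*h//rows:(i+1)*h//rows, j*w//cols:(j+1)*w//cols]
            grid[i, j] = cell.mean() > threshold  # bright cell -> solenoid on
    return grid
```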
A system using the tongue as the human-machine interface is particularly practical: the tongue-machine interface is protected by the closed mouth, and the saliva in the mouth provides a good electrolytic environment that ensures good electrode contact. Results from a study by Bach-y-Rita et al. show that electrotactile stimulation of the tongue requires only 3% of the voltage required to stimulate the finger. Because it is more practical to wear an orthodontic retainer holding the stimulation system than an apparatus strapped to other parts of the body, the tongue-machine interface is popular among TVSS systems.
This tongue TVSS system works by delivering electrotactile stimuli to the dorsum of the tongue via a flexible electrode array placed in the mouth. The electrode array is connected to a tongue display unit (TDU) via a ribbon cable passing out of the mouth. A video camera records a picture and transfers it to the TDU for conversion into a tactile image, which is then projected onto the tongue via the ribbon cable, where the tongue's receptors pick up the signal. After training, subjects learn to associate particular types of stimuli with particular visual images; in this way, tactile sensation can be used for visual perception.
Sensory substitution has also benefited from the emergence of wearable haptic actuators such as vibrotactile motors, solenoids, and Peltier elements. At the Center for Cognitive Ubiquitous Computing at Arizona State University, researchers have developed technologies that enable people who are blind to perceive social situational information using wearable vibrotactile belts (Haptic Belt) and gloves (VibroGlove). Both technologies use miniature cameras mounted on a pair of glasses worn by the user. The Haptic Belt provides vibrations that convey the direction and distance of a person standing in front of the user, while the VibroGlove uses spatio-temporal patterns of vibration to convey the facial expressions of the interaction partner. It has also been shown that even very simple cues indicating the presence or absence of obstacles (through small vibration modules located at strategic places on the body) can be useful for navigation, gait stabilization, and reduced anxiety when moving through unknown spaces. This approach, called the "Haptic Radar", has been studied since 2005 by researchers at the University of Tokyo in collaboration with the University of Rio de Janeiro.
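A belt of this kind must translate a detected person's bearing and distance into a motor choice and vibration strength. The following sketch shows one plausible mapping; the tactor count, range, and linear intensity law are assumptions for illustration, not the Haptic Belt's published design:

```python
def belt_command(bearing_deg, distance_m, n_tactors=8, max_range_m=5.0):
    """Map the bearing of a detected person to one of n_tactors motors
    around the waist, and their distance to a vibration intensity.
    Tactor count, range, and intensity law are illustrative assumptions."""
    # Nearest motor to the detected bearing (0 degrees = straight ahead).
    tactor = round((bearing_deg % 360) / 360 * n_tactors) % n_tactors
    # Closer person -> stronger vibration, clamped to the sensing range.
    intensity = max(0.0, 1.0 - min(distance_m, max_range_m) / max_range_m)
    return tactor, intensity  # e.g. (2, 0.6): motor 2 at 60% strength
```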
While no tactile-auditory substitution systems are currently available, experiments by Schurmann et al. show that tactile stimuli can activate the human auditory cortex, and vibrotactile stimuli can already be used to facilitate hearing in normal and hearing-impaired people. To identify the auditory areas activated by touch, Schurmann et al. stimulated subjects' fingers and palms with vibration bursts and their fingertips with tactile pressure. They found that tactile stimulation of the fingers led to activation of the auditory belt area, suggesting a relationship between audition and taction. Future research could therefore investigate the feasibility of a tactile-auditory sensory substitution system. One proposed invention is the 'sense organs synthesizer', which aims to deliver a normal hearing range of nine octaves via 216 electrodes to sequential touch nerve zones next to the spine.
Some people with balance disorders, or with adverse reactions to antibiotics, suffer from bilateral vestibular damage (BVD). They experience difficulty maintaining posture, an unstable gait, and oscillopsia. Tyler et al. studied the restoration of postural control through tactile-for-vestibular sensory substitution. Because BVD patients cannot integrate visual and tactile cues, they have great difficulty standing. Using a head-mounted accelerometer and a brain-machine interface that delivers electrotactile stimulation to the tongue, information about head-body orientation was relayed to the patients, giving them a new source of data with which to orient themselves and maintain posture.
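In such a system, the head tilt sensed by the accelerometer must be converted into a position on the tongue electrode array, so the stimulus drifts in the direction the head is leaning. A minimal sketch of one such mapping follows; the grid size, gain, and small-angle tilt estimate are illustrative assumptions, not the parameters of Tyler et al.'s device:

```python
import math

def tilt_to_electrode(ax, ay, grid=12, gain=3.0):
    """Map head tilt, read from accelerometer axes ax/ay (in g), to a
    position on a grid x grid tongue electrode array. Grid size and
    gain are illustrative assumptions."""
    # Tilt angles approximated from the gravity components.
    pitch = math.asin(max(-1.0, min(1.0, ax)))
    roll = math.asin(max(-1.0, min(1.0, ay)))
    # Center of the array corresponds to an upright head.
    row = int(grid / 2 + gain * pitch * grid / math.pi)
    col = int(grid / 2 + gain * roll * grid / math.pi)
    clamp = lambda v: max(0, min(grid - 1, v))
    return clamp(row), clamp(col)
```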
Touch-to-touch sensory substitution uses information from touch receptors in one region of the body to perceive touch in another. In one experiment by Bach-y-Rita, touch perception was restored in a patient who had lost peripheral sensation from leprosy: the patient was equipped with a glove containing artificial contact sensors coupled to skin sensory receptors on the forehead, which was stimulated. After training and acclimation, the patient was able to experience the data from the glove as if it originated in the fingertips, while ignoring the sensations in the forehead. After two days of training, one leprosy subject reported "the wonderful sensation of touching his wife, which he had been unable to experience for 20 years."
The development of new technologies has made it plausible to provide prosthetic arms with tactile and kinesthetic sensibilities. While this is not purely a sensory substitution system, it uses the same principles to restore the perception of the senses. Tactile feedback methods for restoring a perception of touch to amputees include direct stimulation and microstimulation of the tactile nerve afferents.
Other applications of sensory substitution can be seen in functional robotic prostheses for patients with high-level quadriplegia. These robotic arms have several mechanisms for slip, vibration, and texture detection, which they relay to the patient as feedback. With further research and development, the information from these arms could allow patients to perceive that they are holding and manipulating objects while the robotic arm actually accomplishes the task.
Auditory sensory substitution systems, like tactile sensory substitution systems, aim to use one intact sensory modality to compensate for the lack of another. With auditory sensory substitution, visual or tactile sensors detect and record information about the external environment; this information is then transduced by a brain-machine interface into auditory signals that are relayed via the auditory receptors to the brain.
Auditory vision substitution aims to use the sense of hearing to convey visual information to the blind.
The vOICe auditory display technology is one of several approaches to sensory substitution (vision substitution) for the blind. It aims to provide a form of synthetic vision by encoding the user's surroundings in sound, by means of a non-invasive visual prosthesis.
The vOICe converts live views from a video camera into soundscapes: patterns of scores of different tones emitted simultaneously at different volumes and pitches. The system uses a general video-to-audio mapping that associates height with pitch and brightness with loudness in a left-to-right scan of each video frame. Views are typically refreshed about once per second, with a typical image resolution of up to 60 × 60 pixels, as can be verified by spectrographic analysis.
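The height-to-pitch, brightness-to-loudness, left-to-right mapping can be sketched in a few lines of code. The following is a simplified illustration of that class of mapping, not the vOICe's exact algorithm; the frequency range, scan duration, and exponential pitch spacing are assumptions:

```python
import numpy as np

def image_to_soundscape(img, duration_s=1.0, fs=22050,
                        f_min=500.0, f_max=5000.0):
    """Render a grayscale image (rows x cols, values 0-255) as a
    left-to-right scan: row -> pitch, brightness -> loudness.
    Frequency range and scan time are illustrative assumptions."""
    rows, cols = img.shape
    col_len = int(duration_s * fs / cols)   # samples per image column
    t = np.arange(col_len) / fs
    # One oscillator per row, spaced from high (top) to low (bottom).
    freqs = f_max * (f_min / f_max) ** (np.arange(rows) / (rows - 1))
    out = []
    for c in range(cols):
        amps = img[:, c] / 255.0            # brightness -> loudness
        column = (amps[:, None] *
                  np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        out.append(column / rows)           # normalize the mixture
    return np.concatenate(out)              # mono audio samples
```

Playing the returned samples through headphones once per second would reproduce the refresh behavior described above.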
The ultimate goal is to provide synthetic vision with truly visual sensations by exploiting the neural plasticity of the human brain. The system requires no surgery; its interface is simply a pair of headphones. With practice, the processing within the brain reportedly becomes increasingly automatic and subconscious.
Neuroscience research has shown that the visual cortex of even adult blind people can become responsive to sound, and "seeing with sound" might reinforce this in a visual sense with live video from a head-mounted camera encoded in sound. The extent to which cortical plasticity allows for functionally relevant rewiring or remapping of the human brain is still largely unknown and remains under investigation.
One suggestion for increasing the efficiency of the resulting visual stimuli is to stabilize the visual field by using an accelerometer to keep the image steady when the head moves; this is implemented in the Android edition. Connecting an infrared sensor to adjust the camera position to match eye movements is an option for the Windows edition (though affordable mobile eye-trackers are not yet on the market).
The technology of the vOICe was invented in the 1990s by Peter Meijer. It has been widely reviewed in the media and in peer-reviewed scientific journals.
Unlike other models and prototypes aimed at this task, this project was developed on a budget of 4 US dollars and targets the vast majority of partially or fully blind people living in developing and underdeveloped countries, who cannot access equipment costing hundreds of dollars.
Project BATEYE uses an ultrasonic sensor mounted on a wearable pair of glasses that measures the distance to the nearest object and relays it to an Arduino board. The Arduino board processes each measurement and plays a tone (150-15000 Hz) for the corresponding distance (2 cm to 4 m) until the reading from the next ultrasonic pulse arrives, and the process repeats; the cycle completes roughly every 5 milliseconds. The wearer hears a sound that changes with the distance to the nearest object. The head provides a 195-degree swivel angle, and the ultrasonic sensor detects anything within a 15-degree cone. The approach relies on the hypothesis that in blind people the occipital lobe is recruited for processing other sensory feedback: using the brain as the computational unit, the wearer learns to associate each tone (updated every 14 ms) with its corresponding distance, building a soundscape from the tones and navigating by it. During experimentation, the test subject could detect obstacles as far away as 2-3 m, and with horizontal or vertical head movements the blindfolded test subject could understand the basic shape of objects without touching them, as well as the basic nature of the obstacles.
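The published description gives the tone range (150-15000 Hz) and distance range (2 cm to 4 m) but not the mapping curve between them. The sketch below assumes a log-log interpolation with nearer objects mapped to higher pitch; both choices are assumptions for illustration:

```python
import math

def distance_to_tone_hz(distance_cm, d_min=2.0, d_max=400.0,
                        f_min=150.0, f_max=15000.0):
    """Map an ultrasonic distance reading (2 cm - 4 m) onto the
    150-15000 Hz tone range described above. The mapping curve is
    not specified in the source; a log-log interpolation is assumed,
    with nearer objects producing higher pitches."""
    d = max(d_min, min(d_max, distance_cm))     # clamp to sensor range
    frac = (math.log(d) - math.log(d_min)) / (math.log(d_max) - math.log(d_min))
    return f_max * (f_min / f_max) ** frac      # 2 cm -> 15 kHz, 4 m -> 150 Hz
```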
This research is open-source and published: Debargha Ganguly (2016), "Developing an economic system that can give a blind person basic spatial awareness and object identification", International Journal of Advanced Research 4(11): 2003-2008, ISSN 2320-5407.
This work is part of ongoing research by Debargha Ganguly known as Project Basics, an effort to improve the basic standard of living in developing and underdeveloped countries, which has also included Project Awaaz, a low-cost, non-movement-restrictive plugin for converting hand movements to speech.
The EyeMusic represents high vertical locations in the image as high-pitched musical notes on a pentatonic scale and low vertical locations as low-pitched notes on the same scale. The user wears a miniature camera connected to a small computer (or smartphone) and stereo headphones. The images are converted into "soundscapes" using a predictable algorithm, allowing the user to listen to, and then interpret, the visual information coming from the camera.
The EyeMusic conveys color information by using a different musical instrument for each of five colors: white, blue, red, green, and yellow; black is represented by silence. The EyeMusic currently employs an intermediate resolution of 30×50 pixels. An auditory cue is sounded at the beginning of each left-to-right scan of the image. In summary: (1) higher musical notes represent pixels located higher on the y-axis of the image; (2) the timing of a sound after the cue indicates the pixel's x-axis location (an object on the left of the image is sounded earlier than an object farther to the right); and (3) different colors are represented by different musical instruments. Subjects who used the EyeMusic dynamically were able to accurately reach for an object perceived via the SSD.
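The three-part mapping above can be expressed as a per-pixel rule. The sketch below assumes a C major pentatonic scale, a two-second scan, and placeholder instrument names, since the source does not specify the actual note set or which instrument belongs to which color:

```python
# Scale degrees of a major pentatonic scale; the EyeMusic's actual
# note set is an assumption here.
PENTATONIC = [0, 2, 4, 7, 9]

# One instrument per color, per the description above; the actual
# instrument assignments are not specified in the source.
INSTRUMENTS = {
    "white": "instrument_1", "blue": "instrument_2", "red": "instrument_3",
    "green": "instrument_4", "yellow": "instrument_5",
}

def pixel_to_event(row, col, color, n_rows=30, n_cols=50,
                   scan_s=2.0, base_midi=40):
    """Return (midi_note, onset_s, instrument) for one pixel of a
    30x50 image, or None for black (silence). Scan duration and base
    note are illustrative assumptions."""
    if color == "black":
        return None                       # black -> silence
    k = n_rows - 1 - row                  # invert: row 0 = top = highest note
    midi = base_midi + 12 * (k // 5) + PENTATONIC[k % 5]
    onset = scan_s * col / n_cols         # left-to-right timing after the cue
    return midi, onset, INSTRUMENTS[color]
```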
This project, presented in 2015, proposes a versatile mobile device and a sonification method designed specifically for the pedestrian locomotion of the visually impaired. It sonifies, in real time, spatial information from a video stream acquired at a standard frame rate. The device consists of a miniature camera integrated into a glasses frame, connected to a battery-powered minicomputer worn around the neck on a strap; the audio signal is transmitted to the user through running headphones. The system has two operating modes: in the first, used when the user is static, only the edges of moving objects are sonified; in the second, used when the user is moving, the edges of both static and moving objects are sonified. The video stream is thus simplified by extracting only the edges of objects that could become dangerous obstacles. The system enables the localization of moving objects, the estimation of trajectories, and the detection of approaching objects.
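The two-mode edge selection described above can be sketched as follows. The gradient-based edge detector, frame-differencing motion test, and thresholds are illustrative choices, not the published method:

```python
import numpy as np

def edges_to_sonify(prev, curr, user_moving, diff_thresh=25, edge_thresh=40):
    """Select which pixels to sonify under the two modes described above:
    when the user is static, keep only edges of moving objects (detected
    by frame differencing); when the user is moving, keep all edges.
    Thresholds and the edge detector are illustrative assumptions."""
    gy, gx = np.gradient(curr.astype(float))
    edges = np.hypot(gx, gy) > edge_thresh            # all object edges
    if user_moving:
        return edges                                  # mode 2: everything
    moving = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    return edges & moving                             # mode 1: moving edges only
```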
Another successful visual-to-auditory sensory substitution device is the Prosthesis Substituting Vision for Audition (PSVA). This system uses a head-mounted TV camera that allows real-time, online translation of visual patterns into sound. As the user moves around, the device captures visual frames at a high frequency and generates the corresponding complex sounds that allow recognition. Visual stimuli are transduced into auditory stimuli via a pixel-to-frequency relationship that couples a rough model of the human retina with an inverse model of the cochlea.
The sound produced by this software is a mixture of sinusoidal tones produced by virtual "sources", each corresponding to a "receptive field" in the image, where each receptive field is a set of localized pixels. The amplitude of each source is determined by the mean luminosity of the pixels of its receptive field, while the frequency and the inter-aural disparity are determined by the center of gravity of the coordinates of the receptive field's pixels in the image (see Auvray M., Hanneton S., Lenay C., O'Regan K., "There is something out there: distal attribution in sensory substitution, twenty years later", Journal of Integrative Neuroscience 4 (2005): 505-521). The Vibe is an open-source project hosted on SourceForge.
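The per-source computation just described (amplitude from mean luminosity; frequency and inter-aural disparity from the center of gravity) can be sketched as follows. The frequency range and the simple linear panning law are assumptions, not the Vibe's published parameters:

```python
import numpy as np

def receptive_field_source(img, pixel_coords, f_min=200.0, f_max=4000.0):
    """Compute one virtual source's parameters from a receptive field
    (a list of (row, col) pixel coordinates), following the description
    above: amplitude from mean luminosity; frequency and inter-aural
    disparity from the field's center of gravity. Frequency range and
    panning law are illustrative assumptions."""
    rows, cols = img.shape
    ys, xs = zip(*pixel_coords)
    amplitude = img[list(ys), list(xs)].mean() / 255.0   # mean luminosity
    cy, cx = np.mean(ys), np.mean(xs)                    # center of gravity
    freq = f_max - (f_max - f_min) * cy / rows           # higher field -> higher pitch
    pan = 2.0 * cx / cols - 1.0                          # -1 = left ear, +1 = right
    return amplitude, freq, pan
```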
Other approaches to the substitution of hearing for vision use binaural directional cues, much as natural human echolocation does. An example of the latter approach is the "SeeHear" chip from Caltech.
Other visual-auditory substitution devices deviate from the vOICe's greyscale mapping of images. Zach Capalbo's Kromophone uses a basic color spectrum correlating to different sounds and timbres to give users perceptual information beyond the vOICe's capabilities.
By means of stimulating electrodes implanted into the human nervous system, it is possible to apply current pulses that the recipient can learn to recognize reliably. Kevin Warwick has shown experimentally that signals from force/touch sensors on a robot hand can be employed in this way as a means of communication.
It has been argued that the term "substitution" is misleading, as it is merely an "addition" or "supplementation" not a substitution of a sensory modality.
Building upon the research conducted on sensory substitution, investigations into the possibility of augmenting the body's sensory apparatus are now beginning. The intention is to extend the body's ability to sense aspects of the environment that are not normally perceivable by the body in its natural state.
The findings of research into sensory augmentation (as well as sensory substitution in general) that investigate the emergence of perceptual experience (qualia) from the activity of neurons have implications for the understanding of consciousness.
In 2005, the feelSpace group conducted a study of sensory augmentation with a vibrotactile magnetic compass belt worn around the waist. In this study, the participants were provided with the direction of magnetic north as a vibration on their waist.
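The belt's essential computation is selecting which vibrator around the waist currently points toward magnetic north, given the wearer's compass heading. A minimal sketch follows; the tactor count is an assumption, and the feelSpace belt's actual hardware layout may differ:

```python
def north_tactor(heading_deg, n_tactors=13):
    """Pick which belt vibrator to drive so that the active tactor
    always points toward magnetic north, given the wearer's compass
    heading in degrees. Tactor count is an illustrative assumption."""
    bearing_to_north = (-heading_deg) % 360   # north relative to the wearer
    return round(bearing_to_north / 360 * n_tactors) % n_tactors
```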
Significant performance improvements in navigational tests were observed (over and above those of control subjects who received the same training during the same period), and for half of the participants the perception of the belt's vibration underwent a profound change: it shifted from a simple tactile sensation toward a genuine and direct sense of allocentric orientation. In other words, these participants could perceive north as an entity distinct from the vibrating transducer on the waist, much as one perceives a glass on a table as an entity distinct from the impact of reflected photons on the retina. Further, tests of the influence of the belt information on the rotational nystagmus effect suggested that, after training, the processing of the belt information became subcognitive.