
This week I learned: rewiring our senses

Have you ever stopped and asked yourself, “How do I know the colour I call purple is actually the same colour someone else sees when they look at purple things?”

I’ve long been fascinated by the ways different animals – and different people – live in different sensory worlds. As it turns out, we already know that not everyone sees colour the same way – how do you explain the difference between red and green to someone who’s red-green colourblind? Is the dress blue and black, or white and gold? – but that’s just the tip of the iceberg when it comes to the different sensory worlds we can live in.

Reprogramming the brain’s computer

Like me, neuroscientist David Eagleman is intrigued by the different sensory worlds (or umwelten) inhabited by different organisms. In his 2015 TED talk, he points out that a bloodhound would be baffled by our inability to tell that there’s a cat 100 yards away (but out of sight), or that our neighbour visited the same spot six hours earlier – information the bloodhound can access easily through its superior sense of smell.

But Eagleman’s specific interest is not just in exploring different sensory experiences, but in actively expanding the senses available to us.

When the cochlear implant and the retinal implant were being developed, scientists doubted whether the human brain would be able to make sense of the data conveyed by these digital devices – and yet they worked. Our brains are remarkably skilled at taking all kinds of data inputs and learning to make sense of them. Just think of braille: if I run my finger over braille writing, I have to concentrate fiendishly hard just to tell the individual dots apart, yet an experienced braille reader doesn’t have to stop and think about what they’re feeling – their brain automatically converts tactile information into language.

The human brain, according to Eagleman, is essentially a plug-and-play system: whatever “peripherals” are used to gather the raw material of sensory data, with practice it can figure out how to turn that data into useful information.

Turning vibration into sound

And here’s where it gets really cool, because Eagleman isn’t all talk. Capitalising on the rise of wearable computing, he and PhD student Scott Novich developed a vest that can turn different frequencies of sound into patterns of vibration against the wearer’s torso – not unlike the way the inner ear in a hearing person already experiences sound as vibration.
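To get a rough feel for what that mapping might involve, here’s a hypothetical sketch in Python – not NeoSensory’s published algorithm; the motor count, band edges and scaling are invented for illustration. The idea: take a short chunk of audio, split it into frequency bands, and drive one motor per band.

    import numpy as np

    def audio_frame_to_motor_levels(frame, sample_rate=16000, n_motors=32):
        # Frequency content of this short chunk of audio
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
        # Split the speech-relevant range (~100 Hz to 8 kHz) into one band per motor
        edges = np.logspace(np.log10(100), np.log10(8000), n_motors + 1)
        levels = np.zeros(n_motors)
        for i in range(n_motors):
            in_band = (freqs >= edges[i]) & (freqs < edges[i + 1])
            levels[i] = spectrum[in_band].sum()
        # Compress and normalise so each motor gets a drive strength between 0 and 1
        levels = np.log1p(levels)
        return levels / levels.max() if levels.max() > 0 else levels

Run over consecutive audio frames, something like this would produce the stream of ever-changing vibration patterns that the wearer’s brain then has to learn to decode.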

Working with deaf volunteers, they found that after just four days of practice with the vest, the volunteers could already identify and distinguish between different spoken words as though hearing them. Their brains simply took this new source of data and learned to turn it into usable information.

In 2015, after demonstrating the vest in the aforementioned TED talk, Eagleman was approached by investors keen to bring the vest from prototype to product. He and Novich, through their new company, NeoSensory, are now developing wristbands that work the same way, giving deaf and partially deaf users a new, non-invasive option for experiencing the world.

Turning sound into sight

The idea that rewiring our brains to accept different sensory input is revolutionary might give Daniel Kish a laugh. He rewired his own brain as a child, and for years now he has been teaching other blind people to do the same.

Despite losing both eyes to cancer by the age of 13 months, Kish can describe his surroundings with ease and ride a bike with confidence. Growing up, he instinctively learned to use what he calls “flash sonar”, clicking with his tongue and using the echoes bouncing back to him to “see” the world around him in 3D. This despite having no idea that bats do the same thing – or that scientists call it echolocation – until he was much older.

Studies of Kish have established that his visual cortex is very much active when he uses flash sonar. Clicking at a moving object even activates the same specific part of his brain that sighted people activate by watching movement. That doesn’t mean he sees the world exactly the same way as a sighted person – his “vision” lacks colour, but on the other hand his ears let him “see” in 360° and, of course, he doesn’t care if it’s dark. His brain has simply taught itself to create visual information from a different set of data.

Kish says that his success with flash sonar is nothing to do with him being “special” and more to do with having parents who didn’t coddle him but treated him essentially the same as a sighted child, allowing him to explore the world by himself. He believes it’s natural for blind children to experiment with seeing through sound, and that the reason so few of them grow up to echolocate is more social than physical: the result of well-meaning parents, teachers, and other carers treating blind children as if they can’t get around on their own and, in the process, never giving them the chance to try.

He now works full-time with a team of other trainers – all of whom originally learned their flash sonar skills from him – to train blind people around the world in flash sonar and in navigating the world with increased confidence.

Oh – and he has a TED talk too.

Developing new senses

Echolocation and hearing through vibrations are both ways for people to recreate recognised senses – the input is different, but the information the brain makes from it is familiar: sight and sound.

But as Eagleman points out, wearable computing like his and Novich’s vest can allow people to feed all kinds of data, not just sound, directly into the brain via patterns of vibration.* Which raises the fascinating question: what other data could our brains learn to interpret into information, and what would that “look” like to the person experiencing a profound shift in their umwelt?

If a bee can sense the electrical field of a flower, could a financial analyst develop a sense that tells them about the movements of the stock exchange? Could a pilot develop a sense of the information available to their aeroplane – air currents, weather conditions, wing-flap positions, roll, pitch, and yaw – allowing them to make piloting decisions based on literally experiencing what it’s like to be the plane? Could we learn to see in ultra-violet and infra-red?
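Mechanically, at least, the encoding side of those questions looks simple enough – in principle you just scale each reading into a drive strength for one motor. A hypothetical sketch, with the channels and ranges invented for illustration:

    import numpy as np

    def readings_to_motor_pattern(values, lo, hi):
        # Scale each channel's reading from its expected range onto 0-1,
        # then clamp, so every motor gets a sensible drive strength.
        scaled = (np.asarray(values, dtype=float) - lo) / (hi - lo)
        return np.clip(scaled, 0.0, 1.0)

    # e.g. three made-up flight channels (roll, pitch, yaw in degrees) on three motors
    pattern = readings_to_motor_pattern([5.0, -2.0, 170.0], lo=-180.0, hi=180.0)

The hard, open half of the question is the other one: whether the brain would learn to fuse those buzzes into a genuine sense.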

We don’t know yet – but I for one am fascinated to find out.

 

*Fun fact: if you’ve watched Westworld season 2, you’ve already seen the vest “in action”, supposedly feeding the wearers real-time data on android movements.

Image: NeoSensory VEST, used with permission