EXCLUSIVE - From the Age of Information to the Age of Experience: Taste, smell and touch through the Internet
Even with the astounding progress in technology over the past 100 years, certain ideas retain the power to surprise. We still think of them as belonging to the realm of science fiction.
The Imagineering Institute explores precisely such ideas and engineers them into reality. Its vision is to "Invent the Future of Internet". At EmTech Asia, OpenGov had a chat with Dr. Adrian David Cheok, Director of the Imagineering Institute and a Professor of Pervasive Computing at City University London.
(The interview below has been edited by OpenGov for brevity and clarity.)
Can you tell us about the Imagineering Institute?
We conduct blue sky research. We want to invent the future of the Internet. In the future of the Internet, we will be able to communicate with all five senses, through virtual reality. We will also have communications between not just humans but also between humans and robots and humans and virtual characters.
So, we are also looking at related topics such as artificial intelligence and natural language processing. We are developing chatbots, so that you can talk to a computer as if you were talking to a real human.
We are looking at a long time horizon of maybe 40-50 years. The Internet has already changed so much, but I think it's only the beginning.
Now we are in the 'Age of Information'. You can access any information anywhere, anytime. The future will be the 'Age of Experience'. You can experience anything through the Internet. You want to have dinner with your grandmother? You will be able to taste the dinner through the Internet.
It is a bit difficult to visualise how these technologies will work. For instance, how would sharing a dinner with your grandmother work?
We are already working on technologies where you can stimulate the tongue, which has taste receptors, using electrical signals and also thermal energy. Our taste receptors can be excited using electrical currents. And if we raise the temperature rapidly, we can experience, for example, sweetness; if we cool it rapidly, we experience a kind of minty taste.
Then we are going to do similar things for the nose. We are doing experiments with an electrical nose machine, where we put the device into the nose and stimulate the olfactory receptors using electrical signals.
Why do we want to do it with electrical signals and not chemicals? Because you cannot transmit chemicals through the Internet.
It's very much like music. Once music became digitised, you could transmit it anywhere in the world. That's what we want to do with all of the senses. Audio and visual have already been digitised. Touch, smell and taste are next.
What are the challenges in digitising smell, taste and touch?
Digitising these senses is much more difficult, because audio and visual are based on light and sound, which are just waves. And a wave is just a frequency, a number. So, you can easily digitise a frequency.
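The point about waves being easy to digitise can be illustrated with a short sketch. The two steps, sampling and quantisation, turn a continuous wave into a list of numbers; the sample rate, bit depth and tone frequency below are illustrative assumptions, loosely following telephone-quality audio, not values from the interview:

```python
import math

SAMPLE_RATE = 8000   # samples per second (assumed, telephone-quality)
BITS = 8             # quantisation depth (assumed)
FREQ = 440.0         # an A4 tone (assumed)

def digitise(duration_s=0.01):
    """Sample and quantise a pure tone: the two steps that turn a wave into numbers."""
    n_samples = int(SAMPLE_RATE * duration_s)
    levels = 2 ** BITS
    samples = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        amplitude = math.sin(2 * math.pi * FREQ * t)           # the analogue wave, in [-1, 1]
        quantised = round((amplitude + 1) / 2 * (levels - 1))  # map to an integer in 0..255
        samples.append(quantised)
    return samples

# 10 ms of tone becomes 80 integers that can be sent over any network.
digital = digitise()
```

Taste and smell have no equivalent of this recipe, which is why capture and actuation for those senses are open research problems.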
We have made early prototypes but we are still working on finding a good way to digitise these other senses.
There are two parts to it, capture and actuation. Now you can take a photo with a digital camera. In the future, your grandmother will be able to take a snapshot of the smell of a food and send it through the Internet.
Capturing is relatively easy. You just have to measure molecules. The actuation part is the most difficult. That’s what we are trying to work on, concentrating on taste and smell.
I mentioned stimulating the olfactory receptors using electrical signals. There are other ways too. You could have something on your mobile phone that, using nanotechnology, generates the molecules. If you capture, for example, the smell of coffee, which I think has a few hundred molecules, your mobile phone could have a nanotechnology device to produce those chemicals.
We have already done a device where you can output a smell from your phone but the molecules are already pre-defined. In other words, we already have the smell stored there. The next stage would be to combine the molecules in real time.
Can we produce taste with electrical signals? For basic tastes, like sweet and sour, we can already do it. But things get complicated when you move on to coffee or wine. Wine has a few thousand molecules.
Maybe the technology has to go into material science for people to make devices that can output molecules from your phone.
Which areas of science are relevant to the work you are doing?
Our research is multi-disciplinary. The core of our lab is made up of electrical and electronic engineers and computer scientists. We also have people in our lab from areas like biotechnology.
We need engineers because we need to build the systems. We also need to work very closely with scientists to understand things like how we perceive smells. Unbelievably, in 2017 scientists still don't know definitively how we smell.
So, from a scientific perspective, there is a lot of work to be done on taste and smell. These senses weren't previously seen as a high priority by scientists; they focused on other senses like sight.
Why should we look into digitising these senses now? Do governments need to think about these technologies at the moment?
People have a very strong need to communicate. Look at the explosive growth in mobile phones.
I think we are essentially making devices that will help people communicate better.
If you want to have dinner with someone anywhere in the world, you don't have to get on a plane! If you want to experience what it is like to be on Mt. Everest, you don't have to go trekking on the mountain.
I don't think at this stage governments have to worry about it. Dr. David Levy predicts that by 2050 it will be legal to marry a robot. But if you think about it, we are going to have artificial intelligence, we are going to have robots, which can touch, taste and smell. They are going to be so human-like.
Then the government will have to step in and there will be changes in governance and regulations.
What do you think AI is going to look like in 40-50 years? Will it matter if the AI is still artificial, that is, if it only acts like a human, but doesn’t really think like one?
Already we are seeing very sophisticated artificial intelligence. When we say artificial intelligence, it is artificial. Even if a robot can completely simulate a human, including emotions, it’s still essentially a program. But I don’t think we need to care about that, if it looks and acts real.
Computers will be able to do everything humans can in the not too distant future. AI will become such a good simulation of human intelligence that it will be indistinguishable from humans.
And then you have to think that there are things that computers can do that we cannot. Every computer, every AI can connect to the Internet. And they can talk to each other and learn from each other. You are going to have an intelligence which is far beyond human intelligence.
For example, you go to a doctor now. Doctors might not have time to read the latest textbooks or journals because they are busy treating patients. But a computer can read every single paper which has been published in the last 500 years.
I think a lot of white collar jobs, such as lawyer and doctor, will be done by robots. We will probably still need surgeons, but they will not be doing the actual surgery. They will be monitoring it.
How long do you think it will take for such technologies to go mainstream?
It will be a long time before many of these technologies become commercial products. Look at virtual reality. Now you can go to a shop and buy a Samsung Gear VR for a few hundred dollars. But the concept of virtual reality dates back to the 1950s. It took a long time to achieve widespread adoption.
But technology cycles are getting shorter and shorter. It might take much less time in the future.