Scientists keep making technological breakthroughs that turn science fiction into scientific fact. Having already built humanoid robots, they have now produced a mind reader of sorts.
This tool is not yet as sophisticated as the mind-reading machines of science fiction films; it works in a much simpler way. But it suggests that one day humans may build far more capable mind readers.
There have been experiments in brain-to-brain communication before, but scientists are now finding more complex ways to communicate.
They developed a brain network of three people that allowed participants to send thoughts to one another.
In this case, the three brains communicated with one another through the game Tetris.
As reported by Engadget, the researchers relied on a combination of electroencephalography (EEG) to record electrical activity and transcranial magnetic stimulation (TMS) to deliver information.
They call this brain-to-brain communication system BrainNet.
With it, they created a network that allowed three people to send and receive information directly between their brains.
The technology behind the network is relatively simple. An EEG measures the electrical activity of the brain.
It consists of a number of electrodes placed on the scalp that pick up electrical activity in the brain.
By focusing their attention on LEDs flashing at different frequencies, participants can modulate their own brain signals.
In this way, the three participants could each rotate blocks in the Tetris game using nothing but the electrical signals sent by their brains.
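The frequency-tagging trick described above can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the BrainNet pipeline itself): it generates a synthetic EEG trace dominated by a 17 Hz oscillation, then uses a Fourier transform to decide which of two candidate LED frequencies the "participant" was attending to.

```python
import numpy as np

def dominant_frequency(signal, sample_rate, candidates):
    """Return whichever candidate frequency carries the most spectral power."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Power at the FFT bin nearest each candidate LED frequency.
    power = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in candidates}
    return max(power, key=power.get)

# Synthetic EEG: the participant attends to the 17 Hz LED; add noise.
rate = 250                          # samples per second
t = np.arange(0, 4, 1.0 / rate)     # a 4-second recording window
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 17 * t) + 0.5 * rng.standard_normal(t.size)

print(dominant_frequency(eeg, rate, candidates=[15, 17]))  # → 17
```

Because the sine component concentrates its power in one FFT bin while the noise spreads evenly across the spectrum, the attended frequency stands out clearly even in a few seconds of data.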
The scientists who developed this mind reader say the work will be developed further, so that in the future many brains can be connected at once.
In 1963, at the age of 21, Stephen William Hawking was diagnosed with amyotrophic lateral sclerosis (ALS). The disease attacked his motor neurons, shutting down the nerves that control the muscles. Hawking slowly became paralyzed.
Hawking’s ability to move continued to decline for two decades after the diagnosis, but he could still talk, which at least helped him keep researching and working. Then, in 1985, a bout of pneumonia forced him to give up his ability to speak in order to survive. Unable to communicate verbally, and with his mobility severely limited, his life became harder still.
“For a while, he used spelling cards to communicate, patiently pointing at letters and forming words by raising his eyebrows. Although this gave Hawking the ability to communicate, the process was slow,” writes the VitrX page.
Hawking’s life was then sustained by technology built into his wheelchair. David Mason, the husband of one of his nurses, designed a portable computer and voice synthesizer to help him move and communicate, all controlled by a switch on the arm of the wheelchair.
The system worked like operating a computer with only a spacebar. Hawking’s computer scanned through the letters shown on the screen one at a time; when it reached the letter he wanted, all he had to do was click the button on the arm of his chair.
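The one-switch scanning just described can be modeled as a tiny simulation. This is a hypothetical sketch, not Hawking's actual software: the highlight steps through the alphabet from the start for each character, and a single "click" selects the highlighted letter.

```python
import string

ALPHABET = string.ascii_uppercase + " "

def scan_steps(message, alphabet=ALPHABET):
    """Count how many scan steps a single-switch user needs to spell
    a message, if the highlight restarts at 'A' for every character."""
    steps = 0
    for ch in message:
        # The highlight advances one slot at a time; the user clicks
        # when it lands on the wanted character.
        steps += alphabet.index(ch) + 1
    return steps

print(scan_steps("HI"))  # → 17 (8 steps to reach 'H', 9 to reach 'I')
```

Real scanning interfaces cut this cost by ordering characters by frequency or by scanning rows first and then columns, which is how such systems reach usable typing speeds.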
With this new device Hawking could write at a speed of 15 words per minute, and the synthesizer then voiced the words he typed. But once again Hawking could not outrun the advancing paralysis. His typing speed kept declining, until in 2008 he could no longer move his fingers at all.
Then once again technology helped him.
The click function was then moved to his cheek, the only part whose muscles could still move. The “cheek switch” carried an infrared sensor that detected the small movements of his cheek muscle when he found the word or computer feature he wanted.
“As Hawking’s physical condition gradually deteriorated, his typing speed dropped to only one or two words per minute. Scientists at Intel provided assistance in the form of an algorithm adapted to Hawking’s vocabulary and writing style, which accurately predicted which word he wanted to use next,” writes the How It Works page.
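Word prediction of the kind described above can be illustrated with a toy bigram model. This is only a sketch under simple assumptions, not Intel's actual algorithm: it counts which word most often follows each word in a small corpus, then suggests the most frequent follower.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which words follow each word in a corpus."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Suggest the most frequent follower of `word`, or None."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# A tiny made-up corpus standing in for a user's writing history.
corpus = ("the black hole evaporates slowly "
          "the black hole radiates energy "
          "the event horizon hides the black hole")
model = train_bigrams(corpus)
print(predict_next(model, "black"))  # → hole
```

A personalized predictor of this sort improves as the corpus grows, which is why adapting it to one writer's vocabulary and style pays off so much for slow input methods.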
Hawking was lucky. In most cases of ALS, patients survive an average of only five years. In Hawking’s case, however, the disease progressed very slowly. He died in March 2018 at the age of 76, after living with paralysis for more than 50 years.
Hawking’s other piece of fortune was the technological support that kept him able to communicate, while most people paralyzed by ALS, locked-in syndrome, or spinal cord injury can do almost nothing.
Locked-in syndrome, for example, paralyzes the sufferer’s entire body; only a few muscles, usually those around the eyes, can still move. A severe spinal cord injury can paralyze the body from the neck down. Fortunately, the brain and consciousness still function well. That, at least, gives scientists an opening to create mind-reading machines that help these patients communicate and survive.
The working principle of a mind-reading machine is basically simple. As How It Works explains, when we talk, the brain sends nerve signals to the throat and the muscles around the mouth. Even when the speech muscles barely move, these signals can be read by a special device.
Scientists call them subvocal signals. They can be translated into speech through a synthesizer, or into commands for a motorized wheelchair such as Hawking’s.
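A recognizer for a small subvocal vocabulary can be caricatured as template matching. The snippet below is purely illustrative, with made-up feature vectors: each word has a stored "template" (say, the mean of its training measurements), and a new measurement is classified as the word whose template it is closest to.

```python
import numpy as np

def nearest_centroid(features, centroids):
    """Classify a feature vector as the word whose stored template
    (mean training vector) it is closest to in Euclidean distance."""
    best_word, best_dist = None, float("inf")
    for word, centroid in centroids.items():
        dist = np.linalg.norm(features - centroid)
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word

# Hypothetical per-word templates learned from muscle-signal features.
templates = {
    "stop": np.array([0.9, 0.1, 0.2]),
    "go":   np.array([0.1, 0.8, 0.3]),
    "left": np.array([0.2, 0.2, 0.9]),
}
reading = np.array([0.85, 0.15, 0.25])  # a new subvocal measurement
print(nearest_centroid(reading, templates))  # → stop
```

Real systems extract far richer features and use statistical or neural classifiers, but the core idea is the same: map a signal pattern onto the nearest entry in a limited vocabulary.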
This mind-reading technology was first developed at the neuro-engineering laboratory of the NASA Ames Research Center in the United States in 1999. It was originally intended for astronauts working on the International Space Station (ISS). The ISS environment is very noisy, and astronauts often have difficulty controlling computer systems by hand, while the noise makes it hard for voice-command machines to interpret what they say.
NASA scientists saw that subvocal signals could solve this problem. Led by Chuck Jorgensen, NASA developed a sensor that measures subvocal signals, connected to a computer that interprets them as words or phrases.
The research was quite successful. In 2004 the machine could recognize around ten words; by 2007 the vocabulary had grown to roughly 25 words. But the tool still could not really read minds.
“There is a difference between ‘thinking in words’ and speaking subvocally. Subvocal speech requires some activation of the speech muscles. This is not a mind-reading machine at all,” Chuck Jorgensen was quoted as saying by the science and technology page The Future of Things.
A bright spot emerged from research on speech production in the brain, conducted by Takeshi Tamura and Michiteru Kitazaki of Toyohashi University of Technology, Atsuko Gunji of the Japanese National Institute of Mental Health, and Hiroaki Shigemasu of Kochi University of Technology.
Takeshi Tamura et al. found a method that can detect and monitor signals in the cortex associated with speech production. The work was published as “Audio-vocal Monitoring System Revealed by Mu-Rhythm Activity” in the journal Frontiers in Psychology (2012).
The Japanese scientists managed to detect the frequency and distribution of speech signals linked to the mouth muscles, for both actual speech and words that were only thought. In other words, they could genuinely listen in on a person’s inner speech, and they could also detect the response signal when someone merely imagined a spoken word.
In 2017 Takeshi Tamura et al. developed a mind reader based on these findings. They believe the tool can help people who are paralyzed and unable to communicate because of locked-in syndrome, and they plan to release it as a smartphone application so the public can use it easily.
But of course it will take some years to reach that goal, and the prototype is still far from reliable: it recognizes and distinguishes the numbers 1 to 10 with 90 percent accuracy, while for single Japanese syllables accuracy drops to 61 percent.
The tool uses an electroencephalogram (EEG) to scan brain signals while someone thinks of something to say; a computer with artificial intelligence then interprets the scan.
“This technology enables disabled people, who have lost their voice communication capabilities, to get that capability once more. Furthermore, the research group plans to develop devices with fewer electrodes that can be connected to smartphones within the next five years,” said a release from Toyohashi University of Technology as reported by The Independent.
At about the same time, researchers at the Wyss Center for Bio and Neuroengineering in Switzerland were conducting similar research. Rather than scanning brain signals only with EEG, the study led by neuroscientist Niels Birbaumer also tracked changes in oxygen levels in the brain.
As reported by The Conversation, Birbaumer’s team ran a series of brain scans on four people with ALS who had lost the ability to speak. Birbaumer used near-infrared spectroscopy to measure changes in brain oxygen levels while the participants thought of an answer.
The researchers asked personal questions whose answers the participants knew: yes-or-no questions such as “Is your husband’s name Joachim?” or “Are you happy?”
“The machine records blood flow and calculates how its fluctuations differ when participants think ‘yes’ and ‘no’,” Birbaumer was quoted as saying by Reuters.
The same questions were repeated over several weeks so the computer could test the consistency of the patterns. The results were encouraging: to the question “Are you happy?”, all four participants consistently answered “yes” throughout the repetition period. In other words, the method works.
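The consistency test described above amounts to checking how often repeated sessions agree on the same decoded answer. A minimal sketch, with hypothetical decoded answers standing in for the real recordings:

```python
def consistency(answers):
    """Fraction of sessions that agree with the majority answer."""
    majority = max(set(answers), key=answers.count)
    return answers.count(majority) / len(answers)

# Hypothetical decoded answers to "Are you happy?" over repeated sessions.
sessions = ["yes", "yes", "no", "yes", "yes"]
print(f"{consistency(sessions):.2f}")  # → 0.80
```

A decoder operating at chance would hover near 0.5 on a yes/no question, so consistently high agreement across weeks is what gives confidence that the signal is real rather than noise.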
Now the Wyss Center team is developing the technology further so it can also be used by people paralyzed by strokes and spinal cord injuries.