Fears and the future of mind reading robots

For quite some time I've been interested in artificial intelligence and robotics, partly because it has become the latest trend and the robotics industry is making waves in the global market. The global industrial robots market was valued at over 15 billion US dollars in 2021 and is projected to grow to almost 40 billion USD by 2029. But the skyrocketing market growth is not the only thing pulling me towards the topic; so are the fears associated with these innovations. With the way this robo army is growing, and the degree of intelligence being built into these machines, I won't pretend I'm not scared that one day these robots will become so articulate and united that they decide human beings are irrelevant to them. You might call it the wild fear of an opium smoker, and I just pray that's all it will be.

One of the biggest breakthroughs this industry has made in recent times is a robot that can predict the mind of its owner just by interpreting their motions and facial expressions. Robots are designed to perform specific tasks and can hardly go against their programming, but designing a robot that can be controlled and corrected in an intuitive way is on a whole different level. Imagine instructing a robot to carry out a task without speaking a word or typing a command, then telling it to stop that task and start another, and having it immediately obey; all of this without typing anything or even waving a hand. If this innovation doesn't give you shivers, it does worry me a bit, and I wonder whether we are not slightly tapping into darkness.

We thought it was fiction, but it no longer is. At MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), researchers developed a system that allows a person to control a robot's arm with brain waves alone. Asked about the essence of this rare invention, director Daniela Rus said the aim is to move away from a world where people have to adapt to the constraints of machines, and to develop systems that are a more natural and intuitive extension of us.

Until now, scientists had largely failed at controlling a robot with brain waves because of the many complexities involved, and those who did succeed did so with very poor accuracy: the robots could not predict exactly what the human controller was thinking, so the results were not considered a success from the standpoint of mind-reading robot technology. In earlier research, the person had to learn and practice each new command so as to mentally associate a graphic pattern with a robotic command like "stand up" or "sit down." It is rather like being told to learn a new language, and even once you have learnt it and generated the graphic pattern, you have to think the command hard and repeatedly before the robot detects it, still with very low accuracy.

The process of the robot detecting and executing our thoughts is based on electroencephalogram (EEG) patterns generated by brain waves. An EEG is a depiction of the electrical activity occurring at the surface of the brain. This activity appears on the screen of the EEG machine as waveforms of varying frequency and amplitude, measured in voltage (specifically microvolts), and the patterns are recorded through electrodes placed on the scalp. Past efforts at this kind of mind reading topped out at roughly 70% accuracy, so such robots could not be used in industries and projects that demand accuracy and precision, like construction or medicine. An engineer may intend for the robot to raise a beam to about 150 meters, but because the robot misreads the brain waves, it might raise the beam to only 15 meters.
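To make the EEG side of this a little more concrete, here is a minimal, hypothetical sketch of how a single EEG window might be compared against a calibrated waveform template to decide whether a particular brain pattern occurred. The sampling rate, window length, threshold and template are illustrative assumptions made up for the example; real brain-computer interfaces use trained classifiers over many scalp channels, and this is not the MIT pipeline.

```python
import numpy as np

# Hypothetical sketch: decide whether one EEG epoch (a short window of samples
# from a single scalp electrode, in microvolts) resembles a calibrated template
# waveform. Everything here is illustrative, not the CSAIL system.

SAMPLE_RATE_HZ = 256          # assumed EEG sampling rate
EPOCH_SECONDS = 0.8           # assumed analysis window after an event

def epoch_matches_template(epoch_uv: np.ndarray,
                           template_uv: np.ndarray,
                           threshold: float = 0.5) -> bool:
    """Return True if the epoch correlates strongly with the template."""
    # Normalise both signals so the comparison ignores amplitude scaling.
    e = (epoch_uv - epoch_uv.mean()) / (epoch_uv.std() + 1e-9)
    t = (template_uv - template_uv.mean()) / (template_uv.std() + 1e-9)
    correlation = float(np.dot(e, t) / len(e))   # roughly a Pearson correlation
    return correlation > threshold

# Usage with synthetic data: a noisy copy of the template should be detected.
rng = np.random.default_rng(0)
n = int(SAMPLE_RATE_HZ * EPOCH_SECONDS)
template = np.sin(np.linspace(0, np.pi, n))             # toy "brain pattern"
noisy_epoch = template + 0.3 * rng.standard_normal(n)   # simulated recording
print(epoch_matches_template(noisy_epoch, template))    # -> True
```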

The new system, which achieves far higher accuracy, works around the faults that sank earlier projects: instead of relying on EEG alone, it combines it with electromyography (EMG). EMG measures the electrical activity of skeletal muscles, so the person can communicate with the robot both by thinking and by making slight arm movements, which greatly improves control.
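As a rough illustration of the EMG half of such a hybrid interface, the sketch below classifies a short window of two forearm EMG channels by comparing their RMS amplitudes. The channel layout, window length and threshold are assumptions invented for the example, not details of the CSAIL system.

```python
import numpy as np

# Hypothetical EMG gesture detector: two surface-EMG channels on the forearm,
# one assumed to favour a "gesture left" muscle group and one "gesture right".
# A window is classified by comparing RMS amplitudes. All values are made up.

ACTIVITY_THRESHOLD_UV = 50.0   # assumed RMS level that counts as a real gesture

def rms(window_uv: np.ndarray) -> float:
    """Root-mean-square amplitude of one EMG window, in microvolts."""
    return float(np.sqrt(np.mean(np.square(window_uv))))

def classify_gesture(left_channel_uv: np.ndarray,
                     right_channel_uv: np.ndarray) -> str:
    """Return 'left', 'right' or 'none' for one window of two-channel EMG."""
    left_rms, right_rms = rms(left_channel_uv), rms(right_channel_uv)
    if max(left_rms, right_rms) < ACTIVITY_THRESHOLD_UV:
        return "none"                      # muscles quiet: no correction requested
    return "left" if left_rms > right_rms else "right"

# Toy usage: a burst on the right channel while the left stays near baseline.
rng = np.random.default_rng(1)
quiet = 5.0 * rng.standard_normal(200)     # ~5 uV baseline noise
burst = 80.0 * rng.standard_normal(200)    # ~80 uV muscle contraction
print(classify_gesture(quiet, burst))      # -> "right"
```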

The spectacular cast behind this feat includes co-lead author Joseph DelPreto, who developed the work alongside Rus, former CSAIL postdoctoral associate Andres F. Salazar-Gomez, former CSAIL research scientist Stephanie Gil, research scholar Ramin M. Hasani, and Boston University professor Frank H. Guenther. DelPreto said that using the system feels much like communicating with another person.

In demonstrating how this incredible invention works, the team employed a humanoid robot called Baxter. Baxter is only somewhat humanoid, and it wasn't made this way just to appeal to humans: all of its parts serve a purpose. Its head is a computer screen, somewhat reminiscent of a tablet, on which animated cartoony eyes and eyebrows change expression to convey intuitive messages to nearby workers: closed eyes when it is on standby; neutrality when it is ready to start learning (eyes wide open, eyebrows parallel to the eyes); concentration when it is in the middle of learning (eyes wide open, eyebrows slanted down toward the center); focus when it is working without any issues (eyes looking downward, eyebrows slanted down toward the center); surprise when a person comes nearby (wide eyes with dilated pupils, raised eyebrows and an orange screen background); confusion when it is having an issue with a task (wide eyes, one eyebrow inverted, both slanted down toward the outside); and sadness when there is a problem and it has given up on a task (eyes looking downward, eyebrows inverted and slanted down toward the outside). Baxter's eyes also move in the direction one of its arms is about to move, as a warning to anyone working nearby.

On its torso are two arms measuring 41 inches (104 centimeters) from shoulder to end-effector plate, an area with interchangeable end-effectors, or "hands," that serve various purposes. Baxter comes with two hand types: a vacuum cup and an electric parallel gripper with finger-like jaws. These can be swapped out for others, and third parties are expected to develop new types of end-effectors. Each arm has seven degrees of freedom, and the arms can work in tandem or independently of one another, depending on the task; you can, for example, put Baxter between two conveyor belts and have it work both.

The arms are also compliant, a robotics term meaning that rather than being completely rigid and unable to change course, they can sense and adjust to any obstacles they encounter. If you grab, push or bump into one of its arms, it will give rather than remaining fixed. This compliance comes from series elastic actuators, in which a motor and gearbox drive a spring that in turn drives the joint, rather than driving the joint directly. The springs are what make each arm less rigid than a typical robot arm, and they are also used to measure the forces acting on the arms.
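The series elastic actuator idea can be captured in a few lines: the torque reaching the joint is proportional to how far the spring is deflected, which is also why the same spring can double as a force sensor. The stiffness value and angles below are illustrative assumptions, not Baxter's real parameters.

```python
# A minimal, simplified model of a series elastic actuator (SEA): the motor
# drives a spring, and the spring drives the joint. Values are illustrative.

SPRING_STIFFNESS_NM_PER_RAD = 100.0   # assumed torsional spring constant

def sea_joint_torque(motor_angle_rad: float, joint_angle_rad: float) -> float:
    """Torque transmitted to the joint is proportional to spring deflection."""
    deflection = motor_angle_rad - joint_angle_rad
    return SPRING_STIFFNESS_NM_PER_RAD * deflection

# Because torque = stiffness * deflection, measuring the deflection also tells
# you how hard the arm is being pushed, which is how a compliant arm can "give"
# when you bump into it instead of staying rigid.
print(sea_joint_torque(0.52, 0.50))   # small deflection -> roughly 2 N*m
```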

Baxter was equipped with a power drill and given a task: to reach three drilling targets on a mock-up of a plane fuselage. Without any training, Baxter would select a target and go for it. If the person noticed that it had selected the wrong drilling location, they would simply think about it to stop Baxter, then gently gesture toward the correct hole. In the process, Baxter learns what kind of brain and muscle electrical activity corresponds to that target. In other words, as DelPreto explains, it is the robot that learns from the way the user thinks, rather than the user having to learn the robot's language. The system also monitors error-related potentials, a type of brain signal that occurs naturally when we detect a mistake. If such a signal occurs, Baxter stops so its human supervisor can correct the mistake with a quick gesture.
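The supervision loop described here, with the robot choosing a target on its own, the error-related potential acting as a veto and a gesture supplying the correction, can be sketched as a toy control flow. The two detectors below are random stand-ins rather than real EEG/EMG classifiers; only the structure of the loop follows the description above.

```python
import random

# Toy sketch of the human-in-the-loop supervision described in the article.
# TARGETS, CORRECT_TARGET and the 97% "sensitivity" are illustrative only.

TARGETS = ["hole_A", "hole_B", "hole_C"]
CORRECT_TARGET = "hole_B"                  # what the human actually wants drilled

def errp_detected(robot_choice: str) -> bool:
    """Stand-in for an EEG ErrP detector: fires when the human sees a mistake."""
    return robot_choice != CORRECT_TARGET and random.random() < 0.97

def gesture_correction() -> str:
    """Stand-in for an EMG gesture classifier: the human points out the right hole."""
    return CORRECT_TARGET

def supervise(max_attempts: int = 5) -> str:
    choice = random.choice(TARGETS)        # robot selects a target on its own
    for _ in range(max_attempts):
        if not errp_detected(choice):
            print(f"drilling {choice}")    # no objection from the human: proceed
            return choice
        print(f"ErrP detected, stopping before {choice}")
        choice = gesture_correction()      # human redirects with a small gesture
    return choice

random.seed(0)
supervise()
```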

The difference in success rates between the old EEG-only system and the new hybrid method is dramatic: accuracy jumped from around 70% to 97%, nearly perfect and achieved without training. That is a huge improvement, and a giant step toward a future in which the user interface disappears in favor of simply thinking. The MIT team imagines that one day this type of technology could be "useful for the elderly, or workers with language disorders or limited mobility." Also, giant manga robots.

Benefits of These Mind-Reading Robots

Medical Applications – Mind-reading robots can be very helpful for physically handicapped individuals. The robot can be mapped to the user's brain so that signals from the brain control it, letting the user carry out daily chores that require physical movement they cannot perform themselves. People affected by paralysis stand to benefit greatly from such robots.

Robot Teachers – A recent mind-reading robot project at the University of Wisconsin-Madison developed an intelligent tutoring system that employs virtual teachers. The virtual teachers are mind-reading agents capable of detecting a lapse in a student's attention and applying techniques to bring their focus back to the online tutorial.

Construction Purposes – By reading the mind of an engineer, these robots can carry out construction work while the controller directs them mentally, correcting the robot when errors are made, all while sitting back with a can of cola in hand. All the engineer has to do is imagine; the robot handles the implementation.

Fears About Mind-Reading Robots

  1. Thought selection: The human mind is a mystery puzzle box. Most of the time there are many thoughts in the mind at once, and it becomes a challenge for the robot to select which particular thought to carry out. You may also think of doing something only to change your mind a few seconds later, and the robot has no way to wait until you have sorted through your thoughts to decide which one it should act on. This can pose real danger and create havoc.
  2. Safety Reasons: To control a mind-reading robot, one must wear scalp electrodes that interact with the brain. The effects of long exposure to these electrodes have not been ascertained, and there are open questions about how safe it is to wear them for long periods of time.
  3. Misuse: This technology could also be leveraged by vicious people, allowing the robots to carry out their harmful thoughts without getting directly involved, leaving the robot to take the blame.

The exact pros and cons of this strange invention are not yet known. All we can do is hope that this development will be a positive step toward tackling some of the issues that plague mankind. Still, I cannot say that, in light of these developments, I have no fears about what the future holds, whether safe or scary.

Leave your comments below; we would love to hear your thoughts.
