This article first appeared in Forbes magazine in Jan. 2020.
As technology develops, especially with advancements in artificial intelligence (AI) and robotics, people can become fearful and anxious about what they don’t know or don’t understand.
Brain Corp has deployed autonomous robots globally across a range of workplaces and industries, and in that process, several common questions and misconceptions come up when we meet customers who are deciding how to deploy and support robots in their specific environments.
These misconceptions often fall at the extremes: people either worry that their AI-powered robot will act of its own accord, or they expect far more from their robot than it was programmed to do. These perspectives are understandable given the celebration of advanced technology and the proliferation of robots in popular books, movies and comics.
Take Rosie the Robot from the 1960s cartoon The Jetsons, who seamlessly completed multiple tasks simultaneously around the house. This futuristic technology often featured in our favorite TV shows just isn’t possible yet. Although we’re seeing great advancements in AI in areas such as voice-command devices, robotic arms and indoor self-driving vehicles, we still have a long way to go before reality meets science fiction.
Here are a few of the questions I regularly field from customers, along with my answers:
Artificial intelligence is an umbrella term for a computer’s ability to perform tasks that require human intelligence, such as speech or facial recognition, language translation, visual perception or simple decision-making. AI operates at different levels, but overall, it mimics what humans would do with a thought process, movement, computation, object recognition or decision.
Robotics falls under this term as well. Since developers must create the algorithms that make decisions to perform a task (a mobile robot moving around an object, for example), the decisions and movements are often specific and limited to the task at hand.
AI is still early in its lifespan, and right now, it’s manufactured for specific solutions for different problems, such as automated floor-cleaning robots or self-driving delivery robots in factory and warehouse environments. In healthcare, AI is beginning to help with the accuracy of medical diagnoses.
Intelligent technology (picking a movie or retail product based on your likes and viewing history) draws on vast libraries of data collected from real-world experiences and requires large data sets to work well. Robots “learn” from the data you provide and the algorithms you program. As a growing fleet of machines supplies more input data, the AI or robotic function improves.
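As a toy illustration of what “learning from your viewing history” can mean in practice, consider the sketch below. It is a hypothetical example, not Brain Corp or any real recommender's code: it simply counts which genre a viewer watches most and suggests an unwatched title from that genre. Real systems use far larger data sets and statistical models, but the principle is the same: more history data yields more reliable counts, and therefore better suggestions.

```python
from collections import Counter

def recommend(viewing_history, catalog):
    """Suggest the unwatched title whose genre the viewer watches most.

    A deliberately simple stand-in for large-scale recommendation models.
    Each entry is a (title, genre) pair.
    """
    # Tally how often each genre appears in the viewing history.
    genre_counts = Counter(genre for _, genre in viewing_history)
    watched = {title for title, _ in viewing_history}
    unwatched = [item for item in catalog if item[0] not in watched]
    # Rank unwatched titles by how often the viewer has watched that genre.
    return max(unwatched, key=lambda item: genre_counts[item[1]])[0]

history = [("Film A", "sci-fi"), ("Film B", "sci-fi"), ("Film C", "drama")]
catalog = [("Film A", "sci-fi"), ("Film D", "sci-fi"), ("Film E", "comedy")]
print(recommend(history, catalog))  # "Film D" — the viewer's top genre is sci-fi
```

With only three history entries the genre tally is noisy; with thousands, the counts stabilize, which is why these systems need large data sets to work well.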
As our founder Eugene Izhikevich says, “Computational hardware that mimics true AI doesn’t exist.” Although we can process astounding amounts of data in very little time and data storage is shrinking to minuscule sizes, computers still can’t process information and decisions the way humans do.
We can think about multiple things at one time — we can walk and chew gum and solve a math problem in our head while looking ahead and paying attention to traffic. In contrast, robots are generally focused on one task or a series of movements and decisions to complete a task.
That being said, autonomous mobile robots (AMRs) process thousands of computer vision data points to make one decision or a series of decisions in sequence. Many are now also adding the complexity of manipulation (picking) or scanning shelves for inventory, but this still does not match what the human mind can do. However, the accuracy and consistency of robots and AI are saving businesses time and streamlining operations.
Robots are not self-learning today, although we are beginning to see promise from early “reinforcement learning” systems. AMR operating systems must tell the robots how to make decisions, such as whether to turn left or right. Robotic intelligence is not equal to human intelligence and cannot develop new capabilities unrelated to the task it was programmed for.
For changes to occur, developers must add new features or enhancements to the software and test them for safety, security and performance. New capabilities come with new versions of software, and over its lifespan your robot will be able to do more and gain more “bells and whistles,” but every one of them must be programmed by a human.
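To make concrete what “telling the robot how to decide” looks like, here is a hypothetical, heavily simplified steering rule of the kind a developer might write and test. The function name, thresholds and behaviors are illustrative only, not BrainOS code. The point is that every branch was authored by a human; the robot cannot invent a behavior that is not encoded here.

```python
def steer(left_clear_m, right_clear_m, min_clearance_m=0.5):
    """Hypothetical hand-written steering rule for a mobile robot.

    Inputs are the clear distances (in meters) measured to the left
    and right. Every possible outcome below was decided by a human.
    """
    if left_clear_m >= min_clearance_m and right_clear_m >= min_clearance_m:
        return "forward"      # enough room on both sides
    if left_clear_m > right_clear_m:
        return "turn_left"    # more room on the left
    if right_clear_m > left_clear_m:
        return "turn_right"   # more room on the right
    return "stop"             # blocked equally on both sides

print(steer(2.0, 2.0))  # forward
print(steer(0.2, 1.0))  # turn_right
```

Adding a new capability, say, slowing down near people, would mean writing a new branch, then testing it for safety and performance before it ships in a software update.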
Robotics has seen remarkable market growth in terms of sales and number of companies, particularly in the past five years. However, since robots can only perform a given set of tasks, they are intended to serve as tools that help us do our jobs better. Customers sometimes expect a revolutionary robot that can tackle every task, but that’s simply not the case. Equipped with sensor kits and navigation software, and connected through the cloud, robots powered by BrainOS, Brain Corp’s indoor self-driving technology, can do an impressive amount of autonomous maneuvering. They can move items from one space to another in a warehouse environment or efficiently clean grocery store floors, but they cannot act as a personal assistant.
In short, robots can automate certain tasks and make certain processes more efficient, yet at the end of the day, they are machines with hardware that wears down eventually, parts that need to be replaced, and software that requires updates — just like smartphones.
All in all, artificial intelligence and robotics give us the tools and systems to make our lives easier and help businesses do more. These technologies also create new jobs for those who program, deploy and maintain robots at companies around the world. We are seeing a new industry emerge, enabling the “Internet of Things” to become a reality through cloud-connected devices.
Just as we saw an evolution in technology during the Industrial Revolution, we are taking another step in that direction. Although there is angst and concern about what’s to come, we can also embrace the possibility of what it means for tomorrow.