Deep learning, big data, the cloud, and the rest of the developing AI world are becoming familiar, visualized by marketers who understand we need images and ideas that fit our worldview. We already love our cars and our dogs, so falling in love with an AI personal assistant that actually talks, and listens, to us seems like an easy jump. They pay attention to us, care for us, are interested and responsive, and leave us in charge. They don’t judge us or pressure us, but actually seem to like us. In short, robots and humans are moving into close proximity.
The heart of AI and robotic development, real and fictional, has always been how the brain works. How do machines learn from experience and continue to grow and change through interactions with the world? The ability to take the reins and keep developing after being programmed is what separates artificial intelligence from a merely very big computer. And AIs are not just getting smarter; they are getting more responsive, more sensitive, more altruistic. Simply put, they are learning to make connections based on experience, and they are using their neural networks to identify patterns.
Big data and deep learning are the terms this market segment uses to describe how it analyzes data. Huge piles of data, some related and some not, are uploaded into these neural networks, and the big machines go to work making connections and finding patterns. What they do is similar to the way human brains process information and draw conclusions, just on a much bigger scale. The quality of big data analysis results suggests that, just like human brains, the AIs are using experience and knowledge to grow.
People are worried that we are all going to fall in love with robots. But that is too simplistic. What is really worrying people is that we are all going to fall in love with robots rather than other human beings.
Toggles and joysticks, inputs and outputs, control buttons: the interface between human intelligence (HI) and machines should not be about technology. The current state of interaction between humans and machines, soon to be humans and AIs, is mediated by an interface, and there is a power dynamic in play, embodied in the very phrase “control button.” HI/AI interactions demand more creative thinking.
We don’t want to be separated by a user interface. The HI will bring intuition, creativity, and empathy; the AI will bring big data collection, analytics, and pattern recognition. Each could enhance the other; the sum of our parts might be greater than we can imagine.
We need to develop new human-machine interfaces that allow a joint “consensual hallucination” (William Gibson’s phrase). The interface needs to give both the HI and the AI unfettered access to the other’s thoughts and feelings, and both access to the control button. Call it a virtual joystick, with two minds on the controls. Then give it any problem to solve: after a few hours of big data analytics combined with human empathy and creativity, the outcome could be superior.
Human-in-the-loop learning allows machines to ask questions when they are not sure about an answer. The answer given by a human is then integrated into the system to make the machine smarter. At the center of this technology is the idea of building systems not just from data but from human opinions about that data. Today it is very difficult to get a computer to an accuracy level of 99%, but it is relatively easy to make it 80% accurate. By letting humans handle the problematic 20%, computers can tackle most real-world applications.
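The loop above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual system: the toy classifier, the 0.80 threshold, and the `ask_human()` helper are all assumptions made for the example.

```python
# Minimal human-in-the-loop sketch: the machine answers on its own when
# confident, asks a human when unsure, and keeps the human's answer so the
# system can be made smarter. All names and numbers here are illustrative.

CONFIDENCE_THRESHOLD = 0.80   # machine handles the "easy 80%" on its own
labeled_data = []             # human answers, saved for later retraining

def model_predict(x):
    """Toy classifier: labels x and reports how confident it is.
    Confidence is just the distance of x from the 0.5 decision boundary."""
    label = "positive" if x > 0.5 else "negative"
    confidence = min(1.0, abs(x - 0.5) * 2)
    return label, confidence

def ask_human(x):
    """Stand-in for a person answering the machine's question."""
    return "positive" if x > 0.5 else "negative"

def classify(x):
    label, confidence = model_predict(x)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                       # confident: machine decides alone
    human_label = ask_human(x)             # unsure: escalate to a human
    labeled_data.append((x, human_label))  # integrate the answer for retraining
    return human_label
```

Here `classify(0.98)` is answered by the machine alone, while `classify(0.55)` sits too close to the decision boundary and is escalated; the human's answer lands in `labeled_data` for the next training round.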
Self-driving is one industry that uses human drivers for feedback on the road. Tesla offers an automated driving mode that steers the car itself; however, it insists that a human keep hold of the steering wheel. When the car’s vision system senses an irregularity on the road, it hands control back to the driver. The car can drive itself in most situations, but it still relies on a human to find its way.
Facebook uses human-in-the-loop learning to improve its photo recognition algorithm. The model takes a first pass at the image and labels it, assigning a confidence score to that label. If the confidence score does not pass a certain threshold, the computer asks the uploader for input, and that information makes the algorithm smarter.

Today ATMs use visual algorithms to read the information on a check. Whenever the language or handwriting is unclear, the ATM asks the customer to key in the amount, and it flags the check so that a human operator can look at it.

Machine learning (ML) is also useful in detecting forest fires in photographs. It can sift through numerous pictures and present only those that are likely forest fires, saving people time; once a human evaluates a picture, the computer learns from the judgment.

Humans are especially useful in the ML process when there is little data. A new restaurant that wants to present its business in a certain light on social media will need a manager’s help to classify posts about food quality, service quality, wait times, ambience, and so on. In time the machine will learn, take over this task, and grow more accurate.
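The ATM flow above is the same threshold idea applied to routing: read the check automatically when confidence is high, otherwise ask the customer and flag the check for an operator. The sketch below is a hedged illustration only; the fake OCR function, the 0.90 threshold, and the helper names are assumptions, not a real ATM's API.

```python
# Confidence-threshold routing, as in the ATM check-reading example.
# All functions here are illustrative stand-ins.

flagged_for_review = []   # checks queued for a human operator to inspect

def ocr_read_amount(check):
    """Stand-in OCR: a real ATM would run a vision model over the check image."""
    if check["legible"]:
        return check["amount"], 0.97   # clear handwriting: high confidence
    return None, 0.30                  # unclear handwriting: low confidence

def prompt_customer(check):
    """Stand-in for the ATM asking the customer to key in the amount."""
    return check["amount"]

def process_check(check, threshold=0.90):
    amount, confidence = ocr_read_amount(check)
    if confidence >= threshold:
        return amount                    # machine reads it unaided
    keyed = prompt_customer(check)       # unclear: ask the human at the machine
    flagged_for_review.append(check)     # and flag it for an operator's review
    return keyed
```

A legible check is processed fully automatically; an illegible one is keyed in by the customer and lands in `flagged_for_review` for a human to double-check.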
Today people are using ML to filter large numbers of resumes, identify safety concerns in customer support tickets, and classify social media posts relating to their products. In the future, human-in-the-loop technology will help areas like audio processing, image processing, text processing, and IoT signal processing. In the real world, this can take the form of traffic cameras that detect HOV lane violations, messaging apps that transcribe voice to text, and fitness apps that estimate your calorie count from pictures of the food you eat.
Robots and Healthcare
RoboCoach, introduced in Singapore, helps seniors complete their daily exercises and other tasks. The robot can “conduct” training sessions in which patients follow the moves it makes, and it can slow the pace of a class if participants fall behind. It also responds to voice commands.
It’s being introduced to five homes for the elderly across Singapore. With a screen for a face and metal arms, it uses motion detection to help the elderly complete their exercise sessions.
Another amazing robot is Robear, a cute-looking robot with a bear-like face. Developed by the RIKEN-SRK Collaboration Center for Human-Interactive Robot Research and Sumitomo Riko Company, it helps lift patients from their beds and place them in their wheelchairs.
Let’s not leave out Care-O-bot 3. It can recognize faces and even interpret expressions. It can play music and other entertainment for patients and even take their blood pressure. It can also carry out tasks for patients and call for help if an elderly patient suffers a fall.
Of course, these robots are not intended as a replacement for human care. Their purpose is to take care of the smaller and more manual tasks so that the nurses can give more personalized one-on-one care to the patients. The robots merely reduce the pressure that nurses previously had to deal with, opening the way for more individualized attention.
Robotics is a playground for several scientific areas: ML, AI, deep learning, interfaces, and sensors. The developers’ main goal seems to be building a good assistant for humans and bringing robots and people into closer interaction. However, as these IO-driven personalities (robots) are new to the world and new to the human race, they can provoke irritation and false reactions. Learning to address expectations and talking to stakeholders about future implications is essential for acceptance. That’s where we can help generate acceptance: contact us.