A research team from Stanford University’s ILIAD Labs has succeeded in developing a robot that can feed people with spinal cord injuries or other motor disabilities. The robot is able to pick up almost any food and put it into the mouth of the person in need of care.
Conventional feeding robots for people in need of care first have to be specially adjusted to each food so that it can be picked up with a fork or a spoon. These robots are also limited when it comes to bringing food all the way into the mouth: the movements are often simply pre-programmed. In most cases the fork stops in front of the mouth and the person has to take the food themselves, which some care recipients cannot do.
This is where the robot from Stanford University comes in. It is intended to make eating more pleasant while also relieving nursing staff and reducing their risk of burnout. To this end, the research team has developed several robotic algorithms that allow the feeding process to be carried out autonomously and comfortably for many foods.
As described in the study “Learning Visuo-Haptic Skewering Strategies for Robot-Assisted Feeding”, which is still under review, the researchers combined computer vision with force sensing so that the robot can handle almost any food. The system uses three robotic arms: the first picks up the food with a fork, the second pushes it onto a spoon held by the third, and the third arm then guides the spoon to the patient’s mouth, inserting the food at an optimized angle and at a speed perceived as comfortable.
To handle as many foods as possible that look alike but differ in consistency, the scientists combine a camera with a force sensor for haptic feedback. They trained the robot on a variety of foods that behave differently when skewered. The visual system first locates the food, and the fork is brought into contact at an optimal angle. The fork then probes the food to determine how soft or firm it is. Depending on that firmness, the robot chooses one of two skewering techniques: a quick, vertical motion for firm foods, or a gentle, slanted motion for more fragile ones. In essence, the robot behaves like a human who looks at the food and pokes it before spearing it with a fork.
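The probe-then-choose step described above can be sketched as a simple decision rule. The function name, the force threshold, and the motion parameters below are illustrative assumptions for this article, not the study's actual code:

```python
# Hypothetical sketch of the visuo-haptic skewering decision described above.
# The threshold of 1.5 N and the parameter values are assumed for illustration.

def choose_skewer_strategy(probe_force_n: float, firm_threshold_n: float = 1.5) -> dict:
    """Pick a skewering motion based on resistance felt during a gentle probe.

    probe_force_n: peak force (newtons) measured while the fork presses the food.
    Returns the motion parameters for the chosen technique.
    """
    if probe_force_n >= firm_threshold_n:
        # Firm food (e.g. a raw carrot): fast vertical skewer so the tines penetrate.
        return {"technique": "vertical", "speed": "fast", "angle_deg": 90}
    # Fragile food (e.g. a banana slice): slow, slanted skewer to avoid crushing it.
    return {"technique": "angled", "speed": "slow", "angle_deg": 45}
```

A firm reading thus triggers the fast vertical stab, while a soft reading falls through to the gentle angled motion.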
In this way, the robot was able to reliably pick up a large number of foods. However, thin foods such as snow peas or lettuce leaves are a challenge, says Priya Sundaresan, one of the study’s co-authors.
To pick up such foods more reliably, the researchers use a second robotic arm holding a curved pestle that pushes the food onto a spoon. A computer vision system detects when a piece of food is about to break; the cutlery then stops moving together and instead performs a shovel-like scooping motion.
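The push-or-scoop fallback can be illustrated as a small control rule. The function name, the deformation estimate, and the threshold below are assumptions made for this sketch, not the published system:

```python
# Illustrative sketch of the fallback described above: while the pestle pushes
# food toward the spoon, a vision-based estimate of how much the piece has
# deformed (0.0 = intact, 1.0 = fully broken) decides the next action.
# The 0.3 threshold is an assumed value for illustration.

def transfer_action(deformation: float, break_threshold: float = 0.3) -> str:
    """Return the next cutlery action given the vision-estimated deformation."""
    if deformation >= break_threshold:
        return "scoop"   # piece is about to break: stop pushing, shovel it instead
    return "push"        # safe to keep pushing the piece onto the spoon

# A fragile piece deforming more with each push eventually triggers the scoop.
actions = [transfer_action(d) for d in (0.05, 0.15, 0.40)]
```

In this toy run, the first two readings keep the pushing motion going and the third switches to scooping.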
With the right force and motion
Once the food is on the spoon, it is brought to the mouth. To do this, the Stanford scientists gave the robot arm a kind of mechanical wrist so that the spoon can be inserted into the open mouth at the optimum angle. A computer vision system determines when the mouth is open, while a force sensor detects when the spoon touches the lips or tongue and must stop. It also registers when the patient has taken a bite and then retracts the spoon with the appropriate force.
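The bite-transfer behavior described above amounts to a small state decision at each control step. The function and thresholds below are a hedged sketch with assumed names and values, not the published controller:

```python
# Illustrative per-step decision for moving the spoon into the mouth, based on
# the three signals mentioned above. The 0.5 N contact threshold is assumed.

def transfer_step(mouth_open: bool, contact_force_n: float, bite_detected: bool) -> str:
    """Return the spoon's next action for one control step."""
    if bite_detected:
        return "retract"           # the patient has taken a bite: pull the spoon back
    if not mouth_open:
        return "wait"              # vision says the mouth is closed: hold position
    if contact_force_n > 0.5:
        return "hold"              # spoon touches lips or tongue: stop advancing
    return "advance"               # approach at the comfortable speed and angle
```

Run once per control cycle, this yields waiting at a closed mouth, advancing into an open one, stopping on contact, and retracting after a bite.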
Although the feeding robot is already well advanced, it does not yet work perfectly: it cannot handle every food, and cutting large items into bite-sized pieces is not yet supported. Another line of work is letting people communicate with the robot, so that, for example, the person in need of care can tell it what they want to eat next. The researchers are now working on this.