A robot chef has mastered the art of recreating recipes just by watching food videos.
The robochef was programmed with a cookbook of eight simple salad recipes.
After watching a video of a human demonstrating each recipe, the robot was able to identify which recipe was being prepared and make it.
The videos also helped the robot add to its cookbook, with the robot coming up with a ninth recipe on its own at the end of the experiment.
The experiment shows how video content can be a valuable and rich source of data for automated food production and could make it easier and cheaper to deploy robot chefs.
Robotic chefs have been featured in science fiction for decades, but in reality, cooking is a challenging problem for a robot.
Several commercial companies have built prototype robot chefs, but none of these are currently commercially available, and they lag well behind their human counterparts in terms of skill.
Human cooks can learn new recipes through observation, but programming a robot to make a range of dishes is costly and time-consuming.
Study author Grzegorz Sochacki, a PhD candidate from the University of Cambridge’s Department of Engineering, said: “We wanted to see whether we could train a robot chef to learn in the same incremental way that humans can – by identifying the ingredients and how they go together in the dish.”
The team used a publicly available neural network to train their robot chef.
The neural network had already been programmed to identify a range of different objects, including the fruits and vegetables used in the eight salad recipes.
These were broccoli, carrot, apple, banana and orange.
Using computer vision techniques, the robot analyzed each frame of video.
It was able to identify the different objects and features, such as a knife and the ingredients, as well as the human demonstrator’s arms, hands and face.
Both the recipes and the videos were converted to vectors, and the robot performed mathematical operations on these vectors to measure the similarity between a demonstration and each recipe.
By correctly identifying the ingredients and the actions of the human chef, the robot could work out which of the recipes was being prepared.
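The paper does not spell out the exact encoding, but a minimal sketch of this kind of matching, assuming hypothetical ingredient-count vectors and cosine similarity (an assumption, not the authors' stated method), might look like the following. Cosine similarity ignores scale, which would explain why a double portion still matches the same recipe:

```python
import math

# Hypothetical fixed ingredient order; the five ingredients come from the article.
INGREDIENTS = ["broccoli", "carrot", "apple", "banana", "orange"]

def to_vector(counts):
    """Map an {ingredient: count} dict to a fixed-order count vector."""
    return [counts.get(name, 0) for name in INGREDIENTS]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means same proportions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def closest_recipe(demo, cookbook):
    """Return the name of the cookbook recipe most similar to the demonstration."""
    return max(
        cookbook,
        key=lambda name: cosine_similarity(to_vector(cookbook[name]), to_vector(demo)),
    )

# Illustrative cookbook entries (invented for this sketch).
cookbook = {
    "apple-carrot": {"apple": 2, "carrot": 2},
    "banana-orange": {"banana": 1, "orange": 1},
}

# A double portion (3 apples, 3 carrots) still matches the 2-and-2 recipe,
# because cosine similarity compares proportions, not absolute amounts.
demo = {"apple": 3, "carrot": 3}
print(closest_recipe(demo, cookbook))  # apple-carrot
```

Under this scale-invariant measure, "three chopped apples and three chopped carrots" maps to the same recipe as "two chopped apples and two chopped carrots", matching the behavior the researchers describe.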
Of the 16 videos it watched, the robot recognized the correct recipe 93 percent of the time, even though it only detected 83 percent of the human chef’s actions.
The robot was also able to handle slight variations in a recipe, such as a double portion or ordinary human error.
The robot also correctly recognized the demonstration of a new, ninth salad, added it to its cookbook and made it.
Sochacki said: “It’s amazing how much nuance the robot was able to detect.
“These recipes aren’t complex – they’re essentially chopped fruits and vegetables, but it was really effective at recognizing, for example, that two chopped apples and two chopped carrots is the same recipe as three chopped apples and three chopped carrots.”
The videos were very clear, with the human demonstrator holding up each ingredient so the robot could get a good look at it.
Sochacki added: “Our robot isn’t interested in the sorts of food videos that go viral on social media – they’re simply too hard to follow.
“But as these robot chefs get better and faster at identifying ingredients in food videos, they might be able to use sites like YouTube to learn a whole range of recipes.”
The study was published in the journal IEEE Access.
Produced in association with SWNS Talker