Redpepper, a creative agency, has built an AI-based robot that can find Waldo in as little as 4.45 seconds. The robot is equipped with a rubber hand that it uses to point at Waldo.
The company trained the AI on photos of Waldo using Google's AutoML Vision service. The service's drag-and-drop interface lets users train a model without any coding knowledge, and the same approach can be used to recognize almost anything.
Matt Reed, Creative Technologist at Redpepper, led the project. He gathered 62 images of Waldo's head and 45 images of Waldo's full body from Google Images, then used that data to train the model with Google's AutoML Vision.
He told The Verge, "I presumed the data would not be sufficient to develop a strong working model, but the robot is performing much better than expected."
The AI works in combination with a metal robotic arm, a uArm Swift Pro controlled by a Raspberry Pi, along with a camera for facial recognition. The camera photographs the page of the Where's Waldo book, and OpenCV is used to extract possible Waldo faces from the image. The candidate faces are then sent to Google's AutoML service. When the AI confirms a match with 95% confidence or higher, the rubber hand points at Waldo.
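The decision step of that pipeline can be sketched in a few lines. This is a minimal illustration, not Redpepper's actual code: the OpenCV face detection and the AutoML Vision request are replaced here by a hypothetical stand-in function so the threshold logic runs anywhere, and the crop format and names are assumptions.

```python
# Minimal sketch of the robot's decision loop, under stated assumptions.
# In the real system, OpenCV proposes face crops from a photo of the page
# and Google AutoML Vision scores each crop; both are mocked here.

CONFIDENCE_THRESHOLD = 0.95  # the arm points only at matches of 95% or higher


def classify_crop(crop):
    """Stand-in for the AutoML Vision call.

    A real implementation would send the cropped image to the trained
    model and receive a label plus a confidence score; here we just read
    pre-filled fields from a dict.
    """
    return crop.get("label", "not_waldo"), crop.get("score", 0.0)


def find_waldo(candidate_crops):
    """Return the crops the arm should point at: those labeled 'waldo'
    with confidence at or above the threshold."""
    hits = []
    for crop in candidate_crops:
        label, confidence = classify_crop(crop)
        if label == "waldo" and confidence >= CONFIDENCE_THRESHOLD:
            hits.append(crop)
    return hits


# Example: three candidate faces found on one page; only one clears the bar.
page_faces = [
    {"x": 120, "y": 40, "label": "waldo", "score": 0.97},
    {"x": 300, "y": 90, "label": "waldo", "score": 0.80},
    {"x": 510, "y": 22, "label": "not_waldo", "score": 0.99},
]
matches = find_waldo(page_faces)
print(matches)  # only the crop with score 0.97 survives the filter
```

The 95% cutoff is a simple precision/recall trade-off: a lower threshold would make the arm point more often but risk tapping lookalikes in the crowd.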
Admittedly, the robot rather ruins the fun of Where's Waldo. But Reed has bigger ambitions for the system: a pipeline that can solve a visual puzzle from data and AI could, with more work and a little maturing of the algorithm, be put to far more useful tasks.