In this article, we will focus on using SLAM SDK and Navigation SDK for cleaning tasks performed by robotic vacuum cleaners.
With computer vision, familiar cleaning robots can become more intelligent and more convenient in daily use. Namely, they can:
1. Clean more efficiently thanks to more precise in-room navigation;
2. Clean up particular rooms at a given time (instead of vacuuming all premises);
3. Specify exceptions for cleaning (e.g. if the baby is sleeping in the room).
These and other functions are implemented by building a 3D floor plan, which lets you program the robot's behavior with an accuracy of 1-2 centimeters and collect detailed statistics on cleaning results.
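To make the floor-plan idea concrete, here is a minimal sketch of how a 2D plan could back room-level scheduling and exclusion zones (e.g. skipping the room where the baby is sleeping). The `FloorPlan` class, room names, and coordinates are all hypothetical illustrations, not part of any real SDK; an actual SLAM/Navigation SDK would expose its own map types.

```python
# Hypothetical sketch: rooms as 2D polygons with exclusion zones.
# All names and coordinates are illustrative assumptions.
from typing import Dict, List, Tuple

Point = Tuple[float, float]


def point_in_polygon(p: Point, poly: List[Point]) -> bool:
    """Ray-casting test: is point p inside the polygon?"""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


class FloorPlan:
    """Maps room names to floor polygons (coordinates in meters)."""

    def __init__(self, rooms: Dict[str, List[Point]]):
        self.rooms = rooms
        self.excluded: set = set()

    def exclude(self, room: str) -> None:
        """Mark a room as off-limits for the current cleaning run."""
        self.excluded.add(room)

    def may_enter(self, position: Point) -> bool:
        """Should the robot be allowed at this position?"""
        return not any(
            point_in_polygon(position, self.rooms[r]) for r in self.excluded
        )


plan = FloorPlan({
    "living_room": [(0, 0), (5, 0), (5, 4), (0, 4)],
    "nursery": [(5, 0), (8, 0), (8, 4), (5, 4)],
})
plan.exclude("nursery")
print(plan.may_enter((2.0, 2.0)))  # living room -> True
print(plan.may_enter((6.0, 2.0)))  # nursery -> False
```

A production system would of course work on the SDK's own map representation and in three dimensions, but the principle is the same: the plan turns a raw position estimate into a room-level decision.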
We decided to visualize the most obvious scenario (item 2 above):
1. Connecting the robot to the control application on an iPad;
2. Programming a cleaning run for a single room, using the floor plan the robot created during an initial cleaning;
3. The robot vacuums the room according to the predetermined program.
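The three steps above can be sketched as they might look from the control application's side. The `VacuumRobot` class and its methods are hypothetical stand-ins, not a real SDK interface; the point is only to show how connect, program, and run separate cleanly.

```python
# Hypothetical sketch of the connect / program / run scenario.
# The VacuumRobot API below is an illustrative assumption.
import datetime


class VacuumRobot:
    """Toy stand-in for a robot reachable from a tablet app."""

    def __init__(self, address: str):
        self.address = address
        self.schedule = []   # (room, time) pairs
        self.cleaned = []    # rooms cleaned so far

    def connect(self) -> bool:
        # Step 1: pair with the control application (always succeeds here).
        return True

    def schedule_cleaning(self, room: str, at: datetime.time) -> None:
        # Step 2: program a cleaning run for one room from the floor plan.
        self.schedule.append((room, at))

    def run_pending(self, now: datetime.time) -> list:
        # Step 3: execute every scheduled run whose time has arrived.
        due = [room for room, at in self.schedule if at <= now]
        self.schedule = [(r, t) for r, t in self.schedule if t > now]
        self.cleaned.extend(due)
        return due


robot = VacuumRobot("192.168.1.42")
assert robot.connect()
robot.schedule_cleaning("kitchen", datetime.time(9, 0))
robot.schedule_cleaning("bedroom", datetime.time(21, 0))
print(robot.run_pending(datetime.time(12, 0)))  # -> ['kitchen']
```

Separating the schedule from execution is what allows the "clean a given room at a given time" behavior described earlier, while the floor plan supplies the room boundaries.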
My forecast is that over the next 5-10 years, the majority of domestic appliances will be robotised, and computer vision systems such as SLAM SDK and Navigation SDK will become standard features.
The combination of computer vision techniques with AI will eventually let users stop interacting with the robot at the physical level (pressing buttons, etc.) and shift exclusively to programming interfaces (the software layer).
We hope this article was useful to you.
In future articles, we will describe other use cases of SLAM SDK for drones and robots.