We taught robots to navigate tolerably in space, trusted machines to plot their own routes, and made the autopilot part of everyday life. Now it is time to apply these technologies to people who need them today: those who have lost their sight or have difficulty orienting themselves outdoors. American engineers have built a prototype AI guide around the OpenCV AI Kit with Depth (OAK-D) camera; the system fits in a backpack, recognizes surrounding objects, and advises the user on how to interact with them.
The main design goals were miniaturization and simplicity: the developers did not want their users to look like cyborgs. The system needs only the processing power of an ordinary laptop or a mini-PC such as a Raspberry Pi. It fits into a backpack along with an eight-hour battery and a GPS receiver. Worn on a belt are a 4K camera, which analyzes the shape and color of surrounding objects, and a pair of stereoscopic cameras that measure distance. The user communicates with the system through an ordinary Bluetooth headset.
The AI guide recognizes many types of common street objects, determines their position relative to the user, and warns of those that may become obstacles, for example, “Branch, left, top” or “Curb, low, bottom, center.” On the command “Look around”, the neural network describes the objects nearby: “Traffic light at 10 o’clock”, “Bush at 6 o’clock”, “Pillar at 9 o’clock”. The AI is also trained to recognize road signs, signposts, and information boards.
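The article does not publish the announcement logic, but the cues it quotes can be sketched in a few lines. The snippet below is a hypothetical illustration (the function names, the 0.4/0.6 thresholds, and the camera field of view are assumptions, not part of the project): it turns the normalized center of a detection’s bounding box into a spoken label like “Branch, left, top” and a clock-face direction.

```python
import math

def clock_position(azimuth_deg: float) -> int:
    """Map a horizontal angle (degrees; 0 = straight ahead,
    positive = to the right) to the nearest clock hour."""
    hour = round(azimuth_deg / 30.0) % 12   # 30 degrees per hour mark
    return 12 if hour == 0 else hour

def describe(label: str, x_norm: float, y_norm: float, fov_h: float = 69.0) -> str:
    """Build a spoken cue from a detection's normalized bounding-box
    center (0..1 in both axes); fov_h is an assumed horizontal FOV."""
    azimuth = (x_norm - 0.5) * fov_h        # degrees off the camera axis
    horiz = "left" if x_norm < 0.4 else "right" if x_norm > 0.6 else "center"
    vert = "top" if y_norm < 0.4 else "bottom" if y_norm > 0.6 else "middle"
    return f"{label}, {horiz}, {vert} ({clock_position(azimuth)} o'clock)"

print(describe("Branch", 0.15, 0.2))   # detection in the upper-left of the frame
```

In the real system the azimuth could come from the stereo depth map rather than the bounding box alone, which would keep the clock direction stable as the user turns.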
Another interesting feature is place memorization: the system creates marks on the map so that the AI can later plan and describe an optimal route to a saved location for a visually impaired user. The user can also ask where they are now and what is nearby in order to decide where to go. The OAK-D project is non-commercial and open source.
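A minimal sketch of such a place memory, under assumed names (the `PlaceMemory` class and its methods are illustrative, not the project’s API): the user saves named GPS marks, and the system answers “where am I?” by finding the closest remembered place via great-circle distance.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    R = 6371000.0                            # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

class PlaceMemory:
    """Hypothetical landmark store: saves named GPS marks and
    reports which remembered place is closest to the current fix."""
    def __init__(self):
        self.places = {}                     # name -> (lat, lon)

    def remember(self, name, lat, lon):
        self.places[name] = (lat, lon)

    def nearest(self, lat, lon):
        name, coords = min(self.places.items(),
                           key=lambda kv: haversine_m(lat, lon, *kv[1]))
        return name, haversine_m(lat, lon, *coords)

mem = PlaceMemory()
mem.remember("bus stop", 33.9519, -83.3576)  # example coordinates
mem.remember("pharmacy", 33.9550, -83.3600)
name, dist = mem.nearest(33.9521, -83.3578)
print(name, round(dist))                     # the bus stop is the closer mark
```

Route planning on top of such marks would then be ordinary waypoint navigation: the GPS receiver supplies the current fix, and the headset reads out the bearing and distance to the next saved mark.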