The visual assistant system helps people with visual impairments navigate both indoors and outdoors. It is designed to be accessible to a broad range of users, offers continuous voice control and an AI image-recognition model, provides tools for outdoor navigation and for spatial orientation inside buildings, operates in real time, and avoids overloading the user with information.
The system comprises a continuous voice controller, a navigation module, an image processing module, an analytics module, and a haptic (touch) module. It analyzes accelerometer and gyroscope data to extract information about the movement and orientation of the device.
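To illustrate how accelerometer and gyroscope readings can be combined into an orientation estimate, the sketch below uses a simple complementary filter. The sensor-sample structure, sampling interval, and filter coefficient are assumptions introduced for illustration and do not describe the system's actual implementation.

```python
import math
from dataclasses import dataclass


@dataclass
class ImuSample:
    ax: float  # accelerometer, m/s^2
    ay: float
    az: float
    gx: float  # gyroscope rate around x, rad/s
    gy: float  # gyroscope rate around y, rad/s


class OrientationEstimator:
    """Complementary filter: gyro integration corrected by accelerometer tilt."""

    def __init__(self, alpha: float = 0.98):
        self.alpha = alpha  # weight given to the gyroscope path (assumed value)
        self.pitch = 0.0    # radians
        self.roll = 0.0

    def update(self, s: ImuSample, dt: float) -> tuple[float, float]:
        # Tilt angles implied by the gravity direction in the accelerometer frame.
        accel_pitch = math.atan2(-s.ax, math.hypot(s.ay, s.az))
        accel_roll = math.atan2(s.ay, s.az)
        # Blend the integrated gyro rates (smooth, but drifting) with the
        # accelerometer estimate (noisy, but drift-free).
        self.pitch = self.alpha * (self.pitch + s.gy * dt) + (1 - self.alpha) * accel_pitch
        self.roll = self.alpha * (self.roll + s.gx * dt) + (1 - self.alpha) * accel_roll
        return self.pitch, self.roll


if __name__ == "__main__":
    est = OrientationEstimator()
    # Device lying flat and motionless: gravity along +z, no rotation.
    flat = ImuSample(ax=0.0, ay=0.0, az=9.81, gx=0.0, gy=0.0)
    for _ in range(100):
        pitch, roll = est.update(flat, dt=0.01)
    print(f"pitch={math.degrees(pitch):.1f} deg, roll={math.degrees(roll):.1f} deg")
```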
This combined approach underpins several functions, including obstacle avoidance, navigation assistance, and real-time spatial orientation for people with visual impairments. A dedicated neural network model was developed to detect when a visually impaired user falls; the system is trained to identify such events accurately and to automatically request rapid medical assistance.
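The fall detector described above is a trained neural network; the sketch below deliberately substitutes a much simpler threshold heuristic on acceleration magnitude (a free-fall dip followed by an impact spike) purely to illustrate the kind of signal pattern such a model learns. The thresholds, window length, and alert hook are all assumptions for illustration, not parameters of the actual system.

```python
import math
from collections import deque

G = 9.81                        # gravity, m/s^2
FREE_FALL_THRESHOLD = 0.4 * G   # magnitude well below 1 g suggests free fall (assumed)
IMPACT_THRESHOLD = 2.5 * G      # sharp spike afterwards suggests an impact (assumed)
WINDOW_SAMPLES = 50             # about 1 s of history at an assumed 50 Hz rate


def detect_fall(window: deque) -> bool:
    """Return True if the window shows a free-fall dip followed by an impact."""
    free_fall_at = None
    for i, magnitude in enumerate(window):
        if magnitude < FREE_FALL_THRESHOLD:
            free_fall_at = i
        elif free_fall_at is not None and magnitude > IMPACT_THRESHOLD:
            return True  # impact occurred after the free-fall phase
    return False


def send_alert() -> None:
    # Placeholder: a real system would notify caregivers or emergency services.
    print("Fall detected - requesting assistance")


def process_sample(window: deque, ax: float, ay: float, az: float) -> None:
    window.append(math.sqrt(ax * ax + ay * ay + az * az))
    if len(window) == window.maxlen and detect_fall(window):
        send_alert()
        window.clear()  # avoid repeated alerts for the same event


if __name__ == "__main__":
    window = deque(maxlen=WINDOW_SAMPLES)
    # Synthetic trace: normal standing, a brief free-fall phase, then a hard impact.
    trace = [(0.0, 0.0, G)] * 30 + [(0.0, 0.0, 0.5)] * 10 + [(0.0, 0.0, 30.0)] * 10
    for ax, ay, az in trace:
        process_sample(window, ax, ay, az)
```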