Development of Vision and Voice Assistive Technology for Visually Impaired Using Convolutional Neural Network (CNN)
DOI:
https://doi.org/10.65138/ijprse.2026.v7i04.1266

Keywords:
Assistive Technology, Convolutional Neural Network (CNN), LiDAR Sensor, Object Detection, Voice Feedback, Wearable Device, Real-Time Processing

Abstract
Vision impairment affects approximately 2.2 billion people globally, presenting significant challenges to autonomous movement and spatial awareness. This study developed a wearable vision and voice assistive device designed to improve the mobility, safety, and independence of visually impaired individuals. The system integrates a Raspberry Pi 5 central processing unit with a Pi Camera Module 3 for real-time visual capture and a TF-Luna LiDAR sensor for precise distance measurement and obstacle avoidance. The research followed an Evolutionary Prototyping Model, incorporating descriptive investigation and laboratory testing. The software architecture utilizes a YOLOv11 Convolutional Neural Network (CNN) model capable of detecting over 80 object classes, the Vosk speech engine for offline voice commands, and Pyttsx3 for real-time text-to-speech feedback.
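The obstacle-avoidance path described above (TF-Luna distance readings translated into spoken alerts) can be sketched in Python. The frame layout follows the TF-Luna's standard 9-byte UART output (0x59 0x59 header, little-endian distance in centimeters, 8-bit checksum); the alert thresholds (150 cm warning, 60 cm danger) and message wording are illustrative assumptions, not values taken from the paper.

```python
def parse_tfluna_frame(frame: bytes):
    """Parse one 9-byte TF-Luna UART frame; return distance in cm, or None if invalid."""
    if len(frame) != 9 or frame[0] != 0x59 or frame[1] != 0x59:
        return None                      # bad length or missing 0x59 0x59 header
    if (sum(frame[:8]) & 0xFF) != frame[8]:
        return None                      # checksum (low byte of sum) mismatch
    return frame[2] | (frame[3] << 8)    # distance, little-endian, in centimeters

def obstacle_alert(distance_cm, warn_cm=150, danger_cm=60):
    """Map a distance reading to a spoken-alert string (thresholds are assumed)."""
    if distance_cm is None:
        return None
    if distance_cm <= danger_cm:
        return f"Stop, obstacle {distance_cm} centimeters ahead"
    if distance_cm <= warn_cm:
        return f"Caution, obstacle {distance_cm} centimeters ahead"
    return None                          # path clear, stay silent
```

In the wearable device, the returned string would be handed to the Pyttsx3 text-to-speech engine; separating frame parsing from alert logic keeps the distance thresholds easy to tune during laboratory testing.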
License
Copyright (c) 2026 Ralph Aaron M. Panahon, Jayem M. Navarro, Frenly S. Castro, Bryan Joseph C. Feliciano, Ronel Q. David

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.