Development of Vision and Voice Assistive Technology for Visually Impaired Using Convolutional Neural Network (CNN)

Authors

  • Ralph Aaron M. Panahon
  • Jayem M. Navarro
  • Frenly S. Castro
  • Bryan Joseph C. Feliciano
  • Ronel Q. David

DOI:

https://doi.org/10.65138/ijprse.2026.v7i04.1266

Keywords:

Assistive Technology, Convolutional Neural Network (CNN), LiDAR Sensor, Object Detection, Voice Feedback, Wearable Device, Real-Time Processing.

Abstract

Vision impairment affects approximately 2.2 billion people globally, presenting significant challenges to autonomous movement and spatial awareness. This study developed a wearable vision and voice assistive device designed to improve the mobility, safety, and independence of visually impaired individuals. The system integrates a Raspberry Pi 5 central processing unit with a Pi Camera Module 3 for real-time visual capture and a TF-Luna LiDAR sensor for precise distance measurement and obstacle avoidance. The research followed an Evolutionary Prototyping Model, incorporating descriptive investigation and laboratory testing. The software architecture utilizes a YOLOv11 Convolutional Neural Network (CNN) model capable of detecting over 80 object classes, the Vosk speech engine for offline voice commands, and Pyttsx3 for real-time text-to-speech feedback.
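
The abstract does not include source code. As a rough illustration of how the named components could fit together, the Python sketch below pairs the Ultralytics YOLO API (YOLOv11 weights) with pyttsx3 for spoken feedback, OpenCV for frame capture, and pyserial for the TF-Luna's UART interface. The serial port path, the weight file name, the read_tf_luna() helper, and the 100 cm warning threshold are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a detect-and-announce loop for the stack described in the
# abstract: YOLOv11 via Ultralytics, pyttsx3 text-to-speech, OpenCV capture,
# and the TF-Luna LiDAR over UART via pyserial. Port path, weight file, and
# the 100 cm threshold are assumptions, not values from the paper.

import cv2
import pyttsx3
import serial
from ultralytics import YOLO

model = YOLO("yolo11n.pt")        # YOLOv11 nano weights (80 COCO classes)
tts = pyttsx3.init()              # offline text-to-speech engine
lidar = serial.Serial("/dev/serial0", 115200, timeout=0.1)  # Pi UART header

def read_tf_luna() -> int | None:
    """Return the distance in cm from one 9-byte TF-Luna frame, or None."""
    if lidar.in_waiting >= 9:
        frame = lidar.read(9)
        if frame[0] == 0x59 and frame[1] == 0x59:  # TF-Luna frame header
            return frame[2] | (frame[3] << 8)      # little-endian distance
    return None

cap = cv2.VideoCapture(0)         # Pi Camera exposed as a V4L2 device
while cap.isOpened():
    ok, img = cap.read()
    if not ok:
        break
    result = model(img, verbose=False)[0]           # run object detection
    labels = {model.names[int(box.cls)] for box in result.boxes}
    dist = read_tf_luna()
    if labels and dist is not None and dist < 100:  # obstacle within 1 m
        tts.say(f"{', '.join(sorted(labels))} ahead, {dist} centimeters")
        tts.runAndWait()                            # blocking speech playback
```

The Vosk voice-command path described in the abstract is omitted here; in the actual device it would run alongside this loop to accept offline spoken commands.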

Published

2026-04-25

How to Cite

Panahon, R. A. M., Navarro, J. M., Castro, F. S., Feliciano, B. J. C., & David, R. Q. (2026). Development of Vision and Voice Assistive Technology for Visually Impaired Using Convolutional Neural Network (CNN). International Journal of Progressive Research in Science and Engineering, 7(04), 83–86. https://doi.org/10.65138/ijprse.2026.v7i04.1266

Issue

Vol. 7 No. 04 (2026)

Section

Articles
