
Shahanur Islam Shagor

System Architect & Developer

Wireless Vision-Aid for the Blind

A low-cost, real-time assistive system designed for visually impaired users that combines YOLOv8n-based object detection, ultra-low latency UDP streaming, and multilingual audio feedback. WVAB delivers directional navigation cues with high accuracy, operating fully offline with hardware costs under $15.

Project Details

Built with practical delivery decisions, product clarity, and room to scale


Visual impairment continues to be a major global challenge, affecting over 285 million people and limiting independent mobility in daily life. While existing assistive technologies offer partial solutions, they often come with high costs, language limitations, and dependency on cloud-based services. In this research, I developed WVAB (Wireless Vision-Aid for the Blind), a cost-effective, real-time navigation system designed to address these limitations through edge computing and intelligent system design.

The system is built around a YOLOv8n-based object detection pipeline, optimized for lightweight performance and real-time inference. Instead of relying on cloud processing, WVAB performs all computations locally, ensuring faster response times and complete offline functionality. A custom UDP-based streaming protocol enables ultra-low latency communication between the camera module and the processing unit, allowing continuous video transmission with minimal delay.
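The custom UDP protocol favors freshness over reliability: a dropped frame is simply skipped rather than retransmitted, which keeps latency low. A minimal Python sketch of that receive path is shown below, assuming each JPEG frame fits in a single datagram (the port number and frame framing here are illustrative, not the project's actual protocol):

```python
import socket

def make_receiver(port: int) -> socket.socket:
    """Bind a connectionless UDP socket for incoming video frames."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    return sock

def recv_frame(sock: socket.socket, max_size: int = 65507) -> bytes:
    """Receive one datagram; lost frames are tolerated rather than retransmitted."""
    data, _addr = sock.recvfrom(max_size)
    return data

if __name__ == "__main__":
    # Loopback demo: a stand-in "camera" sends one fake JPEG frame.
    rx = make_receiver(5005)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    fake_frame = b"\xff\xd8" + b"\x00" * 64 + b"\xff\xd9"  # stand-in JPEG bytes
    tx.sendto(fake_frame, ("127.0.0.1", 5005))
    print(len(recv_frame(rx)))  # 68
```

Because UDP has no handshake or retransmission, the camera can stream continuously and the processing unit always works on the most recent frame.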

To convert visual data into meaningful guidance, the system integrates a C++-based NavigationPlanner engine that analyzes detected objects and assigns risk levels based on proximity and position. This allows the system to generate simple yet effective navigation instructions such as Left, Center, or Right, combined with proximity levels like Immediate, Near, or Far. These instructions are delivered through multilingual audio feedback, making the system accessible to users across different language backgrounds.
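The decision logic can be sketched as follows. This is a Python illustration of the idea (the actual NavigationPlanner is C++, and the band and area thresholds below are assumptions, not the engine's tuned values): the horizontal position of a detection's center picks the direction, and the fraction of the frame the box occupies approximates proximity.

```python
def plan_instruction(box, frame_w, frame_h):
    """Map one detected bounding box to a (direction, proximity) cue.

    box = (x1, y1, x2, y2) in pixels. Thresholds are illustrative only.
    """
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2
    # Direction: split the frame into three vertical bands.
    if cx < frame_w / 3:
        direction = "Left"
    elif cx > 2 * frame_w / 3:
        direction = "Right"
    else:
        direction = "Center"
    # Proximity: a box covering more of the frame is assumed closer.
    area_ratio = ((x2 - x1) * (y2 - y1)) / (frame_w * frame_h)
    if area_ratio > 0.30:
        proximity = "Immediate"
    elif area_ratio > 0.10:
        proximity = "Near"
    else:
        proximity = "Far"
    return direction, proximity

print(plan_instruction((100, 100, 220, 300), 640, 480))  # ('Left', 'Far')
```

The resulting (direction, proximity) pair maps directly onto a short spoken phrase, which keeps the TTS output brief enough to remain useful in real time.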

A key strength of WVAB is its efficiency. Experimental results show end-to-end latency below 50 milliseconds at 15–30 FPS, ensuring real-time responsiveness. The system achieves a mAP@0.5 of 0.472 on the COCO dataset and over 91% accuracy in navigation decision-making within complex indoor environments. Despite these capabilities, the entire hardware setup costs less than 15 USD, making it significantly more affordable than existing commercial solutions.

By combining affordability, real-time performance, and multilingual accessibility, WVAB represents a practical step toward inclusive assistive technology. It demonstrates how edge AI and optimized system design can create impactful solutions without relying on expensive infrastructure.

Tools & Technologies

Computer Vision: YOLOv8n (object detection)

Programming Languages: Python, C++

Streaming Protocol: Custom UDP-based low-latency video streaming

AI Processing: Edge-based real-time inference (offline)

Navigation Engine: C++ Risk-weighted NavigationPlanner

Audio System: Multilingual Text-to-Speech (TTS)

Data Handling: JSON-based label architecture

Dataset: COCO dataset (for model evaluation)

Hardware:

  • ESP32-CAM (video capture)

  • Raspberry Pi (processing unit)

  • Audio output (Bluetooth / wired headset)

Core Features:

  • Real-time object detection

  • Directional navigation (Left / Center / Right)

  • Proximity estimation (Immediate → Far)

  • Multilingual voice feedback

  • Ultra-low latency (<50ms)
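The JSON-based label architecture listed above could carry per-frame detections in a structure like the one below. This schema is hypothetical (field names, language keys, and values are my assumptions, not the project's actual wire format), but it shows how multilingual labels and navigation cues fit naturally into one message:

```python
import json

# Hypothetical per-frame detection message: multilingual labels keyed
# by language code, plus the NavigationPlanner's cues for each object.
frame_msg = {
    "frame_id": 1042,
    "detections": [
        {
            "labels": {"en": "chair", "bn": "চেয়ার"},  # per-language TTS strings
            "box": [120, 200, 260, 420],               # x1, y1, x2, y2 in pixels
            "confidence": 0.87,
            "direction": "Center",
            "proximity": "Near",
        }
    ],
}

encoded = json.dumps(frame_msg, ensure_ascii=False)
decoded = json.loads(encoded)
print(decoded["detections"][0]["labels"]["en"])  # chair
```

Keeping labels as a per-language dictionary means the TTS layer only has to look up the user's configured language code, with no retraining or relabeling of the detector.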

WVAB demonstrates that advanced assistive technology does not need to be expensive or cloud-dependent to be effective. By combining real-time object detection, low-latency communication, and intelligent navigation logic, the system delivers reliable guidance in a lightweight and affordable form. Its ability to operate fully offline while maintaining high accuracy and responsiveness makes it especially valuable for users in low-resource environments.

Ultimately, this project highlights how thoughtful integration of edge AI, efficient system design, and accessibility-focused features can create meaningful impact. WVAB is not just a prototype; it represents a scalable direction for building inclusive technologies that empower visually impaired individuals with greater independence and mobility.

Want a similar build? Contact smshagor.ru@gmail.com.


© 2026 Md Shahanur Islam Shagor. All Rights Reserved.