The purpose of this project is to (1) develop a smartphone-based mashup that transforms a smartphone into an assisted vision aid and (2) measure its effectiveness.
A prototype was developed and tested on an Android smartphone, combining gesture recognition, image capture, orientation sensors, text-to-speech software, and an image recognition service.
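The pipeline implied by these components can be sketched as follows. This is a minimal illustration, not the actual project code: the function names, the stubbed recognition service, and the stubbed text-to-speech call are all hypothetical stand-ins for the real Android APIs and remote service.

```python
def recognize(image_bytes):
    """Stub for the remote image recognition service (hypothetical API).

    The real prototype would upload the captured photo and receive a
    list of labels for objects detected in the frame.
    """
    return ["bench", "tree"]  # example labels a service might return


def speak(text):
    """Stub for the phone's text-to-speech engine (hypothetical)."""
    print(text)


def find_target(image_bytes, target, heading_degrees):
    """One scan step: recognize the frame, announce a hit with the
    compass heading from the orientation sensor, or prompt the user
    to keep scanning."""
    labels = recognize(image_bytes)
    if target in labels:
        speak(f"{target} ahead at heading {heading_degrees} degrees")
        return True
    speak("Target not found, keep scanning")
    return False


found = find_target(b"", "bench", 90)
```

In a loop driven by the user's gestures, each captured frame would pass through a step like this until the target object is announced.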
Test subjects were taken to an outdoor test environment, blindfolded, guided to a random starting location and orientation, and instructed to identify the location of a given object.
Four trial runs using the prototype and four control runs were performed. Success was measured by whether the test subject could identify the location of the target object within a time limit of 5 minutes.
Of the trial runs with the prototype system, 75% were successful, while 100% of the control runs failed. Test subjects using the prototype found the target object in an average of 3.02 minutes. Since all control runs failed, the average time for the control runs was the 5.00-minute maximum test time.
This experiment has shown that the sensors on common smartphones can be integrated with emerging image recognition services to assist the blind and visually impaired.
The degree of assistance (a 75% success rate) is significant given that the control group failed 100% of the time at the same task: finding a large object in a relatively uncluttered outdoor environment within 5 minutes.
A smartphone-based mashup was developed and tested to serve as an assisted vision aid for the blind and visually impaired.
Science Fair Project done by Rahul Sridhar