CamIO

Camera Input-Output

Smart pen providing real-time audio feedback on objects using the smartphone’s sensors.
Research
Software Engineering
Data Science
Human-Computer Interaction
Augmented Reality
Auditory Interface
Sensory Substitution
Computer Vision
Consortium
Began Around

March 1, 2018

Abstract

CamIO is a system that makes physical objects (such as documents, maps, devices, and 3D models) accessible to blind and visually impaired persons by providing real-time audio feedback in response to the location on an object that the user is touching. CamIO currently works on iOS, using the built-in camera and an inexpensive hand-held stylus made of paper and cardboard or wood.


Image illustrating the project
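
At its core, the interaction loop can be pictured as: estimate the 3D position of the stylus tip from the camera image, find the nearest annotated hotspot on the object, and play that hotspot’s audio label. Below is a minimal sketch of that lookup with made-up hotspot data; it illustrates the idea, not the actual CamIO implementation.

```python
# Minimal sketch of a CamIO-style lookup: given an estimated 3D stylus-tip
# position (from computer vision), find the nearest annotated hotspot on
# the object and return its audio label. Hotspot names, coordinates, and
# the distance threshold are illustrative assumptions.
import math

HOTSPOTS = {
    "power button": (0.02, 0.10, 0.00),   # (x, y, z) in meters, object frame
    "volume dial":  (0.05, 0.12, 0.01),
    "display":      (0.08, 0.05, 0.00),
}

def nearest_hotspot(tip_xyz, max_dist=0.015):
    """Return the label of the hotspot closest to the stylus tip,
    or None if nothing lies within max_dist (meters)."""
    best_label, best_dist = None, max_dist
    for label, xyz in HOTSPOTS.items():
        d = math.dist(tip_xyz, xyz)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Example: a tip estimate close to the volume dial triggers its label.
print(nearest_hotspot((0.051, 0.118, 0.008)))  # -> "volume dial"
```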

1 Introduction


2 My role in this project


1) Explore new solutions to improve the localization and tracking capabilities of CamIO:

The project’s existing solution, iLocalize (Fusco & Coughlan, 2018), written in Swift for iOS, combined Visual-Inertial Odometry (VIO) through Apple’s ARKit, particle filtering based on a simplified map of the environment, and drift correction through visual identification of known landmarks (using a gradient-boosting algorithm).

Screenshot showcasing the iLocalize app
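
To make this fusion concrete, here is a minimal sketch of one particle-filter step that propagates particles with a VIO motion estimate and reweights them when a known landmark is recognized. The 2D state, noise levels, and landmark model are assumptions for illustration, not the iLocalize code.

```python
# One predict/update/resample step of a 2D particle filter, sketching how
# VIO motion estimates and landmark sightings could be fused. The state,
# noise levels, and landmark model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform([0, 0], [10, 10], size=(N, 2))  # (x, y) in meters
weights = np.full(N, 1.0 / N)

def predict(particles, vio_delta, motion_noise=0.05):
    """Move every particle by the VIO-estimated displacement plus noise."""
    return particles + vio_delta + rng.normal(0, motion_noise, particles.shape)

def update(particles, weights, landmark_xy, measured_dist, sigma=0.3):
    """Reweight particles by how well their predicted distance to a
    recognized landmark matches the measured distance."""
    d = np.linalg.norm(particles - landmark_xy, axis=1)
    weights = weights * np.exp(-0.5 * ((d - measured_dist) / sigma) ** 2)
    weights += 1e-300                      # avoid all-zero weights
    return weights / weights.sum()

def resample(particles, weights):
    """Draw a new particle set proportional to the weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One filter step: VIO reports a 20 cm step forward, then a known
# landmark at (5, 5) is sighted roughly 2 m away.
particles = predict(particles, vio_delta=np.array([0.2, 0.0]))
weights = update(particles, weights, landmark_xy=np.array([5.0, 5.0]),
                 measured_dist=2.0)
particles, weights = resample(particles, weights)
print("estimate:", particles.mean(axis=0))
```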

I developed a web app that sends the live camera stream from a mobile phone (JavaScript / socket.io) to a backend server (Python / Flask). The application was meant to ease the exploration of new computer-vision algorithms: the server processes the captured video and IMU data and sends location or navigational information back to the phone.
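
A minimal sketch of what such a backend could look like, assuming frames arrive as JPEG bytes on a "frame" event and Flask-SocketIO is used server-side (the event names and payload format are illustrative, not the actual code):

```python
# Minimal Flask / Flask-SocketIO backend sketch: receives JPEG frames
# from the phone, decodes them, runs a placeholder vision step, and
# emits a location estimate back to the client. Event names ("frame",
# "location") and the payload format are illustrative assumptions.
import cv2
import numpy as np
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on("frame")
def handle_frame(jpeg_bytes):
    # Decode the incoming JPEG buffer into a BGR image.
    img = cv2.imdecode(np.frombuffer(jpeg_bytes, np.uint8), cv2.IMREAD_COLOR)
    if img is None:
        return
    # Placeholder for the actual computer-vision pipeline
    # (e.g. landmark detection feeding a localization filter).
    estimate = {"x": 0.0, "y": 0.0, "heading_deg": 0.0}
    # Send the location estimate back to the phone.
    emit("location", estimate)

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```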

I also explored existing third-party services for indoor localization, such as Indoor Atlas, which combines VIO, GPS data, WiFi and geomagnetic fingerprinting, dead reckoning, and barometric readings for altitude changes. I built a small demo with it (Figure 1); the fingerprinting idea is sketched just after the figure.

Figure 1: Indoor Atlas. (a) Indoor Atlas’ localization; (b) Indoor Atlas’ navigation graph.
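
The fingerprinting component of such services can be pictured as a nearest-neighbor lookup over a survey database: signal readings recorded at known positions are matched against a live reading. Here is a toy sketch of that idea with made-up fingerprints; it illustrates the concept, not Indoor Atlas’ implementation.

```python
# Toy geomagnetic/WiFi fingerprinting sketch: locate a live signal reading
# by averaging the positions of its k nearest fingerprints in a survey
# database. The database values and k are illustrative assumptions.
import numpy as np

# Survey database: position (x, y) -> signal feature vector
# (e.g. magnetic field components, or RSSI values of known access points).
positions = np.array([[0, 0], [0, 5], [5, 0], [5, 5]], dtype=float)
fingerprints = np.array([
    [22.0, -4.1, 40.3],
    [19.5, -3.0, 44.8],
    [25.2, -5.5, 38.1],
    [21.1, -4.8, 42.0],
])

def locate(reading, k=2):
    """Estimate position as the mean of the k closest fingerprints."""
    dists = np.linalg.norm(fingerprints - reading, axis=1)
    nearest = np.argsort(dists)[:k]
    return positions[nearest].mean(axis=0)

print(locate(np.array([20.0, -3.5, 43.5])))  # lands between (0, 5) and (5, 5)
```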

2) Assist in analyzing the data and writing a scientific paper presenting the project.


References

Fusco, G., & Coughlan, J. M. (2018). Indoor localization using computer vision and visual-inertial odometry. In Computers Helping People with Special Needs (ICCHP 2018) (pp. 86–93). Springer International Publishing. https://doi.org/10.1007/978-3-319-94274-2_13