System for Wearable Audio Navigation

Research and development of an IoT system for wearable navigation and spatial awareness using sound.

SWAN demo day, Spring 2018

For my two years as a graduate student at Georgia Tech, and now continuing as a research technician, I have been involved in the research behind SWAN, the System for Wearable Audio Navigation. This is a 2.0 reboot of the original project, begun several years earlier, that leverages newer technology to design a system for wearable audio navigation.

Background

SWAN was originally created through the work and research of Bruce Walker, Jeff Lindsay, Jeff Wilson, and others in the Sonification Lab at Georgia Tech. The goal of SWAN was to provide an audio navigation system that could guide a person who is visually impaired, either situationally or permanently, so that they may better navigate complicated environments. This first iteration of SWAN consisted of navigation devices and sensors worn in a backpack and on the body. A virtual reality prototype was also used to rapidly test navigation questions and concepts in a safe, configurable environment. This was the pre-iPhone era, and the project pioneered concepts we now take for granted, such as a person-level system combining GPS navigation, dead reckoning, and SLAM.


SWAN 2.0

For the 2.0 version of SWAN, we first created a room-scale virtual reality prototype of the research protocol. This lets us rapidly test and modify the prototype free of the hazards of a real-world environment. We designed the prototype with input from visually impaired users, but we can also simulate various visual impairments for sighted participants, letting us gather data from a wider range of people for easier testing. We have also developed an augmented reality "real world" implementation of SWAN, using an advanced in-building positioning system in an area of the Psychology building that we have mapped.
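The core interaction in both the VR and AR prototypes is an audio beacon that the listener steers toward. As a rough illustration of the idea only (not the actual SWAN audio pipeline; the function and parameter names below are placeholders), here is a minimal Python sketch of how a beacon's stereo pan and gain could be derived from the listener's pose and a waypoint position:

```python
import math

def beacon_audio_params(user_xy, user_heading_deg, waypoint_xy, max_radius_m=20.0):
    """Illustrative stereo pan and gain for a navigation beacon.

    user_xy / waypoint_xy: (x, y) positions in meters.
    user_heading_deg: listener heading in degrees (0 = +y axis).
    Returns (pan, gain): pan in [-1, 1] (left..right), gain in [0, 1].
    """
    dx = waypoint_xy[0] - user_xy[0]
    dy = waypoint_xy[1] - user_xy[1]
    distance = math.hypot(dx, dy)

    # Bearing to the waypoint, relative to the listener's heading, in [-180, 180).
    bearing = math.degrees(math.atan2(dx, dy))
    relative = (bearing - user_heading_deg + 180) % 360 - 180

    # Pan toward the side the waypoint is on; saturate at 90 degrees off-axis.
    pan = max(-1.0, min(1.0, relative / 90.0))

    # Louder as the listener approaches the waypoint.
    gain = max(0.0, 1.0 - distance / max_radius_m)
    return pan, gain

if __name__ == "__main__":
    # Listener at the origin facing "north"; waypoint 5 m ahead and 5 m to the right.
    print(beacon_audio_params((0.0, 0.0), 0.0, (5.0, 5.0)))
```

A real implementation would hand these values to a spatial audio engine (HRTF rendering rather than simple panning), but the geometry is the same.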

Current Iteration

For our current research, we are experimenting with a kind of "sonified flashlight" that allows the user to point at an object and receive a detailed text-to-speech readout of relevant information about it. We hope to leverage the latest advances in machine learning to create a sort of "AI visual guide" that could be used in a variety of scenarios.
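As a sketch of that interaction (the real system would draw its object labels from a vision or mapping backend; the scene data, names, and thresholds below are made up for illustration), here is a minimal Python example that selects the pointed-at object within a narrow cone and builds a string for a text-to-speech engine to read aloud:

```python
import math

# Hypothetical scene annotations: label -> (x, y) position in meters.
SCENE = {
    "door to stairwell": (2.0, 6.0),
    "water fountain": (-3.0, 4.0),
    "elevator call button": (1.0, 9.0),
}

def sonified_flashlight(user_xy, pointing_deg, objects=SCENE, cone_half_angle_deg=15.0):
    """Describe the object the user is pointing at.

    pointing_deg: direction of the "flashlight" in degrees (0 = +y axis).
    Only objects within the pointing cone are considered; the nearest one wins.
    """
    best = None
    for label, (ox, oy) in objects.items():
        dx, dy = ox - user_xy[0], oy - user_xy[1]
        bearing = math.degrees(math.atan2(dx, dy))
        off_axis = abs((bearing - pointing_deg + 180) % 360 - 180)
        distance = math.hypot(dx, dy)
        if off_axis <= cone_half_angle_deg and (best is None or distance < best[0]):
            best = (distance, label)
    if best is None:
        return "Nothing recognized in that direction."
    distance, label = best
    return f"{label}, about {distance:.0f} meters ahead."

if __name__ == "__main__":
    # The returned string would be handed to a text-to-speech engine.
    print(sonified_flashlight((0.0, 0.0), 10.0))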