During my time as a graduate student at the Georgia Institute of Technology, I worked as a volunteer at the Atlanta VA Medical Center with the Center for Visual and Neurocognitive Rehabilitation. As part of a grant, we were tasked with developing a take-home training application for veterans who have lost their vision. The app would help them learn the cognitive tasks associated with allocentric navigation.
There were many unanswered design questions for this type of task, chiefly how the user would interface with the training application without their vision. We based our initial designs on some of the interactions in iOS VoiceOver, reasoning that our users would already be trained in and familiar with VoiceOver. However, we also needed a method to prevent the user from aimlessly moving their finger away from their desired path. For this requirement, we prototyped a couple of different tactile overlays and gathered feedback from another study group of visually impaired participants.
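The tactile overlays enforce this constraint physically, but the same idea can be expressed in software as a hit test against the intended path. The sketch below is purely illustrative, not our actual implementation: it assumes the path is a polyline in screen coordinates and uses a tolerance band around it, with hypothetical function names.

```python
import math

def distance_to_segment(px, py, ax, ay, bx, by):
    """Perpendicular distance from touch point (px, py) to segment A-B."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    ab_len_sq = abx * abx + aby * aby
    if ab_len_sq == 0:
        # Degenerate segment: distance to the single point.
        return math.hypot(apx, apy)
    # Project the touch point onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, (apx * abx + apy * aby) / ab_len_sq))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def on_path(px, py, path, tolerance=20.0):
    """True if the touch lies within `tolerance` points of any path segment."""
    return any(
        distance_to_segment(px, py, ax, ay, bx, by) <= tolerance
        for (ax, ay), (bx, by) in zip(path, path[1:])
    )
```

In an app, a check like `on_path` could run on each touch-move event, triggering audio or haptic feedback the moment the finger drifts outside the band; the overlays accomplish the same thing passively, with no tolerance tuning required.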
We gained a number of interesting insights from our prototype iterations. One of the largest was to keep the physical indicator consistent in size, shape, and direction. For example, we initially experimented with having the "rays" of the compass widen as one moves from the center to the edges, thinking this would increase the chances of a user finding the ray. However, users reported that they perceived the indicator at the edge as a completely different indicator, rather than part of a continuous whole.
In this iteration, the combination of lines and dots produced a lot of tactile noise for our users. They would transition from feeling the dots to tracing the lines, and this context switch left them disoriented.
For our final iteration, we used dots laid out in a grid for the navigation task, and cut lines for the pointing task. Our participants found this combination the easiest to use.