Three-dimensional models are valuable learning tools, especially for blind people who cannot access 2D visualizations. Several studies have shown that 3D models greatly facilitate understanding of abstract concepts. However, creating 3D models for blind students, with their various tactile components, is expensive and laborious. Moreover, a blind student typically needs a sighted person's help to identify the different components of a model. Finally, the kind of information that can be effectively represented with tactile components alone is limited.
Recent developments in consumer-grade 3D printers, and their growing accessibility, motivated us to think about enriching 3D printed models with audio annotations. This led us to design Markit and Talkit, the two components of a toolkit that augments 3D models with audio annotations.
Markit and Talkit
Markit is the tool that lets makers augment a 3D model with audio annotations. Once the maker downloads or designs the required 3D model, Markit allows them to mark specific elements and attach textual information, which Talkit later reads out loud.
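Conceptually, a maker's annotation pass attaches text to named elements of a model. The sketch below is purely illustrative: the `Annotation` structure, the element names, and the JSON file layout are our assumptions, not Markit's actual data model.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    element: str   # hypothetical identifier of a marked element in the 3D model
    text: str      # description to be read aloud when that element is tapped

def save_annotations(annotations, path):
    # Persist the maker's annotations to disk; the JSON layout is an assumption.
    with open(path, "w") as f:
        json.dump([asdict(a) for a in annotations], f, indent=2)

# Example: annotating two parts of a (hypothetical) printed heart model.
heart = [
    Annotation("left_ventricle", "The left ventricle pumps oxygenated blood to the body."),
    Annotation("aorta", "The aorta is the largest artery in the body."),
]
save_annotations(heart, "heart_annotations.json")
```

In this picture, the file produced by the marking step is what the playback application would later load and query.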
Talkit is an application that lets users interact with the printed model and understand its elements. It uses a tracker attached to the model to connect the physical object to the application. When the user taps an element of the model, Talkit responds with the audio annotation that was added via Markit.
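The tap-to-audio loop can be pictured as a lookup from the touched element to its stored text, which is then handed to a text-to-speech engine. Everything below, including the function name and lookup table, is a hypothetical sketch, not Talkit's real code.

```python
def annotation_for(element, annotations, default="No annotation for this part."):
    # Map the element under the user's finger to its stored description.
    return annotations.get(element, default)

annotations = {
    "left_ventricle": "The left ventricle pumps oxygenated blood to the body.",
    "aorta": "The aorta is the largest artery in the body.",
}

# On a tap event, the returned string would be spoken via a TTS engine.
print(annotation_for("aorta", annotations))
# → The aorta is the largest artery in the body.
print(annotation_for("septum", annotations))
# → No annotation for this part.
```

The default string stands in for the kind of fallback a real system would need when a user touches an unmarked region.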
We started by conducting a study with several legally blind participants, observing how they explore 3D printed models and how they would like to interact with one to extract more detail. They found audio input/output and touch the most convenient and intuitive modalities (see our paper for the detailed user research). Based on this feedback, we designed Talkit to take audio and touch (finger placement) as input and to respond with audio output.