#HTE
Stanford Researchers Develop Tactile Display to Make 3D Modeling More Accessible for Visually Impaired Users
A team of engineers at Stanford University is working on a “2.5D” display system to make 3D modeling and printing more accessible for visually impaired and blind users. The project aims to broaden access to making by providing a touchable system for evaluating works in progress. Like a pin art toy, the display forms shapes from a field of pegs that move up and down, creating real-time physical representations of forms built in accompanying 3D modeling software.
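The article doesn’t describe the rendering pipeline, but conceptually the display acts like a low-resolution height map: a top-down depth render of the model is downsampled to one value per pin and scaled to the pins’ physical travel. Below is a minimal sketch of that idea in Python; the grid size, pin travel, and function names are illustrative assumptions, not details of the Stanford prototype.

```python
import numpy as np

# Hypothetical pin-grid dimensions and travel range; the actual hardware
# specs aren't given in the article.
GRID_ROWS, GRID_COLS = 12, 12      # number of pins per side
MAX_PIN_HEIGHT_MM = 25.0           # full pin travel

def heights_from_depth_map(depth_map: np.ndarray) -> np.ndarray:
    """Downsample a model's height field to one height value per pin.

    `depth_map` is assumed to be a 2D array of surface heights (e.g. a
    top-down render of the 3D model), already cropped to the build area.
    """
    rows, cols = depth_map.shape
    pin_heights = np.zeros((GRID_ROWS, GRID_COLS))
    # Average the depth samples that fall under each pin (simple box filter).
    for i in range(GRID_ROWS):
        for j in range(GRID_COLS):
            r0, r1 = i * rows // GRID_ROWS, (i + 1) * rows // GRID_ROWS
            c0, c1 = j * cols // GRID_COLS, (j + 1) * cols // GRID_COLS
            pin_heights[i, j] = depth_map[r0:r1, c0:c1].mean()
    # Normalize into the pins' physical travel range.
    span = pin_heights.max() - pin_heights.min()
    if span > 0:
        pin_heights = (pin_heights - pin_heights.min()) / span
    return pin_heights * MAX_PIN_HEIGHT_MM

# Example: a hemisphere-like bump rendered as a height field.
x, y = np.meshgrid(np.linspace(-1, 1, 120), np.linspace(-1, 1, 120))
bump = np.clip(1 - (x**2 + y**2), 0, None)
print(heights_from_depth_map(bump).round(1))
```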
“Design tools empower users to create and contribute to society but, with every design choice, they also limit who can and cannot participate,” said Alexa Siu, a PhD student at Stanford who developed the system, in a press release. “This project is about empowering a blind user to be able to design and create independently without relying on sighted mediators because that reduces creativity, agency, and availability.”
“It opens up the possibility of blind people being, not just consumers of the benefits of fabrication technology, but agents in it, creating our own tools from 3D modeling environments that we would want or need – and having some hope of doing it in a timely manner,” added Joshua Miele, a blind scientist, designer, and educator who helped develop the system.
The team sought input from the blind community throughout the research and design process. “We used a participatory design process to co-design an accessible 3D modeling workflow,” Siu explained over email. “This allowed us to understand what interactions helped support a non-visual understanding of 3D geometry for both exploration and ideation. With these insights, we kept refining and iterating before conducting a formal evaluation with blind and low-vision users who used the prototype to ideate and 3D model a variety of new objects from scratch.”
“What really is so awesome is that I can view various perspectives of the object and not just the object in its single state,” said Son Kim, an assistive technology specialist for the Vista Center for the Blind. “That offers greater dimension to understanding the object that you’re attempting to make. And that’s the same opportunity that a sighted peer would have, where they too would be able to view various perspectives of their target object.”
Advances in tactile displays are promising, though many would argue they are coming too slowly. The team has a functional working prototype but is still improving the display’s resolution, which currently cannot capture much detail. In parallel, one of Siu’s colleagues, Kai Zhang, is developing a smaller-scale, more affordable model that uses smaller pins.
The team is also exploring ways to create more feedback between the display and the software, one idea being to let users physically adjust the pins and have the computer model update to match.
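That bidirectional loop is only described as an idea, but the inverse mapping would mirror the sketch above: read each pin’s measured height and write it back into the patch of the digital height field that pin covers. A hypothetical sketch, again with made-up names and units:

```python
import numpy as np

def update_model_from_pins(depth_map: np.ndarray,
                           pin_heights: np.ndarray,
                           max_pin_height_mm: float = 25.0) -> np.ndarray:
    """Push physically edited pin heights back into the digital height field.

    Hypothetical inverse of the display path: each pin's measured height
    overwrites the patch of the model's height field it sits over.
    """
    rows, cols = depth_map.shape
    grid_rows, grid_cols = pin_heights.shape
    updated = depth_map.copy()
    # Rescale pin travel (mm) back to the model's height units (here 0..1).
    normalized = pin_heights / max_pin_height_mm
    for i in range(grid_rows):
        for j in range(grid_cols):
            r0, r1 = i * rows // grid_rows, (i + 1) * rows // grid_rows
            c0, c1 = j * cols // grid_cols, (j + 1) * cols // grid_cols
            updated[r0:r1, c0:c1] = normalized[i, j]
    return updated
```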
https://www.core77.com/posts/91482/Stanford-Researchers-Develop-Tactile-Display-to-Make-3D-Modeling-More-Accessible-for-Visually-Impaired-Users