OpenIDEO is an open innovation platform. Join our global community to solve big challenges for social good.
Layne Jackson Hubbard
I am passionate about:
four-dimensional geometry, platonic solids, REM dreams, cloud patterns, animal tracks
Human-Computer Interaction Fellow
University of Colorado Boulder
"Nothing about us without us."
I'm a PhD student in computer science at the University of Colorado Boulder, as well as a Human-Computer Interaction Fellow with the National Science Foundation.
I taught preschool for four years, hold a Certificate in Early Childhood Education, and continue to collaborate with local preschools.
In undergrad, I worked in several research labs spanning neuroscience, cognitive science, and computer science. Afterward, I worked as a data engineer at a technology startup, which was acquired by Amazon to power Alexa's voice interactions.
I love to swim, draw, and dance.
We've been collaborating with one of our research colleagues, who is blind, to gain her insight and feedback on how to develop our prototypes for blind and visually impaired learners. She has given us several key takeaways:
(1) Texture is to the blind what color is to the sighted: it helps us distinguish between elements, pieces, and features. While a stuffed animal might be tactile, if it lacks rich textural contrast it will feel more like a soft mass than a specific animal or creature. To address this, we've been working with our sewing interns to design new stuffed animals that exhibit more contrast in their features.
(2) We must support children's agency in their interactions. While this is important for all children, it is especially important for blind and visually impaired learners (and is a principle of inclusive design). Thus, we are working to extend our voice interaction with multimodal tactile inputs to give children more agency in directing the flow of their stories. And yes, we use the tenets of universal and inclusive design to guide our process—meaning we apply the lessons we learn from our diverse populations and integrate those insights into our prototypes for all.
Good questions. In early pilots, we actually used the Wizard-of-Oz style of prototyping in order to quickly gain insights about the interaction experience before sinking time and resources into costly development. This allowed us to test our interaction model and rapidly iterate on our design.
To implement this interaction model in our mobile application technology, we first needed to analyze audio and video recordings of the storytelling interactions to determine the architecture. It was a lot of data!
Now, we've been working on calibrating our voice interaction to handle turn-taking. It's not easy. However, we've been inspired by our interviews with several speech-language pathologists (therapists) who work with bilingual children. Our tenets for development are that the robot shouldn't interrupt the child and should give them enough time to think through their responses.
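As a rough illustration of those tenets, a turn-taking policy can be sketched as a silence timer that only starts once the child stops talking and resets whenever they resume, so the agent never interrupts. This is a minimal sketch, not our actual implementation; the names (`wait_for_child_turn`, `silence_threshold_s`) and the 3-second pause are illustrative assumptions, and the real system would get voice activity from a speech pipeline rather than a callback.

```python
import time

def wait_for_child_turn(is_child_speaking,
                        silence_threshold_s=3.0,  # generous pause so the child can think
                        poll_interval_s=0.1,
                        now=time.monotonic,
                        sleep=time.sleep):
    """Block until the child has been silent for silence_threshold_s.

    is_child_speaking: callable returning True while the child is talking
    (e.g. fed by a voice-activity detector). The agent never interrupts:
    any speech resets the silence timer.
    """
    silence_started = None
    while True:
        if is_child_speaking():
            silence_started = None          # child is talking; keep listening
        elif silence_started is None:
            silence_started = now()         # silence just began; start the timer
        elif now() - silence_started >= silence_threshold_s:
            return                          # pause was long enough; robot may speak
        sleep(poll_interval_s)
```

Injecting `now` and `sleep` keeps the policy testable with a fake clock, and tuning `silence_threshold_s` per child is one place the pathologists' guidance about thinking time could be encoded.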
Thanks for the feedback, Hilary! We've been following LENA's work too, and are impressed with how they've scaled their mission. I also checked out your submission for the Mobile Curiosity Lab; keep in touch as it develops, I think there could be an opportunity for us to collaborate as well.
Speaking of which, I met Dr. Roy Pea last year at the conference on Interaction Design and Children in Palo Alto. He told me about a past mobile lab in which his team traveled with a laser cutter and helped children create their own designs. It sounded so intriguing!