School: ArtCenter College of Design, Human Computer Interaction
Contributors: Julian S. (Instructor), Jiachen F., Munchy W., Daniel G.
Focus: Artificial Intelligence, Creative Ideation, Input Design, HCI Research
We conceptualized a way to break screen borders, using projection as an interface medium so the system can be used anywhere in the room. Combined with voice input, users can operate smart home applications without physical contact, expanding where and how the system is used.
In this project, we focused on designing the conversational AI and the gesture-voice input methods. My main contributions included gesture research, prototyping animations, and designing the AI visualizations.
We envisioned the home AI as an extension of the user's eyes and hands: it sees for users and makes decisions on their behalf. For this part, I contributed to the story script and AI dialogue writing.
Projected interfaces let people interact without touching a screen, a technique already present in some home entertainment systems and in-car experiences. This technology expands use cases compared to current smart home devices.
For instance, imagine you're cooking: instead of washing your hands to touch a screen, you can use gestures to wake the interface and answer a video call from a friend. You could even send a PIN to the delivery person at your door while taking a shower.
With these scenarios in mind, we aimed to design a seamless, contactless interface activation method that lets you stay engaged in your tasks without interruption.
Wake up the interface
Open the room settings/app deck
Open an app
Move an app card
When designing for a home AI, I looked at what users are naturally drawn to: connection and interaction. When interacting with technology in a private living space, especially a butler-like system, the home AI is designed to be neutral and less human, so it feels less intrusive and aggressive.
When interacting with technology, especially AI-driven systems, users often want a sense of familiarity and relatability. Thus, the AI's motion should be intuitive and understandable. The animations were made in After Effects.
Idle
Loading
Generating/Responding
Speaking