One of six visual and technical studies for the series Everyday Experiments by IKEA research lab SPACE10 – exploring how new technologies will redefine how we live at home.
Digital Buddy is a twofold experiment about Living with AI
Can we make AI transparent + trustworthy through embodied virtual avatars + human-like interaction?
Can these avatars become independent, trusted companions that help protect our interests + privacy when we interact with digital services?
An experiment in service design for embodied virtual avatars, connecting AR, OpenAI's natural language processing, and text-to-speech.
Embodiment for AI
We tend to trust things whose behaviour and interaction appear familiar, consistent and transparent to us:
Digital Buddy mimics human behaviour and communicates its activity through a morphing, expressive skin and body shape. It conveys information verbally, visually and with nuanced emotion, allowing its owner to intuitively understand what is going on – and when it’s off for a snooze.
Once the behaviour + communication of virtual avatars are no longer assembled from pre-scripted building blocks, but are genuine responses generated from everything an AI has learnt – will we share our homes with them as we would with a virtual pet?
An online Companion
We programmed OpenAI’s natural language processing AI “GPT-3” to tackle an area where we, as users of the internet, could really use some help: understanding the Terms & Conditions we sign online.
Scene 1: Age Appropriate Summary
We had GPT-3 summarise Facebook’s Terms and Conditions – and asked it to explain them to our 8-year-old daughter.
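The prompt-engineering behind this scene can be sketched roughly as follows. This is a hypothetical reconstruction, not SPACE10’s actual code: the prompt wording, the `build_summary_prompt` helper, and the model choice are all our assumptions.

```python
# Hypothetical sketch of an age-appropriate summary prompt for GPT-3.
# Helper name and prompt wording are illustrative assumptions.

def build_summary_prompt(terms_text: str, age: int) -> str:
    """Wrap a Terms & Conditions excerpt in an instruction asking for a
    summary that a child of the given age could understand."""
    return (
        f"Summarise the following Terms and Conditions so that a child "
        f"of {age} can understand them:\n\n{terms_text}\n\nSummary:"
    )

# Sending the prompt to OpenAI's legacy Completions endpoint would look
# roughly like this -- it needs an API key, so it is commented out here:
#
#   import openai
#   response = openai.Completion.create(
#       engine="davinci",  # a GPT-3 base model
#       prompt=build_summary_prompt(facebook_terms, age=8),
#       max_tokens=200,
#       temperature=0.7,
#   )
#   print(response.choices[0].text.strip())

prompt = build_summary_prompt("You own the content you post...", age=8)
print(prompt.splitlines()[0])
```

The target age is simply interpolated into the instruction, which is what lets the same pipeline explain the same legal text at different reading levels.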
Scene 2: AI Bias
Some of GPT-3’s responses reveal that any AI will always be as biased as its training data.
Huge training data sets promise a more balanced “opinion”. But when AIs quote from anything they’ve read on the internet, how can we trust – or even know – their sources?
Scene 3: Message Privacy