The Future of Design, Pt. 1: Body as Input

With machine learning, our bodies may be the next step in the way we interact with the internet

Lisa Jamhoury
Modus
6 min read · Sep 17, 2019

Photo: PeopleImages/Getty Images

This is the first of two articles about designing for machine learning on the web. This article discusses the body as input; the second addresses machine intelligence as a creative collaborator. This series of articles was originally published by the Machine Intelligence Design team at Adobe Design.

I recently attended the final presentations of Live Web, a graduate course conceived and taught by Shawn Van Every at New York University’s Interactive Telecommunications Program (ITP). I was struck by how rapidly machine learning is shifting the way people can interact online — and what the change may mean for digital design and tools.

If ITP is, as it calls itself, “The Center for the Recently Possible,” then Live Web finals are experiments exploring the “recently possible online.” Over the years, the course’s syllabi have read like a history of live networked interaction. A reading from Clay Shirky’s 2008 book “Here Comes Everybody” kicks it off, and the course lectures cover a decade’s evolution from ActionScript, Flash, and PHP to today’s HTML5, WebRTC, and JavaScript.

The new kid on the block? Yes, you guessed it: machine learning.

The 2018 student projects varied widely, yet many incorporated machine learning for the web, courtesy of Yining Shi’s class of the same name, Google’s TensorFlow.js, and ITP’s own high-level machine learning JavaScript library, ml5.js. Two underlying themes stood out: the use of the body as an input device and machine intelligence as a creative participant in interaction. As the two phenomena continue to take hold, they will reshape the way people interact across networks and change the work of designers and their tools.
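
To make the starting point concrete, here is a minimal, hedged sketch of browser-side pose detection, assuming the pre-1.0 ml5.js poseNet API (the generation of the library these projects used) and a video element already on the page; the details are illustrative rather than drawn from any specific student project.

```javascript
// Minimal sketch: webcam pose detection in the browser with ml5.js (pre-1.0 API),
// assuming ml5 is loaded via a <script> tag and the page has a <video> element.

const video = document.querySelector('video');

// Start the webcam stream and attach it to the video element.
navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  video.play();

  // Create a PoseNet instance that watches the video element.
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet model loaded'));

  // Each detection returns an array of poses, one per person in frame.
  poseNet.on('pose', (results) => {
    if (results.length === 0) return;
    const { pose } = results[0];
    // pose.keypoints is a list of named body parts with pixel positions.
    pose.keypoints.forEach((k) => {
      console.log(`${k.part}: ${k.position.x.toFixed(0)}, ${k.position.y.toFixed(0)}`);
    });
  });
});
```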

The body as input

As the internet progressed, users communicated first through keyboard and mouse, then webcam, then multitouch. All of these devices require two things of the user: to be physically in contact with their computing devices and to communicate on the machine’s terms. Today, voice is changing this historic dynamic and moving interaction design (literally) away from the computer. The body is the next input in this evolution, moving toward human-computer communication that’s more similar to the human-to-human variety.

Body-tracking functionality for the web is rudimentary today. Nonetheless, it’s now possible to envision a path forward to creating websites controlled by a hand wave or finger snap, and connected experiences in which a user can high-five a friend or family member from across the world. Indeed, an internet that incorporates full body detection and interaction moves us much closer to Kevin Kelly’s 2016 prediction of the internet’s next phase: the internet of experiences.
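
One plausible way to build such a connected experience is to stream detected keypoints, rather than raw video, between participants. The sketch below continues the ml5.js example above; the relay server URL, message shape, and the crude "high five" rule are all assumptions made for illustration, not a description of any existing project.

```javascript
// Illustrative sketch: sharing pose keypoints between two browsers over a WebSocket
// so two people can "high-five" remotely. Server URL and gesture rule are hypothetical.

const socket = new WebSocket('wss://example.com/poses'); // hypothetical relay server
let localPose = null;
let remotePose = null;

// Continuing the ml5.js sketch above: publish our latest keypoints as they arrive.
poseNet.on('pose', (results) => {
  if (results.length === 0) return;
  localPose = results[0].pose.keypoints;
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify(localPose));
  }
});

// Keep the most recent pose received from the other participant.
socket.addEventListener('message', (event) => {
  remotePose = JSON.parse(event.data);
});

// A crude "hand raised" test: the right wrist sits higher on screen than the nose.
function handRaised(keypoints) {
  const wrist = keypoints.find((k) => k.part === 'rightWrist');
  const nose = keypoints.find((k) => k.part === 'nose');
  return wrist && nose && wrist.position.y < nose.position.y;
}

// If both participants raise a hand at the same moment, call it a high five.
setInterval(() => {
  if (localPose && remotePose && handRaised(localPose) && handRaised(remotePose)) {
    console.log('High five!');
  }
}, 100);
```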

Video from “hyper-bodies” by Roland Arnoldt. The project, a real-time multi-user web experience that allows participants to visually stream raw video code through their digital bodies, was presented in the Fall 2018 Live Web Finals at the Interactive Telecommunications Program (ITP).

So how do designers and their tools respond to this change? The first consideration will be finding intuitive ways to use the body as a control mechanism for existing web content — for example, hand gestures for navigation and selection (think Minority Report). But the body brings new challenges to designers. For example, aside from a few widely recognized signals, like waving and clapping, there isn’t a universally accepted mapping of body gestures to specific meanings, and there is debate as to whether such a mapping could ever exist.
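
As a thought experiment, one possible gesture-to-navigation mapping might look like the sketch below: a fast horizontal sweep of the right wrist advances to the next page. The keypoint format follows the PoseNet output used above; the speed threshold and the navigation handlers are arbitrary, hypothetical starting points rather than an accepted convention.

```javascript
// Hedged sketch: map a fast horizontal sweep of the right wrist to page navigation.
// Thresholds and handler names are illustrative only.

let lastWrist = null;
let lastTime = 0;

function onPose(keypoints) {
  const wrist = keypoints.find((k) => k.part === 'rightWrist');
  if (!wrist || wrist.score < 0.5) return; // ignore low-confidence detections

  const now = performance.now();
  if (lastWrist) {
    const dx = wrist.position.x - lastWrist.x; // horizontal pixels moved
    const dt = (now - lastTime) / 1000;        // seconds elapsed
    const speed = dx / dt;                     // pixels per second

    // Treat a fast rightward sweep as "next", a fast leftward sweep as "previous".
    if (speed > 800) goToNextPage();
    else if (speed < -800) goToPreviousPage();
  }
  lastWrist = { x: wrist.position.x, y: wrist.position.y };
  lastTime = now;
}

// Hypothetical navigation handlers for the page being controlled.
function goToNextPage() { console.log('next'); }
function goToPreviousPage() { console.log('previous'); }
```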

Nonetheless, there are places to look for guidance. Motion-capture practitioners have been digitizing body movement into skeletal rigs and body parts for several decades. Designers can also look to how humans interact with physical objects in the real world: How do we bring them closer or move them away for a better look? Another interesting starting place is signed languages, which effectively use spatial awareness in communication.

Such starting places will help designers build a body language for controlling computers — and websites, specifically — much as we do with today’s keyboard and touch inputs. But the body opens up a new realm of web interaction, as well. Imagine, for example, walking into a FaceTime call to play hide-and-go-seek with a niece or nephew, or shopping online by virtually trying on clothes draped for your body.

Such interactions move away from gesture control and into the realm of “presence,” similar to today’s audio or video streams. Whereas audio and video streams offer only auditory and visual renderings of a space, body presence requires spatial understanding. To picture the difference between traditional video and spatial media, imagine watching actors in a film versus being on set during the film shoot. When watching the film, the scene and the actors are flat and cropped. Actors and objects appear two-dimensional and can be told apart only by looking; the medium itself makes no distinction among them. Walking around the set, on the other hand, one would immediately notice the individual, multidimensional actors and how they are spatially related to each other and to the objects in the scene, including the boundaries of the space.

Skeleton functionality available from the TensorFlow.js PoseNet model. Pictured, from left to right: Dan Oved, Mohammad Rahmani, Aidan Nelson, Yining Shi, and Hayeon Hwang. Gif by Dan Oved.

Creating web experiences with real-time spatial understanding of people and their environments will require robust training data sets and models; designers will decide how users interface with such models, and vice versa. For example, how does body movement scale on screens and across networks? How do users know what physical interactions are possible and not possible on a given website (an unsolved challenge for voice interfaces)? What are the rules of engagement for two or more virtual bodies in the same space? How do designers account for different physical abilities, shapes, and sizes?
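
One way to approach the first of these questions (how body movement scales across screens and networks) is to store poses in resolution-independent coordinates and project them onto whatever surface a participant happens to be using. The sketch below is only one possible answer; the function names are hypothetical.

```javascript
// Sketch: normalize pixel-space keypoints to 0-1 coordinates relative to the camera
// frame, then project them onto a target screen or canvas of any size.

function normalizePose(keypoints, frameWidth, frameHeight) {
  return keypoints.map((k) => ({
    part: k.part,
    score: k.score,
    x: k.position.x / frameWidth,
    y: k.position.y / frameHeight,
  }));
}

function projectPose(normalizedKeypoints, targetWidth, targetHeight) {
  return normalizedKeypoints.map((k) => ({
    part: k.part,
    x: k.x * targetWidth,
    y: k.y * targetHeight,
  }));
}
```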

In his 2016 book The End of Average: How We Succeed in a World That Values Sameness, Todd Rose references Adolphe Quetelet’s 1835 mapping of physical and behavioral human traits to a normal curve. After mapping multiple characteristics — from chest circumference to arm length to height — Quetelet proposed that the person who fit all the measurements at the center of the curve was the “Average Man,” the perfect human. Anyone who deviated from the template (which is to say, every person) was flawed in some way. Quetelet saw the averages not as reference points but as targets.

Quetelet’s thinking was greatly celebrated at the time and has had sweeping influence across sectors, including design. Despite growing celebration of diversity, the majority of design remains focused on creating one canonical output that will work well enough for most people. Today we have one-size-fits-most keyboards, mice, webpages, and software, no matter that bodies, hands, visual and cognitive abilities, and technical skills come in many varieties.

As designers chart new territory in working with the body as input, they have an exciting opportunity not just to define a new interaction mechanism but to reenvision the understanding and treatment of human variability. Rather than treating averages as targets, what if designers focused on individuality and diversity? Doing so begins with robust datasets as well as inclusive design teams and research efforts. Success depends on vigilantly attending to bias and exclusion.

Designers have many questions to answer — none with simple solutions. Their tools must evolve to provide space to experiment in this new design realm. An initial consideration for toolmakers will be moving beyond today’s two-dimensional screen design and adding a third dimension of interaction for human gestures and movement. Additional prototyping functionality will also be a must. For example, today’s clicks, swipes, and key presses allow designers to prototype and test from the comfort of a desk chair. But designing for the body requires much more movement, making demo interactions essential for early-stage prototypes and testing. Imagine, for example, designing with libraries of body movements and gestures — navigation movements, three-dimensional brush strokes, dance or sports moves — just like today’s UI packs.
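
What might such a "gesture pack" look like in practice? Here is a speculative sketch: named gestures bundled with detector functions that a prototype can bind to interface actions, much as designers reuse icon or component packs today. Every name here (the pack, the bindings, the actions) is hypothetical, and the detectors reuse the PoseNet keypoint format from the earlier sketches.

```javascript
// Speculative sketch of a reusable "gesture pack" for prototyping body-driven UIs.

const navigationPack = {
  name: 'Basic navigation gestures',
  gestures: {
    // Each detector receives the latest keypoints and returns true when it fires.
    raiseRightHand: (keypoints) => {
      const wrist = keypoints.find((k) => k.part === 'rightWrist');
      const shoulder = keypoints.find((k) => k.part === 'rightShoulder');
      return wrist && shoulder && wrist.position.y < shoulder.position.y;
    },
    handsTogether: (keypoints) => {
      const left = keypoints.find((k) => k.part === 'leftWrist');
      const right = keypoints.find((k) => k.part === 'rightWrist');
      return left && right && Math.abs(left.position.x - right.position.x) < 50;
    },
  },
};

// A prototype binds gestures from the pack to interface actions.
const bindings = { raiseRightHand: () => openMenu(), handsTogether: () => select() };

function handlePose(keypoints) {
  for (const [gesture, detector] of Object.entries(navigationPack.gestures)) {
    if (detector(keypoints) && bindings[gesture]) bindings[gesture]();
  }
}

// Hypothetical interface actions for the prototype being tested.
function openMenu() { console.log('menu opened'); }
function select() { console.log('item selected'); }
```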

The body as input is bringing a new wave of possibilities to the web, and a host of new challenges to designers and design tool builders. Meeting the challenge with creativity and inclusivity will ensure the intelligent web is as dimensional and diverse as the human body itself.

Continue to part two of this article, focused on machine intelligence as a creative collaborator.

Originally published by the Machine Intelligence Design team at Adobe Design.
