The Future of Design, Pt. 2: Machines as Collaborators

Machine learning is paving the way for the internet to be a source of creativity and collaboration

Lisa Jamhoury


Photo: Gunther Kleinert/EyeEm/Getty Images

This is the second of two articles about designing for machine learning on the web. This article addresses machine intelligence as a creative collaborator; the first discusses the body as input. This series of articles was originally published by the Machine Intelligence Design team at Adobe Design.

Among the final presentations I recently saw for Live Web, a graduate course at New York University’s Interactive Telecommunications Program (ITP), machine learning on the web played an important role. Two underlying themes stood out to me: the use of the body as an input device and machine intelligence as a creative participant in interaction.

These two phenomena are going to reshape the way people interact across networks and change the work of designers and their tools — and thus are important issues for designers to consider.

Websites: Our new creative partners

Today, the internet relays information. Whether a user is looking for a fact, an opinion, or inspiration, they can usually count on the internet to find something of interest provided by another person.

While we’ve grown accustomed to the internet serving as an intermediary between humans, networked machine intelligence is paving the way for a website to become a source itself. For illustration, consider a project I saw at ITP: “Into My Eye” by Tong Wu. In this piece, two or more users enter keywords on a website, and the site returns a line of poetry. The users can repeat this process, eventually creating a poem. Watching the process, it becomes clear that the computer isn’t just relaying information between the users but is a live participant in the conversation.
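To make the interaction concrete, here is a toy sketch of that call-and-response loop. This is not Wu’s actual implementation; the keyword-to-line lookup table stands in for whatever generative model the real piece uses, and all names are invented for illustration.

```python
# Toy sketch (not Wu's implementation) of the "Into My Eye" interaction:
# each user submits a keyword, the "site" answers with a line of poetry,
# and the lines accumulate into a shared poem.

# Hypothetical keyword-to-line model; a real version would use a
# generative language model rather than a lookup table.
LINES = {
    "blue": "To me, blue is flower.",
    "rain": "Rain remembers the shape of the roof.",
}

def respond(keyword: str) -> str:
    """Return the machine's contribution for one user's keyword."""
    return LINES.get(keyword.lower(), f"I have no feeling for {keyword} yet.")

def build_poem(keywords: list[str]) -> str:
    """Accumulate the machine's responses to several users into one poem."""
    return "\n".join(respond(k) for k in keywords)

poem = build_poem(["blue", "rain"])
```

Even in this trivial form, the machine is contributing lines rather than retrieving them from other people, which is the shift the project points at.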

Tong Wu is playfully experimenting with the current limitations of a machine’s creative intelligence. Creative algorithms, however, will become more sophisticated, and human trust in them will evolve. Eventually, the intelligent websites we consult regularly will evolve from content aggregators, like Google and Facebook, to subjective sources in their own right, like a website that generates original photorealistic (or not so realistic) images or one that helps write and research a Medium post (help, please!). As a result, designers will increasingly design for subjectivity.

Screenshot from “Into My Eye” by Tong Wu.

Let’s compare a weather website with the poetry example. Imagine a user wants to know today’s weather. She goes to a website or app and gets a response like, “Today’s weather is 40 degrees Fahrenheit and sunny.” Now imagine she visits Wu’s website to help her write a poem. She enters the prompt “blue,” and the website responds, “To me, blue is flower.”


In the weather example, the declarative response is apt because the question has an objective answer. In the poetry example, the response’s appropriateness, or “correctness,” depends on the user’s perspective. In this case, it would be the designer’s responsibility to decide how, and how much, to relay the machine’s confidence in its answer. Does the program offer reasons for associating blue with flower, for example? Does it give other options? Does it only give answers, without explanations and feedback? Offering the user more choice and explanation would certainly come off as more collaborative, but a designer may have good reason to create an intelligent personality that displays more confidence and less doubt.
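One way to make those design decisions concrete is to treat them as parameters of the response itself: a confidence threshold and a flag for whether to surface alternatives. A minimal sketch, with the threshold value and field names invented for illustration:

```python
# Minimal sketch of designing "how much confidence to relay."
# The 0.8 threshold and all parameter names are invented.

def render_response(answer: str, confidence: float,
                    alternatives: list[str], explain: bool) -> str:
    """Format a subjective answer according to designed parameters."""
    if confidence >= 0.8:
        text = answer  # confident persona: state the answer plainly
    else:
        text = f"Perhaps: {answer}"  # hedge when the model is unsure
    if explain and alternatives:
        text += " (other options: " + ", ".join(alternatives) + ")"
    return text

confident = render_response("To me, blue is flower.", 0.9, [], explain=False)
tentative = render_response("To me, blue is flower.", 0.4,
                            ["blue is dusk", "blue is salt"], explain=True)
```

The same underlying answer reads as assertive or collaborative depending entirely on values a designer chose ahead of time.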

Today’s search functionality is a good starting place for understanding how design can accommodate subjective ideas. But rather than showing previously made content that matches search queries (or answers to factual questions based on curated external data, as with Wolfram|Alpha), future intelligent websites will create novel work and solutions on the fly, refining them in the moment based on user feedback.

Moving one step further, remember that although humans have a full plate when considering ideas contributed by five to 10 people simultaneously, machines can process inputs much faster. Imagine, for example, an intelligent website negotiating one creative project with 10,000 simultaneous human co-creators. How does it prioritize one user’s content over another’s? Can it intelligently parse and incorporate all ideas? Does it rank them?
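One naive answer to the prioritization question: rank ideas by how many co-creators independently propose them. A real system would weigh far more than raw frequency, but even this sketch shows that ranking thousands of inputs is mechanically cheap for a machine.

```python
from collections import Counter

# Naive prioritization sketch: distinct ideas ordered by how many
# co-creators proposed them. Frequency alone is a stand-in for
# whatever richer weighting a real system would use.

def rank_ideas(contributions: list[str]) -> list[str]:
    """Return distinct ideas ordered by how often they were proposed."""
    counts = Counter(c.strip().lower() for c in contributions)
    return [idea for idea, _ in counts.most_common()]

ranked = rank_ideas(["blue", "Rain", "blue", "fog", "rain", "blue"])
```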

NVIDIA’s GauGAN demo shows how simple sketches are turned into photorealistic images.

The above questions highlight a common trait of intelligent systems: Algorithms often create answers in real time, so it isn’t possible to design responses for every situation. Thus, designers will increasingly need to create systems to guide a website’s or application’s behavior, signaling a continued shift away from design’s focus on one canonical graphic output. Whereas today, designers and engineers create responsive layouts to adjust screen design for different mobile and desktop devices, looking forward, designers will also create pre-defined voice, tone, confidence, and other behavioral parameters.
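By analogy with responsive breakpoints, such behavioral parameters could be expressed as a small configuration the system consults at runtime. A speculative sketch of what a designer might author; every name here is invented:

```python
from dataclasses import dataclass

# Speculative sketch: behavioral parameters defined once by a designer,
# then consulted by the intelligent system at runtime, much as a
# responsive layout consults breakpoints. All names are invented.

@dataclass
class BehaviorSpec:
    voice: str              # e.g. "first-person"
    tone: str               # e.g. "playful"
    min_confidence: float   # below this, the system hedges or asks back
    offer_alternatives: bool

POETRY_SITE = BehaviorSpec(
    voice="first-person",
    tone="playful",
    min_confidence=0.5,
    offer_alternatives=True,
)

def should_hedge(spec: BehaviorSpec, confidence: float) -> bool:
    """Decide, from the designed parameters, whether to hedge a reply."""
    return confidence < spec.min_confidence
```

The designer never scripts an individual response; they shape the space of responses the system is allowed to produce.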

Design tools must respond with robust systems-design functionality. Just as today’s tools are beginning to let designers create systems for responsive layouts, future tools will let them define behavioral parameters that guide interactions with intelligent systems. To balance a designer’s workload, tools will include increasingly intelligent media-creation functionality, minimizing “grunt work” and aiding iteration and variation.

Design tools for intelligent websites and apps will also need robust prototyping functionality. Today, many designers rely on Wizard of Oz testing to prototype intelligent functionality because it’s difficult to interact easily with such algorithms. This type of testing will become increasingly impractical as intelligence grows more ubiquitous, and harder to fake convincingly as algorithms advance. Access to open source and proprietary algorithms within tools will therefore be indispensable to designers as they flesh out their ideas.

As machine learning rapidly changes the web, and the design discipline with it, designers will continue to guide businesses and end users through technology’s pitfalls and vistas. But the tools they use and the skills required to master them will shift. The new demands of these evolving phenomena will require less mastery of technical skill and more of the intangible qualities that have historically set great designers apart: the ability to think differently, see with a discerning eye, and care as much for each component part as for the whole. If they are successful, rather than one neatly displayed final output, designers will create intelligent systems that surprise, delight, confuse, and hopefully inspire even their own creators.

If you haven’t already, please read part one, about the body as input.




Lisa Jamhoury
Artist & researcher working with computation and the body • Teaching @ITP_NYU • Formerly Machine Intel @Adobe Design and Digital Lead @BlueChalkMedia