In the future, generative UI will dynamically create customized user interfaces in real time. This shift will force an outcome-oriented design approach, where designers prioritize user goals and define constraints for the AI to operate within, rather than designing discrete interface elements.

Defining Generative UI

Within the past year, the design community has begun discussing how generative user interfaces could impact our field. 

A generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context.

Currently, interfaces must be designed to satisfy as many people as possible. Any experienced design professional knows the major downside of this approach: you never make anyone perfectly happy. Personalization and customization can help, but only in minor ways.

While the timeline for this change is still unclear, we anticipate that genUI will allow for highly personalized, tailor-made interfaces that suit the needs of each individual.

[Figure: Today, everyone sees the same interface; in a future with genUI, each user sees a personalized interface.] GenUI offers the potential to shift from single-experience design to personalized experiences for each individual.

Generative UI vs. AI-Assisted Design

There is an important distinction between generative UI (as we’ve defined it) and using generative AI as a tool throughout the design process.

 

|  | Generative UI | AI-Assisted Design |
| --- | --- | --- |
| Who benefits? | End users | Designers and product teams |
| What is the output? | A dynamic, custom interface generated in real time for a specific end user | AI-generated UI designs and code |
| What is the impact? | Every end user interacts with an interface built just for them and their needs in that moment. | Product teams can significantly accelerate the ideation, design, and implementation of interfaces. |

AI-assisted design tools are currently growing in popularity because they speed up the design and prototyping process. For example: 

  • Uizard converts text prompts and hand-drawn sketches into mockups.
  • Canonic can create AI-generated full-stack applications without requiring any coding knowledge.
  • v0 by Vercel can turn text prompts into simple coded prototypes.

While the tools and platforms for AI-assisted design are still fairly rudimentary, we do believe they will eventually vastly accelerate the design process, which is exciting in its own right. However, generative UI will have an even greater impact on our field in the long term.

Example Concept for Future GenUI: Browsing Flights

A user named Alex opens her Delta Air Lines app to book a flight to Chicago for a client visit. She’s a frequent flyer with Delta.

Alex has dyslexia, which is documented in her user profile. Her personalized Delta app uses a special font and color contrast to make the content easier for her to read. 

Speaking aloud to the Delta AI agent, Alex requests to see flights to Chicago from May 6 to 10. Since she hasn’t specified a different origin airport, the app assumes she’s leaving from her home airport in Miami and begins searching for flight options.

As the system pulls up flight options, it also checks for any weather or upcoming events that could impact Alex’s trip. A warning message appears on the screen, alerting Alex that there’s a major event over those dates that will make her travel more expensive. The system advises Alex to book her flight and hotel as early as possible.

The presentation of Alex’s flight options is entirely determined by her past behavior and preferences. She cares most about cost and travel time, so those are displayed more prominently in her flight results. The results are ranked based on those preferences.

The first option in the flight list would fit her needs best, but there’s a warning message next to it — no window seats left. Alex always prefers a window seat, so she moves on to the next option.

Alex never takes red-eye flights, so those are collapsed and placed at the very bottom of the list.
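
To make the mechanics concrete, here is a minimal TypeScript sketch of the kind of preference-driven ranking such a system might perform behind the scenes. Everything in it is hypothetical — the FlightOption and UserPreferences shapes, the field names, and the simple weighting scheme are our invention, not any airline’s actual system.

```typescript
// Hypothetical data shapes -- a real genUI system would derive these
// weights from the user's profile and behavioral history.
interface FlightOption {
  id: string;
  price: number;           // total fare in USD
  durationMin: number;     // total travel time in minutes
  isRedEye: boolean;
  windowSeatAvailable: boolean;
}

interface UserPreferences {
  priceWeight: number;     // how strongly the user weighs cost
  durationWeight: number;  // how strongly the user weighs travel time
  avoidsRedEyes: boolean;
}

// Lower score = better fit for this user.
function score(f: FlightOption, p: UserPreferences): number {
  return p.priceWeight * f.price + p.durationWeight * f.durationMin;
}

// Rank flights by preference fit, sinking red-eyes to the bottom for
// users (like Alex) who never book them.
function rankFlights(flights: FlightOption[], p: UserPreferences): FlightOption[] {
  return [...flights].sort((a, b) => {
    if (p.avoidsRedEyes && a.isRedEye !== b.isRedEye) {
      return a.isRedEye ? 1 : -1; // red-eye flights sort last
    }
    return score(a, p) - score(b, p);
  });
}
```

In practice, the weights would be learned continuously rather than set by hand, and the same preference model would determine which details (price, travel time) the generated interface displays most prominently.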

This individual example may be plausible without genUI, but not at scale. GenUI makes it feasible for Delta to deliver an equally personalized experience to each of its 190 million yearly flyers. 

A Shift to Outcome-Oriented Design

Generative AI systems have established a new interaction paradigm: intent-based outcome specification. This paradigm is already shifting how we think about digital design.

UX design has traditionally involved a heavy focus on the interface. While interfaces will always be important to UX design, AI-powered automation and generative UI will lead to a rise in outcome-oriented design.

Outcome-oriented design involves orchestrating experience design with a greater focus on user goals and final outcomes, while strategically automating aspects of interaction and interface design.

The Outcome-Oriented Designer

Because AI systems can shortcut the information-seeking process, the human design of microinteractions will become substantially less important. Microinteractions will either be nonexistent (because the AI system makes them unnecessary) or dynamically designed through generative UI to fit the user’s exact context and needs.

Consider the earlier flight-booking example. A designer working on a traditional, single-experience airline website would need to focus on designing specific components (filters, search fields, results pages) in a way that is most usable to the majority of customers.

With genUI, that same designer may have the freedom to focus on the vast array of details and facets that can shape a specific experience (outcome). As a field, we will shift from designing for the average to designing for the individual. For example, designers will identify different sets of requirements (we’ve been thinking about these as guardrails that the AI must abide by when generating an interface) for different types of users. Just as we now turn certain features or interface elements on for certain users, we will give the AI constraints that it must satisfy when generating an interface.
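
As an illustration of what such a guardrail might look like as a design artifact, here is a speculative TypeScript sketch. The GuardrailSpec shape and every field in it are invented for illustration; no such API exists today.

```typescript
// Hypothetical guardrail specification a designer might author for a
// genUI system. The field names are illustrative, not a real API.
interface GuardrailSpec {
  appliesTo: string;           // user segment this guardrail covers
  maxOptionsShown: number;     // cap on choices, to limit cognitive load
  requiredElements: string[];  // components the AI must always render
  forbiddenPatterns: string[]; // patterns the AI may never generate
}

const firstTimeBookerGuardrails: GuardrailSpec = {
  appliesTo: "first-time-booker",
  maxOptionsShown: 5,
  requiredElements: ["price-breakdown", "cancellation-policy"],
  forbiddenPatterns: ["auto-applied-upsells", "countdown-timers"],
};
```

The point is not the syntax but the shift: the designer specifies what must hold true for a segment of users, and the AI decides how to satisfy those constraints in each generated interface.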

Eventually, designers will need to change the way they work.

We’ll need to shift from designing interfaces to designing outcomes. Especially for those of us who have been working in the field for years, this will be a challenging change. While these shifts will be significant, many of our fundamental skill sets will be more important than ever: user-centric problem solving, critical thinking, curiosity, and a holistic point of view.

Humans will need to provide guidance and constraints for generative UI. We must guide generative UIs, even if we aren’t making minute decisions about individual components. This will make our job more complex and less tangible. There will be more variables and potential outputs to consider, ultimately making us designers of parameters and constraints. For example, we might prioritize different types of user actions or categorize information into tiers (must show, should show, never show). We may not need to specify individual design details, but we’ll need to help a genUI system understand our user and business goals.
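
Those “must show / should show / never show” tiers could be expressed as a small set of parameters handed to the genUI system. The sketch below is hypothetical TypeScript; all of the names are invented for illustration.

```typescript
// Hypothetical content-priority parameters for a flight-results screen,
// mirroring the "must show / should show / never show" tiers above.
type Priority = "mustShow" | "shouldShow" | "neverShow";

const flightResultsContentRules: Record<string, Priority> = {
  price: "mustShow",                      // Alex's top concern
  totalTravelTime: "mustShow",
  seatAvailabilityWarnings: "shouldShow", // e.g., "no window seats left"
  promotionalBanners: "neverShow",
};
```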

Our understanding of personas and customer journeys will change. With generative UI, we’ll have the ability to accommodate a wider variety of user profiles, needs, and experiences. Our design artifacts and documentation will likely become much more complex, but AI-powered research and design tools will help us.

Research will become even more vital. Studying users will be crucial as traditional design principles and assumptions are challenged and user behavior shifts. Testing will ensure that dynamically generated interfaces effectively meet diverse user needs and preferences.

Challenges of GenUI

We’re hopeful that, in the long run, genUI will have a positive impact on digital experiences. For example, the prospect of UIs tailored to the individual has immense potential to improve accessibility and inclusivity in design.

However, in the short term, we anticipate quite a few problems and challenges.

Generative AI’s problems are GenUI’s problems. Current issues with generative AI models (like hallucinations and biases) will carry over into generative UIs. Additionally, current software and hardware limitations will slow genUI down. For example, immense processing power will be required to generate each unique interface, live, for billions of users at any given moment. If this processing power is local to the device (as some predict it will be), then it will be years, maybe decades, before the majority of users around the world have hardware that can deliver genUIs. Thus, it’s quite unclear when genUIs will become widely available.

Significant contextual and intent information will be needed to personalize experiences. AI may help us better analyze and utilize the user data we currently have. However, to produce the example flight-booking experience above, a genUI system would need a deep understanding of the individual user. This will involve substantial risks to individual privacy and security.

Constantly changing UIs will cause usability problems. Much of users’ understanding of modern web interfaces is rooted in design standards (for example, logos usually appear in the top left). The more you use a website, the more familiar (and thus efficient) you become. If genUI alters the interface based on your needs, you could be shown a different UI every time you use a website. This constant relearning of the interface could cause frustration, especially in the beginning, as users transition away from established conventions. Designers will need to figure out how to balance the gains from a completely customized experience against the losses incurred by the lack of UI consistency and predictability.
