Defining experience quality in large language models

A look at the design strategy behind Adobe Acrobat’s newest generative AI features

Digital image by Karan Singh

Reading is fundamental to gaining knowledge and sharing ideas. Through its Readability Initiative and its partnerships with educators, nonprofits, and technologists, Adobe is committed to democratizing knowledge and helping people read and comprehend.

Even before the popularity of large language models (LLMs), the Document Cloud Design team had been exploring concepts, such as allowing users to simplify a document’s vocabulary, that would democratize access to knowledge by making it easier for people to read. Then last year, with the increased presence of LLMs, our vision and commitment to reading and comprehension were accelerated and enhanced. Generative AI provided the technology for two new features in Adobe Acrobat and Adobe Acrobat Reader that unlock information in PDFs: AI Assistant answers questions about documents to help people understand the information, get insights quickly, and instantly generate content for deliverables. It’s supported by Generative summary, a capability of AI Assistant that builds a one-click summary of a document’s key sections.

AI Assistant in Adobe Acrobat.
Generative summary in Adobe Acrobat.


As we began developing our design strategy for the experience of these features, there was a lot to consider, particularly the sheer volume of information processed in Acrobat—in 2023, more than 400 billion documents were opened in the app. We focused on one question: How might generative AI features make it easier for people to comprehend and act on documents? Answering that question would help us define (and then evaluate) what quality means for an LLM experience.

Establish theses, involve research, and confirm user needs

As we began work on designing and defining the quality of the experience, we also focused on how we would help users discover the features, meaningfully interact with them, and want to return to use them. As our designers began to consider how these experiences would be used in the real world, we laid out a set of theses and began putting them to the test.

Adobe Design Research & Strategy began external concept testing to collect early feedback about how people were using the features and the value they saw in them, and to gather insights on key design elements.

Then, through an internal beta, we turned to employees who were using AI Assistant to read and navigate information in long PDFs and to synthesize and generate insights from documents for use in other kinds of communication, like reports and email. Generative summary was being used to quickly understand a document and decide whether reading further was necessary. As this internal audience helped us test our theses, they also helped us shape the features for Acrobat’s hundreds of millions of monthly active users.

Prompting an abstraction of the document in AI Assistant.


As part of that internal beta process, researchers also set up an Employee Advisory Board of nearly 80 employees representing a diverse set of roles, including finance, legal, human resources, marketing, strategy, research, and sales. Since every role has a unique set of use cases, needs, and behaviors that shaped how its members used the beta, those specific job functions helped us uncover usability issues that might arise for people trying to use the features for particular needs.

Define quality by building a framework

With Adobe Design Research & Strategy, we began to measure the quality of the features using a holistic assessment of how people perceived their overall experience, how well the features met their needs, and how closely they aligned with expectations. We also turned to the Adobe Firefly team to learn how they were measuring quality, and we kept a watchful eye on the market landscape so we could better understand the needs that generative AI fulfills and learn which tools were most valuable to people.

That information, along with the qualitative research, led to a quality framework that would keep us honest about whether we were meeting people's needs. It was based on three parameters.

Use the quality framework to drive decisions and meet user needs

Informed by research and landscape analysis, the framework proved especially helpful in aligning cross-functional teams on the dimensions of quality we were aiming for.

Adobe Design Research & Strategy used it to benchmark and measure the quality of the experience in interviews and surveys. Our designers used it to evaluate the experience through dogfooding (testing the features ourselves through real-world usage) and end-to-end audits, to prioritize the design bugs we filed for engineering teams, and to frame landscape analysis so we could better understand how Acrobat compared to other products. All these activities ultimately contributed to a star rating that informed our release readiness. The framework also produced a set of heuristics that simplified decision-making, because it made it easy for everyone to understand and align on the why behind the experience improvements we wanted to make. We could simply say “XYZ capability is needed to improve usability,” and that would help drive the backlog and roadmap for features. One example of how that changed parts of the experience is the upfront overview in AI Assistant, shown below.

The upfront overview in AI Assistant.

The road ahead for generative AI in Acrobat and Acrobat Reader

This public beta is Acrobat’s first foray into using generative AI to fulfill Adobe’s commitment to the democratization of knowledge. As the design team kept its focus on designing a quality experience, our internal customers offered their insights, preferences, and needs about how these features should work. We have a deep and rich feature roadmap ahead, but as we go, we’ll continue to focus on making the experience a quality one. Try the new generative AI features in Acrobat.
