The Post-Mac Interface

20 years later, has the Anti-Mac Interface unseated the original Macintosh design principles?

adam baker
Jul 28, 2015

“We reverse all of the core design principles behind the Macintosh human interface guidelines to arrive at the characteristics of the Internet desktop.”

— Don Gentner and Jakob Nielsen, The Anti-Mac Interface

In 1996 Don Gentner and Jakob Nielsen published a thought experiment, The Anti-Mac Interface. It’s worth a read. G and N propose that by violating the design principles of the entrenched Mac desktop interface, more powerful interfaces could exceed the aging model and define the “Internet desktop.”

It’s been almost 20 years since the Anti-Mac design principles were proposed, and almost 30 since the original Apple Human Interface Guidelines were published. Did the Anti-Mac principles supersede those of the Mac?

Here I reflect on the Mac design principles of 1986, the Anti-Mac design principles of 1996, and what I observe as apparent (and cheekily named) Post-Mac design principles of 2016… er, 2015.

Hello, Mac.

In 1984, this little guy showed the wider world how a graphical user interface (GUI) could finally realize computers “for the rest of us.” Supported by a coherent set of mutually reinforcing human interface design principles — and shipping with an operating system and apps that thoroughly expressed them — the Macintosh desktop interface defined GUI for personal computers. The Mac human interface principles codified design strategies for standalone, mouse-and-keyboard computers with window-based interfaces.

The System 1.0 desktop. http://applemuseum.bott.org/sections/os.html

The Mac desktop interface was the fertile ground from which sprouted innovations like Photoshop and desktop publishing. But it was a pre-networked, self-contained universe. By 1996, the Internet was ascendant — broadband was coming, cell phones were going digital, SMS was just around the corner (in the U.S.), Palm Pilot could put a computer in your pocket, and it was worth asking whether the dominant Mac user interface (UI) still made sense.

Hello, Anti-Mac.

The Anti-Mac design principles proposed by G and N derive from violation of the Mac design principles. Where the Mac principles emphasize directness, user control, and real-world metaphor, the Anti-Mac principles emphasize indirectness, divestment of control, and tighter connection to software mechanics. These design principles stress “the central role of language, a richer internal representation of objects, a more expressive interface, expert users, [and] shared control.” In the Anti-Mac world, you don’t do as much work — “information comes to you” — and your computing environment is connected and constantly changing. You do less pointing-and-clicking; instead you tell the computer what you want.

“The Anti-Mac principles outlined here are optimized for the category of users and data that we believe will be dominant in the future: people with extensive computer experience who want to manipulate huge numbers of complex information objects while being connected to a network shared by immense numbers of other users and computers.”

— Don Gentner and Jakob Nielsen, The Anti-Mac Interface

G and N provide a rationale for violating the Mac design principles — primarily that the Internet generation would produce expert computer users, powerful computers with “displays [approaching] the size of a desk,” and that the Mac design principles could impede those users and handicap human interfaces for the Internet era.

What happened to the future?

It’s hard to predict the future — see Maciej Cegłowski’s excellent Web Design: The First 100 Years. G and N’s rationale supports the Anti-Mac principles, but the trends that actually emerged look surprisingly different. Not all the Anti-Mac principles apply.

The complementary Post-Mac interface principles reflect — to paraphrase Marshall McLuhan — the ways in which technology has changed the scale, pace, and pattern of human affairs. Chief among the events and trends are:

  • The merger of computers and routine, day-to-day living.
  • The decreasing importance of the computer itself.
  • Less time per app, divided among a flourishing bouquet of apps.
  • More magic apps.

The Post-Mac design principles.

Some of the Mac interface design principles persist. In other cases, the Anti-Mac principles have significantly changed interface design. And in others, I argue that the work of new principles can be seen.

Four Post-Mac technology trends.

Technology and routine life merged. Smartphones and other always-connected devices and services typify the first trend. In 1996, computers were useful, but usually separate from the routines of life. This kind of computer lived on its own desk, often in a special room, for special purposes. A huge change since 1986 (and 1996) is that more of our everyday life — driving, parenting, shopping, cooking, communicating — is inextricably woven with our “computers.” In fact, the line between a computer and some other technology — car, watch, television, phone — has blurred nearly to the point of disappearing.

A Mac, circa 1996. Looks a lot like a Mac, circa 1986.

The computer itself doesn’t much matter anymore. We care less about — and in many ways, demand less from — the computers themselves. Sure, we want the latest iPhone or whatever, but in fact the screen size and processor just don’t matter in the way that G and N thought they might. Instead of infinitely more power and larger screens, most of the way that technology has gotten into our daily routines is by being power- and size-constrained. Most of us don’t even need all the computing power in our pocket. We’ve gotten to a place of sufficiency, where just about any computer or smartphone is good enough for what we want it to do.

To be sure, new technologies — GPS, motion sensors, bandwidth, on-chip video-processing, etc. — continue to be invented and push the envelope of what’s possible, creating whole new products and UIs. But our sight and hearing are themselves constrained, so even screen resolution and audio quality (and necessary bandwidth) have useful upper limits. And we don’t generally need to know our location to a precision greater than a foot or two, so GPS precision has useful upper limits. While we’ll see improvement in technology, it plays a different role now. Much of the processing power we “use” is in the cloud, not in our physical possession.

Look at all those apps! Actually, look at all that mail…

The bouquet of apps. We use a lot of apps for a few minutes here or there, but don’t spend as much time becoming expert at them. (Professional exceptions apply.) And many of those apps teach us nothing more about how the computer itself works. When I started messing around with computers, there was no Internet to distract me and no more than a handful of programs to use — programs more closely tied to the underlying computer. So I got to know the computer very, very well. But aside from nerds like me, the generation of expert users (i.e. people highly knowledgeable about how computers work) that G and N expected to grow up with computers never materialized. People instead got used to the multitude of straightforward apps that fit into their life. They’re expert users of the apps that coalesce with their lives, not of computers.

Apps that do more for us — magic apps. My early computer experiences — with a Macintosh SE around 1990 — were with apps that asked me to do everything. From HyperCard to Microsoft Word to MacPaint, I wasn’t sitting back and being entertained; I was actively making the computer do stuff. That’s all non-computer people (i.e. the “rest of us”) could do with most computer software for a long time. But that changed with the advent of broadband, the cloud, and rich multimedia technologies. Now we have a lot of magic apps that do most of the work for us. Google. Lyft. Netflix. Spotify. TiVo and its children. Instagram. Countless apps I’ve never heard of. People still create — in fact, more than ever — and play effort-intensive games. But the preponderance of on-trend apps, and the consequent Post-Mac design principles, reflect these “magic” applications. Instead of UIs to empower us to do more with apps, UIs empower apps to do more for us.

The ultimate “magic” app.

The Post-Mac world features non-expert users, good-enough devices, and a bouquet of purpose-driven magical apps that mesh with those people’s daily lives.

Let’s look at each of the Mac design principles in turn, and their complementary Anti-Mac and Post-Mac variations.

Metaphors

Anti-Mac principle Reality | Post-Mac principle Simulacra

Metaphors play an important role in user interface. They bootstrap understanding by connecting the UI we’re using to an object, an experience, or an idea that we’re familiar with from some other place — often “real life,” but sometimes other software. iTunes evokes a car stereo; Evernote evokes a planner or notebook; Amazon loosely evokes shopping in a department store. Other online shopping sites evoke Amazon.

Les Mis. Theatre Aspen. Not Paris. http://www.theatreaspen.org/news/aspen-daily-news-epic-in-the-tent/

Metaphors in user interface are like sets in theatre. They convince us to believe that the thing we’re looking at is like something else. A metaphor changes our expectations; they become colored by it. When software says it has a “library,” I’m convinced that it has some organized collection of stuff. Of course, I don’t expect to be borrowing from it like at the public library. And I don’t expect iTunes to be as limited as a car stereo. My Amazon shopping cart is infinitely big. The metaphor is just a bridge; lots of great designs start with a metaphor to aid understanding, and extend it to realize the capacity of people and technology.

Benefits. The Mac was lauded for its ease of use partly because it relied on easy-to-learn metaphors instead of forcing people to learn how the computer worked. There was a desktop and icons for folders and files, instead of a command prompt and directories and files. You could click on a few files, and drag them into a folder, and put that folder somewhere you wanted on the desktop. Doing that with a command prompt was harder and not so easily explained. In this way, metaphors kick-start learning.

In 1984 especially, this use of metaphor provided a familiar veneer to “wrap” around idiomatic computer systems that were unfamiliar.

Drawbacks. There was a brief time at Google when lots of folks were agog over skeuomorphism — literally representing a source metaphor visually in the user interface. (Think of the iBooks bookshelf faithfully recreated on your iPhone, wood grain and all.) Infatuation with skeuomorphic interfaces comes in waves — just as they were popular recently because of a few shining examples from Apple, they were also all the rage in the mid-1990s. G and N wield the skeuomorphic Magic Cap interface in a legitimate critique of overly strict use of metaphors.

The literal, skeuomorphic desktop of Magic Cap. Do you compose an email by opening the drawer with the envelope on it, clicking the Out box, or picking up the postcard?

Reliance on metaphor can cause problems. First, G and N point out that there are often mismatches between the source (i.e. real-world metaphor) and target (i.e. software interpretation) domains. The real-world library demands that I have a library card, and I may only borrow items from it. I own my iTunes library, don’t need a membership card, and I don’t exactly “borrow” items from it. Features may be missing from either the source or target domain, or there may be things that are present in both domains but work differently. These mismatches can be sources of misunderstanding and confusion. Think of the ways in which Google Docs or Microsoft Word are different from typewriters, or the ways in which Instagram is different from a Polaroid camera.

Second, strict use of metaphors — especially in a skeuomorphic way — can introduce clumsy and unnecessary indirectness to point-and-click interaction. For example, imagine the clumsy interaction of having to open a virtual drawer on a virtual desk, to pull out a calculator, to put it on the desk, then hit the “On” button, all before you could calculate a tip.

Anti-Mac: Reality

G and N propose that “we need to develop new interface paradigms based on the structure of computer systems and the tasks users really have to perform, rather than paradigms that enshrine outmoded technology.” They provocatively suggest that interfaces should be “based on” the structure of the computer system itself — how the computer works under the hood. In general, modeling on tasks users really have to perform makes sense. But interfaces based on the structure of the computer system make less sense these days.

Alan Cooper, a noted voice on interface design, suggests three conceptual “models” that can be used to describe a given piece of software: An implementation model — how the thing actually works; a mental model — how we think that thing works; and a manifest or represented model — how the thing presents itself to us. G and N suggest that products should present themselves to us in a way that more closely mirrors the implementation model. They premise their Anti-Mac principle on this forecast: “the next generation of users will make their learning investments with computers, and it is counterproductive to give them interfaces based on awkward imitations of obsolete technologies.” Yes, generations since 1996 grew up with computers, but they don’t know more about how computers work — because they grew up with magic apps on good enough devices that didn’t require them to learn the implementation models.

Post-Mac: Simulacra

As the technologies we use day-to-day have become more intricately connected with our real and social lives, a newish model has emerged. Metaphors still abound, and usefully so. But interfaces aren’t modeled on the Anti-Mac design principle of reality, of being more closely tied to the mechanics of underlying technology. Instead, we find more of what I term simulacra.

If I were to write a statement about this principle, it would be that much of interface design today approximates systems or relationships in the real world, even if only crudely. Facebook’s model is a simulacrum of my real social relationships, and a Facebook Event is a kind of simulacrum-extension of a real-world social event. Their product relies heavily on approximation of real life, though their user interface itself employs mainly tried-and-true standard controls and desktop and Web user interface elements. (Read on to see how this principle is synergistic with the “representation of meaning” principle.)

Ironically, since computers have become connected devices, they have become more personal, because they now reflect deeper integration with us as people. The personal computer of 1986 or 1996 is actually less personal than what we use today. The principle of simulacrum is a reflection of technology’s new capacity to “be aware” of our selves, relationships, and many parts of our lives.

In 1984, metaphors (the desktop) were used to make the unfamiliar technology (file systems) accessible. In 1996, G and N propose interfaces that reflect the underlying technology and its capacities. In 2015, the more dominant model is obscuring the technology altogether, and centering the software around simulacra of everyday life — with a healthy dose of metaphor, still.

Many of the things we want technology to do simply don’t relate to how computers work — things like having a conversation with friends, buying a pair of shoes, or driving somewhere (think of GPS directions). It would be a mistake to assume that we just want more powerful user interfaces to manipulate bits on a computer. Some software is about that, but not all.

(As an aside, it’s worth noting that a good deal of what technology companies do today is “model” our behavior as consumers, along with all manner of systems and networks and phenomena. Simulacra — even if they’re reductionist or flawed — are on trend.)

Direct manipulation

Anti-Mac principle Delegation | Post-Mac principle Both

According to direct manipulation, users see visual representations of objects on screen — like files, messages, videos, business cards, pages, words, and so forth — and act on and interact “physically” with those representations. In other words, you use the interactive vocabulary — point, click, drag, type — on those representations. Drag and drop is a classic example of direct manipulation in practice. It’s hard to imagine a graphic designer composing a magazine layout without direct manipulation.

Benefits. Like pushing buttons, picking up objects, or otherwise manipulating things in the real world, you can see the thing you’re working with and what happens when you interact with it.

Drawbacks. Some of the richest actions we might want a computer to take on our behalf would be exceedingly difficult to describe by direct manipulation. Imagine this graphics app task: You want to duplicate a star one thousand times, rotating each copy by a slightly different amount around an axis at the tippy-top point of the star, in a spiral, out from the middle, growing each star by a slight and random amount each time. There’s almost no way to do that efficiently with direct manipulation. Scripting — describing and delegating — the task is easier.
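To make the contrast concrete, here is a rough sketch in Python of what “describing” that task might look like. The star itself and any drawing calls are hypothetical; only the placement math is real, and in an actual graphics app each computed entry would drive a duplicate-and-transform operation.

```python
import math
import random

# A hypothetical scripted version of the star task: compute where each of the
# 1,000 copies goes, instead of placing them by hand with the mouse.
NUM_COPIES = 1000
pivot = (0.0, 100.0)   # assumed axis at the star's topmost point
scale = 1.0
copies = []

for i in range(NUM_COPIES):
    angle = i * (360.0 / NUM_COPIES)                 # rotate each copy a little more
    radius = i * 0.5                                 # spiral outward from the middle
    scale *= 1.0 + random.uniform(0.0, 0.002)        # grow by a slight, random amount
    x = pivot[0] + radius * math.cos(math.radians(angle))
    y = pivot[1] + radius * math.sin(math.radians(angle))
    copies.append({"position": (x, y), "rotation": angle, "scale": scale})

# A real app would now duplicate the star once per entry; a dozen lines of
# description replace a thousand manual edits.
print(len(copies), copies[-1])
```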

G and N present another example: Software installation. Some software installation involves moving thousands of files into all kinds of special places; that’s a lot of work with direct manipulation. So they point out that software installers had become mainstream by 1996. Amusingly, in the past few years, software installation has often become as simple as dragging and dropping a single icon, especially on contemporary Macs. But their point still stands: Dragging and dropping thousands of files to all the right places would be awfully tedious.

Not much. (Yet.)

Anti-Mac: Delegation

G and N’s alternate principle is all about telling the computer — ideally with a non-natural but relatively accessible language — what to do. That works well for the graphics software case, in which it’s easier to describe what we want with precise language. And the installer case — effectively, when we click “Install” we’re bypassing direct manipulation and “telling” the computer to do something for us. The ideal case is the computer from Star Trek: The Next Generation. You can ask her virtually anything, and she’ll do the work for you.

However, G and N acknowledge a few barriers to true delegation. Foremost among the barriers is that true natural language processing requires true artificial intelligence (AI). We’re nowhere near that yet. Command lines — like the widely used Unix command line — are powerful, but depend on strict, idiosyncratic vocabulary and syntax and require a great deal of learning and practice. They mention scripting languages — which did flourish briefly around 1996 — as potential solutions, as well as interfaces similar to text-based games that could “negotiate” with the user to arrive at a mutually understandable instruction for the computer.

The proposed design solutions rely on (at least) two premises: That people need computers to do complicated things that are hard to describe with direct manipulation, and that they’re willing to learn non-natural language-based “scripting-like” solutions. It turns out that people don’t want to learn those languages (and, implicitly, more about how the computer works); they want the computers to do more for them — ironically punting the full resolution of delegation to the AI-complete future. Luckily, a few magic apps are simple enough for us to have something like delegation today — for instance, I can basically ask Google Maps to give me driving directions.

Post-Mac: Direct UIs for delegated services

(Delegation for products and services, direct manipulation for UI.)

This is an interesting case because it illustrates a gap between UI design and product or service (or, if you want, experience) design. In a sense, products perform “delegation;” the more magical an app is, the more it does behind the scenes. Think of those driving directions, or searching on Google. In these cases, the UI itself is subordinate to the magic of the service.

When it comes to the UI, direct manipulation still rules. Scripting languages are a way of solving problems that direct manipulation handles poorly — like how to get a thousand files into the right places, or how to manipulate a bunch of musical notes. And most of the things that people need help with — or want — from technology are not UI problems per se. (At least not in 2015.) They may be design problems, but not strictly UI problems. So, there are lots of “one-button” apps that do magical things. And they are examples of delegation, though not with an increased role of language like G and N proposed. They’re similar to “install” buttons.

As for direct manipulation’s continued place: Along with the merger of technology with routine activities of life and increasing magic happening behind the scenes, we got these fancy new touchscreen doodads.

Smartphone and tablet apps are overwhelmingly networked, single-purpose programs that practically cry out for direct manipulation. What could be more natural than tapping, pinching, or smooshing the thing directly on the surface of our iPad? In this case, the technology wants to be used in certain ways. On your average tablet, there is only direct manipulation.

In other words, the services in the background are more sophisticated and are indeed delegated operations — calling a Lyft, posting a photo to several social networks at once, or getting driving directions — but the UI you use to set those things in motion remains (and in fact is best, especially with touch screens) an act of direct manipulation.

In terms of delegation to replace direct manipulation in UIs: Scripting languages mostly failed to gain traction, and negotiation-like UIs, including Apple’s Automator, which do perform the kind of delegation G and N refer to, are at most bit actors. Some cool services do this for a living though; If This Then That (IFTTT) is all about delegation. And I’ve seen some apps on the horizon that will execute on this promise. But it’s not the norm in 2015.
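IFTTT itself is a hosted service, not something you script directly, but the shape of the delegation it offers is easy to sketch. The following Python is purely illustrative (none of it is the real IFTTT API): a rule pairs a trigger with an action, and the system, not the user, watches for matching events.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    trigger: Callable[[dict], bool]   # does this incoming event match?
    action: Callable[[dict], None]    # what to do on our behalf when it does

# A hypothetical rule in the spirit of "if this, then that".
rules = [
    Rule(
        name="Archive new photos",
        trigger=lambda e: e.get("service") == "photos" and e.get("type") == "new_photo",
        action=lambda e: print(f"Copying {e['url']} to the archive..."),
    ),
]

def handle(event: dict) -> None:
    # Delegation: the service runs this loop in the background; the user only
    # declares the rules up front.
    for rule in rules:
        if rule.trigger(event):
            rule.action(event)

handle({"service": "photos", "type": "new_photo", "url": "https://example.com/1.jpg"})
```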

See and point

Anti-Mac principle Describe and command | Post-Mac principle See and point

This principle is closely tied to direct manipulation. It more or less states that people can only (and should only) interact with things that are visible on screen, and use the mouse to point a cursor at those things and do whatever they want with them (manipulate them directly). The things on screen might be objects — files, folders, or shapes in a graphics program — or they might be menus. But the principle states that people choose from whatever’s visible on screen.

In the Apple Human Interface Guidelines, the principle is written as “See-and-point (instead of remember-and-type).” The guidelines point out that users can “rely on recognition, not recall.” They don’t have to remember arcane commands, or keep track of things that aren’t visible on the screen. They simply need to look at the screen, where every available object and activity is visually represented.

Benefits. A see-and-point environment is predictable; there are no hidden agents or objects at work. The interactive vocabulary can be as straightforward as point, click, drag, type (with a single-button mouse), which makes this kind of interface easy to learn. Everything you can do is plainly visible. These interfaces work well for novices.

Drawbacks. As G and N point out, there are lots of things that can’t be represented on a given screen — especially in a connected, Internet world. Those actions, or those objects, can’t always be displayed in a pure see-and-point UI. Plus our displays are only so big: We can’t put it all on one screen! If our interactive vocabulary is limited to a combination of the nouns on screen, and the verbs that our single-button mouse and keyboard offer, we are indeed working with a limited language.

Anti-Mac: Describe and command

G and N’s primary critique of see-and-point is that there are things we might want to tell the computer to do, or to refer to, that can’t be represented on screen. That is strictly true, but it’s not always a practical concern — especially in the world of novice users spending lots of time with all kinds of little, single-purpose apps.

There are cases in which it’s obviously useful to refer to something that isn’t “on screen,” such as when I search Google for [tacofino hours]. (Tacofino is a restaurant in Vancouver, not an object on my screen.) Excessive incoming email necessitated vocabulary to describe filters. Huge music libraries demanded “smart playlists” and UI to describe (in the abstract) the kinds of arbitrary playlists we wanted.
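A smart playlist is a small but telling example of describe-and-command: you write down a description of tracks, most of which are nowhere on screen, and the command selects them for you. Here is a minimal Python sketch; the track fields are assumptions, not any particular app’s data model.

```python
# A tiny library of tracks; in a real app there would be thousands,
# far more than could ever be shown on screen at once.
tracks = [
    {"title": "Song A", "genre": "jazz", "rating": 5, "plays": 40},
    {"title": "Song B", "genre": "rock", "rating": 3, "plays": 2},
    {"title": "Song C", "genre": "jazz", "rating": 4, "plays": 11},
]

# "Jazz I like and actually listen to": a description, not a selection.
smart_playlist = [
    t for t in tracks
    if t["genre"] == "jazz" and t["rating"] >= 4 and t["plays"] > 10
]

for t in smart_playlist:
    print(t["title"])   # prints Song A and Song C
```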

Post-Mac: See and point

As someone who can kind of find his way around a command line and a scripting language, I absolutely see that both can provide — in principle — richer interactive vocabularies and more “power” than see-and-point UIs. But they just don’t apply so often in the Post-Mac environment of novice users expecting lots of magic apps.

If you’re paying attention, you’ll note that Siri and Google’s equivalent — and perhaps Google itself — are describe-and-command UIs. I use Siri all the time to set up reminders, but not much else. She can’t do a lot more for me than that — yet. I engage in a process of “negotiation” with her from time to time, but it’s usually fruitless. Until it’s clearer that I can say almost anything I want to Siri, she’ll always play second fiddle to see-and-point. But, like Google Search, which is effectively a describe-and-command UI, that future interface will be extremely powerful.

For now, in 2015, the most powerful and magical services like GPS, online shopping, online banking, video chatting, and so forth, have simple see-and-point UIs to facilitate the things we want. And these UIs are increasingly mobile-first, making them even more straightforward and inclined to “fit” the small touch-screen and primarily occasional use by novices. Consider Facebook, one of the most powerful applications developed in the past decade, and a revolutionary service. Its UI is almost entirely see-and-point, with little describe-and-command in use at all. (People don’t search too often, besides typing the names of friends.)

(It’s worth noting that see and point, combined with rich-cue modes, is probably the predominant UI pattern for mobile interfaces.)

Consistency

Anti-Mac principle Diversity | Post-Mac principle Consistency

The Mac design principle of consistency is about applications being “consistent within themselves” and “consistent with one another.” What that means is that UIs should strive for a kind of regularity that helps people learn how things work. That is, one way to do things, one way of referring to things. There are exceptions, but UIs are not usually creative expressions; artistic but unnecessary variation can make apps more difficult to learn. Consistency can be achieved by using standard controls, by adopting platform patterns, and by rigorous attention to any intentional variation during the design process.

Benefits: Once a user learns how something works in your app, they know how it will always work. And if your app is like other apps, once they learn how it works in your app, they’ll know how it works in other apps. As Apple said in the 1986 edition of the Human Interface Guidelines, “this benefits the typical user, who usually divides working time among several applications, and it benefits every software developer because the user learning how to use a new application builds on prior experiences with the same elements in other applications.” The guidelines helped developers make Mac apps that looked and felt like Mac apps.

Drawbacks: Pure, unadulterated consistency is impossible. G and N critique it vaguely by saying that it’s hard to apply, because of “conflicting things with which you can be consistent.” Any designer with experience creating software will tell you that there are certainly tradeoffs involved in achieving sufficient consistency, but plenty of heuristics for doing so. If your user has time to learn the app and is going to spend all day in it to earn a living, then by all means deviate from the norm. But if you’re just one of a couple of dozen apps they use every day, you have to be more careful.

A pen that looks like a shoe! (by nevR-sleep on DeviantArt)

G and N trot out a silly example about pens: Two kinds of pens look different, but they’re still pens, and we can tell them apart. Well, that’s about the level of consistency that software needs to aim for, too. Don’t make your pen look like a shoe (credit to G and N for that example).

Anti-Mac: Diversity

Unfortunately, G and N didn’t articulate a clear alternative to consistency in their thought experiment. They do say that “it is the rich and fine-grained representation of objects in the real world that allows for pens or books to have a wide variety of appearances and still be easily recognizable,” and go on, “as representations of objects in the computer interface become richer and more fine-grained, the need for complete consistency will drop.” Without more detail, it’s hard to know what the Anti-Mac design principle of diversity is meant to dictate. The Wild West of UI?

Post-Mac: Consistency

This conclusion is self-evident, especially in the Post-Mac environment. People benefit more from the Mac design principle of consistency than they would from diversity that impedes learning and knowledge transfer from app to app. Diversity is fine, married with visual and interactive unity — ensuring that things look, feel, and work mostly the same. Consistency is still a valuable guiding principle, and it doesn’t preclude creativity — you just need to deliberately break it.

Networks like Twitter, Facebook, Reddit, and so forth, have all contributed to the generation and adoption of standards for sharing content, authorizing identity online, and so forth. UIs and products depend heavily on “fitting in” to models people already understand; deviation can be confusing and costly. People can just jet off to the next, more consistent and easier-learned app.

WYSIWYG (What You See is What You Get)

Anti-Mac principle Represent meaning | Post-Mac principle Represent meaning

WYSIWYG — pronounced whizzy-wig — effectively states that what you see on screen should be a faithful representation of exactly what you’ll see when you print it. The Human Interface Guidelines say that “there should be no secrets from the user, no abstract commands that promise future results.” It’s a directive primarily about parity between on-screen and printed display, and intimately connected with the direct manipulation and see-and-point design principles: WYSIWYG, and you can manipulate WYS directly to SWYG. It corrected a frustrating experience with the Mac’s contemporaries, like DOS computers running WordPerfect: there you’d enter a mode to tell WordPerfect to make a word bold, but you wouldn’t see it bold on screen — only later, when you printed the document. In other words, it was an abstract command, promising a future result.

Good old WordPerfect. WYS is definitely not WYG.

Benefits. Well, what you see is what you get. No surprises.

Drawbacks. None, really. Who can take issue with the faithful-print-representation goal? G and N interpret the design principle to be more limiting, that no thing on screen should “be” anything more than it appears to be. (Out the window go metaphors like the trash can.) Of course, any object on a computer screen can be something other than what it appears to be. A word can be a word, or it can be a link. It might even be a word that “contains” a whole other document. If the word is someone’s name, it might be a kind of representation of that person. These rich semantics are lost when what you see on screen is limited to literally representing just one state or one slice of the deeper underlying object.

Anti-Mac: Represent meaning

G and N propose an alternative to WYSIWYG that actually subsumes it. Their Anti-Mac design principle suggests that semantically-rich objects should be the atomic basis of interaction. In other words, things on screen should be more than they appear to be. If I’m looking at a miniature “business card” representing a contact in my address book, it should be more than just a picture of a business card: I should have access to the underlying richer data, probably about a person or a company or both. And I should be able to “use” the business card to do things that I might want to do with the underlying person or company, like address an email, connect two people from different companies together, whatever. The object underlying a given on-screen representation could be drawn and interacted with in all kinds of ways; WYSIWYG is still possible because one of the most appropriate representations of a Word document, for example, is the view of what it’ll look like when it’s printed. But that’s not the only view.
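A minimal sketch of the idea in Python, with an entirely hypothetical Contact object: the semantically rich thing lives underneath, and the “business card” is just one of several renderings of it.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    # The underlying, meaningful object: a person, not a picture of a card.
    name: str
    company: str
    email: str

    def as_business_card(self) -> str:
        # One rendering: the compact "card" shown in an address book.
        return f"{self.name}\n{self.company}\n{self.email}"

    def as_mail_recipient(self) -> str:
        # Another rendering: the same object dropped into an email's To: field.
        return f"{self.name} <{self.email}>"

alice = Contact(name="Alice Example", company="Acme Co.", email="alice@example.com")
print(alice.as_business_card())
print(alice.as_mail_recipient())
```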

René Magritte.

Post-Mac: Represent meaning

This has become the dominant design principle, even though sometimes it’s a bear to make it happen. (There are a million and one ways to encode the rich semantics of any given thing, and it’s hard to settle on the best way.)

So much meaning, not just pins on a map.

In the Post-Mac environment of simulacra, almost everything on screen is a partial and virtual representation of some deeper, meaningful thing. My “home” location pin on Google Maps isn’t just a metaphor or a picture of a pin; it’s just one context-sensitive instance of a richer object — a meaningful Google Maps-internal notion of the place where I live. I can tap it to reveal more about it, and it plays a substantial role when I’m interacting with Google Maps. By default, the app shows me how long it’ll take me to drive from my home to that place I looked up. Similarly, in most places you see a friend’s name in Facebook, you can bet it’s not just the letters of their name — it’s a little textual representation of them that you can click or otherwise interact with.

In interfaces predicated on simulacra of the real world, the atomic bits of the UI are only successful if they represent meaning. A prescription-management app is only valuable if a prescription object in the UI is somehow a virtual version of my prescription; ditto for an item in my Amazon shopping cart. The list goes on.

Bonus! If you build software on this principle, you can more easily design for progressive disclosure. Reveal only a little meaning at a time, as needed.

Right about now is probably a good time to have a stretch. :-)

User control

Anti-Mac principle Shared control | Post-Mac principle Shared control

The design principle of user control states that the user is in charge — we decide what happens, and when, and deliberately instruct the computer to do those things. Check for new mail, make that calendar appointment, or delete those files. The principle of user control guards against the computer doing harmful or unwanted things, and against feelings of lack of agency or control over the computer. If I want to screw up my Mac, why shouldn’t I be able to? The original Apple Human Interface Guidelines note that this danger is so central to user control that in the guideline text they address how to handle destructive actions — warn users against them, but let them do it anyway.

Benefits. In theory, if we’re completely in control of what the computer is doing, we will never be surprised. Our files will never accidentally disappear. We’ll know we’re completely in charge.

Drawbacks. There are a lot of things that computers can (and should) do that involve ceding control. I’ll forget to run the anti-virus. Let it run for me. (Wait, what happened to anti-virus software?) Frankly, I don’t want to pick through my mail for spam. I want Google’s machine-learning prowess to take care of it. As G and N say, there are lots of things that are so repetitive or boring, or so complex and challenging, that we’d much rather have a computer do them for us.

Anti-Mac: Shared control

“The negative side of user control is that the user has to be in control.” Perfectly summarized. In the era of constantly-networked computers, it’s often (but not always) preferable for technology to do things on our behalf. The principle of shared control states that both the user, and other agents — daemons, external services, or other people — play roles in manipulating our computing environment. G and N go on, “by relinquishing control over a portion of the world, you can utilize the products of other people’s efforts, knowledge, and creativity.” Right on.

Post-Mac: Shared control

That this principle has become enshrined in contemporary product and user experience design is so obvious that it’s almost silly to write about it. But in exchange for the benefits endowed by shared control, it’s likely that we have become less aware of how the technology and our lives are woven together, and who’s controlling what. On average, mental models of how nodes in our technological worlds are linked, and their firing sequences, are limited and inaccurate. So, it’s important to add a clause to this principle — informed shared control. G and N acknowledge this need.

(I just thought of a family member’s malware-ridden Windows computer.)

Facebook privacy settings are perhaps the prototypical example. These personal preferences govern who (and what) has access to what we do and post on Facebook, and consequently further actions in connected services. And they have real-life ramifications — a problem that rarely cropped up on the non-networked Mac to which only you had access. Take automatic photo upload and sharing from mobile phones. Many of us will gladly let Facebook spread the “word” about what we’re doing, but it’s ideal that we have a notion of both what it’s going to do and who will see the results. Informed shared control means that we should have the opportunity to explicitly confirm or ask for the promised benefit.

Feedback and dialog

Anti-Mac principle System handles details | Post-Mac principle Feedback and dialog

This principle states that user actions should generate immediate and ongoing feedback, and that feedback should be useful and actionable, if appropriate. Software should provide clear (primarily visual) cues that keep the user informed and aware of what’s going on. It reinforces the principle of user control — G and N say “if the user is required to be in control of all the details of an action, then the [user] needs detailed feedback.” Exactly.

This isn’t just about response to clicks, or showing progress indicators during lengthy processing. The design principle entreats developers to write clear error messages that explain reasons for problems, and to provide actionable help to resolve them. In general, to keep people up to date about what’s going on.

Benefits. Clear, regular feedback confers several benefits. People know that the computer is working, because it responds to their input. They know what the computer is doing, because it tells them — including anticipatory guidance about how long complicated operations might take. (In theory, the computer’s not doing anything without communicating about it.) They know when something is wrong, because the computer raises a flag, and helps them fix the problem. Plenty of mid-1980s DOS software did not communicate status clearly, leaving people frustrated and scratching their heads about what was going on.

Drawbacks. If taken to the extreme (i.e. the computer should communicate to the user about everything that’s going on) this feedback could be overwhelming. No one needs a computer that’s constantly telling you everything’s OK. In the event that control was ceded to background processes — anti-virus scans, email checking, and so forth — the problem of constant feedback could multiply into too many notifications. (And it has.) Developers have an important job to do here: Decide what’s important for people to know about, and what’s not, and only tell them about what matters.

Anti-Mac: System handles details

The Anti-Mac principle works in harmony with shared control and delegation. If the computer can handle it, it doesn’t need to provide feedback — except in the event of a problem. I think that G and N are a little bit extreme (and academic) in their interpretation of the Mac design principle. But they suggest that the “computer should be more flexible in the amount of feedback it provides,” and that makes sense.

Post-Mac: Feedback and dialog

In the end, computers still need to — and do — provide a lot of feedback and dialog. Even for regular, automated processes in the world of shared control. Plenty of people like to get an alert for every single text message, email, Instagram, SnapChat, and Soundcloud comment (etc., etc.) they receive, whether “necessary” or not. And there are plenty of mechanisms for providing it — badging, Facebook chat bubbles, rich menu icons, sounds, even dedicated hardware. So flexible feedback and dialog is de rigueur in UI. The system does handle plenty of details, but clear feedback is the norm.

Flexible feedback mechanisms, many for background processes.

Again crops up the useful distinction between the nuts-and-bolts of the UI itself, and the product or service design. The UI needs to provide feedback to every click and tap, and to let people know what’s going on. But there are plenty of unobtrusive ways to provide constant and useful feedback about shared or background processes too. Consider how Uber and Lyft nicely show you where your ride is and (approximately) how long it’ll take to get to you. Or badging an app icon to show new messages.

MapQuest circa 2007. Little direct manipulation, non-continuous feedback.

Interestingly, good feedback and dialog was absent from the web during its transition to a real app platform. Because it was hard, because connections were slow, or because people just didn’t know to design it in, plenty of early web and even “web 2.0” products didn’t provide quick-enough feedback, nor clear guidance about what was going on. With the introduction of powerful smartphones and technologies like the V8 JavaScript engine in Chrome, it’s since become fashionable to overdo feedback with excessive animation. On the other hand, even that sometimes-gratuitous feedback reinforces the direct manipulation of touchable UIs; we want to see and “feel” things zoom as we pinch our fingers, and slide left or right as we swipe. Thus this principle is perhaps more important than ever.

Forgiveness

Anti-Mac principle Model user actions | Post-Mac principle Forgiveness

The forgiveness design principle is about making users feel safe, and helping them develop trust in software. Think of how often you Undo every day. Undo makes lots of apps safe for exploration and regular use. In addition to mistake-correction, software should guide people away from harmful or destructive actions.

Benefits. Even with the “simplest” UI, it’s easy to make mistakes. People feel more at ease if the UI appears to invite exploration, and when they learn that they can reverse mistaken actions. They can explore without messing things up. On the flip side, if they see a dangerous-looking, uninviting UI that forbids correction, provides little feedback, or makes it easy to do scary things, they’ll be fearful and mistrust will develop. (Command lines, for most people.) And if there’s not enough feedback, they might unwittingly make a mistake — ask anyone who’s gotten their privacy settings wrong!

Drawbacks. None.

Anti-Mac: Model user actions

This is one of the least well-defined Anti-Mac principles, but it is probably best defined by this quote from G and N: “the computer needs to build a deeper model of our intentions and history.” Undo is pointless if it’s at the wrong semantic level — imagine if Command-Z in Medium only erased one letter at a time. So, the question for the designers and developers is, “what does it mean for someone to correct a mistake?” And what does it mean for someone to gain a sense of trust? That’s where, I suppose, more richly modeling real user activities makes sense. Every piece of software has a different notion of what it means to be forgiving — and, in line with the principle of feedback, must communicate to encourage that sense of safe pliability.
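A minimal sketch of what “the right semantic level” means for undo, with a hypothetical editor: keystrokes are grouped into one undoable insertion, so a single undo removes the whole word rather than one letter.

```python
class Editor:
    def __init__(self):
        self.text = ""
        self.undo_stack = []   # each entry is one semantic action, not one keystroke

    def type_text(self, s: str) -> None:
        # Record the whole insertion as a single undoable unit.
        self.undo_stack.append((len(self.text), s))
        self.text += s

    def undo(self) -> None:
        if not self.undo_stack:
            return
        pos, s = self.undo_stack.pop()
        self.text = self.text[:pos] + self.text[pos + len(s):]

doc = Editor()
doc.type_text("Hello ")
doc.type_text("world")
doc.undo()              # removes "world" in one step, not letter by letter
print(repr(doc.text))   # -> 'Hello '
```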

Post-Mac: Forgiveness

Again, in the Post-Mac world of rich, directly-manipulated visual interfaces on tablets and phones, and plenty of opportunities to do scary things (like one click to buy a very expensive item on Amazon), it’s self-evident that forgiveness is the principle of the day. Most products at least make an effort to warn about potentially harmful effects, and help people correct errors. And many apps truly reward experimentation and playing around; those that do are more likely to foster a deeper emotional bond with their users.

Google Maps. One click on a transit stop and I can explore a little more; just one click away and I’m back to the plain map. Forgiving. Also a great example of perceived stability; I clicked on the map, and some parts of the UI changed — it’s visually different, a new mode — but not radically so. One click on the X in the search box returns me to a safe starting point.

Perceived stability

Anti-Mac principle Change | Post-Mac principle Predictable change

This principle effectively states that the user’s environment should be predictable and familiar. It is especially synergistic with the principles of modelessness, consistency, and user control. Enormous effort went into designing a stable platform in the Macintosh, on top of which Mac-like apps could be built — apps which reinforced and took advantage of the thoughtful platform UI. For example, one menu bar, always in the same place, with certain kinds of menus (File, Edit, and View) consistently listing the same kinds of menu items, adapted to the specific instance of the app at hand. A more sophisticated interpretation indicates that apps should remember user preferences and UI state. If someone leaves the program and comes back, things are where they left them.

I often use analogies to describe the lack of perceived stability. One is the sporty sunken living room phenomenon — when you trip into a sunken room because you expect its floor to be at the same elevation as the room you’re leaving. Or the danger of black ice — it looks like it did before, but it’s different, and suddenly you lose your grip.

Benefits. Users can transfer knowledge from one part of your app to another, or from one app to another, or even from one platform to another. They can develop trust in the UI and their capacity to use it.

Drawbacks. If taken to the extreme, efforts to ensure stability could result in overly similar interfaces — sameness when difference is required. Then, the UI is indistinct and difficult to learn. So, this is best understood as an effort to keep the most fundamental elements of the environment familiar and predictable. It’s an art.

Anti-Mac: Change

G and N don’t actually name their Anti-Mac principle, but I think it is most aptly named change. They mention that a large and complicated application could be overwhelming but if it “discreetly rearranges the interface from time to time to offer us only the features of current interest to us,” it could be less overwhelming. Except if you’re looking for the Bold button and it’s disappeared from your toolbar because you don’t realize you’re in a different mode.

In short, the Anti-Mac principle states that the environment can and should change, an idea reinforced by the principles of shared control, deeper modeling of user actions, and richer cues (see the next principle). If the UI is smarter, why shouldn’t it change according to our needs?

Specifically, G and N argue, it’s inevitable that in a networked world of shared control, new objects will find their way into our environment from behind the scenes: New messages, new movies to watch, and so forth. But those objects are content, not chrome — an important distinction. Trotting out another analogy (sorry), the shopping cart should stay the same, even if the produce changes a bit from week to week. (But it damn well had better be in the same part of the store.)

Post-Mac: Predictable change

People still need predictable environments. A certain amount of novelty and change makes our lives interesting, and there’s room for that. But powerful UIs maintain a sense of perceived stability. And in a Post-Mac world where people use lots of apps just a little bit, they depend on a reliable environment. Consider Apple’s tight control over the iOS platform experience, not only exercised in app-store review but also by way of plenty of ready-made UI architecture integrated into their developer tools. And consider the consistent overarching metaphors that you see from site to site on the web, such as placement and function of shopping carts.

Many great Post-Mac products express the design principle of predictable change. In general, good UIs still achieve perceived stability for most users, but content and some parts of the UIs may change in foreseeable ways. For example, every time I refresh Facebook, I may see new posts from different people, but the environment is more or less the same. If I shut down my Mac, the next time I turn it on it restores all the documents and browser tabs I had open. So, predictable change is about combining perceived stability in the environment with appropriate flexibility, in ways that are easily anticipated.

Apps like Flipboard can be customized according to my tastes, and I can configure Photoshop to have the tools I want, where I want them. What’s most important is that the environment is predictable for me — something that’s different for every product.

Q: I’ve never eaten before. What do I ask for?
DATA: The choice of meal is determined by individual taste.
Q: What do you like?
DATA: Although I do not require sustenance, I occasionally ingest semi-organic nutrient suspension in a silicon-based liquid medium.
Q: Is it good?
DATA: It would be more accurate to say it is good for me, as it lubricates my bio-functions.

Aesthetic integrity

Anti-Mac principle Graphic variety | Post-Mac principle Aesthetic integrity

User interface is primarily a visual affair. Images and text on a screen convince us to think, then act, in deliberate ways. The principle of aesthetic integrity advises systematic, thoughtful, and appropriate visual design choices. Contemporary UI design owes a lot to a thread of extensive and multidisciplinary inquiry into perception (gestalt psychology), graphic communication (especially graphic design from post-World War II Germany and Switzerland), and Human-Computer Interaction. Being consistent with good design principles — visual form, layout and composition, and so forth — is a surer way of ensuring that users see things the way you intend.

Ableton Live. Judicious visual design, appropriate for its use case.

A superficial interpretation of the principle is that UIs should look like 1970s Swiss graphic design — uniform, with lots of space, limited variation, and strict visual hierarchy. But a more realistic and charitable interpretation is just that apps should be visually regular and coherent.

Benefits. Clarity! Consistency! Organization! From legibility to…well, screens just not being eyesores, following best practices in visual design can ensure that a UI is attractive and digestible — helping us pick out patterns that make software usable. Aesthetic integrity is about appropriate visual design, not a single visual design. There’s a good reason that Google Search features a white background, with mostly large, high-contrast type — it’s optimizing for a brief experience, and a quick recognition of the search result or content you’re looking for — but a nighttime fly-by-wire cockpit UI for long-haul pilots shouldn’t follow the same patterns. Nevertheless, all interfaces that feature good graphic design sense capitalize on our perceptual abilities.

Drawbacks. If you think there’s only one way to design — one typeface, one type size, one set of colors — you might overdo the consistency and create an undifferentiated and bland user interface. But, I would argue that in most cases aesthetic integrity indicates appropriate variety.

South Korean apartment blocks. Aesthetic integrity does not mean extreme uniformity.

Anti-Mac: Graphic variety

G and N propose the principle of graphic variety in opposition to aesthetic integrity. It feels like a perfunctory opposition to uniformity, rather than a true alternative to aesthetic integrity. They say that visually richer interfaces — more like our real world in their visual variation — would be “more interesting, more memorable, [and] more comprehensible.”

Certainly memorable, with rich cues and variety…

They seem to imagine a virtual world with a profusion of interactive objects, each calling for its own unique representation: “Totally uniform interfaces will be drab and boring and will increase the risk of users’ getting lost in hyperspace. Richer visual designs will feel more exciting. From a functionality perspective, richness will increase usability by making it easier for users to deal with a multiplicity of objects and to derive an understanding of location and navigation in cyberspace.” In this conception, “cyberspace” itself is a universe that assumes its own shape. For quite a while, web sites were awfully uniform and indiscernible from one another without close reading — they were all text and blue links, with almost no styling. In that light, an exhortation for variety makes sense.

On the other hand, do we want recognizability due to familiarity, or memorability due to difference?

Post-Mac: Aesthetic integrity

Since connected technology has dissolved into our routine lives, and because of the consequent imperative for simulacra-modeled interfaces, our interaction is increasingly focused on a few core objects and experiences: Commercial transactions (shopping), watching and sharing photo, video, or audio content, and conversations with other people chief among them. It’s become increasingly important to make these things reliably recognizable. There are even well-established patterns in wide use, like Sharing buttons.

SquareCash. Not like all other UIs, but still demonstrating aesthetic integrity.

Apple in 1996 was at its nadir of influence — and it was nearly unimaginable that 20 years later, its approach to design would dominate the technology world. For better or worse, it does, primarily because of the iPhone. Just as the Mac set the standard for desktop UI, we have witnessed the Apple-ization of everything, again. Startups, independent apps, and even Apple’s behemoth competitors strive for Apple’s distillation of “aesthetic integrity.” In other words, people copy Apple’s design — interface and marketing — even when it’s not appropriate.

In spite of the Apple-ization, great variation remains — but the notion of being well-designed reigns, and it weighs overwhelmingly in favor of the principle of aesthetic integrity.

Modelessness

Anti-Mac principle Richer cues | Post-Mac principle Richer cues

The modelessness principle advises support for any user action and the avoidance of short-term modes in which user actions are limited, or where customary actions produce different results. Modes come in many flavors, from turning on bold styling in a word processor to completely locking out app functions until an important alert is dismissed.

Former Apple executive, HCI researcher, and anti-mode crusader Larry Tesler’s license plate.

People train on routine use of predictable UI. Single-clicks do one thing, double-clicks another. They learn to expect certain controls in certain places, and for them to work consistently. Modes can interfere with that learning, violating people’s expectations. In different modes, controls may be in the same place but do something different. Or the mode might look similar, with different controls. Mode switches are most confusing when the UI looks the same, but operates differently. Perhaps, like me, you’ve been caught in the confusion of iTunes’ modes — check out The Many Modes of iTunes for a brief review.

Benefits. In a truly modeless UI, the response to our actions is completely predictable, and we can always do whatever we want. The app lays out everything we can do, and we are free to choose from among those options.

Drawbacks. G and N neatly take down true modelessness as aspirational and unachievable, saying “users need the interface to narrow their attention and choices so they can find the information and actions they need at any particular time.” If a computer is extremely capable, then the full set of things one can do with it is overwhelming, and a comprehensible interface to all of it is practically impossible to design.

Anti-Mac: Richer cues

Modes are both inevitable and useful. Quoting Jeff Johnson, G and N point out that “real life is highly moded” (debatable) and that “what you can do in the swimming pool is different from what you can do in the kitchen” (obviously true). It is the rich variation in environment and objects — in their cues, affordances, and capacities — that makes real-life modes relatively painless. The principle of richer cues argues that people can successfully use modes in software, especially if those modes are built from diverse and expressive parts.

Take an example from the real world. Like an ATM, any late-model point-and-shoot digital camera has a bunch of undifferentiated “soft” buttons that change depending on what mode the camera is in. Usually, the mode is indicated by markings on the display. It’s hard to become fluent and use those buttons naturally without pulling your face away from the camera to verify the mode and what the buttons do. Contrast that with an SLR. Its controls — varied in shape, size, position, texture, response, and function — are distinct enough to be learned and used expertly, without drawing your face away from the camera.

Post-Mac: Richer cues

In the Post-Mac world, a lot of apps look similar and rely on the same metaphors, but model different simulacra. For example, lots of applications let you share content, or facilitate conversations with friends. Yet they’re “rich” enough that most of us can negotiate them with ease.

Yelp, Lyft, and Apple Maps. Awfully similar, but different enough — same map, different “mode,” with richer cues.

Even within each app you’ll find multiple modes, each slightly different. There are still mode errors — when the same action produces an unexpected or different result — but in general, richer cues have helped us figure things out. If all interfaces were text-based, or only used graphically impoverished standard controls, richer cues would be difficult.

In effect, these apps are modes of a common underlying map-and-pin UI. In the maps example above, these apps violate the principle of modelessness by changing the way that taps on the map work, the way searches work, and so forth. It’s not one map for all purposes, but rather specialized maps for single purposes — with richer cues to make the modes clearer. Admittedly, this example is a stretch; and without richer cues, mode errors can still be common and frustrating.
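As a rough illustration of that “one map, many modes” idea, here’s a hypothetical Swift sketch. Nothing here reflects how these apps are actually built; the names (MapMode, MapScreen, pinStyle, handleTap) are invented for the example.

```swift
// A hypothetical sketch of a shared map-and-pin component whose tap
// behavior and visual cues vary by mode. Not any real API; all names
// are invented for illustration.

struct Coordinate {
    let latitude: Double
    let longitude: Double
}

enum MapMode {
    case restaurantSearch   // a Yelp-like mode
    case ridePickup         // a Lyft-like mode
    case directions         // an Apple Maps-like mode
}

struct MapScreen {
    let mode: MapMode

    // Richer cues: each mode styles its pins differently, so you can
    // tell at a glance which "map" you are looking at.
    var pinStyle: String {
        switch mode {
        case .restaurantSearch: return "red pin with star rating"
        case .ridePickup:       return "green pin with driver ETA"
        case .directions:       return "blue route endpoint"
        }
    }

    // The same gesture (tapping the map) does something different in each mode.
    func handleTap(at location: Coordinate) -> String {
        let point = "(\(location.latitude), \(location.longitude))"
        switch mode {
        case .restaurantSearch: return "Show restaurants near \(point)"
        case .ridePickup:       return "Set pickup location to \(point)"
        case .directions:       return "Drop a destination pin at \(point)"
        }
    }
}

let rideMap = MapScreen(mode: .ridePickup)
print(rideMap.pinStyle)
print(rideMap.handleTap(at: Coordinate(latitude: 37.77, longitude: -122.42)))
```

The point isn’t the code; it’s that one underlying interaction model can host several modes comfortably, so long as each mode announces itself with cues you can’t miss.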

The characteristics of the Post-Mac interface.

So, after looking at the Mac, Anti-Mac, and Post-Mac principles individually, how do they come together to describe the characteristics of the apparent Post-Mac interface?

For the “apparatus” of UI, many of the Mac principles remain central. It may be that Apple’s human interface heritage simply continues to set the tone (in part because of the company’s unparalleled success), or it may be that those principles endure because of their appropriateness and value. As for the essence of products — what their UIs empower users to do — well, they have changed, and so have the principles underlying them. Many of the Anti-Mac principles do fit.

Modeled on real life, not technology. Post-Mac interfaces do remarkable things with technology, but generally in service of our day-to-day lives. As always-connected computers have dissolved into our routines, their technology-ness has diminished. That is, we pay less attention to them as technological objects. Their internal mechanics — like file systems — are less relevant. Simulation of real-world activities plays an increasingly central role in user interface. Metaphor is used less often to expose and explain an implementation detail, and more often in support of an experience that either is a simulacrum or is intended to manifest in the user’s “offline” life. Technology is more and more personal.

Concomitantly, objects in our UIs are increasingly imbued with logical data about their meaning in our real (and virtual) lives. The elements on our screens are rarely “skin deep” but usually just one representation of a richer, more meaningful concept, and we’re being trained to treat them that way. What you see is almost definitely not all that you get in a Post-Mac interface.
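A small, purely illustrative sketch of that last point, with made-up names: the thumbnail on screen is just one rendering of an object that carries much richer data behind it.

```swift
// An illustrative sketch, not any real framework: a Post-Mac "object"
// carries rich, meaningful data, and what appears on screen is only
// one of several representations of it. All names are hypothetical.

import Foundation

struct Photo {
    // The richer concept behind the thumbnail.
    let capturedAt: Date
    let place: String
    let people: [String]
    let cloudIdentifier: String   // hypothetical: lives on a server, not at a file path

    // One representation: a label under a grid thumbnail.
    var thumbnailLabel: String { place }

    // Another representation: a share sheet summary of the same object.
    var shareSummary: String {
        "Photo with \(people.joined(separator: ", ")) at \(place)"
    }
}

let photo = Photo(capturedAt: Date(),
                  place: "Golden Gate Park",
                  people: ["Ada", "Grace"],
                  cloudIdentifier: "abc123")
print(photo.thumbnailLabel)   // what you see
print(photo.shareSummary)     // more of what you actually get
```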

An environment that changes predictably, with outside influences and rich cues. Post-Mac interfaces flow and change according to the principle of shared control. Many products operate on our behalf, and UIs update accordingly. Many of our activities are defined by the response we get from actors and agents outside of our immediate technological environment — Find My Friends, anyone? But UIs do not change willy-nilly; the best Post-Mac UIs are congruent with the principles of aesthetic integrity, consistency, and perceived stability (especially platform consistency), such that we can generally anticipate the ways in which they’ll change. It’s rare that we have the rug pulled out from under us in a well-designed Post-Mac UI.

“Good” visual design. Certain visual styles have dominated UI design since the late 1990s, and persist in the Post-Mac interface. Consistently, Apple’s visual design has influenced competitors and acolyte developers alike. Trends like skeuomorphism, light-modeling, flat shapes, more text or less text, light and dark, have come and gone. But the “principles of visual design” referenced in the original Apple Human Interface Guidelines inform more of UI today, not less. It’s not about uniformity; there’s plenty of variation. But whether you’re looking at Windows 10, or the Wii U UI, or an Android smartphone, you’ll generally see evidence of well-organized information, the use of scale, contrast, and color to create comprehensible hierarchies, and so forth. While there’s variation from platform to platform, or use to use, UIs generally have to participate in maintaining perceived stability if they’re going to become one of the multitude of useful applications someone is willing to commit to.

Modes are still à la mode, especially on small screens, where there’s just no room for all the buttons — but rich visual cues, and careful visual design, make them generally easy to negotiate.

(I would add that interfaces that haven’t been touched by a “professional” Swiss-thinking designer are starting to look awfully archaic. And standard UI kits like Google Material, Apple’s kits, and so forth, are further enshrining certain aesthetic principles in the Post-Mac world.)

Almost tactile interfaces that want to be played with. The Post-Mac world features highly personal devices like smartphones and tablets. They stay with us all the time, playing a part in infinite micro-activities. And their form factors demand the use of our hands in ways that connect us more deeply to them. We grasp them, hold them close to our bodies, and use interfaces that depend on the principles of direct manipulation, feedback, and forgiveness. Devices like the Apple Watch, Nest, and those little doodads that help you find your keys emphasize the central role that those “traditional” principles of UI continue to play.

Language has not exactly replaced visual representations (shapes, icons, and so forth) in user interface, though there are examples of language assuming that role — Google Search is text-as-UI. Rich, visual interfaces that take advantage of beautiful, high-resolution displays and that focus attention on visual “content” like photos and videos are the norm in the Post-Mac world.

Conclusions?

I don’t really have any; this was more of an experiment — an attempt to thread 1986 to … nearly 2016. Comments, thoughts? Let me know. :-)

Full disclosure: I interned at Apple in 2002 and 2003, and contributed to two editions of the Human Interface Guidelines.

I’m a product designer with a specialization in health & medicine. I’ve worked for Iodine, FDA, Google, Google.org, Apple, BlackBerry, Marketcircle, and others. Follow me on Twitter: adam baker.

If you’re looking for someone to help you problem-solve and design your app, program, or service, feel free to get in touch.
