The State Of The Web

The opening presentation from An Event Apart Spring Summit held online in April 2021.

Hello, my friends. I’d like us to try to collectively achieve something today. What I’d like us to achieve is a sense of perspective.

To do this we need to take a step back and cast an eye on the past.

For example, I can look back and say “Wow, what a terrible year!”

A year of death. A year of polarisation. Of inequality. A corrupt government. Protests in the street as people struggled to fight against systemic racism.

Yes, I am of course talking about the year 1968.

The past

By the end of 1968, the United States of America was a nation in turmoil. Civil rights. The war in Vietnam. It felt like the polarising issues of the day were splitting the country in two.

But in the final week of the year, something happened that offered a sense of perspective.

In an audacious move, NASA decided to bring forward the schedule of its Apollo programme. Apollo 7 was a success but that mission was confined to Earth orbit. For Apollo 8, human beings would leave Earth’s orbit for the first time in history. The bold plan was to fly to and around the moon before returning safely to Earth.

From today’s perspective, you might just see it as a dry run for Apollo 11, when human beings would set foot on the moon. But at the time, it was an unbelievably bold move. A literal moonshot.

On the winter solstice, December 21st 1968, Jim Lovell, Frank Borman, and Bill Anders were launched on their six-day mission to the moon and back.

The mission was a success. Everything went according to plan. But the reason why we remember the Apollo 8 mission today is for something that wasn’t planned.

First of all, after the translunar injection when the crew had left Earth orbit and were on their way to the moon (already the furthest distance ever travelled by our species), someone—probably Bill Anders—pointed a camera back at Earth.

This was the first picture ever taken by a human being of the whole Earth. It’s quite a perspective-setting sight, seeing the whole Earth. To us today, it’s almost commonplace. But remember that there was a time when no one had ever seen this view.

In fact, throughout the 1960s activist Stewart Brand had a campaign, handing out buttons with the question, “Why haven’t we seen a photograph of the whole Earth yet?”

I like the “yet” at the end of that. It gives it a conspiracy-tinged edge.

Stewart Brand suspected that if people could see their home planet in one image, it could reset their perspectives. They would truly grok the idea of Spaceship Earth, as Buckminster Fuller would say. The idea came to Brand when he was on a rooftop, tripping on acid, experiencing the horizon curve away from him and giving him quite a sense of perspective.

Later, he would start the Whole Earth Catalog. It was like a print version of Wikipedia, with everything you needed to know to run a commune.

Later still, he went on to found the Long Now Foundation, an organisation dedicated to long-term thinking. I’m a proud member.

Their most famous project is the Clock of the Long Now, which will keep time for 10,000 years. This is just a scale model in the Science Museum in London. The full-size clock is being built inside a mountain on geologically stable ground. Just thinking about the engineering challenges involved is bound to give you a certain sense of perspective.

But let’s snap back from 10,000 years in the future to that Apollo 8 mission in December of 1968.

This picture of the whole earth wasn’t the most important picture taken by Bill Anders on that flight. By Christmas Eve, the crew had reached the moon and successfully entered lunar orbit.

Oh my God! Look at that picture over there! There’s the Earth coming up. Wow, that’s pretty.

Hey, don’t take that, it’s not scheduled.

You got a color film, Jim? Hand me that roll of color quick, would you…

Oh man, that’s…

Quick! Quick!

This is what Bill Anders captured.

Earthrise.

I could try to describe it. But they should’ve sent a poet.

Fifty years later, this poet puts it beautifully. This is Amanda Gorman’s poem Earthrise.

On Christmas Eve, 1968, astronaut Bill Anders

Snapped a photo of the earth

As Apollo 8 orbited the moon.

Those three guys

Were surprised

To see from their eyes

Our planet looked like an earthrise

A blue orb hovering over the moon’s gray horizon,

with deep oceans and silver skies.
It was our world’s first glance at itself

Our first chance to see a shared reality,

A declared stance and a commonality;
A glimpse into our planet’s mirror,

And as threats drew nearer,

Our own urgency became clearer,

As we realize that we hold nothing dearer

than this floating body we all call home.

Astronauts have been known to experience something called the overview effect. It’s a profound change in perspective that comes from seeing the totality of our home planet in all its beauty and fragility.

The Earthrise photograph gave the world a taste of the overview effect, right at a time when it was most needed.

The World Wide Web

I wonder if it’s possible to get an overview effect for the World Wide Web?

There is no photograph of the whole web. We can’t see the web. We can’t travel into space and look back at our online home.

But we can travel back in time. Let’s travel back to 1945.

That was the year that an article was published in The Atlantic Monthly by Vannevar Bush. He was a pop scientist of his day, like Neil deGrasse Tyson or Bill Nye.

The article was called As We May Think. In the article, Bush describes a hypothetical device called a memex.

Imagine a desk filled with reams and reams of microfilm. The operator of this device can find information and also make connections between bits of information, linking them together in whatever way makes sense to them.

This sounds a lot like hypertext. That word would be coined decades later by Ted Nelson to describe “text which is not constrained to be linear.”

Vannevar Bush’s idea of the memex and Ted Nelson’s ideas about hypertext would be a big influence on Tim Berners-Lee, the creator of the World Wide Web.

But his big breakthrough wasn’t just making hypertext into a reality. Other people had already done that.

Douglas Engelbart, who wanted to make the computer equivalent of the memex, had already demonstrated a working hypertext system in 1968 in an astonishing demonstration that came to be known as The Mother Of All Demos.

The idea of hypertext was kind of like a choose-your-own-adventure book. Individual pieces of text in a book are connected with unique identifiers and you can jump from one piece of text to another within the same book.

But what if you could jump between books? That’s the other piece of the puzzle.

The idea of connecting computers together came from the concept of “time-sharing”, which allowed you to remotely access another computer.

With funding from the US Department of Defense’s Advanced Research Projects Agency, time-sharing was taken to the next level with the creation of a computer network called the ARPANET.

It grew. And it grew. Until it was no longer just a network of computers. It was a network of networks. Or internetwork. Internet, for short.

Tim Berners-Lee took the infrastructure of the internet and mashed it up with the idea of hypertext. Instead of imagining hypertext as a book with interconnected concepts, he imagined a library of books where you could jump from one idea in one book to another idea in a completely different book in a completely different part of the library.

This was the World Wide Web. And Tim Berners-Lee called it the World Wide Web even when it only existed on his computer. You have to admire the chutzpah of that!

But the really incredible thing is that it worked! In March of 1989 he proposed a global hypertext system, where anybody could create new pages without asking anyone for permission, and anyone could access those pages no matter what kind of device or operating system they were using.

And that’s what we have today. While the World Wide Web might seem inevitable in hindsight, it was anything but. It is a remarkable achievement.

The World Wide Web was somewhat lacking in colour originally.

Colour

When I started making websites in the mid nineties, colour had arrived but it was somewhat limited.

We had a palette of 216 web-safe colours. You knew a colour was “web safe” if its hexadecimal notation was made up of three duplicated pairs, each pair drawn from 00, 33, 66, 99, CC, or FF. If you altered one of those values even slightly, there was no guarantee that the colour would display consistently on the monitors of the time.
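To make that concrete, here’s a minimal sketch in CSS (the class names are made up for illustration):

    .safe  { color: #336699; } /* web safe: 33, 66, and 99 are all duplicated pairs */
    .risky { color: #346699; } /* nudge one digit and all bets were off on a 1990s monitor */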

I have a confession to make: I kind of liked this constraint in a weird way. To this day, if I have a colour value that’s almost web-safe, I can’t resist nudging it slightly.

Fortunately, monitors improved. They got flatter for one thing. They were also capable of displaying plenty of colours.

And we also got more and more ways of specifying colours. As well as hexadecimal, we got RGB: Red, Green, Blue. Better yet, we got RGBa …with alpha transparency. That’s opacity to you and me.

Then we got HSL: hue, saturation, lightness. Or should I say HSLa: hue, saturation, lightness, and alpha transparency.

And there are more colour spaces on the way. HWB (hue, whiteness, blackness), LAB, LCH. And there’s work on a color() function so you can specify even more colour spaces.
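Here’s roughly what those notations look like side by side. The values are illustrative rather than exact conversions of one another, and some of these were still on the way when this talk was given:

    color: #f60;                       /* hexadecimal */
    color: rgb(255, 102, 0);           /* RGB */
    color: rgba(255, 102, 0, 0.5);     /* RGBa …with alpha transparency */
    color: hsl(24deg 100% 50%);        /* HSL: hue, saturation, lightness */
    color: hwb(24deg 0% 0%);           /* HWB: hue, whiteness, blackness */
    color: lch(65% 90 50);             /* LCH: lightness, chroma, hue */
    color: color(display-p3 1 0.4 0);  /* the color() function with a wider-gamut colour space */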

Typography

In the beginning, typography on the World Wide Web was non-existent. Your browser used whatever was available on your operating system.

That situation continued for quite a while. You’d have to guess which fonts were likely to be available on Windows or Mac.

If you wanted to use a sans-serif typeface, there was Arial on Windows and Helvetica on the Mac. Verdana was a pretty safe bet too.

For a while your only safe option for a serif typeface was Times New Roman. When Matthew Carter’s Georgia was released, it was a godsend. Here was a typeface specifically designed for the screen.

Later Microsoft released another four fonts designed for the screen. Four new fonts! It felt like we were being spoiled.

But what if you wanted to use a typeface that didn’t come installed with an operating system? Well, you went into Photoshop and made an image of the text. Now the user had to download additional images. The text wasn’t selectable and it was a fixed width.

We came up with all sorts of clever techniques to do what was called “image replacement” for text. Some of the techniques involved CSS and background images. One of the techniques involved Flash. It was called sIFR: Scalable Inman Flash Replacement. A later technique called Cufón converted the letter shapes into paths in Canvas.

All of these techniques were hacks. Very clever hacks, but hacks nonetheless. They were clever and they worked but they always reminded me of Samuel Johnson’s description of a dog walking on its hind legs:

It is not done well but you are surprised to find it done at all.

What if you wanted to use an actual font file in a web page?

There was only one browser that supported font embedding: Microsoft’s Internet Explorer. The catch was that you had to use a proprietary font format called Embedded Open Type.

Both type foundries and browser makers were nervous about allowing regular font files to be embedded in web pages. They were worried about licensing. Wouldn’t this lead to even more people downloading fonts illegally? How would the licensing be enforced?

The impasse was broken with a two-pronged approach. First of all, we got a new font format called Web Open Font Format or WOFF. It could be used to take a regular font file and wrap it in a light veneer of metadata about licensing. There’s a sequel that’s even better than the original, WOFF2.

The other breakthrough was the creation of intermediary services like Typekit and Fontdeck. They would take care of serving the actual font files, making sure they couldn’t be easily downloaded. They could also keep track of numbers to ensure that type foundries were being compensated fairly.

Over time it became clear to type foundries that most web designers wanted to do the right thing when it came to licensing fonts. And so these days, you can probably license a font straight from a type foundry for use on the web and host it yourself.
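Self-hosting looks something like this; the font name and file path here are hypothetical:

    @font-face {
      font-family: "Example Sans";
      src: url("/fonts/example-sans.woff2") format("woff2");
      font-weight: 400;
      font-display: swap;
    }

    body {
      font-family: "Example Sans", sans-serif;
    }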

You might need to buy a few different weights. Regular. Bold. Maybe italic. What about extra bold? Or a light weight? It all starts to add up, especially for the end user who has to download all those files.

I remember being at the web typography conference Ampersand years ago and hearing a talk from Nick Sherman. He asked us to imagine one single font file that could go from light to regular to bold and everything in between. What he described sounded like science fiction.

It is now science fact, indistinguishable from magic. Variable fonts are here. You can typeset text on the web to be light, or regular, or bold, or anything in between.

When you use CSS to declare the font-weight property, you can use keywords like “normal” or “bold” but you can also use corresponding numbers like 400 or 700. There’s a scale with nine options from 100 to 900. But why isn’t the scale simply one to nine?

Well, even though the idea of variable fonts would have been pure fantasy when this part of CSS was being specced, the authors had some foresight:

One of the reasons we chose to use three-digit numbers was to support intermediate values in the future.

With the creation of variable fonts, Håkon Wium Lie added:

And the future is now.

On today’s web you could have 999 font-weight options.
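As a sketch of what that looks like in CSS (again, the font name and path are hypothetical), a single variable font file can declare a whole weight range, and then any value within that range is fair game:

    @font-face {
      font-family: "Example Variable";
      src: url("/fonts/example-variable.woff2") format("woff2");
      font-weight: 100 900; /* one file covers the whole range */
    }

    h1 {
      font-family: "Example Variable", sans-serif;
      font-weight: 650; /* an in-between weight that isn’t one of the nine keyword stops */
    }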

Images

In the beginning, the World Wide Web was a medium for text only. There were no images and certainly no videos.

In an early mailing list discussion, there was talk of creating a new HTML element for images. Perhaps it should be called “icon”. Or maybe it should be more generic and be called “embed”. Tim Berners-Lee said he imagined using the rel attribute on the A element for embedding images.

While this discussion was happening, Marc Andreessen popped in to say that he had just shipped a new HTML element in the Mosaic browser. It’s called IMG and it takes an attribute called SRC that points to the source of the image.

This was a self-closing tag so there was no way to put fallback content in between the opening and closing tags if the image couldn’t be displayed. So the ALT attribute was introduced instead to provide an alternative description of the image.

For the images themselves, there were really only two choices. JPG for photographic images. GIF for icons or anything that needed basic transparency. GIFs could also do animation and today, that’s pretty much all they’re used for. That’s because there was a concerted campaign to ditch the GIF format on the web. Unisys, who owned the rights to a compression algorithm used by the GIF format, had started to make noises about potentially demanding license fees for its use.

The Portable Network Graphics format—or PNG—was created in response. It was more performant and it allowed you to have proper alpha transparency.

These were all bitmap formats. What if you wanted a vector format for images that would retain crispness at any size or resolution? There was only one option: Flash. You’d have to embed a Flash movie in your web page just to get the benefit of vector graphics.

By the 21st century there were some eggheads working on a text-based vector file format that could be embedded in webpages, but it sounded like a pipe dream. It was called SVG for Scalable Vector Graphics. The format was dreamed up in 2001 but for years, not a single browser supported it. It was like some theoretical graphical Shangri-La.

But by 2011, every major browser supported it. Styleable, scriptable, animatable vector graphics have gone from fantasy to reality.

There’s more choice in the world of bitmap images too. WebP is well supported. AVIF is gaining support.

The IMG element itself has grown too. You can use the srcset attribute to give the browser a range of images to choose from to best suit the user’s device and network connection. You can use the loading attribute to get lazy loading of images for free—no JavaScript required.
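Something like this, say (the file names, widths, and breakpoint are placeholders):

    <img
      src="photo-small.jpg"
      srcset="photo-small.jpg 600w,
              photo-medium.jpg 1200w,
              photo-large.jpg 2400w"
      sizes="(min-width: 60em) 50vw, 100vw"
      loading="lazy"
      alt="A description of the photograph">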

We now have audio in HTML. No JavaScript required. We now have video in HTML. No JavaScript required.

These elements have been designed with more thought than the IMG element. They are not self-closing elements, by design. You can put fallback content between the opening and closing tags.
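So a video element might look like this, with fallback content (the file name is hypothetical) inside it for browsers that can’t play the video:

    <video src="/videos/talk.mp4" controls>
      <p>Sorry, your browser can’t play this video.
      You can <a href="/videos/talk.mp4">download it</a> instead.</p>
    </video>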

The audio and video elements arrived long after the IMG element. For a long time, there was no easy way to do video or audio on the web.

That was very frustrating for me. The first websites I ever built were for bands. The only way to stream music was with a proprietary plug-in like Real Audio.

Or Flash.

While the web standards were still being worked on, Flash delivered the goods with streaming audio and video. This happened over and over. Flash gave us vector graphics, animation, video, and more. But the price was lock-in. Flash was a proprietary format.

Still, Flash showed the web standards bodies the direction of travel. Flash was the hare. Web standards were the tortoise.

We know how that race ended.

In a way, Flash was like the Research and Development incubator for the World Wide Web. We got CSS animations, SVG, and streaming video because Flash showed that there was an appetite for them.

Until web standards provide a way to do something, designers and developers will reach for whatever tool gets the job done. Take layout, for example.

Layout

In the early days of the web, you could have any layout you wanted …as long as it was a single column.

Before long, HTML expanded to provide some rudimentary formatting for that single column of text. Presentational elements and attributes were invented. And even when elements and attributes weren’t meant to be used for formatting, people got creative.

Tables for layout. A single pixel GIF that could be given width and height. These were clever solutions. But they were hacks. And they were in danger of turning HTML into a presentational language instead of a language for structuring content.

CSS came to the rescue. A language specifically for presentation.

But we still didn’t get proper layout tools. There was a lot of debate in the early days about whether CSS should even attempt to provide layout tools or whether that was a job for a separate technology.

We could lay things out using the float property, but really that was just another hack.

Floats were an improvement over tables for layout, but we only swapped one tool for another. Our collective thinking still wasn’t very web-like.

For example, designers and developers insisted on building websites with a fixed width. This started in the era of table layouts and carried over into CSS.

To start with, the fixed width was 640 pixels. Then it was 800 pixels. Then people settled on the magical number of 960 pixels. Designers and developers didn’t seem at all concerned that people had different sized screens.

That was until the iPhone came out. It caused a panic. What fixed width were we supposed to design for now?

The answer was there all along. Even before the web appeared in mobile devices, it was possible to build fluid layouts that would adapt to screen size. It’s just that the majority of designers and developers chose not to build in this way.

I was pleased that mobile came along and shook things up. It exposed the assumptions that people were making. And it forced designers and developers to think in a more fluid, webby way.

Even better, CSS had expanded to include media queries so it was possible to alter layouts at different breakpoints.

Ethan came along and put a nice bow on it with his definition of responsive design: fluid media, fluid layouts, and media queries.
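A bare-bones sketch of those three ingredients might look like this; the selectors and the breakpoint are arbitrary:

    /* fluid media */
    img {
      max-width: 100%;
    }

    /* a fluid layout, sized in percentages rather than pixels */
    main {
      width: 90%;
      margin: 0 auto;
    }

    /* a media query to adjust things at a wider breakpoint */
    @media (min-width: 50em) {
      main {
        width: 70%;
      }
    }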

I fell in love with responsive web design instantly because it matched how I was already thinking about the web. I was one of the handful of weirdos who insisted on building fluid websites when everyone else was using fixed-width layouts.

But I thought that responsive web design would struggle to take hold.

I’m delighted to say that I was wrong. Responsive web design has become the default!

If I could go back to my past self in the mid 2000s, I’d love to tell them that in the future, everyone would be building with fluid layouts (and also that time travel had been invented apparently).

Not only that, but we finally have proper layout tools for the web. Flexbox. Grid. No more hacks. We’re even getting container queries soon (thanks, Miriam!).
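For instance, a layout that once needed floats, clearfixes, and arithmetic can now be a few declarative lines of CSS (the class name is just an example):

    .gallery {
      display: grid;
      grid-template-columns: repeat(auto-fill, minmax(15em, 1fr));
      gap: 1em;
    }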

Web browsers now are positively overflowing with fantastic design tools that would have been unimaginable to my past self. Support for these technologies is pretty much universal.

When browsers differ today, it’s only in terms of which standards they don’t yet support. There was a time when browsers differed massively in how they handled basic web technologies.

There was a time when being a web developer meant understanding all the different quirks between browsers.

And browser makers spent a ludicrous amount of time reverse-engineering the quirky behaviour of whichever browser was the market leader.

That changed with HTML5. We remember HTML5 for introducing new APIs, new form fields, and new structural elements. But the biggest innovation was completely invisible. For the first time, error-handling was standardised. Browsers had a set of rules they could work from. Once browsers adopted this consistent approach to error-handling, cross-browser differences dried up.

That was good news for web developers. We were sick of dealing with different browsers taking different approaches. We had been burned with JavaScript.

JavaScript

In the beginning, there was no scripting on the web, just like there was no styling. Tim Berners-Lee wasn’t opposed to the idea of executing arbitrary code on the web. But he pointed out that you’d need everyone to agree on which programming language browsers would use.

You need something really powerful, but at the same time ubiquitous. Remember a facet of the web is universal readership. There is no universal interpreted programming language.

This problem of which language to choose was solved in the usual way. Brendan Eich, who was working at Netscape, created a completely new programming language in just ten days. It would be called… LiveScript. Then the marketing department got involved and because Java was the new hotness at the time, this scripting language was renamed to… JavaScript. Even though it has nothing to do with Java. Java is to JavaScript as ham is to hamster.

The important thing is that multiple browsers implemented it. Then the hype started. We were told about this great new technology called DHTML. The D stood for dynamic! This would allow us to programmatically manipulate elements in a web page.

But… the two major browsers at the time, Netscape Navigator and Internet Explorer, used two completely incompatible syntaxes. For Netscape Navigator you’d use document.layers. For Internet Explorer it was document.all.

This was when developers said enough was enough. We wanted standards. The Web Standards Project was formed and we lobbied browser makers to implement web standards, like CSS and also the Document Object Model. This was a standardised way of manipulating elements in a web page. You could use methods like getElementById and getElementsByTagName.

That worked fine, but it was yet another vocabulary to learn. If you already knew CSS, then you already understood how to get an element by ID and get elements by tag name, but with a different syntax.

John Resig created jQuery so that you could use the CSS syntax to do DOM scripting. There were lots of other JavaScript libraries released around the same time, but jQuery was by far the most popular. Clearly, this syntax was something that developers wanted.

Now we no longer need jQuery. We’ve got querySelector and querySelectorAll. But the reason we no longer need jQuery is because of jQuery. Just like Flash, jQuery showed what developers wanted. And just as with Flash, the web standards took more time. But now jQuery is obsolete …precisely because it was so successful.
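To illustrate with a hypothetical selector, here’s the same task in jQuery and in today’s standard DOM:

    // jQuery popularised using CSS selectors for DOM scripting
    $('.navigation a').addClass('current');

    // the same idea is now built into the DOM itself
    document.querySelectorAll('.navigation a').forEach(function (link) {
      link.classList.add('current');
    });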

It’s a similar story with Sass and CSS. There was a time when Sass was the only way to have a feature like variables. But now with custom properties available in CSS, Sass is becoming increasingly obsolete …precisely because it was so successful and showed the direction of travel.
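A quick sketch of that shift, with a made-up brand colour:

    /* Sass */
    $brand-colour: #663399;
    a {
      color: $brand-colour;
    }

    /* native CSS custom properties */
    :root {
      --brand-colour: #663399;
    }
    a {
      color: var(--brand-colour);
    }

Custom properties go further than Sass variables, too: they live in the browser, so they can be scoped to elements and updated at runtime.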

In a way, jQuery and Sass (and maybe even Flash) were kind of like polyfills. That’s a term that my friend Remy Sharp coined for JavaScript libraries that fill in the gaps until browsers have implemented web standards.

By way of explanation to anyone in the States, Polyfilla is a brand name in the UK for what you call spackling paste. So JavaScript libraries like jQuery spackled over the gaps in browsers.

But some capabilities can’t be polyfilled. If a browser doesn’t provide API access to a particular sensor, for example, there’s no way to spackle that gap.

For quite a while, if you wanted access to device APIs, you’d have to build a native app. But over time, that has changed. Now browsers are capable of providing app-like experiences. You can get location data. You can access the camera. You can provide notifications. You can even make websites work offline using service workers.
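Registering a service worker, for example, is only a few lines of JavaScript; the script path here is hypothetical:

    // feature-detect first, then register the service worker script
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/serviceworker.js');
    }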

Native apps had all these capabilities before web browsers. Just as with Flash and jQuery, native apps pointed the way. The gap always looks insurmountable to begin with. But over time, the web always manages to catch up.

At the beginning of 2021, Ire said:

By the end of the year, I would predict that any major native mobile application could be instead built using native web capabilities.

The present

The web has come a long way. It has grown and evolved. Browsers have become more and more powerful while maintaining backward compatibility.

In the past we had to hack our way around the technological limitations of the web and we had a long wish list of features we wanted.

I’m not saying we’re done. I’m sure that more features will keep coming. But our wish list has shrunk.

The biggest challenges facing the World Wide Web today are not technical challenges.

Today it is possible to create beautiful websites that make full use of colour, typography, layout, animation, and more. But this isn’t what users experience.

This is what users experience. A tedious, frustrating game of whack-a-mole with websites that claim to value our privacy while asking us to relinquish it.

This is not a technical problem. It is a design decision. The decision might not be made by anyone with designer in their job title, but make no mistake, business decisions have a direct effect on user experience.

On the face of it, the problem seems to be with the business model of advertising. But that’s not quite right. To be more precise, the problem is with the business model of behavioural advertising. That relies on intermediaries to amass huge amounts of personal data so that they can supposedly serve up relevant advertising.

But contextual advertising, which serves up ads based on the content you’re looking at, doesn’t require the invasive collection of personal data. And it works. Behavioural advertising, despite being a huge industry that depends on people giving up their privacy, doesn’t even work very well. And on the few occasions when it does work, it just feels creepy.

The problem is not advertising. The problem is tracking. The greatest trick the middlemen ever pulled was convincing us that you can’t have effective advertising without tracking. That is false. But they’ve managed to skew our sense of perspective so that invasive advertising seems inevitable.

Advertising was always possible on the web. You could publish anything and an ad is just one more thing you could choose to publish. But tracking was impossible. That’s because the early web was stateless. A browser requests a resource from a server and once that transaction is done, they both promptly forget about it. That made it very hard to do things like online shopping or logging into an account.

Two technologies were created later that enabled state on the web. Cookies and JavaScript. If these technologies had been limited to a same origin policy, they would have nicely solved the problems of online shopping and authentication.

But these technologies work across domains. Third-party cookies and third-party JavaScript enable users to be tracked as they move from site to site. The web has gone from having no state to having too much.

There is hope. Browsers like Firefox and Safari are blocking third-party cookies by default. Personally, I’d love it if third-party JavaScript got the same treatment. You can also install add-ons to make your browser more secure, although these add-ons are often labelled ad-blockers, which is a shame. Because the problem is not advertising. The problem is tracking.

Perhaps none of this applies to you anyway. You may be thinking that this is a problem for websites. But you build web apps.

Personally I’m not keen on the idea of dividing the entirety of the World Wide Web into two vaguely-defined categories. I have yet to hear a good definition of “web app” other than “a website that requires JavaScript to work.”

But the phrase “single page app” has a more definite meaning. It refers to an architectural decision. That decision is to reinvent the web browser inside a web browser.

In a sense, it’s a testament to the power of JavaScript that you can choose to do this. Browsers render content and perform navigations, but if you’d rather recreate that functionality from scratch in JavaScript, you can.

But should you? Browsers have increased in complexity so that we can build without complexity. We can use the built-in power of modern HTML, CSS, and JavaScript to make web browsers do the work. If we work with the grain of the web, we can accomplish more and more with less and less code.

But that isn’t what happened. Instead developers have recreated form controls like dropdowns and datepickers from scratch using divs and lashings and lashings of JavaScript.

Perhaps this points to some missing features on the web. It’s still too hard to style native dropdowns and datepickers (but that’s being worked on—there’s standards work underway to give us more styling control over form elements). But that doesn’t explain why developers would choose to recreate something like a button using divs and JavaScript when the button element already exists and can be styled any way you like.
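For the record, a native button can be restyled however you like with a few lines of CSS; the markup, class name, and colours here are just an example:

    <button type="button" class="fancy">Add to cart</button>

    .fancy {
      border: 0;
      border-radius: 0.5em;
      padding: 0.5em 1em;
      background-color: #663399;
      color: #fff;
    }

And the button element gives you keyboard access and focus handling for free.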

I think there’s a certain mindset being applied to web development here. And that mindset comes from the world of software. Again, it’s a testament to how far the web has come that it can be treated as a software platform on par with operating systems like iOS, Android, or Windows. There’s a lot to be learned from the world of software development, like testing, for example. But the web is different. When a user navigates to a URL, it shouldn’t feel like they’re installing a piece of software.

We should be aiming to keep our payloads as small as possible. And given how powerful browsers have become, we need fewer and fewer dependencies—fewer and fewer polyfills.

But performance has gotten worse. Payloads have gotten bigger. Dependencies like JavaScript frameworks have become more and more widespread even as they became less and less necessary.

When asked to justify the enormous payloads, web developers have responded by saying that users’ expectations have changed. That is correct, but not in the way that I think they mean.

When I talk to people about using the web—especially on mobile—their expectations are that they will have a terrible experience. That websites will be slow to load. And I guarantee you that none of them are saying, “Well I’d be annoyed if this were a website but seeing as this is a web app, I’m absolutely fine with this terrible experience.”

I said that the biggest challenges facing the World Wide Web today are not technical challenges. I think the biggest challenge facing the web today is people’s expectations.

There is no technical reason for websites or web apps to be so frustrating. But we have collectively led people to expect a bad experience on the web.

Our intentions may have been good. We thought users wanted nice page transitions and form elements that were on-brand. But if you talk to people, you find out that what they want is to accomplish their task without megabytes of JavaScript getting in the way.

There’s a great German word, “Verschlimmbessern”: the act of making something worse in the attempt to make it better. Perhaps we verschlimmbessert the web.

Let’s step back. Get some perspective. Instead of assuming that a single page app architecture is needed, ask what users need to accomplish. Instead of assuming you need a CSS framework or a JavaScript library, see what you can do in browsers today with native CSS and vanilla JavaScript. Don’t include a bunch of dependencies by default just in case you might need them. Instead, as Rachel puts it:

Stop solving problems you don’t yet have.

Lean into what web browsers can accomplish today. If you find something missing, that’s the time to reach for a library …but treat it like a polyfill. Whereas web standards stick around, every library and framework comes with a limited lifespan. Treat them as cattle, not pets.
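In practice that can be as simple as feature detection: only load the library when the browser lacks the feature. The file path here is hypothetical:

    if (!('IntersectionObserver' in window)) {
      var script = document.createElement('script');
      script.src = '/js/intersection-observer-polyfill.js';
      document.head.appendChild(script);
    }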

I understand that tools and frameworks can make your life easier. And if we’re talking about server-side frameworks, then I say “Go for it.” Or if you’re using build tools that sit on your computer to do version control, linting, pre-processing, or transpiling, then I say “Go for it.”

But once you make users download tools or frameworks, you’re making them pay a tax for your developer convenience.

We need to value user needs above developer convenience. If I have the choice of making something the user’s problem or making it my problem, I’ll make it my problem every time. That’s my job.

We need to change people’s expectations of the World Wide Web, especially on mobile. Otherwise, the web will be lost.

The future

Two years ago, I had the great honour of being invited to CERN to mark the 30th anniversary of the original proposal for the World Wide Web. One of the other people there was the journalist Zeynep Tüfekçi. She was on a panel along with Tim Berners-Lee and other luminaries of the early web. At the end of the panel discussion, she was asked:

What would you tell the next generation about how to use this wonderful tool?

She replied:

If you have something wonderful, if you do not defend it, you will lose it. If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.

I believe that we can save the web. I believe that we can change people’s expectations. We’ll do that by showing them what the web is capable of.

It sounds like a moonshot. But, y’know, moonshots aren’t made possible by astronauts. They’re made possible by people like Poppy Northcutt in mission control. Katherine Johnson running the numbers. And Margaret Hamilton inventing the field of software engineering to create the software for the lunar lander. Individual people working together on something bigger than any one person.

There’s a story told about the first time President Kennedy visited NASA. While he was getting a tour of the place, he introduced himself to a janitor. And the president asked the janitor what he did. The janitor answered:

I’m helping put a man on the moon.

It’s the kind of story that’s trotted out by company bosses to make you feel good about having your labour exploited for the team. But that janitor’s loyalty wasn’t to NASA, an organisation. He was working for something bigger.

I encourage you to have that sense of perspective. Whatever company or organisation you happen to be working for right now, remember that you are building something bigger.

The future of the World Wide Web is in good hands. It’s in your hands.
