The (other) Web we lost

TL;DR: In the aftermath of the Browser Wars, the W3C and developer groups like the Web Standards Project worked long and hard to rebuild a unified, un-fragmented Web. But these last few years, we developers have gone and re-fragmented it all over again, all by ourselves. Maybe we should think about what we are losing, before we lose this Web for good.

Just on a year ago, long-time Web industry figure Anil Dash wrote “The Web We Lost”, a lament for the earlier, participatory Web of blogging, before all our writing and photos and videos and thoughts and lives ended up in silos like Facebook and Twitter and Instagram and YouTube. It resonated with many, particularly those who had lived through those days (many of whom, ironically, went off to work at these very silos).

Particularly if this period of the Web, up until say the mid 2000s, predates your full-time Web use, you should take the time to read it (not just skim it). The Web really has changed in the last decade, and definitely not always for the best. Anil writes:

We’ve lost key features that we used to rely on, and worse, we’ve abandoned core values that used to be fundamental to the web world

and observes that one reason he wrote the piece was “for the people creating the next generation of social applications to learn a little bit of history, to know your shit”.

Maybe there’s something in the water, as in the past couple of weeks, Faruk Ateş, and Jeremy Keith have both ruminated a little on this topic.

As it happens, for much of the last year I too have been thinking about something I feel we’ve lost from an earlier Web, though it’s something different: a much more constrained aspect of the Web, something hidden from sight for almost all of the Web’s users. Its code. I think here too ‘we’ve abandoned core values that used to be fundamental to the web world’.

The Browser Wars

Anil speaks of the Web since around 2000, and how it has changed. But I am thinking back further, to the mid 1990s. The really early days of the Web. Now, almost all of us will have heard of the browser wars. When prodded as to what actually happened during these times, to explain what this war was about, most would reply that it was a time when Netscape and Microsoft fought to control the web, by making their browser dominant.

But how did they try to do this? In a way, the browser wars really should be called the HTML wars, as in the earliest battles of this long war (initially fought more between Netscape, its predecessor Mosaic, and numerous other browsers built by small teams and individuals, both commercial and open source, before Internet Explorer even arrived) the battlefield (to really start straining the metaphor) was the language of HTML.

Back then there was no “standard” version of HTML. Instead, browsers introduced new language features with new browser versions. Features like the img tag, lists, tables and frames, the font tag, the embed tag, and on and on. Whole languages like JavaScript, and complex architectures like the DOM. In many ways this was good. The Web increased in sophistication, and gained vital features quickly.

But none of these features was standardized at first; none was developed by a standards organization, and each was initially proprietary to the browser that first implemented it, before other browsers hurriedly reverse engineered it and then implemented their own, typically at least somewhat incompatible, version.

One thing we learned from this period is that early often beats better: the img ‘tag’, for example, proposed by Marc Andreessen despite the concerns of Tim Berners-Lee (who favoured a more extensible embedding mechanism), won, and continues to haunt us in this responsive age. Decisions we make when it comes to the architecture of the Web can have a very long half-life, and so need to be made carefully.

It’s commonplace for Web developers today to lament the (let’s face it, increasingly trivial) inconsistencies between browsers as a huge pain point in Web development. But imagine a landscape where developers rushed to implement sites using the very latest, just-implemented, poorly documented features with every new browser release.

It was this chaotic environment which saw the formation of the W3C, whose role, still often misunderstood, was, and remains, to bring major stakeholders together, browser developers among them, to standardize these innovations. A little later, Web developers felt the need to advocate for the better adoption of these emerging standards, and so was born the Web Standards Project (which, ironically, earlier this year declared “Our Work Here is Done”. Why ironically? We’ll see in a moment.)

But I think we’ve lost sight of what the Browser Wars, and the subsequent foundation of the W3C and the Web Standards Project, were about — avoiding the “Balkanization” of the Web. We knew that the importance of the Web was its universality, its interoperability. And we knew that was at risk, with the Web fragmented by browsers.

And, quite incredibly if you’re familiar with the nature of the Web in the late 1990s (Netscape’s JSSS, JavaScript StyleSheets, versus CSS; incompatible, undocumented DOMs; JavaScript versus JScript and VBScript; a bitter battle between Netscape Communications and Microsoft), we created a Web that wasn’t balkanized, by standardizing HTML, CSS, JavaScript and the DOM, and by selling far from enthusiastic developers on the benefits of this Web. It was a remarkable achievement by so many people: by those working at companies like Microsoft, Apple, Mozilla and Google, by many who worked largely thanklessly to standardize these technologies, and by those who advocated for, and educated their fellow developers about, a standards-based Web. Some have become well known; most worked hard for little outward reward. But all were loosely connected by a sense that this was how the Web should be.

And then, little by little, like the apocryphal frog boiling to death so slowly it didn’t notice until too late, we threw it away.

  • We allowed, encouraged, embraced the fragmentation of the DOM with libraries and frameworks. Each with a slightly different way to access attributes, or add elements, or traverse the DOM.
  • We fragmented JavaScript with languages that compile to it.
  • We fragmented HTML with any number of templating systems.
  • We fragmented CSS with preprocessors like Sass and LESS.
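
To make that DOM fragmentation concrete, here is a minimal sketch, loosely modelled on jQuery-style and Prototype-style conventions. The function names and the fake element are invented for illustration (a real page would use document.createElement), but the point stands: two incompatible surfaces for an identical operation.

```javascript
// A stand-in for a DOM element, so this sketch runs anywhere.
function fakeElement() {
  return { className: '' };
}

// Library A, jQuery-flavoured: a standalone function that takes the element.
function addClassA(el, name) {
  var classes = el.className ? el.className.split(' ') : [];
  if (classes.indexOf(name) === -1) classes.push(name);
  el.className = classes.join(' ');
}

// Library B, Prototype-flavoured: the same operation, but mixed onto the
// element itself as a method, under a different name.
function extend(el) {
  el.addClassName = function (name) {
    addClassA(el, name); // identical behaviour, incompatible surface
  };
  return el;
}

var a = fakeElement();
addClassA(a, 'active');

var b = extend(fakeElement());
b.addClassName('active');

console.log(a.className); // 'active'
console.log(b.className); // 'active'
```

Neither approach is wrong; the cost is that knowledge of one transfers only partially to the other.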

Oh what have we done?

This is not to say that any of these, in and of itself, is bad. Far from it. Each of these technologies and innovations addresses real developer pain. Many point the way forward for future standard Web technologies. But collectively, we’ve taken a relatively simple set of self-contained technologies (HTML, CSS, JavaScript, the DOM), each with its own well defined role, each relatively straightforward to learn and start working with, and we’ve created a chaotic landscape of competing technologies, many of which do exactly the same thing, in slightly different, incompatible ways.

And along the way we’ve introduced dependencies on other languages and environments, because for many of these technologies we’ve also introduced a build stage into our workflow, something the Web previously didn’t have.

What have we gained?

Arguably, developers are more productive now. Perhaps we’ve made our lives easier during the initial development phase of our projects, though I doubt there’s any more than anecdotal evidence to support this belief. Trust me, it’s a commonplace argument for all new development technologies that they make developers more productive. You’ve read some of the literature on that, haven’t you (remember Dash admonishing us to ‘know [our] shit’)? And software engineers learned long ago that only a relatively small percentage of the overall cost of a system lies in its initial development. Maintenance, over years and even decades, is where a very sizeable percentage of all system costs occurs (yes, there is a lot of literature on this too).

But what have we lost?

Once upon a time, what made developing for the Web different from most other development was that we all spoke a common language, a Koine, a lingua franca. Compare this with, for example, developing for Windows, for many years the single largest platform, where a myriad of languages, frameworks and development environments sit on top of the underlying Windows API. This commonality brought with it a number of distinct benefits, among them making Web technologies easier to learn, and making what we build with them more maintainable. And it helped ensure the long-term viability of your knowledge and expertise as a Web developer.

Learnability

Having a common set of technologies for front-end development made learning to develop for the Web easier. Finding tutorials on, knowledgeable experts in, and meetups dedicated to these technologies was relatively straightforward, as was the roadmap for acquiring skills and knowledge. You could pick up some HTML and CSS, and build something useful. Over time you could deepen your knowledge of these, and add an understanding of JavaScript and the DOM to extend your capabilities. There was a network effect to having such a large group of developers sharing common languages and concepts.

Do we honestly want to diminish this network effect by fragmenting the technologies of the web, by creating silos of expertise, with little by way of a common language? We should at least be aware of this potential consequence of our choices. Aware of not just what we (typically as individuals) might gain from our choices, but what we collectively, what the Web, may lose.

Maintainability

Having a common set of technologies makes maintaining existing code bases more straightforward. The underlying technologies of HTML, CSS, JavaScript and the DOM are stable over long periods of time, unlike most frameworks, libraries, languages and preprocessors (not to mention the toolsets and languages these often rely on). Will the framework your service relies on be maintained 5 years from now? And when we rely on less widely used technologies, the number of developers available to maintain a codebase diminishes significantly.

Traditionally, more than half of the cost of a complex system has come during its maintenance phase. And while on the Web we’ve been more likely than traditional software projects to throw out and start all over again, as what we build for the Web becomes increasingly complex and mission critical, the maintenance phase of projects will become increasingly long and costly. And again, it’s pretty well understood that maintainability has a lot more to do with the ability of disparate developers to understand and reason about a code base over a long period of time than it does with the ease of using find and replace.

Interoperability

One of the core principles of the Web is “interoperability”. While this specifically addresses the concern that “computer languages and protocols … avoid the market fragmentation of the past”, I’d argue that fragmentation should not only be a concern when it comes to the systems our code runs on. We should also be concerned about fragmenting the community of Web developers. The fewer developers with a working knowledge of a technology, the less interoperable that technology ultimately is.

There’s also the issue of how interoperable these various technologies are with one another. Say you like the way Sass implements variables, but also want LESS’s @debug feature (to give one of potentially countless examples). You need all of LESS, and all of Sass, and probably a mess of frigging around. The monolithic approach of so many Web ‘innovations’ has a significant impact on how interoperable they are with one another.

Longevity

If you’ve spent years developing knowledge and expertise in Flash or Silverlight/WPF, that expertise is increasingly useless. The same will happen for jQuery, as it has for other, once seemingly dominant JavaScript libraries and frameworks such as Prototype. It will happen to all the libraries and frameworks we invest in so heavily today, AngularJS, Bootstrap, you name it. Very few technologies last years, let alone decades. As someone investing a reasonable amount of my time and effort in learning a technology, I’d want to be confident that effort was well placed.

What’s to be done?

jQuery, Backbone, AngularJS, CoffeeScript, Bootstrap, Sass, LESS (just to name some of the most popular frameworks, libraries, languages and preprocessors we’ve developed over the last few years to address challenges we’ve identified as we attempt to make the Web do more and more complex things, not to call out anyone in particular) are sophisticated, powerful technologies, well entrenched in a great many workflows, used by thousands, tens of thousands, or more. They aren’t going away. And following them will come others. Perhaps the slow submersion of the underlying technologies (HTML, CSS, the DOM, JavaScript) is inevitable. After all, few if any developers write assembler, let alone machine code, any more. But, as Anil Dash wrote about the other Web we lost, “we’ve abandoned core values that used to be fundamental to the web world”, and I think that’s true of the code we write as well.

But just what might these core values be? That’s not hard to answer: they’re in fact explicitly outlined by the W3C (once again, Anil’s words: “learn a little bit of history, to know your shit”).

[The] W3C aims for technical excellence but is well aware that what we know and need today may be insufficient to solve tomorrow’s problems. We therefore strive to build a Web that can easily evolve into an even better Web, without disrupting what already works. The principles of simplicity, modularity, compatibility, and extensibility guide all of our designs

W3C Goals and Operating Principles, their emphasis

Are these principles of simplicity, modularity, compatibility, and extensibility guiding developers when they design and implement new languages? Frameworks? Preprocessors? Certainly in many cases they are. This is particularly true of polyfills, and ‘prollyfills’. These don’t aim to ‘boil the ocean’ by providing a huge array of functionality, but rather follow the ‘small pieces loosely joined’ model. They do one thing, and do it well.
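
A classic polyfill embodies exactly this pattern: a small, self-contained patch that fills in one standard feature where it is missing, and steps aside everywhere else. Here is a simplified sketch, modelled on the common Array.prototype.forEach polyfills of the era (not the verbatim reference implementation):

```javascript
// Patch Array.prototype.forEach only where the standard method is
// missing; in any modern engine this whole block is a no-op.
if (!Array.prototype.forEach) {
  Array.prototype.forEach = function (callback, thisArg) {
    for (var i = 0; i < this.length; i++) {
      if (i in this) { // skip holes in sparse arrays, as the spec requires
        callback.call(thisArg, this[i], i, this);
      }
    }
  };
}

var total = 0;
[1, 2, 3].forEach(function (n) { total += n; });
console.log(total); // 6
```

Once browsers catch up, code written against the polyfill keeps working and the patch simply stops being applied: small, compatible, extensible.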

But in many cases, our solutions aren’t modular, or simple, or compatible (particularly with one another). In fact I’d argue this is the very heart of the issue. Rather than address a specific pain point in a simple, modular, interoperable way, our solutions often become increasingly complex, ad hoc agglomerations of solutions to all kinds of problems. Here, for example, is what the designers of the CSS preprocessor Sass have to say about its design principles:

there’s no formal process for how we decide to add new features to Sass. Good ideas can come from anyone from anywhere and we’re happy to consider them from the mailing list, IRC, twitter, blog posts, other css compilers, or any other source

which rather calls to mind the Homer

The Homer, a car designed by Homer Simpson

Small pieces loosely joined

In 2002, one of the Web’s pioneers, David Weinberger (among other things, co-author of The Cluetrain Manifesto), wrote “Small Pieces Loosely Joined”, a way of thinking about what makes the Web different. I’ve long thought it applies well to the technologies of the Web, and should guide us as we build for the Web, whether it’s our own sites, those for our clients or employers, or the very technologies that move the Web forward.

If each time we came to solve a problem we thought “how small a problem can I solve?”, or even “what really is the problem here?”, and then solved it in a way that is as modular, compatible, and extensible as possible, we’d go a long way toward taming the explosion of complexity we’ve seen over the last half a decade or so, and toward returning, at least in part, to the other Web we lost.

Perhaps this Web simply had to grow up, to meet the challenges of the ever more complex artefacts we’re building. When the Web was about documents and sites, perhaps we could keep things simple; but in an age of apps, that’s a luxury we can’t afford. Perhaps the technical underpinnings of all platforms of necessity fragment over time.

But, before we lose this Web for good, I think we owe it to that Web to really understand it, what makes it tick. And when we make technical and architectural choices about what the Web looks like, we shouldn’t just focus on what we (as individuals) gain, but on what costs there are to this Web as well.

14 responses to “The (other) Web we lost”:

  1. Brilliant article. I do think these things go in cycles though, and that the current fragmentation will be standardised, and that over time that in turn will become fragmented again as people come up with diverse solutions in order to work effectively.

    It’s inevitable that the technology changes and that some domain-specific knowledge becomes useless (and I’m the first to admit I grieved over the loss of Flash!), but I would say that doing something is better than nothing, and that there are a lot of transferable skills that come from time spent coding in a language.

    Having come back to web development, I’m struck by how fragmented things are, but the landscape has changed, with more awareness and professionalism amongst devs, so while we might have strayed from the standards it won’t be too long before the new tech is merged back into the great repository in the sky.

    Maybe this is the call to arms for the next phase of standardisation.

  2. I’ve been struggling with this quite a bit myself lately. I’ve moved away from jQuery to old-fashioned vanilla JS, but have been feeling the pull of Sass as more people (and specifically, more web shops) are looking for people with experience in it.

    I also manage an open-source front-end boilerplate, Kraken, and have had many folks ask for a Sass version to make customizing it easier (via variables and whatnot). In making the conversion to Sass, the thing that struck me was the added complexity.

    Multiple files. Nested folders. Interdependent variables, mixins, and functions. Compile/process/build before you can see your work. And the instructions for people working with it are more complex, as I need to accommodate both Sass and vanilla CSS users.

    There are certainly Sass features I’d love to see pulled into CSS, but I also miss and love the simplicity of the vanilla variants of web technologies.

  3. @Martin Coulthurst — I think jQuery actually provides a good example of this: “it won’t be too long before the new tech is merged back to the great repository in the sky.”

    New standard APIs like element.classList, document.querySelector, and Array.prototype.forEach bring jQuery-style methods of DOM manipulation into vanilla JS… for the better.
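
As a minimal, runnable illustration of that point (using a plain array so it needs no browser DOM, and with the library-style helper shown only as a hypothetical comment), iteration that once leaned on a library's each-style method can use the standard built-in directly:

```javascript
// Before: library-specific iteration, e.g. a hypothetical $.each(items, fn).
// After: the standardized built-in does the same job, no library required.
var items = ['a', 'b', 'c'];
var out = [];
items.forEach(function (item, i) {
  out.push(i + ':' + item);
});
console.log(out.join(',')); // '0:a,1:b,2:c'
```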

  4. Some great points, coming at an important time in the history of the web. In many ways we are seeing an advance of technology just like any other. Some parents think it critical their children learn to drive a manual transmission car before an automatic. Others think it best to have children drive an automatic first. Now we have two camps — those that can drive manual and those that cannot — fragmented and not speaking the same language. This is the unfortunate consequence of any technological advance — silos and camps.

    Meanwhile — more people are able to drive because of automatic transmissions, or if that’s too much of a stretch, it is easier to drive now because of automatics. In the web, more people are able to build using frameworks and “wrapper” languages, and we are all able to build more in shorter time periods. We may have lost the comfort and ease of the language of the few, but the craft is extended to more craftsmen — and the craft itself is extended.

    I’m certainly in favor of teaching “vanilla” HTML, CSS and JS first. It’s critical to understand the underlying principles — that “this does this because of that.” But I’m not in favor of halting a technological advance. The advantages far outweigh the difficulties in communication.

  5. Nah, this is all overblown. I’ve been making web sites since ’97. DOM library competition, template languages, and CSS compilers are NOT the same kind of problem as IE vs Netscape. They might look similar if you squint and whine, but those fragmented competitors all boil down to the standard tech. They all use the same musculature in modern browsers. They, in fact, accelerated the underlying standardization we now have by hiding the differences for us. Once the differences were hidden, the browser makers lost most standing to defend the differences. They made it easy to move forward with their interfaces. Version after version since then has seen things standardize and advance quickly, and with more uniformity than ever. And the seed of destruction for all the template languages and DOM libraries is already rooted and growing: web components. Then we will truly have small pieces, loosely joined. The CSS preprocessors will hang around longer, but web components will kill them too. Once you are shipping things in small, componentized pieces encapsulated with shadow DOM, the weight of complex CSS preprocessing will become unsustainable.

    The very near future will again see the dominance of HTML, JavaScript and CSS. Libraries and boilerplates and frameworks will all be forced to shrink and split to match the scale of web components. This is part of why I’ve invested myself in creating HTML.js. It’s small, and the thinnest possible sugar for the vanilla DOM, a much better friend for web component developers than jQuery could ever be. The future is vanilla. Bet on it.

  6. Great article — I feel like the majority of the technologies, helpers, templating, frameworks etc we use to develop with require the caveat of ‘use responsibly and with moderation.’
    Interoperability is definitely one of the major sticking points I’m seeing with all of these applications we’re creating to help us work faster; we already have fragmentation with mobile devices, and browser-specific apps (thanks, Chrome). We keep creating these one-size-fits-all solutions that operate like a Swiss Army knife, when all you need is the corkscrew.

    Particularly with CSS frameworks like Bootstrap and Foundation, and with JavaScript libraries like jQuery, we’re using these things to speed up our workflow at the cost of performance for the people who have to use the application or website, which feels kind of selfish — we’re shifting the stress from us to someone else. Which is why I’ve started divorcing myself from jQuery slowly where possible (I’m a designer, be nice!)

    As far as preprocessors go, I like the idea that something exists separate to CSS that compiles into CSS — it allows us to have developer-only comments, split up CSS files (without extra HTTP requests), and write our own helper functions for reducing repetition. I don’t want those things to be part of CSS. As long as we are mindful of the output of what we create, maintainability and longevity shouldn’t be a major issue.

    It is, however, concerning when I read Eppstein’s quote that there is no standard — and while SCSS and LESS syntax aren’t too far removed from CSS — it’s a concern to me that we could end up with something like CoffeeScript. I think, at the very least, a system designed to aid a language should have syntax mostly derived from its originator, so as to be understood by someone who is proficient in that originating language. It’s why I’ve never picked up CoffeeScript, used the indented Sass syntax (the alternative to SCSS), or picked up Stylus.

    I use Sass (SCSS syntax) to separate out my stylesheets into a group of partials that get compiled into the one stylesheet. It helps me with maintainability: I can comment the hell out of it and choose to render only the comments I want public. I have an entire sheet of variables for typography, grid, and layout, and one of the few mixins I actually use allows me to switch my stylesheet to having no media queries, for IE8 support and below. For loops allow me to make nth-of-type animations with little time cost, etc.

    It enables me to efficiently support and enhance what a site can offer, and who it can offer it to, in drastically less time — and in the contracting world, that time can sometimes mean the difference between support and no support.

    I agree these aids shouldn’t come at a cost to the people using the applications/websites that we are creating, and as tools their usage should always be a matter of preference and never mandatory, otherwise the very group of people they’re designed to help end up worse off. It certainly hasn’t taken long for our workflow tools to become part of HR and recruiter shopping lists, and when that happens it creates a barrier for any newcomer that doesn’t know any better, but also forces people to spend vast amounts of time learning tools rather than core technologies. Which begs the question: are our tools for expediency making us worse at the stuff we’re trying to do better?

    • By: JezChatfield
    • December 4th, 2013

    In the 1970s we taught programming by looking at machine code, then assembler, and finally introduced high-level languages like FORTRAN, BASIC and COBOL. The analogy I’d draw is that HTML, CSS and, to a great extent, JavaScript are the low-level structures. And now people are learning higher-level abstractions that let them do more specific functions more easily. There are divided roles — how many people master computer science and can develop new methods in AI, develop new user interface models, understand the psychology of human-computer interaction, and grasp enough of business to maximise business functions? It’s a small number of highly competent individuals. The early web was easier to grasp, we asked less of it, and we had a small number of largely highly competent individuals who made the small number of websites we offered work.

    Now, 20 years on, we’re asking that many orders of magnitude more websites deliver serious business value, in less than a second, on a more diverse range of devices.

    You’re going to get specialisation — most people can’t handle the breadth or don’t want to. You’re going to get domain specific languages, and people specialising in those.

    You say “fragmentation” as if it is a bad thing. Personally, I’m fine with it. I don’t see why building websites and apps for hundreds of millions of businesses, with very common structures for most of them, shouldn’t result in large numbers of web devs with no understanding of machine code, no competence in programming microcode, no understanding of gate delays, no facility with assembly language, no appreciation of predicates, and no ability to write a JIT compiler. But they should be able to inspect HTML5 and at least work out which high level DSL is generating their current misery, and fix it.

    There will be a wide range of languages and competencies. And there’ll be a relative handful of serious dudes who grasp the whole thing, well enough to make fundamental changes. That’s been the history of how we, as a species, tackle technology.

    We don’t ask that a modern car mechanic should be able to smelt ore, design materials, create production lines and be an active petrochemical chemist. We expect them to run the diagnostics some clever specialist has put together, and disassemble and replace a part that a robot previously put in place during manufacture. Life evolves. So does tech. Learn the new stuff, and expect it to be replaced in the next decade, by other stuff.

    Quit bitching about a Golden Age in which a minuscule fraction of the current count of websites did less, for a wealthy elite.

  7. Flash was equipped to do what the unstandardized Web could not at the time. As browsers became more sophisticated, there was less need to rely on Flash.

    Of course technologies will be usurped by others; I think it’s just a part of the job. Of course learning and teaching others has to be grounded in the foundations: HTML, CSS and JS.

    But I see no reason to vilify some new mega-JavaScript that pops up; maybe we’re entering a new era of digital interactions. The old works, and the old is standardized, but the world is changing; the web is changing.

    Standards are slow to keep up; there’s no new news there. It’s the developers and designers who will decide what the web is for and how it will be used.
    But this really isn’t the question at hand.

    • By: Phuong Zhou
    • December 6th, 2013

    You don’t really have much to say.

    • By: Benjamin Knight
    • December 9th, 2013

    In my opinion these technologies move the web forward as an application platform, and ideally become a proof-of-concept for later-to-be-standardized language features, as jQuery has done for JavaScript, and I would imagine Sass/LESS will affect CSS similarly, if they haven’t already. Maybe this is fragmentation, but at least it’s fragmentation in the interest of developers rather than the evil-flavored anti-web fragmentation of the browser wars era.

    • By: Nico
    • December 10th, 2013

    Good points.

    The main problem we have is that front-end devs are considering that CSS is a toy and Sass/whatever is the language… of course, Sass is the toy and CSS is the language.

    It is only some comfort.

    • By: Remper
    • December 12th, 2013

    Actually, I don’t see your point. By the same reasoning we could say that “we lost our Java”, since there are so many languages that execute on the JVM. But I don’t mind using Scala instead of Java; inventing new languages and tools is a healthy process. And I can still use my Java libraries with Scala, so I don’t even have to port anything.
    It doesn’t matter which language or technology you choose; they are quite interchangeable. You can choose Backbone instead of Angular, or use CoffeeScript if you like (I don’t). It doesn’t matter, since we have a strong foundation, which is JavaScript, HTML and CSS.
    It’s okay to use different frameworks and technologies. When I see guys writing code in CoffeeScript, I don’t get mad at them, even though I can’t understand why you would ever need a language on top of JavaScript. It’s okay if they are doing that inside their own company or community; this way I don’t have to learn CoffeeScript or even see its strange syntax.
    So it doesn’t matter, actually. All these new technologies, on top of our good old JavaScript, HTML and CSS, are simple enough to understand if you need to, but are also not required if you don’t want them.
    If someone can’t learn more than one library, or switch from one library to another once it becomes obsolete, he or she really should consider a different profession, because this happens all the time in computer science.

    • By: Rob
    • December 15th, 2013

    I’ve been saying this for quite some time while getting funny looks from people who call themselves “web developers” but think JavaScript and jQuery are two completely different things.

  8. Your last paragraph summed up the whole issue for me. If we’re choosing technology because it makes things easier for us, then we have the wrong priorities. What matters is what benefits users, and the web in general, not what speeds up development time or makes our job easier.

    Though personally I haven’t been doing that :-) I never stopped writing vanilla code, never used CSS preprocessors or JS frameworks (and never failed to amaze clients and employers with my productivity!)

    I accept that such tools have a place and a role in enabling productivity for less experienced developers, and I don’t have any kind of elitist notion that that’s an inferior thing to “what I do”. But neither do I accept that skill in such tools is a real substitute for skill in using the underlying standards. None of the frameworks that are so ubiquitous today will still be in common use 20 years from now (just as Facebook and Twitter won’t still be around), but the web will still be made of HTML, CSS and JS (and people will still publish articles, and pictures of cats).