Where I write about Facebook’s Instant Articles

I’ve switched to blogging back on my old site (www.baldurbjarnason.com). The latest post there is Facebook and the media: united against the web.

A few media websites have made a deal with Facebook to present their articles within Facebook’s iOS app instead of on their own websites. Apparently, it’s all the web’s fault.

It should be mentioned, though, that Instant Articles are published only in Facebook’s iOS app and degrade gracefully into links back to the publisher’s own website everywhere else, so this isn’t nearly the wholesale assimilation of everything everywhere that people present it to be.

Commentators online report this news alternately as Armageddon or the Second Coming.

Read more over on my blog…

Five publishing-related thoughts on a Friday afternoon

Re-posted here from Medium for my own archives. Feel free to ignore.


I figure that since I broke my resolution yesterday not to blog about publishing, I might as well throw a couple more thoughts out today in an attempt to clear my system completely of useless speculation on publishing. Hopefully this means that I won’t have to do this again for another year or so.


Clay Shirky’s “Fast, Slow, Fast” model of print decline has been on my mind lately. If, as seems likely, the consumption of social media, apps, websites, and the like has slowly been substituting for book reading in people’s lives, it also seems plausible — looking at industry statistics — that readers’ book-buying patterns haven’t caught up. As in, readers are already used to having a to-read pile but haven’t adjusted their purchasing habits to match their slower pace through said pile. If true, then that adjustment, when it hits, would be dramatic enough to match what Clay Shirky describes as print’s second fast decline. Except in this case it would be money leaving the industry, not moving within the industry as it did when ebooks first exploded.


Just because the actions of several tech companies might be immoral or misguided, that doesn’t mean the medium that their platforms carry is also immoral. It doesn’t prevent social media and websites from doing amazing things. An analogy: I find many of the actions of large media companies (publishers, TV stations, movie studios) reprehensible but that doesn’t prevent amazing books, TV series, and movies from being made. And, if anything, social media’s network structure creates a stronger disconnect from corporate control than you would see in other media.


The increasing role of digital in the various media industries is commonly presented in publishing rhetoric as a ‘transition’. The ebook format’s percentage of overall sales is used as a proxy to measure the progress of said transition. This is key to the ‘plateau’ idea — that the end result of the transition is a natural equilibrium between ebooks and print and that we are nearing that equilibrium. My problem with this: it’s as if the newspaper industry decided to measure its ‘digital transition’ exclusively by tracking how many print subscribers switch to paywall subscriptions. Putting too much credence in this statistic would blind them to the fact that they are bleeding readers and subscribers to other industries that offer a completely different kind of service — one that nonetheless substitutes for newspaper reading. Focusing too much on the ebook adoption percentage distracts from the possibility that readers might be switching to other media entirely.


It doesn’t matter if readers of social media and the web don’t read anything you write on the web closely — it doesn’t even matter if they finish it — as long as they get what they need from it. As a writer, it’s bloody annoying, but it’s also our problem, not the reader’s. That the web and social media let readers graze information and only dive in for a close read when they really want to (which is hardly ever) is arguably a feature from their perspective. They don’t owe us a close reading, and we as writers only have value to them as long as our work fulfils a need or desire.


Imagine that you run a company whose core competencies are content creation and engaging graphic design. You have your old platform, which seems to be slowly declining — at the very least it isn’t growing. You have new platforms that are designed to complement your production processes but either don’t complement your core competencies (i.e. reflowable ebooks and their substantial design and content limitations) or aren’t selling that well (fixed layout formats). Now imagine that your business is operating in a hyper-competitive environment — other fields are aggressively competing for the time and money of your customers. Which strategy presents the least long-term risk?

  1. Wait for the vendors of your existing new platforms to improve them, giving your competitors time and space to go after your customers. And your vendors may not succeed at delivering the features you need in time for you to put them to good use. (Waiting for vendors of other platforms to change them to complement your business is identical to this strategy. Except, in that circumstance you have exactly zero leverage to persuade the vendor to make the changes you need.)
  2. Send your technical staff off to standards organisations to improve the standards that underlie existing platforms so that they match your core competencies (design, content creation) or to change them so that they complement your production processes. That process takes years and might fail anyway because you have to reach a consensus with other companies from other industries who don’t see the value in your core competencies. And then you need to wait for platform vendors to implement them anyway, so this strategy inherits all of the downsides of the first strategy as well.
  3. Build a new platform from scratch on your own (or in collaboration with other companies in similar situations). Which could take years and is extremely likely to fail. Most new software projects fail. Most new systems fail. Building new software platforms is much harder than just making software (which is hard enough).
  4. Use other pre-existing platforms that have more design capabilities and seem to be popular among consumers (i.e. apps and websites). This means adding software development to your core competencies and it means completely transforming your processes and organisational structure, both of which are hard and expensive. It also means looking at new business models.

The answer, in my opinion, is strategy number four. The other three strategies have a high possibility of outright failure and delay any meaningful attempt at expanding your customer base by several years. Strategy number four is the hardest (waiting for vendors and filibustering standards organisations doesn’t take much organisational effort) but it’s also iterative: it can start at extremely small scales and build up from there, whereas platforms are all or nothing. It can feed back into your existing business at every single level (a core competency in software development is a huge strategic advantage for any modern company). It compartmentalises production risk to individual projects instead of spreading the inherent risk of software development across the entire platform. It lets you enter new market segments and new venues right now, instead of waiting for years or relying on platform vendors whose incentives are misaligned with yours. And, finally, it offers you strong differentiation because most pre-existing web and app companies are crap at content creation and graphic design (although that is changing very quickly).


Anyway, lunch break is over. And, hopefully, so is my current publishing industry blogging spree. If I’m lucky, I won’t feel the need to blog about the publishing industry for at least a few months — preferably forever.

Why should people read more books?

Re-posted here from Medium for my own archives. Feel free to ignore.


I don’t know how many books I read last year. I probably could find out if I wanted to but I don’t particularly care. It isn’t important.

What I do know is that I read a lot of interesting and thought-provoking writing. I watched videos that changed my mind and my approaches to life. I listened to podcasts and other media that taught me new skills and opened up new perspectives. I learned. I discovered. I like to think that I’m a better person now than I was a year ago. Books, for the most part, weren’t involved.

Which gets me to the heart of my problem with Hugh McGuire’s “Why can’t we read anymore?”. It never questions whether it’s worth it. It never questions whether enacting the digital equivalent of hair-shirt asceticism in order to read more books is worth the effort. It takes the moral judgement of the cultural elite as fact. It never asks whether the value a book gives you equals that of the social media and websites that you’re giving up. It just takes that as a given.

Most of what follows isn’t strictly speaking a response to Hugh’s piece. (Apologies, Hugh!) He is writing about his own habits in a constructive effort to improve his life, which is cool. However, his rhetoric and line of reasoning echo a strain of anti-digital elitism that I’d like to pick at. It’s a strain of Bildungsphilisterei that is pervasive in publishing circles and sees books as an unalloyed good and social media as a corruption.

(I’m not going to link to those tracts. I’m linking to Hugh because I like him and agree with him on most other things. Just not in this particular case.)

O tempora! O mores!

The very least we can do is acknowledge the possibility that this worldview — books good, social media bad — may not be universally applicable. That the reason why many (not everybody, but definitely many) now read fewer books is that the web, social media, and apps give them more value, provide a better experience, and just generally have a bigger — more positive — effect on their life than books would.

For almost every adult reader, a website that couples text, video, and exercises is a better way to learn than reading a book. (Both trumped, of course, by having an actual mentor who knows both the subject and how to be a good mentor.) And the website remains better than an ‘enhanced’ ebook because it doesn’t have to be structured linearly, in chapters, or mimic the printed format in any way.

For almost every thinking voter, mid-length articles and commentary are more informative and more thought-provoking than books on the same subject.

And don’t try and sell me the idea that books can present more complex ideas or break free from the echo-chamber. Most ‘thinking’ books are padded to hell. Besides, nobody buys a book of political commentary unless they think it’s likely they will agree with it beforehand.

More importantly, once you become savvy to the craziness native to social media (which happens to everybody who uses social media regularly for more than a year or two), it becomes an excellent source of low-level understanding of important events in other countries and places. I wouldn’t have a clue about what’s going on in Baltimore at the moment if it weren’t for online activists posting information on Twitter and Tumblr. And Twitter brought me this excellent interview with David Simon on the roots of Baltimore’s problems. The empathic awareness that social media enables of the roots and context of the important problems of our day is impossible to replicate in a book.

All of this is assuming that people are curious, open-minded, and interested in learning, but if they aren’t there’s nothing books can offer that will fix it.

And if you’re in the habit of ignoring the people around you, don’t blame your tools.

That isn’t to say that social media has no downside. It definitely increases your exposure to idiocy and your vulnerability to abuse and harassment. But, just because one or two social platforms fail to address those problems, that isn’t a reason to abandon the ideas of the web, social media, or apps in their entirety.

The neuroscience sidebar

From Hugh’s piece:

So, every new email you get gives you a little flood of dopamine. Every little flood of dopamine reinforces your brain’s memory that checking email gives a flood of dopamine. And our brains are programmed to seek out things that will give us little floods of dopamine. Further, these patterns of behaviour start creating neural pathways, so that they become unconscious habits: Work on something important, brain itch, check email, dopamine, refresh, dopamine, check Twitter, dopamine, back to work. Over and over, and each time the habit becomes more ingrained in the actual structures of our brains.

This piece in general isn’t directed towards Hugh’s post but I always get a bit twitchy when people bring neuroscience into a sociocultural debate.

(I may be a bit touchy on this specific issue since dopamine production is at the heart of Parkinson’s, which my grandfather died of, so YMMV.)

Other people have covered dopamine myths better than I ever can, but here are some highlights.

  • Dopamine doesn’t seem to have anything to do with pleasure, enjoyment, or liking things. Mice unable to produce dopamine still seem to enjoy their sugared water.
  • Dopamine seems to be involved in the learning process and seems to kick in when you expect to learn something new or rewarding.
  • Dopamine seems to kick in more strongly when the rewards are less predictable.
  • Dopamine seems to play a key role in motivation or feeling the need to do something, which explains why it’s so often brought up in ‘I’m addicted to the internet’ pieces.

So, at face value it might seem plausible that excess dopamine compels us to check social media even though we don’t enjoy it.

The problem with this kind of pearl-clutching is that the brain simply does not work that way. You cannot boil complex behaviour down to a single neurotransmitter or a single centre of the brain, especially not behaviours as complex as communication, language, or social interaction. The brain is a complex system of interacting clusters and chemicals of various sorts, one that involves the body, our senses, and our environment in ways we still don’t fully understand.

Blaming our behaviours on dopamine or the neurotransmitter du jour is insulting to us because it removes our agency. It assumes that the only reason we continue to engage with the web and social media is because we feel compelled to do so by a chemical — that we are not responsible for the decisions that take us to the web. Unfortunately for this line of rhetoric, we are not slaves to our hormones or our neurotransmitters. If you do something you do not like, you have only yourself to blame and need to take it up with your therapist. Do not blame a wayward chemical.

Hugh again:

There is a famous study of rats, wired up with electrodes on their brains. When the rats press a lever, a little charge gets released in part of their brain that stimulates dopamine release. A pleasure lever.

Given a choice between food and dopamine, they’ll take the dopamine, often up to the point of exhaustion and starvation. They’ll take the dopamine over sex. Some studies see the rats pressing the dopamine lever 700 times in an hour.

We do the same things with our email. Refresh. Refresh.

He’s referring to this 1954 study.

This particular comparison, that a person’s email habit is like that of a rat stimulating electrodes in its brain, is bullshit for a variety of reasons.

  1. There is no such thing as a ‘dopamine centre’ in the brain. A part that stimulates dopamine release has a multiplicity of roles. Stimulating it is a brute force activation of a set of complex interlocking systems. It is not the controlled release of a single neurotransmitter.
  2. That a rat with no alternate form of stimulation would resort to pushing a lever that stimulates its brain artificially tells us nothing about human behaviour. It doesn’t even tell us anything about rat behaviour since we don’t know if it would make the same choices if it had a healthier alternative source of stimulation.
  3. It assumes that the choices we have made in the past have little to no bearing on where we are today, that unless we resort to guilt-driven self-discipline and asceticism, the attraction of ‘dopamine’ rewarders like social media or the web is literally irresistible.

We do not behave like rats in a cage. Hell, rats in general don’t behave like rats in a cage. And, again, drawing conclusions about human behaviour in general based on a specific study of rats with electrodes in specific parts of their brains is dodgy at best.

(Another thing about trade publishing — the lot responsible for the books Hugh is writing about and hoping to read more of: they are incredibly bad at accurately representing current scientific research.)

Veering away from Hugh’s post, back into the main topic

Instead of assuming that we only engage in social media because we can’t help ourselves, what about assuming that we use it because we like it?

After all, it’s plausible that an animal as social as the human being would prefer social interaction over a cognitively tasking solitary activity with dubious rewards, even to the degree of preferring bad social interactions over the alternative.

That’s actually very plausible.

Just because we dislike some parts of Twitter (to use a minority platform as an example) doesn’t prevent us from liking the rest of it — enjoying it so much that we hang on in the hope that it will improve rather than give it up. We can like something and dislike it, both at the same time. We’re complicated like that.

Why don’t we start with the assumption that social media and the web are taking over because people actually enjoy them and go from there? People generally like people and having them on tap in a context that you can turn on and off at will just increases the attraction and utility. Why isn’t the onus on those who want to promote book reading to show that books are more enjoyable, more useful, and more relevant than social media, apps, and the web?

Because they generally aren’t, that’s why. Because most people in publishing are beset by the horrifying suspicion that books simply aren’t competitive with other media, that’s why. They know they’d lose that argument. For your average consumer, books are a worse learning environment, less fun, less rewarding, and less relevant to their day to day lives than almost any other alternative.

The implication is that if we don’t guilt or scare people out of social media and into reading books, they will overwhelmingly choose not to read books.

Which is probably true. With good reason.

The sheer variety boggles

Books are predominantly neurotypical, straight, white, male, and middle-class. Most people aren’t. The publishing industry responsible for making books is incredibly homogeneous.

Social media gives us direct access to people who are like us. It doesn’t matter whether you’re queer, on the spectrum, a person of colour, female, or poor, you are much more likely to find your experience, your life, your needs represented and addressed in social media and on the web than in books.

Most people have a pretty good reason for not reading more books. The books we publish aren’t for them, but by and for a bunch of middle-class white men who think their tastes should rule the world.

If you don’t fit into a shape that publishing assumes represents all of humanity, the books that speak directly to you are relegated to a segregated ghetto of either a tiny selection of titles intended for your particular ‘minority’ or titles that have that ever so pervasive aftertaste of bitter moral panic.

After all, if publishing started to represent us and people like us, then we might start thinking that we’re normal. And that wouldn’t be good for society now would it?

Don’t fix people, fix books

If you think that book reading should be a mainstream activity, one that’s performed by a majority of the population, then you don’t accomplish that by assuming that everybody is broken and needs to be fixed.

Don’t assume that social media has no real world value.

Don’t assume that the web is inherently inferior to books.

Don’t try to guilt people into abandoning media that has enriched their lives and broadened their horizons more than books ever could.

Don’t blame it on a neurochemical.

Whatever you do, for Christ’s sake don’t slap a bunch of animations, video, and crap scroll-jacking effects on your ebooks and call it a day.

Instead we need to fix books. And fixing them isn’t a question of technology. Otherwise they’d already be fixed.

Fixing books means making them truly diverse. It means making both the people who write books and those who publish them more diverse.

Fixing books means making them more immediate and quicker to publish.

Fixing books means making the industry around them less conservative and less reactionary.

Fixing books means making them more accessible — not just in terms of screen-reading but also in terms of their writing and design.

Fixing books means not marginalising the magnificent plurality of the English language. It means publishing books in all of the various Englishes in all of their class, race, regional, and national varieties.

Why don’t people read more books? Because most books aren’t for them.

If we want people to read more books we need to make books for them. Until publishing does, we in publishing have no right to complain.

How is taxing ebooks as print books supposed to work?

Re-posted here from Medium for my own archives. Feel free to ignore.


It’s a popular stance among publishers that they and their industry are a gentle sprinkle of special snowflakes and that their software (i.e. ebooks) should be taxed at a lower VAT rate than other software (i.e. websites or any other kind of digital file).

They’ve managed to wrangle several EU member countries to their cause:

According to the four ministers, to foster innovation and secure the future of Europe’s e-publishing, technology-neutral regulations must be clearly asserted at the European level. The declaration was signed by France’s Fleur Pellerin, Italy’s Dario Franceschini, Poland’s Malgorzata Omilanowska, and Germany’s Monika Grutters.

The problem is that defining all digital media as services is exactly what a technology-neutral regulation looks like. All digital content has the same VAT. Nobody has clearly outlined how you can define ebooks as special without discriminating against other digital media, other methods of publishing digitally, other digital textual media, or the various kinds of self-publishers.

So, how is it supposed to work?

Software, ebooks, digital video and audio, and websites are all defined in terms of EU tax law to be digital services.

To those who want to lower VAT on ebooks but not on digital media in general, how do you propose to decide which is which?

Pelican Books online ebooks, high VAT service or low VAT ebook?

Directly sold PDFs like Amy Hoy’s JFS, high VAT service or low VAT ebook?

Single book app like Joseph Albers’ Interaction of Color, high VAT service or low VAT ebook?

Literary games and apps like Frankenstein or 80 Days, high VAT service or low VAT ebook?

Web-based subscription sold ebooks like digLloyd’s Advanced Photography, high VAT service or low VAT ebook?

Oyster book subscriptions, high VAT service or low VAT ebook?

Safari Books Online which offers ebooks, audio books, and online video courses, high VAT service or low VAT ebook?

Digital audio books, high VAT service or low VAT ebook?

An ebook that embeds a video documentary, high VAT service or low VAT ebook?

An ebook that is nothing more than annotations on a radio documentary series, high VAT service or low VAT ebook?

A non-linear hypertext delivered as a bundle of HTML files in a zip file, high VAT service or low VAT ebook?

An image, high VAT service or low VAT ebook?

How about a series of images in a zip file with minimal metadata, like a CBZ comic book, is that a high VAT service or low VAT ebook?

If you define low VAT ebooks as a specific file format (epub, mobi, PDF), what about all of the other formats, past, present, and future? What would the process be to get a format ‘certified’ for lower VAT? Make it too flexible and you might as well lower the VAT across the board. Make it too rigid and you’re killing innovation in digital publishing.

If you define low VAT ebooks as text-oriented files sold by specific vendors, how is that not anti-competitive discrimination? How would a vendor or self-publisher get ‘certified’ for lower VAT?

If you define low VAT ebooks as those with an ISBN (which in many countries cost money) how is that not anti-competitive discrimination against self-publishing or web-based subscription services offering exactly the same content but in an entirely different format?

If you define low VAT ebooks as something other than services — as a virtual pseudo-object unlike all other digital media — how do you propose to handle the licensing that is a mandatory part of today’s ebook retail? (Hint: nobody buys an ebook; we only ever get licences for the ebooks we ‘buy’, and all of that licensing legalese is based on the concept of ebooks as a service.)

What would the buyer’s statutory consumer rights be? Because if you can’t treat ebooks as a service for VAT, you can’t bloody well treat them as a service in terms of consumer rights or licensing.

What do you do with what are clearly services (e.g. subscription sites) but deliver ebook formats?

What happens when all of the answers to all of these questions end up being different for each and every EU member state? EU VAT is already a complex mess, and you want to make it even messier, even harder to navigate, even more difficult for companies and individuals to deal with?

How is this idea supposed to work?

(Of course, I think a separate and lower ebook VAT is a bad idea even if you do get it to work because it’s fundamentally backwards and reactionary — a de facto state subsidy of a stagnant industry. But that’s an entirely different topic that deserves its own blog post.)

Kathy Sierra’s Badass: Making Users Awesome – the book you all should read

I’ve been reading and re-reading Kathy Sierra’s book Badass: Making Users Awesome since it was released the other day and I can’t recommend it enough.

I’ve been obsessing about teaching and skills development theory for over a decade now, ranging from the ideological like Ivan Illich and Neil Postman to the philosophical like John Dewey.

Oh, and so many research studies.

All this time I have never encountered a book that so cohesively ties together so many teaching and training best practices into a single, sensible conceptual model.

But Kathy Sierra doesn’t just pull it off; she does so with amazing clarity — explaining complex concepts in simple ways without dumbing them down.

Then she ties it all in with user experience software design without skipping a beat.

If you do any teaching or training, this should be the next book you read.

If you do any UI, UX, or product design, this is the book you should be reading right now.

It’s a short and highly readable book so you have absolutely no excuses not to.

It’s awesome. Buy it. Read it.

Idle Sunday thoughts about web trends

I’m packing my stuff into boxes. At the last minute, of course, since the truck arrives early tomorrow morning.

While I’m packing, I’ve been listening to a variety of podcasts on web development. I don’t listen to these podcasts for their information. In between the casual banter and chat that surrounds the usual talking points, a sense of a cohesive and mature craft starts to form. The conversations surrounding the craft bring out the art in it.

Progressive enhancement. The indie web. Microformats 2. Responsive design. Adaptive content. Isomorphic or progressive javascript.

These are all of a kind. They are a part of the art of developing for a fluid platform with wide-ranging capabilities—capabilities that can change on the same device from one extreme to another, within minutes, just by walking down into the basement seating area of your local coffee shop.

There is an art to making things that not only tolerate this quixotic foundation, but thrive. It’s obvious that a large and influential contingent in the web development community is driven by a strong sense of what works and what doesn’t work in this environment.

It’s also obvious that an equally large contingent doesn’t.

Client side rendered web apps with no fallbacks for when the javascript fails to load or stalls mid-load—which happens all the time on slow connections. Sites that assume everybody is running the latest browsers on a fast connection and are completely useless and inaccessible anywhere else. Web apps that only work in a single browser. The way almost everybody seems to use Angular, Ember, and React.

Treating the web like another app platform makes sense if app platforms are all you’re used to. But doing so means losing the reach, adaptability, and flexibility that makes the web peerless in both the modern media and software industries.

Even if you do pull it off, you can only succeed by sacrificing the very qualities that justify the web’s existence.

Or, at least that’s what I’m thinking as I’m packing my boxes well into the night, hoping to finish in time for me to get a few hours of sleep before the movers arrive in the morning.

Repetition only works in fiction

Re-posted here from Medium for my own archives. Feel free to ignore.


Again. Again. Again.

It’s the mainstay of every narrative genre. In romance you have the persistent suitor. In comedy you have the running joke. In the heroic journey it represents the protagonist’s perseverance in the face of adversity. It’s the iconoclast fighting for truth, justice, and the American way. In plotting it’s an essential part of the narrative rhythm. In the story’s structure it’s what enables foreshadowing and emphasis.

But anybody applying these narrative tropes to real life wouldn’t be a hero. The stubborn romantic lead becomes a stalker. The idealist becomes a fanatic. The epic hero is a psychopath.

Writers use repetition — prominent events that echo one another — to represent and symbolise the quiet foundations of our lives: persistence, patience, and grit. These understructures are unobtrusive and invisible. They are daily, hourly, and by the minute. They have no outstanding — epic — moments because they are a constant. They are not events. They are either qualities of your life or they are not.

By collapsing a symbolic representation of the moments of our lives into the actual personal interactions of our lives, social media brings these narrative tropes into our social circle.

Where they don’t work.

Instead of representing and symbolising the quiet foundations, they replace them. What used to represent patience now generates friction.

Having convictions in your personal or work life is laudable. Stick to those same convictions online and people start seeing it as a performance — identity construction — at best, and as provocation and trolling at worst. Every person I follow online who sticks to their guns has become a polarising figure, drawing as much wrath as they do support.

You can’t cash in your convictions in exchange for a calmer, more easygoing online life because that draws as much rage as persisting in them.

The only option seems to be non-participation, keeping to the more trivial, less substantive topics in social media and making sure that you have no real investment in your engagement in them.

Is it possible to stick to your guns without pissing people off?

Is it possible to participate in social media on substantive topics in a constructive way?

Is the world we’re building — where people only engage with and communicate with the like-minded and the similarly concerned — really the only way forward?

I really don’t know.

The web has covered the basics — that’s why it’ll get harder from now on

This is a combination of two posts I originally wrote on Medium. Re-posted here merely for my own archives. Feel free to ignore.


“Tired of Safari” and “Apple’s Web?”

The drama surrounding touch events is a long-standing one and Apple has done a good job of playing the villain in this particular farce.

This is just one facet of the core problem with the web as an application platform: we will never have a unified web app platform.

What Apple, Google, Microsoft, and Mozilla want from web applications is simply too divergent for them to settle on one unified platform. That’s the reason why we’re always going to get Google apps that only work in Chrome, Apple Touch APIs that are modelled on iOS’s native touch model, and Microsoft Pointer APIs that reflect their need to support both touch and mouse events on a single device at the same time. There really isn’t an easy way to solve this because standardisation hinges on a common set of needs and use cases which these organisations just don’t share.
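One practical consequence of this divergence is that developers end up feature-detecting which input model a given browser ships. A minimal sketch of that kind of branching (the helper name and the plain-object test harness are my own, not from any of the posts discussed):

```javascript
// Detect which input-event model a browser-like global object exposes.
// Browsers with Pointer Events define PointerEvent; WebKit-style touch
// support is signalled by an ontouchstart property; otherwise assume mouse.
function detectInputModel(global) {
  if ('PointerEvent' in global) return 'pointer'; // Microsoft's unified model
  if ('ontouchstart' in global) return 'touch';   // Apple's iOS-derived model
  return 'mouse';                                 // legacy fallback
}

// In a real page you'd call detectInputModel(window); here we simulate
// three different browsers with plain objects:
console.log(detectInputModel({ PointerEvent: function () {} })); // "pointer"
console.log(detectInputModel({ ontouchstart: null }));           // "touch"
console.log(detectInputModel({}));                               // "mouse"
```

That every non-trivial web app carries some variant of this branching is exactly the sort of fragmentation the Pointer Events drama is about.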

The web continues to work well as a platform for structured documents that are progressively enhanced with interactivity. Just hypertext and forms alone will get you a lot further than you think towards solving most of the ‘app’ problems that organisations are facing today. With a bit of progressive enhancement you can create really productive systems that work everywhere because they are just bog-standard websites.

Unlike structured, interactive documents, complex web applications that are built on a common set of APIs — APIs which work the same everywhere — are very unlikely to happen. You’ll have your Chrome APIs. You’ll have your Safari APIs. You’ll have your MS and Firefox APIs (because their needs are quite similar). And you’ll have cross-platform frameworks that bridge the gap by making compromises everywhere. But a universal web application platform? That’s just another spin on the ‘write once, run anywhere’ chimera.

Of course, Apple is still being very annoying in this particular case and their reasons for objecting seem spurious and political. I’m not trying to defend Apple.

What I’m trying to say is that we should expect the drama surrounding Pointer Events to repeat itself a lot over the next few years. It’s a taste of things to come.


(Thinking out loud while on the train. If this post seems a bit stream-of-consciousness that’s because it is.)

It’s hard to get a picture of where the web is at as a platform just by randomly browsing Can I Use or following blog discussions.

But given how frequently both Google and Apple (like in the posts I linked to and commented on yesterday) are being compared to Microsoft, it’s clear that something’s up.

The standardisation process has always been fraught with difficulties and tensions but I think it’s clear that the various tech behemoths that influence standardisation are finding less and less common ground as time passes.

If you’ll let me caricature the various companies’ attitudes a bit:

Google wants apps that look and work identically across Android, iOS, and web browsers and it wants them all to work like Android apps. Moreover, its various transpilers and cross-compilers indicate a desire for these apps to share a considerable amount of code and architecture.

Facebook, as evidenced by its React project and other open-source code, wants apps to share an architecture but not UI (i.e. do what’s the best UX for the platform).

Apple seems to like things like CSS Shapes and Regions and other design-oriented features but shows little interest in implementing app-related features. (Although, oddly enough, they don’t support OpenType font-feature-settings.)

Mozilla wants ‘webby things for the web’: APIs that conform to the principles and architecture of the web. (Google, in contrast, seems less fussed about that.) But they’re also eager to turn the web into a truly universal platform.

Opera seems to have ceded all strategic direction to Google.

And Microsoft… Microsoft’s goals actually look well aligned with Mozilla’s, for good reason. They are both minority platforms locked out of the big platforms of the day: Android and iOS.

These companies want different things from the web and once you’ve covered the basics, agreeing on a common, standardised approach to things like app architecture, UIs, and UX is going to be progressively more difficult.

Once you’ve gotten these companies to agree on basic APIs like Files, Cryptography, basic CSS design and layout features, Audio, and such, standardising the rest is going to be harder because there is no one true solution to higher-level platform problems like what architecture is best for any given app.

I don’t mind that, personally. I like the tactic that Facebook has been using of building on top of what’s available, transpiling from upcoming standards where possible, and creating an architecture that seems to be a fairly sensible one, but I also like the fact that we aren’t all forced into the architecture they’ve chosen. Once you get up to the level of app architectures, standardisation becomes less and less desirable because architectures vary.

And that’s without getting into the discussion of whether apps and their architectures are relevant to the web in the first place. As I wrote yesterday, we can get very far just by using hypertext, forms, and progressive enhancement.

What worries me more is the general question of quality. Apple’s issues with the quality of iOS8 and Yosemite have been much discussed (for good reason, they are substantially more buggy than their predecessors in my experience). But both Chrome and Firefox have their own quality and reliability issues as well that never seem to quite go away.

What hope do we have for standardising complex new app architectures when we don’t seem to be able to reliably implement the foundation they’re built on?

Curious minds want to know.

A draft of a chapter of some thoughts on things.

The following is a very early draft of a chapter from a book I’m writing with Tom Abba. Think of it as a couple of academics/creators trying to help other creators avoid all of the dumb mistakes they’ve made. It’s very early stages still but all feedback is welcome (send it to baldur.bjarnason@gmail.com or to Tom, if you can dig up his contact details somewhere. Probably on his Twitter. Or his home page. Maybe just shout it in our general direction).


Choose your structural grammar

My dad has regularly been going to the theatre for decades. He and a few of his friends have a subscription at Þjóðleikhúsið and, come rain, come shine, every few weeks they go to see whatever it is that they're staging. It doesn't matter if it's getting awful reviews, whether it's a farce or a tragedy: they go, watch, and then talk about it over wine. This tradition has survived two divorces and several major career changes.

Theatre has that effect on people (especially if you fancy yourself as a cultured middle-class citizen of the world). People get hooked on watching it. People get hooked on working in it. Theatre isn't a mainstream hobby activity but it's here to stay.

It is, arguably, the oldest form of storytelling that we still practice. (The other contender being music, although given how intertwined drama and music have been, the distinction is moot.)

Speak to any historian of cinema (especially the amateur ones) and you'll get a yarn about how early cinema consisted of little more than a camera pointed at a stage: recorded plays that didn't use the medium to any sensible degree. Film, the story goes, didn't begin to advance until filmmakers broke away from the conventions of the stage.

This narrative—even though it's demonstrably, completely, and utterly untrue1—has become a standard trope in media commentary.

Even though this story is a complete fiction, its message is a useful one: different media have varying qualities which means that each medium lends itself more to doing some things over others. It’s a McLuhanite parable—his pithy ‘the medium is the message’ aphorism writ large.

Which is all good. My only problem is that there’s a better yarn we can use for this: the story of an earlier media evolution that has much stronger parallels to our current new media predicament.


The novel has a longer history than people expect. How long, exactly, is a bit more complicated to answer because then you have to start defining exactly what a novel is in terms of length, style, and structure.

We’ve clearly been telling stories in prose for millennia but even if we restrict ourselves to something more specifically novelistic in terms of structure and style then we’re still talking about more than a thousand years2.

This is something we’ve clearly been doing for a while.

Despite this extended history, prose never really took off as a method for telling long stories. It dominated non-fiction, philosophy, and theological studies, sure, and it was the primary form of telling really short stories like fairy tales, fables, and ghost stories.

But when it came to telling longer interconnected stories poetry was what most storytellers reached for: Gilgamesh; Homer’s Iliad and Odyssey; Ovid’s Metamorphoses; Virgil’s Aeneid; Beowulf; Poetic Edda; Dante’s Divine Comedy; Ariosto’s Orlando Furioso; Milton’s Paradise Lost; Byron’s Don Juan; Pushkin’s Eugene Onegin.

Prose stories and novels existed but they were in the storytelling minority for most of their history—even many of the exceptions relied heavily on poetry. Most of The Canterbury Tales are in verse. Even the Prose Edda was written and presented as a textbook for poets—it isn’t, strictly speaking, intended to be a prose retelling of the Norse myths. It was a Christian-era explanation of Norse myths so that contemporary poets could read and use the metaphors, idioms, and similes that were based on those old myths. Drama and poetry ruled the storytelling roost.

(On a tangential note: what the Prose Edda omits, elides, and adds is just as interesting as the retelling itself. If you compare the Poetic to the Prose Edda, it seems clear that Snorri Sturluson adjusted the myths a bit to suit the more Christian culture of his day. For example, you can read the Poetic Edda as saying that Freyja ruled over the armies of Valhalla with Odin—that she, as the viking feminine ideal, was a lot more warlike than the Christian retellings made her out to be. Make love AND war, instead of make love, not war. The idea that the viking goddess of love would be a passionate general appeals to me.)

It wasn’t until moveable type became the norm that the novel began to make headway and even then poets like Byron and Pushkin dominated the scene with what were essentially novels in verse.

It isn’t that printed poetry doesn’t work. It does. It’s that poetry isn’t reliant on print: as a form it works just as well orally3 as it does in print.

Novels needed print to thrive as a medium.

Print distribution put novel distribution and dissemination on an even level with poetry. But even with a more even playing field, it took the novel many years to reach parity with and then surpass poetry as the western world’s primary form of written storytelling.


Back when I was a kid in gagnfræðaskóli (the Icelandic equivalent of high school, literally ‘school for useful studies’) a friend of mine, pressed for time, wrote a book review essay for school pretending that an AD&D roleplaying session of his was a fantasy novel.

His teacher had given the class the assignment to review a book of their own choice. He’d been too lazy to read something so he just gave the session a title and wrote a literary ‘review’ of it for the class. The teacher couldn’t tell the difference and none of the kids blabbed.

Much ink (and pixels) has been spilled on the issue of the role of storytelling in games. There’s always been a narrative element to games but the use and importance of stories in games exploded in the late 20th century. Even without computer games, roleplaying games, board games with an explicit and important setting and back story, and choose-your-own-adventure books make the issue complicated enough on their own.

That a medium like games can accommodate and use narrative elements but not be dependent on them seems to break the brains of a lot of academics, despite the fact that this is the role that stories tend to play in at least two other historically important art forms:

  • Poetry? Can use stories and story-like elements but doesn’t need them as a form.
  • Music? Ditto.

Of course, this complicates all attempts to define a theory of games. Is it a good game when you’re just using the mechanics of the form to deliver a story? Is it a good game if the story is rubbish but does an excellent job of serving the gameplay? How many angels can dance on the head of a pin?

Asking if something is a good game or novel is only a useful question if you’re an academic or an annoying snob. You can take that question and its siblings (such as ‘did this conform to the rules of its form as academics define them?’), put them in a box, and throw them into a particularly indigestive volcano. They don’t help you create.

The questions to ask are more along these lines:

  • How did this affect me?
  • Was the experience consistent?
  • Did it play with or to my expectations in an interesting way?
  • Would I do this again?

Anybody who has spent any time researching readers and players knows that these four qualities—effect, consistency, expectations, and repeatability—are what matter to them in a work of art.

When it comes to deciding on a medium or genre as a creator, how those four qualities play out and support or don’t support our goals and intentions is the single most important factor to consider.


My great grandfather was a journalist, translator, politician, academic, poet, playwright, and a priest.

Not all at the same time but he multitasked more than you’d expect.

He was an interesting fellow—founding member of staff of the Icelandic National Broadcasting Service and in charge of their newsroom during the start of World War Two—but what’s relevant to us today is that the list of the media he worked in roughly corresponds to the list of subjects and fields he worked in.

He didn’t do journalism or news reportage as poetry or drama. He didn’t deliver academic essays on Byron’s work from the pulpit.

He did mix some things up. His poems were very political. His drama had religious overtones. But for the most part the various media were treated like sorting boxes: something from this field went into that box, a different field went into another.

He realised what a lot of creators today don’t: some things only fit in some boxes.

Most of what’s in people’s heads about writing and creating is romantic nonsense dominated by psychobabble (‘the creative personality’, ‘artists have an irresistible urge to create’) or mysticism (‘the creative spirit’). Most of those bullshit notions about art and creativity aren’t compatible with thoughtful consideration of your actions. Beginning writers don’t choose novel writing because it’s the right choice for what they have to say. They don’t even think about what it is they have to say in the first place.

I know because I’ve made all of those mistakes myself.

Despite the label, creative acts—storytelling in particular—tend to begin with the creator just going with the defaults.

If you can write, the default is to write a novel. If you can draw, the default is to draw a comic book. If you have money and gullible friends, you make video.

Major upheavals in media, such as the shift from poetry to prose, or the current introduction of digital media, only happen because somebody began to choose something other than the default. It happened because, at some point, a storyteller looked at the stories they had to tell, then at the qualities of the various media at hand, and decided to use something less tried, less developed, and unexplored.

Neil Gaiman's Sandman includes a story called A Game of You. It's usually remarked upon as one of the least popular of the series. It's difficult, awkward where its predecessors (especially the piece that comes before, Season of Mists) are sly and clever. At its heart is a narrative about dreams, and the power of an internal fantasy life that might tell us things about the external world. It is also an echo of a Jonathan Carroll novel, Bones of the Moon. Gaiman initially abandoned the story after reading Carroll's novel, finding the similarities too close to ignore. Carroll told him:

Go to it, man. Ezra Pound said that every story has already been written. The purpose of a good writer is to write it new.

A Game of You is a cousin to Bones of the Moon. They share genetic material, a DNA of story, but each tells its tale in ways that only their chosen form can deal in. The grammar of a 24 page comic book, with monthly instalments, words and pictures in concert on a page, re-reading and visual connections, is markedly different to that of a novel. The two works are, as Gaiman suggests, born from 'two radio sets tuned to the same goofy channel', but what arises from that transmission is native to their form, each using the grammar of their medium with subtlety and grace.

This place. This point where you are looking around and poking your way through digital media. You don’t get here without an essential curiosity—a compulsion to chip away at the unexplored and to wander into the dimly lit unknown.

And the first step in that wandering is a decision to choose. Once you’ve made that decision, whether you end up going with the default or not doesn’t matter, because you will have considered and weighed your options, instead of just being pulled along with the crowd.


One of the biggest mistakes you can make is simply to lump all digital media into one and pretend that it’s all the same thing. That’s like pretending that all print books are alike and that the distinction between novels, short stories, journalism, poetry, and comics isn’t meaningful.

Digital storytelling, once you’ve let it settle after shaking it up like a snow globe, tends to settle into two broad piles, each of which can be subdivided into countless mini-piles.

The first pile, on your imaginary left, is games.

The second pile, on your imaginary right, is hypermedia.

There’s a bit of indistinct sludge in between the two where you can’t quite tell which pile it’s in. That’s okay. Crisp, paper-like boundaries are for print anyway.

Games are the more easily recognisable of the two. Not because there are more of them (in fact, there are fewer) but because they have a much clearer boundary. When you can’t figure out whether a piece of storytelling is a game or hypermedia, that’s because it doesn’t fit the definitions coming out of the games field. Hypermedia doesn’t care. Hypermedia loves everything and everybody. Possibly a little bit too much.

Games design is much too big a concept to be covered here. Like poetry and mechanised print, games predate digital by several millennia. Their principles, while benefiting enormously from digital, aren’t dependent on it.

The ‘hypermedia’ that predates computers, on the other hand, works in ways that are fundamentally different from actual hypermedia. To pull that off in print, you’d need to be able to perform instantaneous transformation of matter.

Because it isn’t the link, per se, that puts the ‘hyper’ in hypertext. It’s the instantaneous and dynamic transformation of one text into another when you press the link that gives hypertext the oomph we associate with hypermedia.

Think ‘hyperspace’ and you’re on the right track.
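That instantaneous transformation can be sketched in a few lines of code. This is a toy model of my own for illustration (the text content and helper names are invented, not from the chapter): following a link doesn't point you elsewhere so much as it replaces, in place, the text you were reading with the text you linked to.

```javascript
// A toy hypertext: a set of named texts and a 'follow' operation
// that instantly swaps the current text for the linked one.
const texts = {
  home: 'Welcome. See the [myths] page.',
  myths: 'Freyja ruled Valhalla with Odin. Back to [home].',
};

let current = 'home';

// Following a link is the 'transformation of matter': the text the
// reader holds becomes, instantaneously, a different text.
function follow(link) {
  current = link;
  return texts[current];
}

console.log(follow('myths')); // the reader's text is transformed in place
```

Every link click on the web is a grander version of this swap, which is why print can only ever approximate it.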

The hypertext that you read and enjoy vastly outnumbers the games you play because hypertext is how the web and apps tell stories.

And almost everything we do on the web and in apps is storytelling.

Facebook’s a story. Twitter’s a story. Blogs are stories. Every website, every app, every chat platform, they’re all hypertext and they are all stories.

That most of these are also conversations doesn’t make them any less hypertextual because hypertext is fundamentally conversational. That’s what linking and dynamically including texts in a variety of contexts does. It makes conversations. That’s hypertext.

Even in a plain old web page, links are conversational. Unlike references, which are formal even at the best of times, links can be witty, tragic, satirical, tongue-in-cheek, and laugh-out-loud funny, even when neither the linking nor the linked text is any of these things. Simple things like linking from a person’s name to the page in a medical dictionary for restless leg syndrome can be hilarious in the right context, even when the tone of both texts is serious and deadpan. That’s hypertext.

Hilarious juxtapositions of tweets or Tumblr posts are a common enough phenomenon for it to become a regular trope on Twitter and Tumblr. That’s hypertext.

Even ebooks are hypertext, if only by virtue of their reading context. Some of them are only accidental hypertexts, sticking to print conventions and ideas even as they have lost all meaning and sense in digital. Others, like this book, are written as hypertexts first, where links are used as one of the primary punctuation marks—more common than the em-dash, less pretentious than the semicolon.

This isn’t a book; this is hypertext.

Because this text was written with digital first in mind—unlike those print books which have been skinned and then re-coated with a digital gloss—this is a loose, conversational, and sprawling hypertext that might well eventually be bundled up and stuffed into print form like a set of clothes stomped into a suitcase while the taxi to the airport is waiting outside.

Which is fine. If I don’t want you to criticise my preference for reading print books lifeless, skinned, and flattened into ebook form, I don’t get to criticise you for preferring to read the ebook as a bleeding, severed appendage cut off from its network.


Games design is huge. Lucky for us, there are a lot of books and websites covering the subject so we really don’t have to do the form an injustice by covering it badly.

My personal favourites are:

  • A Theory of Fun by Raph Koster.
  • Lost Garden by Daniel Cook. A website that is a treasure trove of notes, ideas, theories, experiments, and examples on games design theory and practice.

There are more and I’ll add them as I think of them.

Digital media of all kinds is built on a series of action feedback loops. You do something and the device gives you feedback on that action. It’s the foundation of User Interface (UI) and User Experience (UX) design and the basis of everything we do in the field.

The core difference between the structural grammars of games and hypermedia is that in games the centre of meaning is in the action feedback loop but in hypermedia it is in the feedback loop’s context.

This difference in grammars expresses itself as different kinds of structures. Games are a tightly interwoven structure of feedback loops: one loop leads directly into another and they build on each other like Lego™ blocks. Sometimes that structure is hierarchical, i.e. levels of increasing difficulty requiring increasingly complex actions (finish one to get to the next). Sometimes it is networked: e.g. a large space that you can explore where difficulty and complexity are distributed spatially.

In hypermedia, no matter whether it’s Michael Joyce’s Afternoon, A Story, Kottke’s weblog, or Twitter in its entirety, the centre of meaning is in the context: where you get to after you take the action. The action only has meaning insofar as it affects the context. The page or tweet you see is what says something; the link, and the act of following it, only modifies it.

The popularity of game mechanics in user interface design complicates things but mostly because they are usually badly thought out and not that unique to games.

Some of the things labelled as game mechanics are merely good UX design practices, like having clear, dynamic, and immediate feedback loops throughout your app. Others, like using leaderboards to foster competition and manipulate your users into dehumanising their fellow people and thinking of them as things to be beaten, are tactics long used by the managers of sales teams. These, and a lot of other ‘game mechanics’, are really just competition mechanics and aren’t specific to games.

In the end, the ‘is it a game or not?’ question doesn’t matter to us. While the distinction between the two is important when it comes to understanding the strengths of each, it’s important also to understand that digital media (as well as a lot of non-digital media) can be more than one thing at the same time.

You can make a game that works just as well as just a story with all of the game’s feedback loops dialled down to ‘So Easy a Drooling Infant Could Do it’. You can make hypermedia, apps, and websites that can be played like games.

Absolutism doesn’t work for digital. Often the answer to the questions you ask yourself as a creator will be ‘both’.


I don’t remember the first time I told a story. None of us do. It doesn’t matter whether it’s genetic or learned, nature or nurture: storytelling is a basic human activity.

We only have two ways of teaching:

  • A show-do loop. The teacher demonstrates. The student tries to do. Gets feedback from the teacher, who may or may not show again. The student tries again. Repeat as necessary.
  • Storytelling. The teacher encapsulates the showing, the doing, and the information needed to do, in a story.

Every teaching method or form is just a variation of one of those two, usually replacing the teacher or the storyteller with a technological proxy.

Games are strong on the former method: a feedback loop between showing and doing. Hypermedia is strong on the latter: even incomprehensible non-sequiturs are filled with narrative logic once you post them online, on the web, on Twitter, or on Facebook. The very context coopts everything that appears into telling a story.

In real life, how you teach isn’t limited to just one method but usually a mix of the two depending on the subject, strengths of the teacher, and the abilities of the student. The blurry line in digital media between games and hypertext is just a reflection of common practice.

What you teach isn’t limited to skills or knowledge, although that’s what we usually associate with teaching.

Sometimes what you teach is emotion. Feel the sting of murderous jealousy. Experience humbling shame. Understand the fear of death. Fall in, feel, and lose love.

This is what it feels like.

As teachers, storytellers cannot just pour information into the heads of their listeners. They have to lead them to an understanding. It doesn’t have to be exactly the way you understand it—we all start in different places—but it needs to be of a kind with your understanding. Emotions need the same build-up, practice, demonstration, and experience as any other thing you teach.

And to be able to do that you need to understand your medium. You need to have at least made a conscious note about what you’re doing. Choose your medium. You need to know how that medium is and has been used. It doesn’t matter if your colleagues in that form aren’t doing what you’re doing; their techniques are relevant. Joe Sacco’s Safe Area Goražde, Marjane Satrapi’s Persepolis, and Will Eisner’s Contract With God are, respectively, journalism, autobiography, and fiction but they share a form of storytelling. It doesn’t matter if you’re doing a comic on cute cats; their methods are relevant to your work. Copy the way they do things. Try them for yourself. See what works for you and what doesn’t.

The same applies to games designers and hypermedia authors. Don’t limit yourself to the games or hypertext that are covering exactly the same subject as you are. The form is where the methods and the structure come from. Copy ideas from apps, websites, and games.

Choose your structural grammar. Study it. Practice it. Repeat as necessary.

  1. Right out of the gate, early cinema focused on spectacle, fantasy, and documentary works. Most of the stage adaptations come after the special effects and documentary films. The crude and stage-bound nature of early film has more to do with the limitations and immobility of the cameras than an overbearing influence of drama on the filmmakers.
  2. Who’s with me on holding a party in 2021 celebrating the thousand-year anniversary of the published novel?
  3. Oral transmission coupled with the mnemonic aids of verse makes poetry less dependent on print for distribution and authorship.