HTML is too complex

(This is the ninth post in a series on the publishing industry’s new product categories.)

The syntax of HTML and XML—angle brackets and closing elements—isn’t complex. It’s tedious, but it isn’t complex. If the problem lay in the basic syntax, we’d have an easy time fixing it. The problem with markup complexity lies in the underlying model. Or, in the lack of one. Simply put, HTML is a mess.


What follows is from an email sent by Matthew Thomas to the WHATWG mailing list (the list responsible at the time for the development of HTML5) almost ten years ago. Everything it says is still true.

In response to the proposal that HTML5 add a host of semantic elements, each with no default rendering to distinguish it from other elements, Matthew predicted the following:

  • The A-list of Web developers will begin using all the elements
    correctly on their Weblogs, and they will feel good about it.

  • A greater number of Web developers will never use most of these
    elements, but they will replace all occurrences of <div> on their
    pages with <section> because it’s more “semantic” (just like they
    did with <em> for <i> and <strong> for <b>), and they will feel good
    about it.

  • The vast majority of article producers (Weblogs and online
    newspapers) will never use <article>, because there’s no visual or
    behavioral benefit from doing so. So <article> will never become a
    reliable way of dissecting or aggregating pages.

  • The number of knowledgable HTML authors, the proportion of HTML
    pages that are valid, and therefore the overall usefulness of the
    Web, will be less than it otherwise would have been because of
    HTML’s increased complexity.

I’d argue that his prediction, ten years ago, was pretty much spot on:

  • The A-list rewrote their own sites to use fancy HTML5 semantic elements, then wrote books, presented talks, and sold workshops teaching people how to do the same.
  • The hangers-on and wannabes try a bit but don’t use any of the elements except maybe <header> and <footer>, and possibly <article> after it was blessed as a generic sort of standalone content container instead of <section>. Most of the elements are regularly used incorrectly.
  • The vast majority don’t use any of the semantic elements unless it’s by accident, like a thoughtless copy-paste.
  • The only reason the proportion of valid HTML files has increased is that HTML5 retroactively blessed invalid files as valid, provided they wear the HTML5 doctype.

The web remains too unstructured for <article> to become a good way of ‘dissecting or aggregating pages’ as originally envisioned. The HTML5 outlining algorithm isn’t used by anybody (except the A-list gurus) and, even worse, is supported by very few browsers or screenreaders.
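For what it’s worth, here is a minimal sketch of what the outlining algorithm was supposed to buy us: nesting, not heading level, was meant to determine document structure, so in theory every heading could be an <h1>.

    <body>
      <h1>HTML is too complex</h1>
      <section>
        <h1>The model is the problem</h1>      <!-- outline rank: 2 -->
        <section>
          <h1>Not the syntax</h1>              <!-- outline rank: 3 -->
        </section>
      </section>
    </body>

Since browsers and screenreaders by and large never implemented the outline, what actually gets announced is three equally ranked top-level headings, which is exactly why nobody sane authors documents this way.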


As Matthew Thomas mentioned in the email above, unless there is an immediate visual or behavioural benefit to using an element, most people will ignore it. This is compounded by the angle-brackets mess of HTML. By completely separating design (CSS), behaviour (JS), and structure (HTML) the specification gods have taken away the context that would make it easier for us mere mortals to give our documents a meaningful structure.

That’s without getting into the problems with the syntax itself.

While the separation makes using HTML for documents and ebooks more difficult, it is essential to HTML becoming an app platform, which is obviously now the web’s primary purpose.

(Most websites today are just web apps for delivering ads. They certainly aren’t made with readability in mind.)


There was a long period when the markup of most websites was unreadable because they used a mess of nested table tags to render the site. The markup was meaningless and complex. For a few years after that, though, when you viewed the source of your average website, you would have seen relatively clean and nicely structured markup that most people could understand, even without specific knowledge of HTML. Google’s web crawlers loved simple, well-structured documents, and so the web filled with them.

Now we’re back to seeing almost the same level of complexity and messiness in most web pages as we saw in the worst days of table-hacking. The semantic elements from HTML5 are largely unused. Those that are used, such as <header> and <footer>, are used incorrectly because people misunderstand what they mean. Every page is riddled with <div> elements bearing opaque classes and IDs, nested in a document structure more complex than many I saw in the table-layout days.

This escalating complexity is arguably one of the biggest ongoing issues in web development because it makes things like authorship, search engines, discoverability, and automation more difficult than they should be.

You see, if the markup you assign to a piece of content has a specific meaning, you can write code that’s aware of this meaning. You make human meaning machine-readable. This is useful if you want to make the text more searchable or if you want blind people to be able to hear it with their screenreaders. If the markup is too complex to use properly (both the underlying model and the markup syntax), humans won’t do the markup properly, and the content’s meaning becomes machine-opaque again. HTML5 has a big problem with markup complexity: even A-list developers have spent countless hours debating what the various new semantic elements actually mean.

Hint: they don’t mean what most of us assume they mean. <section>, <article>, <footer>, and <header> all differ in meaning from what we’d expect based on existing practice or a basic understanding of English.
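As far as I can tell from the spec: an <article> is any self-contained composition that would make sense syndicated on its own (a blog comment qualifies; a chapter mid-novel doesn’t), a <section> is a thematic grouping that ought to have its own heading, and <header> and <footer> belong to their nearest sectioning ancestor, not to the page. So a minimal sketch of spec-correct markup, which almost nobody writes, looks something like this:

    <article>
      <header>
        <h1>HTML is too complex</h1>
        <p>Posted by the author</p>
      </header>
      <section>
        <h2>The model, not the syntax</h2>
        <p>The body of the post goes here.</p>
      </section>
      <footer>Part of a series on the publishing industry.</footer>
      <article>
        <!-- a reader comment is itself self-contained, so it too is an article -->
        <p>Nice post.</p>
      </article>
    </article>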

HTML5 is itself complex. Most developers can’t or won’t put in the effort to properly mark up their content semantically. EPUB3 and its ilk add even more complexity, more ‘semantic’ elements and attributes, all of them even more difficult to understand and harder to explain than the basic new semantic elements of HTML5.

Badly implemented complexity, such as in HTML5 and EPUB3, means we get all the pain and difficulty of escalating complexity, but with few of the benefits. Unfortunately, these are formats whose limitations we have to work around and surpass. They are a disadvantage for both the web and the ebook industry. One of the tasks publishing has ahead is to try to neutralise that disadvantage.

The ebook as an API

(This is the eighth post in a series on the publishing industry’s new product categories.)

The problem many publishers are facing is that their titles need to be reused in a variety of contexts.

Book apps are very unfashionable at the moment but there is a brisk trade in small, fairly cheap, and functional apps based on book content, where the content is often licensed by a small app development outfit from a small publishing outfit or book packager. These range from military history apps to children’s apps to travel guides, and in many ways they are prototypical Content Development Kits of the kind I described in an earlier post.

Then we have a variety of web and app gateways popping up that sell access to ebooks on a subscription basis, either directly to consumers or to libraries or other educational establishments.

The source format for these apps is often an EPUB version of the title and this is, in the cases I know, the source of a lot of problems and complications. The structure of the EPUB doesn’t tell the app developer where to hook the functionality of their app into the title’s content. This means that the app developers have to spend considerable time adapting and editing the text of the title and its structure. In some cases they have to spend more time on pulling structured text out of a crap EPUB than on the development of the app itself.
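To make the complaint concrete: EPUB 3 does have a structural semantics vocabulary (the epub:type attribute, assuming the http://www.idpf.org/2007/ops namespace is declared on the root element), and a title that actually used it would give a developer something to hook into. A rough sketch of the difference:

    <!-- What app developers usually get: -->
    <div class="x7">Chapter One</div>
    <div class="x8">Some opening paragraph…</div>

    <!-- What they could hook into: -->
    <section epub:type="chapter" id="ch01">
      <h1>Chapter One</h1>
      <p>Some opening paragraph…</p>
    </section>

With the second version a developer can find every chapter, glossary, or index programmatically; with the first they’re reverse-engineering a designer’s class names.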

Most large publishers see development as the single biggest cost of creating apps from their titles. This is because they are focusing on the digital equivalent of a tent-pole blockbuster movie. Small publishers and small app developers tend to focus on smaller scale apps with a much bigger emphasis on code reuse. For them, anything that cannot be automated is a liability and a cost centre.


You can think of a structured ebook file as an API. Most existing ebooks don’t need any API capabilities. A novel benefits little from them.

Reference books, however, gain immense value from becoming detailed and functional APIs in the digital space.

A reference book that is the source of only one or two concurrent editions in print can be the content source for hundreds, if not thousands, of apps. A classic example is the dictionary services built into Mac OS X and iOS.

Any reference title can be a similar source provided that its content has been made available as an API.

Unfortunately, we don’t have many formats or tools that make this easy, making a lot of these services custom jobs.
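I don’t know of a standard format that does this well, but the shape of the thing isn’t complicated. Here’s a hypothetical, hand-rolled sketch of a dictionary entry that an app could query directly (the element names are mine, invented for illustration, not any standard’s):

    <entry id="serendipity" headword="serendipity">
      <pronunciation ipa="ˌsɛɹənˈdɪpɪti"/>
      <sense number="1" pos="noun">
        <definition>The faculty of making fortunate discoveries by accident.</definition>
        <example>A case of pure serendipity.</example>
      </sense>
      <related type="synonym" ref="fluke"/>
    </entry>

With entries like that, the same file can drive a lookup service, a flashcard app, or a crossword helper without anyone ever having to parse prose.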


Rewind

Stop. Go back. Reread. Can you tell what the big problem is with what I wrote above? The idea that publishers could benefit from turning their titles into well structured ebooks—files that can serve as APIs—has a fatal flaw:

Only certain kinds of books have the internal structure that suits this purpose. Books that can be mapped onto a database structure (e.g. reference books) work perfectly. Structured non-fiction tends to work well. Anything that has a story less so.

Even the most structured dictionary or reference book is still not flexible enough to really suit the purposes of app, web, and interactive media developers. They need more. They need content that is adaptive.

What they need are structured projects that offer enough variety in their fabric to adapt to varying devices and contexts. Instead of chapters that exist at a single length, you need entries that have full-length, abridged, even more abridged, and tweetable versions of the chapter’s content. You need the chapter’s full title, a tweetable title, and a display title (if different). Every chapter needs descriptions of varying lengths (like the chapter’s content). Do that for every chapter in the project, mark it up so that it’s usable, and you’ve got the beginnings of something really flexible.
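To make that less abstract, here is a hypothetical sketch of what one chapter’s entry in such a project might look like (the markup vocabulary is invented for illustration; any consistent, documented structure would do):

    <chapter id="ch04">
      <title variant="full">The Unevenly Distributed Ebook Future</title>
      <title variant="display">The Ebook Future</title>
      <title variant="tweetable">Ebooks: unevenly distributed</title>
      <content variant="full">The complete chapter text.</content>
      <content variant="abridged">A shorter cut of the same chapter.</content>
      <content variant="tweetable">Data serves the status quo.</content>
      <description variant="long">A paragraph-length description of the chapter.</description>
      <description variant="short">A one-line description.</description>
    </chapter>

A developer, a website, or an ereader app can then pick whichever variant suits the screen and the context without anyone editing the text by hand.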


Adaptive content

What this means is quit thinking that what you are doing is designing and creating for the final presentation. You’re not in the business of making brochures. You’re not in the business of mobile applications. You’re not in the business of making web pages. You are in the business of making content and structuring that content so that it’s presentation independent, so you can get it out onto whatever device or platform you want to. (Karen McGrane – Uncle Sam Wants You to Optimise Your Content For Mobile)

Adaptive content—making things work for mobile, web, desktop, apps, tablets—is not just a design problem but an authorship, business, and editorial problem.

A large content library is not an asset in this context but a liability. It’s an ossified monolithic resource when you are surrounded by small and nimble players using small and flexible resources. The individual smaller players do not represent a threat—most of them are more likely to fail than not—but as a whole they do. Where each one may only address a tiny sliver of your back catalogue’s target market, they do so with content that is more flexible than yours because they had to start from scratch. Or, they have had the time and focus to adapt it by hand because their survival depends on this one title, which is a level of attention you can’t give to tens of thousands of titles.

As a collected whole, the smaller web players, self-publishers, three-person publishing houses, indie app developers, and the like, are much more likely to be able to properly leverage the advantages of digital publishing than a large publishing mega-conglomerate. Publishers approach each edition as something that demands a unique design, custom editing, and detailed work to adapt the title’s content to that edition’s particular form. This isn’t scalable, in terms of either labour or cost.

Adaptive content is essential when we face a plurality of devices. Having a ‘mobile content’ strategy means that you are just making the same dumb mistakes again because there will be other platforms in the future, and if your content isn’t readily adaptable you’re just going to face the exact same problems again that you are facing now with the mobile transition.

Not to mention the fact that you are opting out of a revenue stream from licensing your catalogue to various developers.


Your existing non-fiction titles are flies caught in amber. They exist only as evidence of a single evolutionary context, incapable of adapting or changing to survive in a new one. Because of the costs and work involved in making an extensive back catalogue adaptive, it becomes a liability when competing with a host of smaller outfits starting from scratch.

My last word on DRM

Trying to change a major publisher’s mind on DRM is a lost cause. That’s why even though I disagree with IDPF’s DRM efforts, I can only hope that their work will result in the wholesale adoption of a completely ineffective and useless DRM technique and bring us into a de facto DRM-free world.

(You could argue DRM isn’t a problem for existing consumers. That’s true, but only because we just buy from Amazon.)

What I hadn’t expected but has become abundantly obvious over the past year is that the publishing industry has a pathological preoccupation with controlling the reader’s actions. I had originally expected publishers to respond to reason, logic, and ruthless capitalistic ROI calculations (all of which weigh against DRM). But those who favour DRM do not respond to logic. When nailed on one argument they slip over to another:

No no no, piracy isn’t a problem, it’s uncontrolled sharing

Po-tay-to po-tah-to. When it becomes hard to find evidence that piracy affects revenue the response is to rebrand it and claim that it’s still a problem.

I don’t quite know how to deal with an irrational obsession on that scale. Obviously, if piracy is cutting your revenue, that is a bad thing. But so many of the concerns around piracy are indistinct and vague—piracy worriers can’t articulate specific business consequences beyond ‘lost sales’ hand-waving with no data to back it up.

My own view on piracy, formed by having worked in the software industry for a few years, is that, insofar as it’s an actual revenue drain, fighting it is largely a lost cause. As games developer and publisher Jeff Vogel has been fond of pointing out: if you’re selling a digital product, by definition everybody who decides to give you money is an honest person. The dishonest people will just pirate your product without a care or worry. Heavy-duty, iron-clad DRM can’t force the dishonest to buy, and imposing it would have massive detrimental results for the honest reader, the author, and the publisher (but not the dishonest reader). The dishonest would rather simply go without than pay. And even if you could force them, they’d be the customer from hell, overloading support channels and public forums with Olympic-level whining.

So why worry about them? ‘Pirates’, even if you can prove they exist and affect revenue, which most publishers discover is hard, are a non-factor from a business perspective, about as relevant to your sales as Mac users are to a Windows developer. They simply aren’t a part of our market. This fact is also a big part of the reason why measuring piracy and the impact of piracy is so difficult. It’s like trying to measure the GDP impact of a Buddhist chanting. Non-participation isn’t measurable. Estimating the loss from non-participation is little more than science-fiction.

I wouldn’t mind talking about piracy to publishers if they discussed the issue in the same logical, matter-of-fact, manner that most indie software developers discuss it. But they don’t. What publishers mean when they say ‘piracy’ is more of a general worry about change and new technology and so most of them are extremely reluctant to discuss the details of what they believe and fear.

In publishing, the piracy concern is a superstition, magical thinking driven by a hope that the digital space is fundamentally inhospitable and the old times will be proven to be fundamentally superior to the new. The believers resist all attempts to empirically verify whether there is or isn’t a problem. Arguing with them is like arguing with a creationist. They don’t want proof, even if it is in their favour, because what they want is blind faith. What is worse is that many are using the fear of piracy as an excuse for not entering interesting new markets, which is a loss of opportunity, money, and revenue more certain than any threat that stems from piracy.


The real problem publishers have

Tracing the problem right down to its roots, it’s clear that the publishing industry’s behaviour towards readers (not their readers, since we’re talking about the author’s readers, not the publisher’s) comes from the same source as their behaviour towards authors.

Just as with authors, who they saddle with fundamentally unfair and insulting contracts, publishers simply have a disdain for and fear of readers, preferring to let proxies like bookstores take care of all direct contact with them.

I see no other way to explain their behaviour. You just don’t do these things to people you like (readers or authors).

DRM is an extension of that disdain and fear. It is intrinsically hostile to the reader. Its value isn’t supported by any evidence of substance. Publishers are willing to harm their authors’ readers just on the possibility that they might be doing something the publisher disapproves of. There is no evidence of harm to the author or the publisher.

The question of whether sharing or piracy actually takes place, or whether it affects publisher revenue, is clearly irrelevant to them; otherwise they would have abandoned DRM years ago.


DRM will never be harmless even if it’s useless at preventing piracy or sharing

Welp.


ETA:

Outside of the fundamental disrespect for the reader it represents, there is one major business reason why DRM is a very bad idea for publishers:

It involves inflicting a recurring technical, infrastructural, and administrative cost on all of their sales in perpetuity to solve a problem they can’t prove exists. By tying their entire catalogue, in perpetuity, to the fate and competence of a single external service provider (whoever provides the DRM solution) publishers are taking a business risk of unfathomable proportions. These are the kinds of risks that sink large companies.

That anybody would make this sort of decision without hard evidence to back it up is utterly mind-boggling.

Except, except, except

Publishers really invest in quality and editing.

Except editors keep getting laid off as a part of cost cuts.

If you want proper marketing for your book you need a publisher.

Except you’re expected to do most of the online marketing yourself, and online is where all the sales are happening.

You need a publisher if you want the book to look good.

Except the books from big publishers often look like crap in digital and utterly mundane in print—no better than a well-made self-published book.

At least publishers always provide good covers.

Except the covers so often completely misrepresent the book and whitewash it of all minorities and personality.

Publishers can help you sort out your social media ‘platform’.

Except they increasingly won’t even sign you on unless you already have a platform.

The editor always brings out the best in the text.

Except when the editor simply doesn’t see the world the way you do and buries everything unique and special.

The editor always hones and clarifies what the book has to say.

Except when the editor simply doesn’t get what the book has to say.

At least with a publisher the printed book won’t be Print-On-Demand.

Except it increasingly will be.

Books that haven’t been edited are always crap.

Except when they aren’t. Sometimes the combination of beta readers, volunteer editors, and the right book results in something good.

Books that are edited are always improved by the process.

Except when they aren’t. Sometimes the editor is just a bit of a fool. You rarely get to choose your editor.

Books from big publishers are always edited.

Except when they aren’t and the writer has to bring in a freelance editor if they want any real editing done.

A publishing company is a well-honed machine for creating and releasing books.

Except the next available slot in their publishing schedule is in the autumn, two years from now.


There’s this tendency among advocates to compare the absolute worst of the enemy with the perfect, best case scenario on your own side. The crowd that is hostile to self-publishing often likes to compare the worst dinosaur porn (which still sold, though, and made more money than many other titles) to one of those wonderful, Never-Neverland publishing companies that to this day invests massively in editors, doesn’t use exploitative covers, spends its untold riches on making the book’s typography absolutely perfect, has a workflow that spits out beautiful, error-free ebooks with ease, gives every author a personal PR rep, and has a multi-million dollar marketing budget for every title.

Of course self-publishing looks bad when you compare it with a piece of fiction that’s less realistic than the more deranged parts of Alice in Wonderland.

The reality is that book retail has been steadily deteriorating over the years and publishers themselves have been compromised by decades of cost-cutting. Most book sales are online. Titles today get much less editorial attention than similar titles did years ago. Covers have always been completely disconnected from the book’s actual content.

In terms of marketing, quality, distribution and design the difference between a competently published book and a competently self-published one is now less than you think. Competent self-publishing is getting easier every year as tools and services improve. Publishers offer less and less as they try to stay competitive through cost cuts and ‘optimisations’. Over time publishers seem to be devolving into self-publishing services that offer little but demand everything.


One key difference between self- and traditional publishing that is unlikely to fade in the near future is traditional publishing’s rank disdain for the author.

Rank disdain? Yes, there’s no other way to describe it. The standard contracts offered to most authors are insulting. The contracts are de facto for life and make breaking up with the publisher complex and hard. They frequently feature non-compete clauses that shackle the author’s entire career to the whims of the publisher.

These are terms that would be onerous in an employee contract where the desperate victim can at least expect a salary, if not actual benefits. For publishers to expect an author to swallow them in exchange for the pittance that most books earn is nothing short of insulting.

Pointing at other media industries won’t get you out of this one. Standard contracts in other media industries are also unfair and insulting. There’s a lot of disdain for creators going around.


But, Baldur, haven’t you said that some of the biggest pain points for most publishers today are narcissistic idiot diva authors?

Absolutely.

  1. It doesn’t matter if the author is a fool. Those are contractual terms you shouldn’t even offer to fools.
  2. Who do you think you are going to get when you make the demands publishers are increasingly making? Somebody who constantly self-promotes? (I.e. does their own marketing and social media.) Somebody who doesn’t consider the primary reward of publishing to be monetary? (Very few can make a living writing books.) Somebody who is willing to do all of this work for the sole perk of being able to ‘perform’ in public as an author? Narcissistic idiot divas, that’s who you get.

What would a fair contract look like?

Fixed term, for five to seven years. If you want more, you can renew the contract for another five to seven years once the first term is up. Or, you can just pay for the privilege of a longer term. Up front.

Absolutely no non-compete clauses. If you don’t want a writer to publish other writing elsewhere, pay them enough money so they don’t feel the need to. If you can’t afford to pay that much then you have no right to complain let alone to demand non-competition.

No options or rights of first refusal. Stand or fall based on your standard of work and the strength of your relationship to the author.

The copyright always, always, stays with the author.

Both parties should have the right to unilaterally end the contract and all of the obligations it entails should there be a major change in the circumstances of the other party. E.g. the publisher should be able to end the contract if the writer is sectioned or jailed. The author should be able to end the contract if the publisher gets sold or declared bankrupt.

(The bankruptcy part is a bit complex, admittedly, but having language in the contract that covers the scenario is always going to help the author more than harm them.)

A fair publishing contract is one between peers, where the rights of the two parties are in balance.

Even if the naysayers are right and self-published titles are always objectively worse than traditionally published titles, at least with a self-publisher you can count on the publisher treating the author with respect.


What books have always had are small teams. Outside of the people who work for the printer, the number of people who work on a book directly—especially compared to other media industries—is tiny. Maybe a couple of editors, a typesetter, a cover designer, the writer, maybe a couple of office staff—making a book isn’t manpower-intensive. You could argue that selling and marketing a book is, however, but that’s also the bit that is being disrupted by the web.

Finding your small team, one you can trust and rely on to help you make your book, is more important than whether you are self-published or not. It’s the team that makes the book, not the publishing model.

The problem for writers seeking to be published is that they usually can’t shop around for a good team. They might be able to shop around for a good editor, but not a team. People within a company inevitably vary and the author is forced to trust that they’ll be lucky with the people assigned.

What the self-publishing model does do is put the author in control over the team instead of the publishing company’s stockholders. If the author wants to spend more on editing because the editor is just so good, they can do that. If the author wants to spend less on editing because they have such a good group of beta readers, they can do that. If they want the cover to actually reflect the book’s story, they can work with a cover designer to make it so. They can assemble a team of freelancers around every book as needed, on an ad hoc basis.

The publishing companies that don’t compromise on the small team and can offer the flexibility to respond to the book’s needs won’t have to worry about self-publishing at all.

Other publishers will have to worry.

A thought exercise

Which may or may not make sense.

Imagine that we have a mineral resource that, while not vital to the survival of the human race or essential to continuing technological progress, plays a pretty important role in making said things easier, more convenient, and more bearable.

Now imagine that this mineral needs to be refined before it becomes a useful product. The big six refineries have locked down all of the rights to all of the major sources of the mineral, which they stockpile. Their stockpiles and mineral purchases vastly outstrip their capacity for refining, but they continue to hoard nonetheless.

Several things become obvious in this scenario. First of all, every single new entrant that attempts to play at the same level as the existing six gets crushed very quickly. They can’t rely on stockpiles of existing materials. They have to rely on new mineral deposit discoveries and to do so they try to entice existing prospecting agents over to their side.

The problem is that said agents became rich by working with the big six and so their cooperation is fairly limited.

Then there’s the issue of modernisation. When you run a big operation with massive revenue, any technological upgrade of your processes costs a fortune—for the big six refiners to finance the retooling of their plants, the new technology needs to deliver a sizeable impact to the bottom line. Which hardly ever happens because most technological improvements are incremental.

Another, obvious, thing is that lower quality competitors enter the market as demand exceeds what the big six can supply. These use a new mineral similar to what the big six supply – it serves the same purpose – but it isn’t the same and operates differently with different, possibly more, side effects. These entrants make up for the lower quality of their material through a variety of means. They build their refineries closer to the customer, ensuring prompt delivery. They try not to be tied down to the same strict processes as the big six and so can respond quicker. But, mostly, they compete by building much closer ties to the miners of their competing mineral than anything the big six have done, using networks and modern communications to leverage a large number of smaller mining operations instead of a smaller number of large mines. Year after year, their access to unprocessed mineral improves and improves.

Now, this presents a dilemma for the big six. The competing product is clearly a bit rubbish, but it’s improving quickly and, worse for the big six, a lot of the customers don’t seem to care. They value other things than the over-engineered, expensive product the big six refiners produce.

There are a couple of obvious things for the big six to do:

Either they compete directly with the new product by building out new operations that leverage the material properties they have now discovered their customers actually do value. The problem with this approach is that it gives their competitors a big leg up: their entire business model just got validated, and they have a massive head start on the big six.

Another obvious thing to do is to try and offer their services to the new industry, become a high-margin provider of solutions and services. The problem there is that they actually have fewer core competencies in this new field than the new entrants and so end up selling these services mostly to each other and to dumb money trying to capitalise on the disruption.

The blindingly obvious thing to do, which they never do, because it means giving up what they see as a competitive advantage, would be to sell off their stockpile of minerals. They have more resources than they have the capacity to turn into a product. By opening up a licensing market for the mineral, they create a completely new industry of refineries that can be agile because they aren’t tied down by mineral acquisition but are free to innovate in their production techniques.

This enables new kinds of companies and new kinds of products. It opens up the field to a new set of disruptors who can disrupt both the big six’s existing refinery operations and the newer, lower quality refiners. Moreover, since the single most important core competency of the big six is the acquisition of mineral rights, they have ensured their continued existence and profitability.

Back to the real world.

The big six (now five), as a rule, demand long-term rights exclusivity while only really leveraging the full value of these books for the first few years. After that they just stockpile them, hoping for a film deal or a sequel that might spin up demand for the older titles again.

This is a massive resource that they are simply not equipped to turn into product. Their product development units (publishers, that is) are busy publishing new stuff. Non-exclusive, API-based, and widespread licensing of their backlist to competitors in chunks would make them a lot of money, ongoing, and it would open up the field to a completely new kind of publisher.


Postscript, a year later

I’ve been hearing conflicting reports about how easy it is for writers to break up with their publisher. The vast majority of reports say that it is at the very least a hassle and at worst a nightmare.

But there is a subset of the reports that say that it’s easy peasy. It isn’t fun and the publisher might get a bit grumpy, but it isn’t too difficult.

My theory is that the minority who find it easy to get the rights to their work back are getting their stories from people with influence, i.e. authors who either have enough money to have lawyers on retainer, or enough reputation oomph to convince the publisher to be nice.

Whatever the cause, since the majority seem to be having difficulties, and since there’s another big whack of titles that don’t have an advocate (like titles controlled by uninterested heirs), the above scenario isn’t too far-fetched. Publishers seem to be employing non-compete clauses and hard-to-sever contracts to hoard titles and authors they aren’t otherwise using.

It crosses my mind that publishers are hoping bulk licensing of their back catalogue to, for example, subscription services might earn them a lot of money, and that’s part of the reason why they are hoarding titles and authors.

Losing faith in yourself

(This is the sixth Stumbling into Publishing post.)

It always starts well because I always start alone. Working by myself, I have the peace of mind to just focus on the work, work on the components, polish the details, and simply go where my interest takes me.

Sometimes you share with people you trust; never more than with a small group.

Sometimes you lose interest and just stop. You are working alone and so who is going to judge you for giving up?

As time goes by, the thrill you had when you first started working begins to give way to something deeper—more profound. Your desk becomes the place where you sit down, heavy with troubles and worries, and stand up, hours later, light with serenity and calm.

Your routine becomes a glass palace where your soul lives, made out of emotional transparency and honesty of work; a place as fragile as it is beautiful.


Before 2010, I had always blogged just for myself. When I first started blogging, Facebook hadn’t even reached the point of being an obnoxious idea discussed at a college canteen. If you wanted to write and share with your friends, your options were Livejournal or a blog. I picked blogging, mostly because it offered more scope for experimenting with web technology. (I’ve been making websites for what is approaching two decades. It used to be fun.)

I never had any cause to be concerned about the ‘people out there’ who came to my blogs from outside my social circle.

That changed when I decided to take blogging seriously. It was all part of one and the same experiment:

Learn about publishing and study it.

Do it by self-publishing.

Write for an audience on the blog.

Take the idea of having an audience seriously.

Write fiction for an audience.

Take the self-publishing process seriously.

Learn everything you can about the process.

Talk to people.

Take them seriously.

Adjust your writing to suit the audience.

Lose the love for writing.

Let a deluded community guide you away from much more important problems and towards the tedious and mundane.

Lose the joy in tackling a hard technical problem.

Let self-interested shills dictate the issues you focus on.

Lose the thrill of figuring out a complex issue.

Lose the love of telling a story.

Lose heart.


I documented earlier the toxic difference between blogging and other kinds of writing: the immediacy, tight feedback loop, and brevity of the blog form compromise the writing to a degree other forms don’t suffer.

The problem was more fundamental than that. It was much more basic than a conflict of form and desire. I failed myself.

I got into blogging, debating with people, commenting on the format specification process, talking at conferences, and participating in the publishing community online. I began to take seriously the feedback people in the community gave me and adjusted my writing to suit. I changed which problems I looked into based on the suggestions of people in publishing. I began to rely on external feedback—other people’s responses—to counteract my utter conviction that I suck at this—that I suck at everything to do with publishing.

Because, like so many others, I think I suck.

The feedback loop didn’t counteract that conviction. It strengthened it.

Because:

  • Everything I enjoy writing tends to be less popular.
  • Conclusions I come to on my own, through reading, studying, researching, and working, are—if not ignored—immediately labeled ‘controversial’ and ‘provocative’.
  • The only really positive response I get is when I state the bloody obvious—the kind of observations everybody with common sense agrees with.

Even worse is the fact that I was (and am) monitoring the responses. I used to check Google Analytics every week. I used to monitor the response on Twitter. Even though I pretended otherwise, these things mattered to me.

Then I burned out. I didn’t lose faith in myself. I burned out when I realised that the only reason I inflicted this torment upon myself was that I never had any faith in myself to begin with.


When you’ve burnt out, lost faith in yourself, and lost respect for your audience, you begin to play at being deliberately provocative.

Part of it is a desperate attempt to get people to take something—anything—more seriously than just a yay or nay retweet. Dialling the idea up to eleven and having them reject it is better than having them ignore it completely.

Part of it is a loss of respect. While I respect the people I’ve communicated with online and in real life, I haven’t respected blog readers as a group for a very long time.

I think the only way to respect blog readers as a group is to ignore them. Write either for yourself or for a specific individual. The most interesting and rewarding blog posts are the ones that are like a letter to yourself or to a friend.

That means giving up on the idea of getting any real value out of your blog, much like the only way to enjoy social media means giving up on the idea of benefiting from it.

Your blog can either be a resource for your career, or it can be a piece of work you enjoy writing.

Social media can either further your business, or it can make your life richer.

Optimising your activity on blogs and social media is toxic. It’s a pit of venomous adders because all of the compromises and adjustments that increase the response and improve your career are also actions that poison your mind. They draw you away from yourself and into a feedback loop where your self-worth depends on what button some moron decides to press on their smartphone.

Once you’re hooked by the loop you start doing crazy things like compromise your other writing projects, self-censor, and make a truck-load of bad choices in general.

See also: ‘sellout’, ‘Judas’, ‘dishonesty’.

(Lesson one: if you’re going to sell out, do so properly and get paid. Don’t sell out for free.)


I made bad decisions:

I decided I wasn’t any good at anything I do.

I decided that anything I enjoyed didn’t have value.

I decided other people knew better than I did about what I was good at.

I no longer trusted my own taste.

I lost the will to work on projects I enjoyed.

I listened to people’s feedback, adjusted my work to suit, but I didn’t believe them and so lost the emotional connection to my work. I no longer had the bond with my work, the pact you have with your own imagination that you need to make something that interests yourself.

All that crap about how the only way to reach your readers is to be true to yourself? Probably true about book readers but patently untrue about blog readers. The manipulative, evil, and sleazy crap works online. It doesn’t just work, it works brilliantly. It gets traffic. It gets conversions. It gets leads. It gets sales. I have no idea how people do it without completely losing their souls.


The path to repair yourself is to write for yourself again:

Don’t write for the hungry horde of blog readers.

Don’t write to prove yourself or demonstrate what you can do.

Don’t write for the feedback loop.

Disengage.

Disentangle.

Explore.

Wander around the words and see the sights.

It takes a while. Bad habits tend to stick around—those bastards are hard to shake. But, with practice and heavy writing, they begin to fall off one by one.

Some bad habits seem to stick around forever but that’s all right. There’s always room for more improvement.


Most of the posts I’ve been publishing on my blog this January were written some time over the past couple of years. They are not new. Most of what I wrote during this period just went into a folder, to be stored and forgotten. Some of them I wrote to shake some bad habits loose. With many I failed.

In writing largely for myself over the past couple of years, I’ve built up a small pile of text files. Not all of it is suitable for you piranhas. Some of them are. Some of them won’t make sense to you. Some of them are just crap. With a lot of them I was still in the habit of stating the obvious. Some of them are clearly written in the toxic blog style I hate. Some of them I like. Others I don’t like but will post anyway. Some I genuinely enjoyed writing. Others were a torment through and through. Whatever the quality, over the next few weeks I’ll be posting a bunch of them on this blog, roughly one a weekday, until they run out.

I’m monitoring the traffic to these posts and replying to the comments. Not as a part of the old feedback loop where I used to try and figure out how to get to you people. More as a comparative experiment. Since the writing process and the monitoring of the data have been completely separated, there is relatively little risk of the feedback loop reinstating itself.

I don’t know what I’m going to do once I’ve unloaded that bucket of half-digested crap onto the blog. Maybe I’ll just continue in that vein, writing for myself and throwing an occasional one onto the site after it’s aged a bit and the stink of the first draft has been brushed off. Maybe I’ll stop and just use the blog to let people know what else I’m up to whenever I’m up to something interesting.

Whatever else I do I’ll keep writing. The only question is whether you will have a chance to read it or not.

I haven’t decided. Whatever I do, I’m not going to take your advice on it. Don’t tell me what you want. You don’t get a say.

Changing your readership mix

(This is the seventh post in a series on the publishing industry’s new product categories.)

The mix of reader types in your readership isn’t an unchangeable fact, a curse bound in iron by the gods of old, a universal constant for all eternity. It can be changed.

Actually, that isn’t really true. The readership mix for most titles and genres is probably set in stone, one of those big blocks of ‘fixed, can’t change’ that you just have to work around.

What you can do is create a new readership with a new product in a new product category, but one that uses the text, images, and other materials from the old product. A new product that appeals to a market that is different from your print edition.

Most of those new product categories are just rebadged interactive media, and to create those you need people who know how to create interactive media (interactivity designers, app developers, or whatever you want to call them).


Most publishers give the digital edition of a title thought only after the fact—after the book has been written, edited, proofread, line-edited, typeset, and on its way to the printers—preferring to see what they can accomplish by tweaking whatever piece of digital rubbish their print workflow automatically craps out, wiping the InDesign shit-stains off it, and calling it an ebook.

If you want to do something interesting with the digital edition of the title, you need to plan this right at the start, before any work is done on the title. And please do involve a professional developer right at the beginning. Think of them as co-authors of the digital edition and not as carpenters putting up a shelf you’ve specced out.

Once you start, once you’ve planned the print version of the title, your options for the digital edition go down dramatically. At that point, the easiest thing to do is to gloss up the title with idiotic ‘enhancements’ or other interactive doohickeys. Anything else is too expensive because you are, in effect, reinventing your production workflow on the fly.

Don’t do that. That’s crazy.

If all else fails and you’ve been given the task of adapting a pre-existing title into digital, you have a simple set of options:

  • If the title is likely to have an ebook-friendly readership and the title is structurally an easy match for ebooks, just do a basic ebook.
  • If the title doesn’t have an ebook-friendly readership but is easy to adapt into an ebook, do one and hope you get lucky.
  • If the title won’t sell as an ebook and won’t be easy to turn into an ebook, don’t do an ebook. Do something radically different.

Avoiding wild dogs/ebook fanatics

One reason to do a totally unviable ebook that’ll just lose you money and only sell three copies is to avoid PR backlash. You see, people who want ebooks really want ebooks. They get very angry when there isn’t an ebook version and will complain loudly. It’s better to do what they—.

Hah! Had you going there for a moment. Don’t do that. That’s crazy. Here’s a simple rule for you:

Ignore crazy people who haven’t already given you money.

Which problem would you rather have?

  1. A few nutters complain on Twitter and on Amazon that there isn’t an ebook version available. Most people ignore them.
  2. A few dissatisfied nutters keep sending you support emails because the ebook edition you released is much worse than the print edition because it was a money-losing low-budget production.

I’d choose the first problem every time. The last thing you want to do is piss people off who’ve already given you money.


What does ‘radically different’ mean?

So, you’ve backed yourself into a corner. You have a title that probably has to be something other than an ebook.

At this point your only real options are:

Either do nothing (a perfectly valid choice: since nobody runs a business specifically to lose money, doing nothing is always an option)…

Or, you take the title and everything related to it, give it to an accomplished app developer, and tell them to make something out of it. Don’t tell them what to make because, if you work in publishing, you are almost inevitably clueless and incompetent when it comes to the web and apps. Give them a target audience they should serve. Any involvement by you beyond instigating the project will decrease its chances of success. Or, judging by some of you, it would massively decrease its chances.

Tell them to figure out a new product with a new title (the old title’s readership isn’t interested in digital, remember) using your materials. Set up whatever rules, goals, and benchmarks you need to feel comfortable. Set up whatever licensing agreement you both think will make you both some money. Then get out of the way because, honestly, if you’re in publishing, you probably don’t know what you’re doing.

(In digital media, I mean. Oh, you thought I meant in general? So, sorry.)

Best part? You can do this again with another developer. You can give another developer a brief to create another product from the title, for another target audience, with another name. Once you are playing at this level you are creating completely new products with new titles that just happen to be based on your stuff. Why limit yourself to one go at the roulette table? Especially if you can convince the developer to do the project without an up-front payment, in exchange for a share of the profits.

Even better, there’s a way for you to get almost unlimited tries at the table at little cost to yourself—all upside: just create a standardised licensing kit for a selection of your non-fiction titles. You don’t even have to do an API, just what you might call a ‘content developer kit’ or CDK: a zip package of the title’s content in a structured format that developers can license on whatever payment basis you want. Bonus points if you set up a self-serve ecommerce site where developers can buy CDKs at whatever price you set (preferably royalty-free; there’s room here for flexibility). Just lay down a few branding, contract, and promotional guidelines and you’re good to go.
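Here is a hypothetical sketch of what the manifest of such a CDK might look like (the names and structure are invented for illustration; any consistent, documented format would do the job):

    <cdk title="A Field Guide to Imaginary Birds" publisher="Example Press">
      <licence type="royalty-free" attribution="required"/>
      <content>
        <chapter id="ch01" src="chapters/ch01.xhtml"/>
        <chapter id="ch02" src="chapters/ch02.xhtml"/>
      </content>
      <assets>
        <image id="img-puffin" src="images/puffin.jpg" caption="Atlantic puffin"/>
      </assets>
      <metadata src="metadata.xml"/>
    </cdk>

Zip that up with the referenced files and the licensing terms, and a developer can get started without ever emailing you.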

You probably have to require your licensees to use something like “this app is based on X, published by Y” in their app or web descriptions, for the consumer’s sake, though.


Building up in-house digital product development is risky and expensive, especially at the start when you have to build up the necessary expertise and tools to do the job and change your organisation’s implicit value network.

The problem is that changing an organisation’s value network is next to impossible without firing everybody (yourself included) and replacing them with different people. Adding individuals who have different values from those prevalent in your organisation won’t change the value network. It’ll just make your new hires miserable before they quit or get fired. Which means that building a top notch, in-house digital product development team is going to be difficult for most publishers.

So, either partner up or build a team that is isolated and sandboxed from the rest of the company and its incompatible values. For most publishers, anything else is unlikely to work.

The various types of readers

(This is the sixth post in a series on the publishing industry’s new product categories.)

(Before I start, I’d like to make sure you know this is all speculation and probably wrong.)

My guess is you can break book consumers into broadly five different kinds of behaviour. The emphasis here is on consumers, so this doesn’t cover corporate, institutional, or similar professional purchases at all.

  1. Heavy reader. People who buy several books a month, read most of them, and still have a mile-high ‘to read’ list. This is a relatively small number of people who have an outsized impact on the market and have mostly converted to ebooks.
  2. The literate reader. People who read anything from six to twelve books a year. How big this group is depends on the language and culture. In 2010 in Iceland, for example, an extensive survey pegged this group at over half the adult Icelandic-speaking population (PDF). For most countries that proportion will be lower. This group has partially switched to ebooks but at a much, much lower rate than the heavy readers.
  3. Blockbuster reader. The reader who only reads one book a year and then only a bestseller. These are the people that only buy authors like Dan Brown, J K Rowling, and whoever the dude is that writes those Jack Reacher novels.
  4. Super fans. They like this here one thing and aren’t ashamed of it. E.g. Twilight fanatics who haven’t read anything else in their lives. Harry Potter nutters. They’ve found that one thing they like and feel no need to branch out. More likely to reread and re-buy that one thing than to read something new.
  5. Gift givers. For whatever reason, these types have decided to forgo the universally accepted traditional gift of ‘cash in an envelope’ and foist their cultural selections upon undeserving relatives and acquaintances.

Group 1, heavy readers, is the one that has been driving most of the growth of the ebooks market so far. They’ve probably either completely switched over to ebooks or will have soon.

Group 2 is, in theory, the next major growth area for ebooks and also the one where ebooks are likely to stall. My guess is that most people in group 2 don’t read on the commute (if they did, they’d probably read more than 6–12 books a year) and so aren’t that affected by the bulk of your average book. A lot of the books they read are lent or borrowed and so don’t cause major storage issues.

These people read a few books a year and share them with their friends. They have a lot to lose from switching to ebooks and little to gain. Ebooks in general are objectively more ugly. They can’t be shared easily with your friends. They require an expensive device that in many cases is shared across the household (i.e. to read on the iPad you have to take it away from your kids who are using it to play games). The specialised ereader devices cost as much as this reader’s entire year’s worth of reading (i.e. as much as six to twelve paperbacks would cost, but without the benefit of lending). Certain segments of Group 2 do benefit immensely from ebooks (those with poor sight who prefer bigger font sizes and those who do read on the commute). They are also the ones who have probably already switched.

Many members of group 3 will only ever buy an ebook by accident. If you do something only once a year you damn sure want a souvenir. I can’t imagine this group switching in big numbers. Nor should they.

Group 4 will probably buy their favourite book as an ebook, and a hardcover, and paperback, and the UK edition, and the Japanese edition off Ebay. They’ll hunt down a copy from the first print run. They’d kill for a copy of the limited first run from that small publisher before the title got picked up by the big publisher. They’ll read and write fan-fiction (so much fan-fiction). They’ll buy the book in Kobo, iBooks, and Kindle and compare the three but they won’t buy any other title because it isn’t what they love.

Group 5 is unlikely to ever give ebooks. Why give an ebook when you can just as easily buy an iTunes/Amazon gift card which you can then pretentiously wrap? Why give a gift card when you can give real cash? Why give cash when you can just confess that you don’t love the recipient enough to give their gift selection some thought, and tell them to just fuck off and not bother you again?

How these five groups divide the industry between themselves is going to vary wildly from market to market, genre to genre, and ebooks aren’t going to shift that composition in any major way.

Moreover, one person can belong to different groups depending on the market. Here’s Hypothetical Karen.

She:

  • Is a huge SFF fan. Reads several titles a month.
  • Is a semi-regular reader of literary fiction. About six titles a year.
  • Only reads other genres when a mega-blockbuster comes along.

If my theory above is true, Hypothetical Karen’s SFF fiction library would be mostly ebooks, her literary fiction novels would mostly be hardbacks, while the blockbusters would all be paperbacks, probably borrowed from a friend, with the exception of the few that she bought cheap as ebooks. Her shelves would be dominated by SFF favourites—some that pre-date ebooks, some that are just too good to just own in digital—and literary fiction.

But most readers won’t belong to more than one group. I think it’s likely that Hypothetical Karen and her ilk have already had an outsized impact on the market as early ebook adopters but are too small a group to influence future developments to any substantial degree. If this is true then ebooks might have to cross a second chasm after crossing the early adopter chasm since the early majority group might well be smaller than expected and the late majority group could well be more recalcitrant than expected.

Of course, like everything else in this post, this is blatant speculation and probably wrong.


My theory is that these are the four basic reader archetypes (plus one buyer archetype) and that the split between these five groups varies dramatically from genre to genre, title to title. Romance novels are probably dominated by group 1 with a smattering of group 2. Since romance readers are a large collection of heavy readers, it’s unsurprising that the genre is an ebook powerhouse.

Genre readers, in general, are likely to be in group 1 or 2, with group 3 coming in occasionally for individual titles. Most mainstream fiction and non-fiction (like celebrity biographies and autobiographies) are dominated by group 2 with only a smattering of groups 1 and 3.

And a title that is almost exclusively bought by gift givers is likely to tank in digital unless the publisher lucks out in some way and it gets adopted by a niche audience of some sort.

Even though some market segments may well have a much lower percentage of ebook buyers than others, sales successes are likely to boost the sales of all of a title’s formats. A blockbuster in an ebook-light genre is going to sell more ebooks than a mid-list title in an ebook-heavy genre. Big sales trump customer mix every time. The problem is that blockbusters are unpredictable and somewhat random, while building a solid genre mid-list catalogue is, in theory, less so. Which suggests that if you have capital you should focus on blockbusters and lottery stakes, while if you lack capital but have in-house expertise you should focus on solid genre offerings.

Of course, this is all conjecture and probably wrong. (“This is all make believe!”)


Figuring this out for real

What you really need to do is to figure this out for your readership. Exactly how to do that is tricky.

You need to find out how reading activity is distributed among your readers (i.e. how many are light, moderate, or heavy readers). You need to figure out their past format choices. Don’t ask them their preferences; they don’t know and will make shit up—people lie. Ask them what they’ve actually done in the past, preferably the recent past. You need to find out how much of what they read they buy themselves. You need to know what genres they’ve bought in the past. You need to find out what they want from you, because that might not correlate with their past choices.

If it doesn’t correlate, then take it with a grain of salt. Only trust customer suggestions that they are willing to immediately back with money. You don’t have to take the money, but their willingness to part with it is an important indicator.

How do you find this out? Beats me. Almost every realistic and economically viable way of getting trustworthy information about your readers will be biased towards either heavy readers and super fans or towards digital readers.
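
The tabulation side, at least, is the easy half. Here is a minimal sketch of it, assuming you had somehow collected honest answers about past behaviour; every field name and the light/moderate/heavy cut-off below is invented, and the hard part, getting unbiased responses in the first place, stays unsolved.

```python
# A hypothetical sketch of the easy half: summarising survey answers about what
# readers actually did, not what they say they prefer. Field names are made up.
from collections import Counter

def reader_bucket(books_last_year):
    """Rough cut-offs; tune them for your own market."""
    if books_last_year >= 24:
        return "heavy"
    if books_last_year >= 6:
        return "moderate"
    return "light"

def summarise(responses):
    activity, formats = Counter(), Counter()
    for r in responses:
        activity[reader_bucket(r["books_last_year"])] += 1
        formats.update(r["last_three_purchases"])     # actual recent purchases, not preferences
    total = sum(activity.values()) or 1
    for bucket, count in activity.most_common():
        print(f"{bucket:>8}: {count / total:.0%} of respondents")
    print("recent purchases by format:", dict(formats))

summarise([
    {"books_last_year": 40, "last_three_purchases": ["ebook", "ebook", "paperback"]},
    {"books_last_year": 5,  "last_three_purchases": ["hardback"]},
    {"books_last_year": 12, "last_three_purchases": ["paperback", "ebook"]},
])
```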

If you figure out a way, let me know.

The unevenly distributed ebook future

(This is the fifth post in a series on the publishing industry’s new product categories.)

Data serves the status quo.

Anything new or undiscovered by definition does not have a data footprint. Existing data collection and filtering techniques have biases that do not take the unknown or unfamiliar into account.

Unless you have a clear theory and a well-designed experiment to prove or disprove it, the only thing more data will tell you is that your preconceptions and existing biases are correct. With enough noise, your brain will find it easy to ‘discover’ patterns and correlations that support whatever it is you want supported. Data, on its own, serves your worldview.

This is the problem with almost all analytics systems in common use. Unless you are running a tightly controlled experiment, the only thing data will do is paint you a general picture of the status quo; it’ll give you the shape of, say, your web traffic—the ‘sources’ of the nameless mass that fills your comment threads with tripe—but it won’t help you discover any of the ‘whys’. Why are they here? Why did they read it? Why did they comment? Why did (or didn’t) they come back?

Why didn’t they buy my book?

To pretend that an A/B test can tell you why a reader decided not to buy the ebook edition of a footballer’s biography is to accept a worldview that is incompatible with the very act of publishing longform prose in the first place.

For a simple A/B test to be able to tell you why a reader made the decision not to choose a book or a format, you have to believe that the human mind is a simplistic machine, driven entirely by pre-programmed responses to external stimuli, waiting to be hacked by an enterprising grifter. A mind like that is never going to comprehend, let alone enjoy, an extended piece of text. A humanity like that would never have risen out of the mud to read or write books.

You can A/B test small theories and small issues, but it is not an experimental model that will help you find answers to complex questions or understand complex problems.
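
To make that scale concrete, here is a minimal sketch, with invented numbers, of the arithmetic behind a simple two-variant test. It can tell you, with some statistical confidence, that one cover or price converted better than the other; it says nothing about why a reader chose or refused a format.

```python
# A sketch, not anybody's production method: a plain two-proportion z-test on
# invented numbers. It answers "did variant B sell better than variant A?" and nothing else.
from math import sqrt
from statistics import NormalDist

def two_proportion_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided, normal approximation
    return p_a, p_b, p_value

a_rate, b_rate, p = two_proportion_test(conv_a=120, n_a=4000, conv_b=156, n_b=4000)
print(f"A: {a_rate:.1%}  B: {b_rate:.1%}  p-value: {p:.3f}")
# Prints the two conversion rates and a p-value: a winner (maybe), but no 'why'.
```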

Before we do anything else, when we have an issue, we need to come up with a theory—an idea for how things work that we can then explore and try to prove or disprove.

Then we need to figure out an experiment that could specifically disprove that theory, which is sometimes next to impossible because we in publishing don’t have access to the environments where those experiments would need to be implemented and run.

If this method seems slow and awkward (the only conclusive result you can get is partial disproof, not confirmation), then that’s because it is. It’s also the only way to know. Anything else is guesswork.


It’s a classic quote that is tailor-made for the modern internet: short, facile, glib, simplistic to the point of being useless.

The future is already here — it’s just not very evenly distributed.

—William Gibson

The problem with the line is that it uses the term ‘future’ as a shorthand for technology and the changes it engenders—equating it with progress.

It has a simple message: progress is a linear timeline (past → present → future), and places, markets, and cultures are unevenly distributed along that timeline. Crap countries are stuck in the past. Good countries have a head start on the future.

As such it isn’t much of an improvement over the standard progress myth. In fact, it makes it worse by adding a dollop of neo-colonialism into the mix. “They are savages because they just haven’t had their share of our ‘future’ yet—not because a broken global economic system is holding them in debt-slavery”.

The publishing industry has bought into this idea wholesale. Some publishing markets are, according to this worldview, further ahead on the progress timeline than others. It also implies that advancement along the timeline is inevitable, even if it progresses at varying speeds. Romance and other genre fiction tend to dominate ebook sales and so must have more ‘future’. Non-fiction less so and must therefore have less ‘future’ and more of that crippling ballast called ‘past’. Big mainstream titles hit the ebook market in seemingly unpredictable ways. Some garner decent ebook sales while others seem to sell only in print. There, the ‘future’ seems to be randomly distributed, like a stress nosebleed over a term paper.

This, obviously, implies that the ebook will either eventually dominate universally or at least capture the same large share of sales uniformly across the market.

I don’t think that’s going to happen.

The various publishing markets differ in fundamental ways that won’t be changed by ebooks. As others have said, ‘ebooks are terrific and haven’t changed a thing’.

Some will switch entirely to ebooks. Some partially. Some almost not at all.


If you’re going to generalise about readers, try not to generalise too much and stick to specific tastes and behaviours. Anybody claiming, or even implying, that an entire age group or economic class broadly behaves in the same way clearly hasn’t spent much time observing book buyers. Claiming that those under twenty-four prefer print or that the more affluent prefer ebooks is useless even if it were true (it probably isn’t), because those categories are too broad for us to guess what sort of books they are buying. Knowing that buyers of a specific genre prefer one format over another is clearly more useful than finding out that two-thirds of the young people who couldn’t avoid your survey didn’t like ebooks. One is actionable. The other isn’t.

It would be even better if we were able to make an educated guess of how a genre’s readers break down into behaviour groups:

  • Does a single kind of reader dominate? (casual readers, heavy readers, blockbusters only, etc.)
  • Or, is the readership more varied than that?
  • Is the distribution of the kinds of readers consistent across the genre, or do sub-genres or individual titles differ substantially?

We are largely working blind here, and unless you manage to get a critical mass of readers to buy from you directly and then read the books in an environment you control (good luck with that), it will be impossible to make even vaguely accurate guesses.
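
For what it’s worth, if you did somehow have that kind of direct, per-reader purchase data, the first-pass guess could be as crude as the sketch below. Everything in it is hypothetical: the reader IDs, the genres, and the thresholds that split readers into casual, moderate, and heavy buyers.

```python
# A hypothetical sketch: bucketing your own store's purchase records into the
# behaviour groups discussed above. Thresholds and data are invented.
from collections import Counter, defaultdict

def behaviour_mix(purchases):
    """purchases: iterable of (reader_id, genre) pairs from a store you control."""
    per_reader = defaultdict(Counter)           # reader -> genre -> purchase count
    for reader, genre in purchases:
        per_reader[reader][genre] += 1

    mix = defaultdict(Counter)                  # genre -> behaviour bucket -> reader count
    for genres in per_reader.values():
        total = sum(genres.values())
        bucket = "heavy" if total >= 12 else "moderate" if total >= 4 else "casual"
        for genre in genres:
            mix[genre][bucket] += 1
    return mix

sample = ([("r1", "SFF")] * 14 + [("r1", "litfic")] * 2
          + [("r2", "SFF")] * 2 + [("r3", "litfic"), ("r4", "litfic")])
for genre, buckets in behaviour_mix(sample).items():
    print(genre, dict(buckets))                 # which kinds of readers each genre attracts
```

Even then it only describes the readers who buy from you directly, which is exactly the bias problem mentioned above.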


Some titles aren’t going to sell well as ebooks and there isn’t anything we can do about it except pray they turn into blockbusters. Because if the title does turn out to be a blockbuster, you can always pay for a proper ebook version once the money starts rolling in.

The converse also holds for ebook-heavy genres, where the credo “ebook-first, print if popular” might well be printed above the door of every publisher (self- or otherwise) in the future.

If you have a title that is:

  • Is visually rich.
  • Or, poses an ebook production challenge in some way.
  • And, is likely to appeal mostly to a print-buying audience (this can happen for a variety of reasons).

Then the logical action is quite simply not to make an ebook version. Unless a high-quality ebook is an almost free byproduct of your production workflow, spending money on an ebook edition of a title like that is likely to be a waste.

Conversely, print will not be viable for some markets within the industry, generally those that are dominated by ebook readers or have been thoroughly disrupted by apps and websites.

Either way, the single biggest concern publishers should have is to figure out ways to either discover or change the composition and shape of their readership. Making decisions on digital production will be next to impossible without that knowledge.

Sex, violence, and stílbrot

(This is the fifth Stumbling into Publishing post.)

One flaw (out of many) that’s endemic in my writing is that I tend to introduce detail that’s irrelevant, or that arrives too early or too late. Like describing the cut and pattern of somebody’s clothes before the cultural origin of that design becomes important, or adding superfluous technical detail to a post when only the general idea is needed. A lot of the time, adding detail detracted from and obscured the effect I was driving for.

This tendency of mine was exacerbated in the stories by my decision to use an objective third-person narrator, and by not quite doing it properly.

The perspective I used in the stories was modelled after the Icelandic Sagas. All of the sagas have in common a narrator who can see almost everything that happens but cannot read minds. Or, more specifically, what is described is whatever could plausibly have been relayed by a third-party witness, because the sagas were pretending to be historical documents of actual events—stories about real people relayed down a couple of generations. E.g. if a murder happens in the story where there aren’t any witnesses likely to blab about it (like in Gíslasaga Súrssonar), you can’t describe the murder, nor can you say for sure who did it. When somebody dies alone you can’t describe their final moments. And clearly, you shouldn’t be able to have any scenes at all featuring a character alone unless you can be sure they told somebody about it afterwards. So, anything that can be witnessed is fair game, but you can’t dip into characters’ minds or be sure of their motivations.

So, it’s a saga-specific twist on the common objective third person narrator pattern.

I bent and broke this rule several times in the stories in minor ways, but in each case I figured that the character’s motives or actions would be so obvious that stylistic consistency would take care of itself. Of course, that meant the end result was closer to a quirky style of my own than intended, which was probably for the best.

A corollary of this stylistic approach is that you don’t shy away from describing what happens. If, as happens in Gíslasaga (my favourite saga), a character has to tighten his belt to keep his innards from flopping out of a belly wound, you describe that. If another character gets his head and torso split in two down to the navel, you describe that.

And that’s where my self-censorship kicked in again.

Which is a statement I know surprises a lot of those who provided feedback on the stories while I was writing them. It’s plain that if I hadn’t dialled back the violence in places, some of the stories would have been intolerable to read for those who were kind enough to be beta readers.

But to assume that the problem lay in the detail of the descriptions is to mistake a symptom for the actual problem. The real issue with these stories is simply that they were far too focused on, well, fights, which was a consequence of too faithfully following the tropes of the sword and sorcery genre. Fighting people, monsters, undead, whatever; it was all too much. Violence should be disquieting and discomfiting. Reading about violence should make you feel bad—at the very least unnerve you. And if violence takes over the story, that’s because there’s too little story, not because the action is described in too much detail.

(At least, in this specific case. Different things apply to different stories and styles, I’m sure.)

I can forgive myself for that. I chose to fall into the generic sword and sorcery crap trap. Beating things into oblivion is part and parcel of that particular snake pit. It wasn’t what we call in Icelandic stílbrot, or a break in style. (Breaking style is one of the bigger writing sins a writer can commit according to Icelandic literary tradition.)

Shying away from depicting sex, however, definitely was a break in style. Which embarrasses me as an Icelander because, as I said above, not committing stílbrot is one of the Icelandic literary commandments.

Thou shalt not break style.

Instead of plainly and pragmatically describing what was going on, almost every time something sex-related happened I chose to, in filmic terms, fade to black or pan away, as if I were a 1950s prude trying to adhere to the Hays Code. Or, worse, I often avoided it completely even when the story would have benefited from it.

The stories definitely suffered because of this omission. A sex scene between Cadence and her husband in the first story would have told the reader so much about how they managed to navigate their admittedly now loveless relationship with at least some care and emotional investment. It would have given their relationship an added dimension, made them less like caricatures, and transformed their fate into a proper tragedy. They had been in love once.

A sex scene between Cutter and Parell in the fourth story would have provided a much needed contrast to the violence and highlighted how unfair Cera’s situation was.

The sex doesn’t have to be pornographic for it to work. The love life is an important dimension of a love affair or a relationship, and omitting it completely flattens the relationship’s depiction.

Say you have two characters whose marriage is one of convenience. You present them as almost strangers, a certain distance in their every conversation—their coexistence nothing more than a necessary formality. But if you manage to add a sex scene between them where they engage with each other as peers—a scene that is full of compassion, tenderness, and negotiation—that changes how people see their relationship. They are clearly not in love. They may not even be friends. That still doesn’t mean they don’t share something flavoured with arousal and tender feelings.

Or, to be even more crass, you can have a couple who are loving and affectionate in all other scenes but whose sex scene is one-sided and almost brutal, one partner dominating the other; that changes how readers see their relationship in ways that you can’t manage with any other kind of scene.

A sex scene lets you add a series of sensations and an emotional dimension to the relationship that you couldn’t have otherwise depicted.

I gained nothing by omitting sex—prudes are never going to read sword and sorcery stories, nor are they likely to enjoy anything I write, not even the blog posts—but by avoiding sex I diminished the stories’ capacity for emotion and sensation.

Which were two things the stories could have done with a lot more of.