Ebooks and cognitive mapping

The following series of tweets by @seriouspony (Kathy Sierra) ties into a subject I’ve been thinking about a lot lately, namely cognitive mapping.

The problem

To quote Wikipedia:

A cognitive map (also: mental map or mental model) is a type of mental representation which serves an individual to acquire, code, store, recall, and decode information about the relative locations and attributes of phenomena in their everyday or metaphorical spatial environment.

The spatial environment in this case is the book itself. It is a three-dimensional object that lends itself very well to this kind of mental model.

Like so many other readers, I rely on cognitive mapping as a major part of how I remember what I read, and it’s a technique that’s utterly useless when you’re reading an ebook. When I read a book, part of how I remember what I remember is spatial: I recall roughly how far into the book a passage was and where it sat on the page. Combined with copious use of small post-its, this makes it very easy to remember passages and reacquire them when needed.

I know that for a lot of other people (my students, for example, back when I was a teacher) cognitive mapping helps them keep track of the book’s ideas and argument as they go forward. Without it a lot of them lose the thread they are following through the book and, as Kathy put it, ‘fail to carry context forward’.

The other key to the memory puzzle is to stop on a regular basis, close the book, and think about what you’ve just read. Go over the salient points in your head and if you have that nagging feeling that there’s something you’re missing, go back and look for it. This tactic fortunately works for ebooks.

It’s not that I have problems remembering things. I can remember most things I read without any effort (much to the annoyance of my sister, who says that I have a ‘glue-brain’). What these techniques help me do is contextualise the information. The cognitive map preserves for me the overall context of the book (where a passage appeared, where it sat in the extended argument, what line of thinking preceded and succeeded it, etc.). Closing the book and going over its points places the book’s ideas in my own intellectual context; I remember them better because I have connected them with my own preexisting ideas.

I rely on these techniques so much that when it comes to reading books that contain ideas I want to remember, I have become hesitant to read them as ebooks. For example, I’ve been reading through John Gray’s books lately (False Dawn, Straw Dogs, and Black Mass so far, all excellent, although Straw Dogs was the best of the three) and I made sure to buy them in print. I made the mistake of buying and reading Taleb’s Antifragile on the Kindle and will probably end up buying the print version and reading it again so that I can retain more of it.

I’ve actually done this several times: bought an ebook, read it, then bought it again in print to read properly. (Bundling would be awesome for me. Read the book in print and still have access to full-text search.)

It takes a lot to make a long-term ebook fanatic, one who genuinely loves the format, lean towards print, but when it comes to remembering stuff, ebooks are, for me, a bit crippled. A good search facility in an ebook reading app helps you reacquire a passage you already remember, but it does nothing for carrying context forward or remembering the passage’s context.

The solution

Kathy’s perspective on this comes from trying to figure out how to make awesome books, and in that context she is absolutely right. New books intended to provoke skills development in the reader should be written to remove the need for techniques such as cognitive mapping. They should absolutely carry the context forward through writing and design. We know so much more now about how memory is formed, how skills are developed, and how the mind works than we used to, and we have an obligation to use this knowledge to make new books better. There is no downside to that.

However…

(You knew there would be a however, right?)

We can’t rewrite old books. You can’t rewrite John Gray and add sections at the start of every chapter that carry context forward. You can’t add this stuff to Seneca or Schopenhauer. So, I have to respectfully disagree with Kathy on the need to find a tech solution. The point is that this isn’t an either-or choice: we have a duty both to make better books and content and to improve the experience of reading older books. Our duty to preserve existing thought is equal to our duty to make better books.

The problem is that I haven’t the faintest idea how to address this in an ebook reading app. It’s a problem that requires a lot of research, because our instincts about what helps memory formation are likely to lead the app developer astray.


ETA:

To those of you who doubt cognitive mapping in the print book context:

Have you never read through a book, then held it in your hands, recalled a passage, and been able to guess roughly where it is in the book, down to whether it was on the right-hand or left-hand page? And you’ve been able to do this without remembering what exactly the passage looked like?

That’s cognitive mapping. It isn’t an abstract phenomenon or an artificial mnemonic technique but a side effect of our interaction with a three-dimensional object. It’s an aid. Your memory isn’t crippled or compromised without it, but it can and does improve recall and help you keep track of things as you read, especially if you practise it deliberately and add aids such as small post-its as you go. Those post-its aren’t just markers but also landmarks: you’re able to remember where a passage was in relation to its nearest post-it.

Also note that your memories in this case aren’t entirely visual; you don’t necessarily remember what the passage looked like. This is one of the many reasons why progress bars in ereaders can’t serve the same purpose. Those progress bars aren’t visible all of the time (so chances are that, for any given passage, the bar wasn’t visible when you read it), and even if it was, you aren’t likely to remember it unless you are one of those rare people with an extremely strong visual memory.

Ebook silos, update

Yesterday, I wrote a post on ebook silos and missed opportunities.

Some people seem to be missing the core criticism in the post. The problem, as I see it, is in the infrastructure and in the market we have in place. This is not a lament for how lazy people must be or how stupid existing developers are for not implementing these things already.

Existing ebook reading apps are bland out of necessity. The ones implemented by silo owners need to appeal to and be useful to everybody. They can’t not be generic. Specialisation is not an option when you need to appeal to the lowest common denominator.

The independents have a different problem. They need both to implement detailed support for overly complex file formats and to target one of the silos (generally Adobe’s playground, which is a fraction of the overall market).

Because the market involved is so small, the end result is that their apps have to have broad appeal. A specialised app targeting a fraction of a fraction of a market is not economically viable when you are employing a team of developers.

The solution to this conundrum is both simple and difficult (like so many simple things are).

Generally speaking, the way to promote variety in any given software genre is to make sure that it’s possible for a solo developer or a small team of developers to create and maintain an app in the genre. Only a solo developer (maybe a duo) can hope to make a living by addressing a fraction of a fraction of a market.

I’m convinced that, if a solo developer found a way to implement a specialised ereader that was useful and valuable to readers, they’d be able to make a decent living from said app.

One such example is Marvin. It’s an ereading app that focuses on configurability, text analysis, Dropbox support, proper annotations export, and a lot, lot more. It is quite different from most other ebook reading apps on the market and is quite excellent. It’s not for everybody, but it is unique enough that there is a large group of people who have both been waiting for an app like this and are eager enough to use it that they are willing to break the DRM on their existing library to do so.

Anybody who is in any way dissatisfied with their current ebook reading app owes it to themselves to try it out.

Which brings me to the compromises needed to make the byzantine mess that is ebook file formats manageable for a small team. Marvin offers a few pointers in that direction:

  • No DRM support.
  • Only EPUB 2.0 support to begin with.
  • Limited support for ebook design and styling, which is okay for a small app because all of its users opt in to no styling. They know what they are getting.
  • No support for video or audio.
  • No FXL support.
  • No PDF support.
  • No attempt to achieve full spec coverage but instead a laser-like focus on implementing features that crop up in customer feedback.

Now, Marvin owes a bit of its success to luck, but I still think it offers us hope for how things could go in the future. Solo developers and small teams will experiment with apps. Some will fail. Some will succeed.


—Couldn’t Readium SDK help?

Possibly, but that’s not what it’s for. As I see it, the primary goal of the Readium SDK is to get large corporations who are competitors to collaborate on improving their support for the various incredibly complex flavours of EPUB (2, 3, FXL). This is hard enough without trying to address the needs of solo developers as well.

The cost of acquiring a license usable in a commercial product bears this out. After September you need to pay $60,000 to join and get a license, or you need to contribute work and source code. Both are going to be way beyond the means of a solo developer attempting the already difficult and risky task of building a specialised ebook reading app.

And that’s okay. The Readium SDK has an unenviable enough goal as it is (achieving full format support in all the major big-corp generic readers). Maybe once that’s done and all the major apps have shipped with full EPUB3 support, the foundation will reassess the licensing terms. Until then, they are more than justified in focusing on the problem they set out to solve.


—What would really help?

More publishers going DRM-free would help a lot, especially those catering to less price-sensitive specialist subjects. That would open up the market for a developer to enter with a specialised app catering to those readers.

In fact, O’Reilly’s pioneering work with DRM-free might well be an opportunity for a developer-oriented ebook reading app, one with built-in REPLs and consoles for various languages. Imagine being able to run and play with example code in a variety of common languages from a dev ebook with just a tap. Imagine having access to each platform’s documentation as you read a book on a specific problem area; having access to the official documentation for a method with just a tap.
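Purely as a back-of-the-envelope sketch of how a ‘tap to run’ feature might work under the hood (everything here, from the class name to the assumption that the sample is Python, is invented for illustration): an EPUB is just a zip archive of XHTML, so pulling a chapter’s code samples out and handing one to an interpreter doesn’t take much code.

    # Sketch only: extract <code> samples from an EPUB chapter and run one.
    # A real reading app would detect the language and sandbox execution.
    import subprocess
    import sys
    import zipfile
    from html.parser import HTMLParser

    class CodeBlockParser(HTMLParser):
        """Collects the text content of every <code> element in a chapter."""
        def __init__(self):
            super().__init__()
            self.blocks = []
            self._in_code = False
            self._buf = []

        def handle_starttag(self, tag, attrs):
            if tag == "code":
                self._in_code = True
                self._buf = []

        def handle_endtag(self, tag):
            if tag == "code" and self._in_code:
                self.blocks.append("".join(self._buf))
                self._in_code = False

        def handle_data(self, data):
            if self._in_code:
                self._buf.append(data)

    def run_sample(epub_path, chapter_path, index=0):
        # An EPUB is a zip of XHTML files; read one chapter and parse it.
        with zipfile.ZipFile(epub_path) as archive:
            html = archive.read(chapter_path).decode("utf-8")
        parser = CodeBlockParser()
        parser.feed(html)
        # Assumes the sample is Python; run it in a separate interpreter.
        subprocess.run([sys.executable, "-c", parser.blocks[index]], check=False)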

More comics publishers following the example of Image Comics (they sell DRM-free comics direct, not enough of them yet, but it’s a start) or Thrillbent might open up the field for a specialised comics-oriented ebook reader, one that only supports FXL ebooks and PDFs. They might even be justified in only supporting PDFs and CBRs.


I’m convinced you don’t need full EPUB3 or DRM support to create an excellent app that is sufficiently useful to enough people to be a viable business for a tiny team.

The app would have to be specialised and solve very specific problems for a very specific group of people with very specific needs. Just making an ebook reader with cool and unusual features isn’t going to work again, even if it did work for Marvin. You need both to delight and to remove pain. Delight alone will not do.

I’m not saying it would be easy. It’s clearly a difficult problem. But it would be interesting.

Ebook silos and missed opportunities

ETA: I’ve posted a followup to this post that hopefully clarifies things and offers a few suggestions: Ebook silos, update.


Ebooks can be transformed by context. Print books cannot. No matter where you take the print book, no matter what room you read it in, it will remain in the same form and have the same affordances as it did on the day it was first stacked in the bookstore.

An ebook could, in theory, be reformed, rebound, and recast at will. A writer who is reading for research could open it in an ereader specifically designed to enable writing, one that integrates directly with writing tools. A student could use a specialised ereader full of mnemonic tools, structured note-taking, and export functions that integrate with common reference management software. A genre fiction reader might use an app that stamps all ebooks into the same aesthetic template, configured by the reader into the form they consider ideal.

It’s easy to imagine how these would work:

The writer’s ereader wouldn’t atomise the reader’s annotations but would present the annotations for a book as a single, freeform document that could be edited, extended, filled with notes, and exported in formats compatible with common writing tools (Word, Scrivener, etc.). Every set of annotations would be a commonplace book in machine-readable form. Bonus points for automatically syncing to Evernote and Dropbox.
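As a rough sketch of what that kind of export might look like (the data structure and the Markdown layout here are my own invention, not any existing app’s format):

    # Sketch: a book’s highlights and notes collected into one editable
    # Markdown document, i.e. a commonplace book in machine-readable form.
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        chapter: str
        quote: str  # the highlighted passage
        note: str   # the reader’s own comment, possibly empty

    def export_commonplace(title, annotations):
        lines = [f"# Notes on {title}", ""]
        for a in annotations:
            lines.append(f"## {a.chapter}")
            lines.append(f"> {a.quote}")
            if a.note:
                lines.extend(["", a.note])
            lines.append("")
        return "\n".join(lines)

    # The resulting text file opens cleanly in Word or Scrivener and can be
    # dropped into Dropbox or Evernote as-is.
    print(export_commonplace("Black Mass", [
        Annotation("Chapter 1",
                   "History is a narrative of redemption…",
                   "Compare with Straw Dogs."),
    ]))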

The student’s ereader would integrate directly with Endnote and offer mnemonic features such as regular pauses for recollection, forcing the reader to note down what they thought the preceding pages were about (research shows that if you stop, close the book, and try to recollect what the preceding pages were about your recall will subsequently be improved). There’s a lot of research on learning styles and tools that would be a goldmine for this sort of UI experimentation.

Then there’s the potential for a specialised comics app. Fixed-layout formats require dramatically different UIs for optimal treatment of annotations, clipping, highlighting, and browsing. It’s easy to imagine a comics ereader that makes it easy to clip and visually annotate sections from a comic book. It’s also easy to imagine that an app specialising in the format would offer a much better experience than the schizophrenic status quo.

The genre reading app does not have to limit itself to design configurability. A reading app for crime and mystery stories could integrate an extensive database of firearms, poisons, historical crimes, police terminology, and common codes for crimes.

That’s without getting into the various features that nobody offers because everybody’s trying to be the same:

  • Scrolling instead of pagination. I’d jump on a well-implemented scrolling ebook reading app (iBooks’ jerky crap is not it).
  • Autoplay/autoscrolling. Sometimes you want to force yourself to keep a reading pace.
  • In-text annotations. Sometimes I want to edit the actual ebook itself.
  • Dropbox syncing for my library and annotations.

None of these features require new standards or extensions to old ones. They could work with plain text if you wanted.

An app that is everything to everybody is bland. Generic is dull. Specialisation creates immense value.

But instead of specialisation and diversification among ereaders we see convergence. Ebook reading apps become more similar with every release. They all aim to support the same rendering features, in roughly the same way (infuriating differences in style overrides notwithstanding), surrounded by the same constellation of widgets and tools.

  • Highlights? Check.
  • Highlights made more ‘natural’ by behaving like a highlighter? Check.
  • Notes? Check.
  • Notes made social in some way, via sharing or a dedicated service? Check.
  • Highlights that lose all formatting whenever they are moved into another context? Check.
  • Offer a selection of four to five fonts (plus the ever present ‘publisher defaults’)? Check.
  • Sync all of this bland crap using a proprietary syncing service allowing no other alternatives.
  • Limit the export of all of this bland crap to something even blander and more useless than what you already offer, like text-in-body email.
  • A neato brightness UI that makes people swoon because their Stockholm syndrome has lowered their expectations so far that they have to look up to see the bottom of the Mariana Trench.

Check. Check. Check. Any ebook reading app that doesn’t behave like this is aiming to. This is the ‘ideal’ they all seem to be striving towards and when pressed for answers on why they don’t try to solve hard problems (like proper annotations export), the only reply is that the standards for those hard problems aren’t there yet.

And they will never be there because the best standards are those that standardise existing best practice. Nobody offers proper annotations export and so there is nothing to standardise.

But just focusing on the individual features is the wrong way to look at the problem.

Which, obviously, is silos. You’re locked into the ecosystem you bought the ebook from. Nobody will ever create a specialised Kindle ebook reading app for writers or for students. There will never be much variety in how ebook reading apps based on Adobe’s RMSDK behave. The only app that will ever work with ebooks bought from the iBookstore is iBooks.

The one major and unique strength that ebooks have over print, flexibility and fluidity, the characteristic that has the biggest potential for adding value, has been thoroughly walled away by the silo mentality. Ebooks could have been a transformative sea change in how we read books but instead are nothing more than a second-rate alternative to cheap paperbacks.

Technology is not inherently good

I’ve never met a self-proclaimed geek who understands this. Technology is not something that’s inherently good, where more of it solves more problems and improving it improves our lot. If we implement servile AIs and pervasive automation, they won’t be used to create a society of abundance and leisure but to make the rich richer while the unemployed starve. Technology is something that has to be applied, and how it’s applied generally reflects the economy and culture it was developed in.

This means that a society geared towards inequality and inequity will use technology to amplify them. It means that a police state will use it to decrease the freedom and privacy of its citizens. Theocracies will use technology to hunt unbelievers.

Technology does not make the unfair fair and it does not right wrongs. It is a tool and the only way to change the world is to first change the people who wield it.

ETA: Athena Andreadis made this here excellent point over on twitter:

Administrative note on baldurbjarnason.com and feeds

This is just a short note to say that I’m planning on doing most of my blogging here at Studio Tendra. I will be pointing the RSS feed on www.baldurbjarnason.com at the one on this blog from now on, so some of you might get double entries if you are subscribed to both.

I’ll put a similar note up on www.baldurbjarnason.com whenever I get around to it 🙂

Oh, and I wrote a blog post over on futurebook.net on Amazon-y stuff and nonsense.

Posted without comment


From Black Mass by John Gray:

Secular thinkers find this view of human affairs dispiriting, and most have retreated to some version of the Christian view in which history is a narrative of redemption. The most common of these narratives are theories of progress, in which the growth of knowledge enables humanity to advance and improve its condition. Actually, humanity cannot advance or retreat, for humanity cannot act; there is no collective entity with intentions or purposes, only ephemeral struggling animals each with its own passions and illusions. The growth of scientific knowledge cannot alter this fact. Believers in progress – whether social democrats or neoconservatives, Marxists, anarchists, or technocratic Positivists – think of ethics and politics as being like science, with each step forward enabling further advances in future. Improvement in society is cumulative, they believe, so that the elimination of one evil can be followed by the removal of others in an open-ended process. But human affairs show no sign of being additive in this way: what is gained can always be lost, sometimes – as with the return of torture as an accepted technique in war and government – in the blink of an eye. Human knowledge tends to increase, but humans do not become any more civilized as a result. They remain prone to every kind of barbarism, and while the growth of knowledge allows them to improve their material conditions, it also increases the savagery of their conflicts.

The inefficiencies of joy

The following is from Joseph A. Tainter’s paper Social complexity and sustainability (2006):

Subsistence farmers also tend to underproduce, so that labor is underutilized and inefficiently deployed. Posposil (1963) observed Kapauku Papuans of New Guinea, for example, working only about 2 h a day at agriculture. Robert Carneiro found that Kuikuru men in the Amazon Basin spend 2 h each day at agricultural work and 90 min fishing. The remainder of the day is spent in social activities or at rest. With a little extra effort such people could produce much more than they do (Sahlins, 1972)

And:

Even under the harsh conditions in which they lived, these Russian peasants underproduced. Those able to produce the most actually underproduced the most. They valued leisure more highly than the marginal return to extra labor.

Now, I’m normally a fan of Tainter’s thinking; more clearly than anybody else, he has outlined how impossible a position our society is in regarding sustainability and energy use. But this quote highlights just how inhumane modern thinking has become.

Walk into your average pub and ask everybody in there if they’d be willing to accept a life expectancy ten to fifteen years shorter in exchange for a lifetime where they’d only have to work two to four hours a day, half of which is spent fishing.

I’d bet that most of the people in there would think you’re either describing a paradise or their ideal retirement plan.

And that’s if you buy into the idea that you’d have a much shorter life expectancy. It’s likely that the life expectancy of an adult wouldn’t be that different from that of an adult in the States, for example. What would probably skew the numbers would be a high infant mortality rate.

People want to ‘underproduce’ and lead a life of leisure.

This is what modernity and industrialisation have brought us: more work and less free time.

Of course technology and science have brought us a lot of joy, but the end goal should be to create a society where nobody has to work more than four hours a day and everyone can spend most of their time at leisure, where being more productive means having more time for fun.

That’s what we should be aiming for, not a society that tries to maximise the productive value of every single person, where we’re treated like nothing more than cogs in the economic machine.

Winner takes all versus the Matthew effect

Winner takes all

There’s a vague notion going around. For some it’s a suspicion, for others it has become a certainty, the rest of us worry and hope it’s wrong.

It’s the idea that the internet exaggerates the sales inequality of media markets. That, by massively enabling word of mouth and social networking, the web means that we will only get mega-bestsellers or flops, with little to nothing in between. The market becomes just bestsellers leaving the long tail with scraps.

In theory, this should be a simple question to answer. Somebody with access to detailed numbers from the market could calculate the Gini coefficient for book sales revenue over the years. If it used to be lower and is now almost one (or higher, since it’s theoretically possible for books to have negative revenue through returns), then we probably have a winner-takes-all situation on our hands.

If, however, the Gini coefficient has remained the same, or has stayed broadly equal to society’s income Gini coefficient, then we probably just have a regular “the world isn’t fair, boohoo” situation and there’s no need to blame the internet.

My suspicion is that the book market is only about as unequal as the economy in general, that the sales difference between J.K. Rowling’s books and the rest is about the same as the wealth difference between the top 1% and the plebeian masses (us).
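For what it’s worth, the calculation itself is trivial. Here’s a minimal sketch of computing a Gini coefficient over per-title revenue; the revenue figures are made up purely for illustration.

    def gini(values):
        # Gini coefficient of non-negative numbers: 0 means every title earns
        # the same, values near 1 mean one title takes almost everything.
        xs = sorted(values)
        n, total = len(xs), sum(xs)
        if n == 0 or total == 0:
            return 0.0
        weighted = sum(i * x for i, x in enumerate(xs, start=1))
        return (2 * weighted) / (n * total) - (n + 1) / n

    # Hypothetical revenue per title (in thousands), purely illustrative.
    revenues = [1200, 300, 90, 40, 25, 12, 8, 5, 3, 1]
    print(round(gini(revenues), 3))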

But, something has changed!

Everything changes, all the time. In uncontrolled circumstances you can’t reliably map specific changes and claim that one causes the other. A lot of the time when we do that we get it the wrong way around (if we’re lucky), and end up claiming that wet pavements cause rain.

There is a related concept that might explain some of the sales patterns we’re seeing but, again, it’s hard to come up with conclusive proof given that big data lets us see what we want in the numbers and a market doesn’t lend itself to double blind experiments.

It is something that has been observed in plenty of other systems.

The Matthew effect

The idea is very simple: the rich get richer and the easiest way to get more popular is to be popular in the first place.

How it would work in a market could be described like this: every sale of a copy of a book increases the probability of selling other copies independently of other variables in the market.

If you couple it with Reed’s Law, which states that “the utility of large networks, particularly social networks, can scale exponentially with the size of the network”, then you get this:

If a book has sold twice the number of copies of another book, it will have four (2²) times the sales clout of the lesser selling title. A book that has sold ten times more will have a hundred times the sales clout. And so on.
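To make the compounding concrete, here’s a minimal sketch of a crude ‘rich get richer’ simulation, where each new sale goes to a title with probability proportional to the copies it has already sold. The model and all of its numbers are invented for illustration, not fitted to any real sales data.

    import random

    def simulate_sales(num_titles=100, total_sales=100_000, seed=1):
        copies = [1] * num_titles  # one seed sale so every title has a chance
        rng = random.Random(seed)
        for _ in range(total_sales):
            # Pick the next sale’s title weighted by current sales counts,
            # so early random luck compounds over time.
            winner = rng.choices(range(num_titles), weights=copies, k=1)[0]
            copies[winner] += 1
        return sorted(copies, reverse=True)

    copies = simulate_sales()
    print(f"Top 10 titles take {sum(copies[:10]) / sum(copies):.0%} of all sales")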

What does this mean for publishing?

If this theory is even remotely true this has several major consequences:

  • Minor and random variations early on in a title’s sales history can snowball it into a bestseller. There won’t be any logical rhyme or reason for this and predicting these successes will be impossible because they are completely stochastic.
  • Hopping onto known successes (i.e. pushing an already big snowball further down the hill) will have much bigger payoffs than building up sales from scratch. Why go with the lottery ticket probabilities of a new author or completely new title when you can earn so much more by turning a one million dollar bestseller into a ten million dollar blockbuster?
  • Since the big payoffs are governed by randomness and the moderate payoffs by hard work, publishers have an incentive to cut down on the hard work (editorial, acquisition, design) and focus exclusively on the logistics of printing and shipping shit-tons of Fifty Shades of Grey (or whatever the next big thing is).
  • Self-publishers and small publishers become responsible for the research, development, and discovery of new successful titles. And once a title is a proven thing, a big publisher will swoop in and buy it up, promising the author more money than they’d get doing it alone or with a small publisher.
  • The big losers are small to medium-sized publishers who do all of the R&D but don’t have the resources to scale sales up into the stratosphere when they hit a success.
  • The winners are self-publishers and self-pub coops who can build sales up the slow and hard way for as long as it takes and keep a lion’s share of the upside in the unlikely event that they do find a major success. The bargaining position of an already successful author has never been this strong in the history of publishing.

Or, of course, it could all just be wild speculation based on a wild theory with no basis in reality. Time will tell.

What you people read (on my websites)

One of the basic problems with website ‘analytics’ is that a lot of the data is just noise. We have no real insight into cause and effect—that traffic sources section is insidious because it often amounts to little more than misdirection, knowing where people come from almost never tells you why they came.

The scary and frightening fact is that the effectiveness of our online marketing and traffic generation tactics is probably due to random chance—spending time on a particular source of traffic is no different from just buying more lottery tickets. Sure, you’ve increased your chances, but any success is still just down to luck.

Or, even when something you do does have a significant effect, it might just be the novelty effect. It’s not what you did that mattered, just that it was new.

That said, when you have a statistically significant difference over a lengthy period of time, you probably have a piece of data there you can count on.

For example, it’s pretty certain that most of you lot only read my ebook publishing, production, and analysis posts. If we discount the statistical anomalies (like my posts debunking a few myths about the Icelandic political situation, which are, unfortunately, the most popular pieces I’ve ever written), an ebook post tends to get more than ten times the traffic of a post on any other subject published at a roughly similar time of day and day of the week.

Now, drawing any conclusions from this is risky. Ebooks are the subject that I’ve covered rather consistently throughout my career and they are my subject of expertise, so it makes sense that other subjects haven’t attracted a regular audience.

Still, I always find it a little bit disappointing that the popularity of my blog posts is inversely proportional to how much fun I had writing them.

Tolerating the heat, noticing the water

I’m not suited to this heat. I don’t know if it’s genetic or merely a side effect of being raised in Iceland, but my comfort zone for outdoor temperature is anywhere between 10 and 18˚C, with 15 degrees being ideal.

So, in an effort to keep going during a very English heatwave (‘30˚C! How will we survive?’), I headed out to a café with a book (Black Mass by John Gray), intent on surviving on icy cold lemonade for the afternoon.

Sound plan, as far as it goes. And it didn’t get too far anyway. I hadn’t stepped out of the door before thoughts began to tumble into my headspace. Normally, I find it very easy to just pick a task and lock in on it—indeed lose myself in it so much that I have to rely on my phone alarms to let me know when to eat—but this time my mind was choosing its own topics. It definitely wasn’t keen on letting me read.

What was on my mind?

The state of being in between. Of being both and neither.

I haven’t felt completely Icelandic for a very long time now. And I’ll never feel English or British, no matter how long I stay here, no matter how vague my accent gets. Being partially removed from a place you know as well as a native gives you a perspective shared by neither the native nor the foreigner. You know the place and the culture well enough to understand the subtleties, forgive some of the foibles, and know the context to some of the things that just seem plain weird to foreigners. But you maintain enough of a distance for you to see some of the larger patterns and the cultural artefacts the locals don’t even notice. You become a fish aware of the water. And the other fish don’t see the water so you never quite blend in.

I go to Iceland and I see things they don’t see. I go to Britain and I hear things they don’t hear. At times it almost feels like you’re going mad—delusional.

But…

Then you turn around, see another fish noticing the water, and the both of you can laugh, nod to each other, and carry on, knowing that the ‘heatwave’ hasn’t driven you bonkers yet.