Don't let scannability destroy your website

People don't read, they scan. Really?

Every web design book and countless online articles advise that people do not read web pages, they scan them. They cite studies with scary statistics: people will click the back button within two seconds, they will not read full sentences, and they will only skim your material.

For these reasons, these articles all recommend rewriting your body copy with scanning in mind: cut the text down as much as possible.

Why do readers scan and not read?

Is it the medium of web browsing that means you cannot read and have to scan instead? Many people point to studies showing that reading the same text on a computer screen takes longer than it does on paper. Studies do suggest this is true, but does it really mean that people only scan?

One web design book I have read recommended ruthlessly cutting the details from an article's body copy. It is commonly stated that you should:
  • Start with the most important information and then leave less important information at the bottom
  • Use bullet point lists to emphasise key points
  • Ruthlessly cull your body copy
  • Assume that people will rarely read past the first paragraph
The book took the example of a poorly written three-paragraph article on the history of a company. It claimed that web users would see that block of text and immediately click away, whereas culling and reordering the content would keep users coming back to read more.

I noticed that the rewritten version had culled a couple of sentences about one of the founders claiming an important role in the creation of the company's first product, although he was normally considered "just an investor". I found this information interesting, and its omission stuck out like a sore thumb. The rewritten version was clearer and shorter, and let you quickly find the big pieces of information, but I preferred the original article because it simply had more information.

Is it the fault of advertising?

Perhaps website owners do not want people reading their pages; they just want them to scan the page quickly and then click on an advert. Perhaps this is the case for some, but sadly even sites like the BBC follow this path.

You get used to scanning because there is too much noise!

The low publication costs of the web may themselves encourage scanning. So few pages have professional editors, copywriters and the like (this is obviously also true of this blog :) ) that, unlike print, the web has a much weaker filter on low-quality content. People become used to spam and poor content, and so scan pages quickly. The urge to scan drops significantly when the quality of the content is higher. Additionally, many companies have poor web strategies, relying on one untrained person to maintain their web presence on top of their other duties.

Scannability is changing

E-readers, iPads and mobile phones are all offering the instant-on experience that you currently get with a book. Furthermore, you can increasingly put them down and pick up again at the exact point you left off, much like a book with a bookmark. This makes in-depth reading much easier.

Currently, web pages over a certain length suffer from a difficulty that a physical book does not: bookmarking and re-finding the exact position you were reading at. It will not be long before that is much less of an issue.
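
In fact, this is already fixable with a few lines of script. As a minimal sketch (the storage key scheme and the restore logic here are my own illustration, not any standard mechanism), a page could persist and restore the reader's scroll position like this:

```typescript
// Sketch: persist the reader's scroll position per page so a returning
// visitor lands where they left off. The "scroll:" key prefix is an
// illustrative convention, not a standard.
const key = `scroll:${location.pathname}`;

// Save the position on scroll, throttled via requestAnimationFrame.
let pending = false;
window.addEventListener("scroll", () => {
  if (pending) return;
  pending = true;
  requestAnimationFrame(() => {
    localStorage.setItem(key, String(window.scrollY));
    pending = false;
  });
});

// Restore the position once the page and its layout have loaded.
window.addEventListener("load", () => {
  const saved = localStorage.getItem(key);
  if (saved !== null) window.scrollTo(0, Number(saved));
});
```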

Sometimes users want more information!

When I find an in-depth piece of information, I will read it. Take, for example, when I recently wanted to find the UK release date of the Galaxy Note so that I could go to a local store and check it out. So many sites were simply reprints of the same press release, written in an almost identical style. After the first few, I started scanning them rather than reading them, as I could tell they had nothing more to offer beyond the information I already had.

However, I would stop for any in-depth information that did appear; unfortunately, with such information being few and far between, I became a scanner rather than a reader.

Text is cheap

Yes, you should improve your information structure; subheadings, bullet points and similar constructs are great for getting your information across efficiently. However, ruthlessly culling potentially unique information does not help you stand out; it helps you become just another quickly scanned site.

Retweeting and copying articles verbatim is common on the internet. This overload of duplicated, shallow information hides whatever unique information on the same topic may exist.

Text is a fantastic medium. In terms of network bandwidth and browser performance, adding pages of text has no dramatic effect: it compresses well and is normally a small fraction of the network payload that a user actually downloads.
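
As a rough illustration, here is a sketch using Node's built-in zlib. The sample text and sizes are illustrative, and this repetitive sample compresses far better than real prose would, but the point stands: even several pages of text gzip down to a tiny payload.

```typescript
import { gzipSync } from "zlib";

// Roughly ten paragraphs' worth of text (~31 KB raw).
const article = "People do not read, they scan. ".repeat(1000);

const raw = Buffer.from(article, "utf8");
const compressed = gzipSync(raw);

// Prints the raw versus gzipped byte counts.
console.log(`raw: ${raw.length} bytes, gzipped: ${compressed.length} bytes`);
```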

Until people stop focusing on scannability and start focusing on unique, well-written content, the internet will continue to be a useful reference but a poor substitute for books when it comes to learning.

Please do not lose unique and interesting information just because you feel an article is too long for the web. Sub-link it, or place it as the last block of information, but only throw away repetition...
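
One lightweight way to do this (a sketch only; the "extra-detail" class name is my own illustrative convention, not a standard) is to collapse the extra material behind a toggle instead of deleting it:

```typescript
// Sketch: hide sections marked "extra-detail" behind a "Read more"
// toggle rather than cutting them from the article.
document.querySelectorAll<HTMLElement>(".extra-detail").forEach((section) => {
  section.hidden = true;
  const toggle = document.createElement("button");
  toggle.textContent = "Read more";
  toggle.addEventListener("click", () => {
    section.hidden = !section.hidden;
    toggle.textContent = section.hidden ? "Read more" : "Read less";
  });
  section.before(toggle);
});
```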

What about reviews?

I believe reviews are perhaps the strongest indicator that people are happy to read. There are numerous articles on how important user reviews are to various e-commerce websites; for some sites they make up practically the bulk of the content. These reviews can be wordy, poorly edited and certainly not designed with scannability in mind, but they are still important, often vital, in a purchase decision or in bringing someone back to a site.

In fact, my biggest pet hate is that reviews on the mobile versions of websites can be hard to access, require multiple clicks, and are often paged in such a way that you need to load each review in turn.
