All programmers and testers share one weakness: they don't know what it's like to not be familiar with computers.
It’s easy to laugh, but in IT the equivalents of push and pull signs aren’t so obvious. (The cartoon is from The Far Side, in case you live on another planet.)
I have a confession: for a long time, I couldn’t get the 3G internet to work on my smartphone. When I bought it six months ago, I could make calls and connect to wi-fi, but the mobile broadband stubbornly refused to work. I read the manual from beginning to end, trawled the internet, fiddled with every setting and swore at it, before I finally realised mobile broadband wasn’t switched on. After all the times I’ve shown off by making things look easy that other people struggle with, I can consider this a taste of my own medicine.
But, embarrassment aside, that was a good lesson in what
it’s like to not be a techie. As a latecomer to smartphones, I was getting to grips with things that are second nature to most users. To someone who is familiar with Android, checking that 3G internet is activated is such an obvious thing that it’s not even worth mentioning,[1] any more than a locksmith would
consider it worth asking if you were pushing a door with a “PULL” sign. But
little things like this add up and can stop people using new products
completely. This is where usability testing comes in.
There’s a trap that both developers and testers frequently fall into, which is assuming your users know the things you take for granted. I’m currently trying to get to grips with a load testing tool and have spent days fixing one issue after another: all presumably straightforward to people who use the tool regularly, but a nightmare for me. Open source software is another regular offender. The mainstream products like LibreOffice and Firefox aren’t too bad, but the documentation for less popular programs is often incomplete or missing altogether – in some cases, you need as much background knowledge as the programmers to use it. The programmers can of course point out that they’re doing this for free and don’t have time to write user-friendly manuals too, but that’s little consolation for anyone trying to use their software.
Even the most mainstream products have problems. Microsoft
Office, supposedly the gold standard of ease of use, has plenty of features
like automatic bullet points and numbering aimed at user-friendliness. But the side-effect of these innocent-looking functions is confusing formatting changes that are difficult to undo – I have lost count of the number of times I’ve had to help rescue documents mangled by auto-formatting. That’s not
intended as a dig at Microsoft, just a way of pointing out how difficult
usability is, even for the companies with the deepest pockets.
There are endless ways to be caught out. Are you sure
copy-pasting is easy, or were you using CTRL-C and CTRL-V? Because that might
be common knowledge to you, but it isn’t to other people. And if you
thought of that pitfall, there’s another, and another, and another, and it’s
hard to anticipate all of them. There are usability guidelines and usability training out there, which is good practice, but they still suffer from the weakness of tech-savvy people telling other tech-savvy people how to devise things for everyone else. How can you, as a software developer, really put yourself
in the shoes of a regular user with no background or training in IT?
The answer, I suggest, is: you don’t. If you want to do
usability testing properly, you need to involve people who don’t know much
about computers. I know I’ve complained endlessly about people who don’t understand
computers imposing decisions on people who do, but it can be just as dangerous
to do it the other way round. If your customers or workforce don’t know how to
use a new system, it’s no use blaming them for not understanding the nice easy
interface you designed for them.
I’m not suggesting that you can simply bring along some non-techies and everything will be fine. It’s a tough call deciding when the best time is to do usability testing. Do it too early, and non-technical users will
get bogged down in the inevitable beta-edition bugs (or do it really early when
the system only exists on paper, and they won’t know what to expect). Leave it
until everything else is ready to go, and usability testing could be too late.
When the system has been programmed, tested and stabilised, do you really want
to change half a dozen features found to be user-unfriendly? Usability testing
at any stage is useless if it’s treated as a rubber-stamping exercise. This
applies to any kind of testing, but if a project manager wants to believe the
new system is easy and intuitive, there is always a way of showing that usability testing confirms this, irrespective of what people really thought.
But combined with other bits of good practice, involving people who don’t know much about computers is a valuable tool. In my last post I suggested that
it’s better to release a new system in several stages rather than a single “big
bang” release. This is partly to avoid projects crippled by feature creep, but it also means that a usability issue found in one release can probably be fixed in
the next release. When I think I’ve found a usability issue, I make a habit of
asking someone in the office who isn’t a tester whether he knows how to work the system, rather than second-guessing it myself. There are of course some
absolute clangers which any software tester will spot a mile off (like blue
underlined text in a web page that isn’t a link). But if you’re serious about
user-friendliness, you need to take your users seriously.
[1] In my defence, my
Samsung phone has two settings for 3G broadband in completely different places,
and both have to be switched on for mobile broadband to work. All I can think is
that the less obvious switch was set to off in the shop when I was trying to
check if the wi-fi hotspot worked. But my point remains unchanged: if I
couldn’t work out what the problem was, I can’t see the average customer faring
any better.