Cognitive Bias: allowing for human fallibility in user research
Imagine how easy technology design would be if you could show some potential users a design and they clearly described what you should and shouldn’t change to make it fit for purpose. Unfortunately, this isn’t the case. Humans are complex, illogical and generally bad at telling each other what they want or need. For me, one of the main challenges of design research is working around the fallibility of humans.
One well-researched aspect of human fallibility is cognitive bias – the tendency to make decisions based on pre-existing and inaccurate assumptions. Beyond publicly debated biases like gender bias and racial bias, researchers have identified and described a vast range of biases that can influence how users and research participants respond. I’ll talk about some of the really important ones for design research here, but Wikipedia has a great section unpicking most of the biases recognised in modern psychology: https://en.wikipedia.org/wiki/List_of_cognitive_biases.
Self-serving bias: “I ran into a few problems where the interface was confusing, but I completed all the tasks since I’m good with this sort of thing”
This bias describes the tendency for people to attribute their successes to internal factors such as intelligence or skill, and their failures to external factors such as unexpected events or poorly designed technology. So when a participant says they navigated your site easily, but only thanks to their technical competence, take it with a grain of salt.
The halo effect: “I love my iPad and this Apple Watch is great too.”
The halo effect can be a big problem when working with well-known and well-liked brands. It can be hard to get users to critically assess a product if they have had positive experiences with other closely related technologies. This bias is just as apparent with negative sentiment, which is why it is important not to retest iterations of a product or service with the same users. Even if you change things drastically between iterations, you will tend to get similar feedback.
System justification: “that’s the way we’ve always done it.”
People will often go to great lengths to justify and defend the status quo, even if there is no rational reason to do so. Just look at the QWERTY keyboard you probably have on your computer. This layout, originally developed to stop mechanical typewriters from jamming, persists in the electronic keyboards of today, despite being neither efficient to use nor easy to learn. This effect can be a thorn in the side for those developing paradigm-shifting technologies, or simply trying to get their stakeholders to do things more effectively. Identifying and supporting change evangelists is a useful tool for working around system justification, as these individuals can offer contextually grounded answers to the doubts of those more attached to the status quo.
It’s worth saying that researchers are just as susceptible to bias, and this can be particularly problematic since people are generally bad at identifying and overcoming their own biases – a phenomenon known as the bias blind spot. Of course, an experienced researcher acknowledges their own tendency to be biased and allows for it with careful study design.
Get in contact if you want to know more or just have a chat about your design research needs.