Easy-to-use interfaces should be the goal of every designer, but what does that actually mean? It’s easy to say something should be intuitive, but defining that goal is harder. It’s not enough to just say that “intuitive” means interactions are easily understood by your users. That’s technically correct (the best kind of correct), but it only rephrases the idea.
When we say intuitive, we really mean a number of things: visual affordance, ease of use across device interfaces, recognizable icons and graphics, sensible site navigation and pathing, but also more ethereal concepts like immersion, excitement, or a feeling of being in control. “Users” is also a filler word for describing the audience. We’ve essentially declared that we want everyone from teenagers in New York to grandmothers in Texas to intuit our site the same way, which just isn’t going to happen. In the end, being intuitive is a balancing act more than a set of rules to follow. Luckily, there are principles and knowledge bases we can reference for guidance.
We should start by understanding what affordance means. Affordance is the relationship between an object’s properties and the perceived ability to perform a specific action. For instance, a door with a bar handle suggests a pulling motion, while a door with a flat plate suggests pushing. The person opening the door is more likely to associate grasping a cylinder with pulling than with pushing: most cylindrical objects they’ve encountered, like levers or dumbbells, tend to be pulled, and grasping itself is more connected to pulling. In contrast, it’s not really possible to pull a door without a handle, so pushing becomes the natural conclusion. This inference can sometimes be wrong, of course; plenty of doors have a handle that is turned and then pushed. Generally, though, common patterns emerge, and we should aim to line our designs up with the expected action.
Of course, digital objects don’t have the same properties as real-world objects. All interactions are handled through an input device which, until touch screens came along, did not approximate real-world actions. There’s no physical equivalent to using a hyperlink to change pages or a mouse to draw a line. Most people now understand how to do these things, but they’re relying on learned patterns and heuristics to a far larger degree than with our door example.
To illustrate, imagine a hyperlink on a website. Even though I know nothing about you as a user, I can reasonably predict a few things: the link is probably text or an icon (in that order), is probably some shade of blue, and is likely underlined. Being a certain color or having an underline isn’t inherently bound to the function, of course; through training and consistency, these hallmarks of hyperlinks became common knowledge.
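Those conventions can be expressed in a few lines of CSS. This is a sketch, not a standard: the exact color values below are illustrative choices, not values any specification mandates.

```css
/* Conventional hyperlink styling: some shade of blue, underlined.
   The hex values here are illustrative approximations of the
   familiar browser defaults, not official colors. */
a {
  color: #0645ad;
  text-decoration: underline;
}

/* Visited links conventionally shift toward purple,
   signaling "you've already been here." */
a:visited {
  color: #6b4ba1;
}
```

Note that nothing about blue text or an underline causes navigation; the styling only signals it. That is exactly why the convention carries so much weight.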
We could just rely on established conventions, but it’s important to understand why those conventions came about and what goals they accomplish. Let’s start without any of them and assess our situation.
This example isn’t terribly intuitive. With all the text in the same style and color, there’s nothing to base our heuristics on. I could probably guess at the function of the navigation bar at the top, but every other element is given equal weight. We need to provide contrast between actions, and giving a specific type of control a specific trait is a good step toward that.
Here, it is much more apparent which elements are alike, and while nothing explicitly tells you what a hyperlink does, the grouping, context, and placement on the page invite action. And since users’ actions are limited to their input devices, we can reasonably assume a mouse click (or a tap) will be tried early on.