Within my first week at Brite:Bill, I was sitting with a test user and a researcher in a usability lab near our offices in Dublin’s IFSC.
I had only just begun to work on an existing interface design, and here I was getting the opportunity to see it put through its paces. There was absolutely no better way for me to start in my new role.
As a user experience designer, when you work with an interface you think you get to know its foibles. There are aspects of the registration flow, or of the way the user settings section is set up, that you work on and pass to the dev team with a ‘please tweak how this process works’ or ‘please update to this improved design.’ You bring the full weight of your years of UI and web experience to bear, confident that you know exactly what the user needs in order to achieve their objectives.
There are some things you’ll never see though. Users don’t always behave the way you want them to, or in the way you expect them to.
The finest and most glorious example of this in the world of e-commerce is the oft-told tale of Jared Spool’s $300 million button. In that story, the design team in the client company had their own view of how their customers should behave. Once a customer had filled their shopping cart and was ready to check out, they were asked to either register or log in to continue. When this user flow was tested, it emerged that many real customers either didn’t want to register, couldn’t remember whether they’d registered before, or couldn’t remember their password if they had. This unexpected demand on the user’s attention pulled their focus away from the task at hand (which was paying for the goods and getting on with their day) and created a point of friction in the flow that led to drop-offs and abandoned carts. As it turned out, these frustrated customers could have amounted to $300 million in revenue that year. Once this registration wall was removed, the following year’s takings improved by that amount.
So back to our usability test. The researcher was asking the user for her first impressions of an inbox-style screen we had planned to introduce to the product. I was familiar with it and had already been involved in the design. One of the useful features it provided was a tagging system. In a ‘family’ inbox where multiple users’ bills were presented together, this seemed a good way to categorise and filter the bills being shown. So you could have ‘#bob’, ‘#jane’, ‘#mum’, ‘#dad’ and so on. Written just like that on-screen.
Our test user read the tagging panel with what appeared to be some confusion. The researcher asked her what was on her mind. In the self-conscious way that I’ve seen in so many confused test users before, she waved her cursor over the various tags in the panel as she spoke: “I’m not sure what these are supposed to do — will they post my bills to Twitter?”
I was stunned.
Do you see what she did there? Suddenly it became clear to me. I had long since learned to suppress the instinctive internal voice crying “You’re doing it wrong!” My mind flipped straight to learning mode: of course she could think that! Why wouldn’t she think that? TV programmes flash hashtags with their opening credits to bind conversation about that programme into a single thread. The hashtag is a format at the very core of the ubiquitous Twitter platform. And we, in our early designs, had picked up on the idea of identifying a tag by putting a hash on the front, not fully appreciating how it might later be read by users.
This is user testing gold.
I have participated in user tests many times before and since, but in its own small way, this is my $300 million button. It didn’t make me any money, but I took away a new and deep appreciation of the fact that no matter how much experience I may gain in years to come, at any time a test user may walk through that lab door and teach me a thing or two about user interface design.
I went back to the office after the user test and removed the hash from the designs.