The dangers of not doing consumer research early in the product cycle
Today, we have a little story to tell. Some might call it a parable about the dangers of skipping consumer research and usability work early in the product cycle. Others might describe it as a true cautionary tale about a past consulting project and a product that was killed because the user experience was neglected throughout the development cycle.
The team, which we’ll refer to as the “Oxygen team” for this article, had everything a high tech team could hope for. They had solid, committed resources from an established company. They had a team of more than 40 smart, talented people spanning development, test, program management, marketing, and other disciplines. And they worked at one of the “hottest” companies of the time.
The Oxygen team was building a web-based application that let small business owners create their own websites. It was an early web application and unique at the time (in 2000) – most software still shipped in a box or on a CD. Like many internet teams of that era, they felt speed was of the essence. They wanted the “first mover advantage” and to be first to market.
Does this story sound familiar?
The Company had invested significant time and money in this product
The team had been developing this product for more than a year, and total project costs were well in excess of a million dollars. I was brought in as a consultant for several weeks to help the team complete the project before moving on to consult with another team the following month. After a week of team interviews, assessment, and preparation, we moved on to the actual usability test.
The first day of the usability test arrived. The team was quite interested in the study, since they had never seen one before. My supervisor was in the room for the entire study, observing me as well, so I was nervous to say the least! We were all sitting on the “observer” side of a typical expensive big-company usability lab setup (i.e. two rooms separated by one-way glass). Imagine an observer area roughly eight feet square, with six of us crammed into that space.
Drama in the Usability Lab
The first participant, Joe, walked in. We did all the usual things one does in a usability study. We built rapport so he would feel comfortable giving honest feedback. We did a little pre-interview about his background and experience in business. And then we started the study.
We quickly discovered that the build was not ready for prime time. Because the prototype was highly interactive and the team was deep into actual coding and development, we were using a working build rather than a paper prototype. Unfortunately, it kept crashing, and our participant could not complete the tasks because the software was so unstable.
Ideally, if user research had been brought into the product development process much earlier, inexpensive paper prototyping would have been a cheap and valuable way to run iterative tests right from the start. The development team decided they needed to tweak things a bit more, so I went into the room and asked the participant if he would be willing to stay for another session, and he agreed.
Fast forward an hour and we were starting the test again. Everyone seemed keyed up, and nervous tension filled the air. The team had been working so hard for so long, yet they had never actually observed a customer using the product. Joe started the test and again struggled. About 30 minutes into the session, he clicked a link to go to a page and suddenly there was a LOGIN window on the screen!
At this point I was confused. To be honest, I was probably sweating a little too. “Uh, what is that?” I asked Janet, the lead developer sitting beside me on the observer side of the lab. “Oh, that’s the login screen. We designed the system to time out after 30 minutes for security reasons,” she replied. This was the first time I had heard of or seen this “feature.” Janet went into the room to help log the participant in, and Joe started asking her why there was a login screen.
So Janet walked over to the participant side of the usability lab, and that created an unusual and very interesting situation: suddenly we were observing not just a participant interacting with the website, but a frustrated participant face-to-face with the person who had created the website – the person directly responsible for his frustration! It would have been humorous if the tension in the air had not been so thick. It was fascinating to watch the end user interact directly with the person who created the product.
A Very Frustrated Usability Test Participant
Aside from the technical issues, the product itself was quite confusing, and Joe was running into usability problems that frustrated him. By this point he was a powder keg ready to blow. He was quite irritated, and it was actually rather entertaining to watch him share that frustration directly with the developer rather than with the usability professional, as is more common.
As an aside, many usability practitioners have conducted solid research only to be met with skepticism or defensiveness from developers. So it is always valuable when the team can experience that feedback directly from end users.
We went on with the study, and Joe continued with the list of tasks representing the core things a customer would need to do for this product to be successful. Since the prospective customers were creating a website, the tasks were fairly straightforward: create some pages and format them to look like sample pages they were shown. There was one “little” problem, though: the product was very confusing.
In order to create a website, of course, the first thing users needed to do was create a single web page. When Joe started using the site, however, the formatting options confused him, and he was not able to select the right type of page to create. Then, after he had selected the wrong type of page and realized it didn’t look right, he wasn’t able to fix it!
Overall, the success rate for selecting the right type of page to create was only 50%
That meant that HALF of the participants could not choose the right kind of page to make. Even after creating a page, they often could not format it correctly. The user interface offered only text descriptions of each layout, with no visual cues such as a mocked-up preview, so participants had to read through the descriptions and try to decipher what the final page layout would look like. Their task flow slowed dramatically, and it took them a very long time to find a page that matched what they were trying to do. All of the participants except one selected the wrong page to build. Later, they tried to “fix” the visual appearance of the page by changing the layout, but the real problem was that the page type itself was incorrect. The interface was so ineffective that participants could only create five to seven pages in a 90-minute session.
Warning Sign – Scathing User Feedback
At the end of the study, we asked the participants to complete some user interface ratings. These were standard questions that usability professionals often incorporate into their research plans, covering user experience and satisfaction, rated on a seven-point Likert scale. The ratings for this study averaged nearly two points lower than any other study I have personally conducted or been involved with during the last thirteen years! People are usually positive when evaluating products in a usability setting, so the ratings alone pointed to a major underlying problem with the product.
During the post test debrief, Joe had scathing feedback:
Anytime a user tells you they need a manual to use a website, or even a software product, it is obviously a bad sign! And the emotional tone and depth in this one sample comment was also a bit alarming. Needless to say, it was crystal clear that the product was not ready for prime time.
Follow up: What Happened After The Usability Study?
After this project, I was scheduled to move on to another project team. But this test really made an impression on me, and I remained quite curious about the outcome of the project and how the research would be used in the product development process. Several months after the study, I learned that the company had pulled the plug on the product and the “Oxygen” team had been dissolved. In the end, even a company with extensive resources and deep pockets could not move forward with a product so severely flawed. It was simply easier to pull the plug than to try to salvage the effort.
Most companies don’t have anywhere near the level of resources (time, money and people) that this company had, so the issue would probably have surfaced much earlier in a smaller company with earlier customer feedback.
What Is the Moral Of the Story?
As history shows us, while it’s great from a marketing perspective to get to market first, it’s even more important to get a great user experience to market. Apple is a prime example of doing the right thing rather than rushing. For many years they were criticized for not having a mobile phone, but they did the right thing and shipped only when they were ready. The result, as we all know, was a huge game changer, and they achieved market dominance in a very short period of time.
The moral is: incorporate user-centered design throughout the product development cycle, starting as early as possible. Usability testing is one part of a solid user-centered design research approach and should not be your only methodology. When you do incorporate usability testing, test early and test often. The type of usability failure described in this article is entirely avoidable – don’t let this happen to your team, product, or company.
A case study like this one of “usability gone wrong” reminds us what happens when user experience research is not part of the lifecycle of a product or service from the very beginning. Pick up an old human factors textbook and it will tell you how expensive and time-consuming usability testing “used to be” back in the 1960s or so.
But it’s 2012 now, and there is no excuse for skipping usability testing and other forms of consumer research, even at the earliest stages of your product development. In recent years many people have turned to low-cost user testing – a path with many significant dangers that we’ll be writing about shortly.
Does this story resonate with you? Have you had a similar experience with “usability gone wrong”? Please leave a comment and contribute to the conversation.
Note: This article is an updated version of my chapter in UX Storytellers back in 2010. You can find the UX Storytellers book here.