Performing a Heuristic Evaluation

Wikipedia defines a heuristic evaluation as a usability inspection method for computer software that helps to identify usability problems in the user interface (UI) design.

How many websites and applications that you use routinely could benefit from this form of testing?

More often than not, usability problems negatively affect sales, result in larger than necessary bounce rates, and even increase operational costs in the form of unnecessary phone/email/chat support.

Yet, this evaluation process is a simple one: Involve a small group of testers to examine the user interface (UI) and identify problems in design (programmatic or aesthetic) so that they may be remedied accordingly.

In my experience, heuristic evaluations work best when you give your evaluators the primary goals of the website and let them develop their own tasks to achieve those goals. Others may choose to be more rigid in their methodology.

For a website, I will typically create an evaluation checklist and bucket it into simple categories. I usually choose the following: Navigation, Organization, Accessibility, Compatibility, Aesthetic Layout, Typography, Content, and On-Site Tools. However, you may add any other category relevant to your website or application.

Within each category, assign a series of questions for your evaluators to test. Be sure each question can be answered with a simple Yes, No, or N/A. An example question for Navigation might be: "Is there always a clear indication of your current location within the entire site?" Yes/no answers let you compare results definitively and act accordingly, rather than receiving purely subjective feedback. That said, you can still allow evaluators to leave comments explaining their answer selection.
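As a rough sketch of how such a checklist might be organized (the category and question text are taken from this post; the function and variable names are my own invention, not any standard tool), the categories, questions, and restricted answers can be modeled as simple data:

```python
# A heuristic-evaluation checklist bucketed into categories.
# Each question expects a "Yes", "No", or "N/A" answer plus an
# optional free-text comment from the evaluator.
VALID_ANSWERS = {"Yes", "No", "N/A"}

checklist = {
    "Navigation": [
        "Is there always a clear indication of your current location "
        "within the entire site?",
    ],
    "Content": [
        "Is all text free of spelling and grammatical errors?",
    ],
}

def record_answer(results, category, question, answer, comment=""):
    """Store one evaluator's answer, rejecting anything not Yes/No/N/A."""
    if answer not in VALID_ANSWERS:
        raise ValueError(f"Answer must be one of {sorted(VALID_ANSWERS)}")
    results.setdefault(category, []).append(
        {"question": question, "answer": answer, "comment": comment}
    )

# One evaluator fills in the Navigation question.
results = {}
record_answer(
    results,
    "Navigation",
    checklist["Navigation"][0],
    "No",
    comment="Breadcrumbs disappear on the checkout pages.",
)
```

Restricting answers up front is what makes results from multiple evaluators directly comparable, while the comment field preserves the subjective context.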

Lastly, I feel it is essential to evaluate all answers against a severity rating. Based on an evaluator's feedback, I will typically assign one of the following four ratings to each answer.

  • CRITICAL - A showstopper! There is no way to avoid this bug. Normal usage or navigation of the site is not possible until a fix is made.
  • HIGH - A critical bug. This bug appears frequently or is persistent and greatly hinders normal usage or navigation of the site.
  • MED - A non-critical bug. This bug impairs normal usage or navigation of the site though can be avoided.
  • LOW - A non-critical bug. This severity rating can even be reserved for general questions/considerations of the evaluator.

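To act on this feedback, one simple approach (a sketch of my own, not a prescribed tool; the example findings are hypothetical) is to tally the rated answers so the worst issues surface first:

```python
from collections import Counter

# Severity ratings from most to least urgent, as described above.
SEVERITY_ORDER = ["CRITICAL", "HIGH", "MED", "LOW"]

# Hypothetical findings: each pairs a failed question with its assigned rating.
findings = [
    {"issue": "Checkout button does nothing in Safari", "severity": "CRITICAL"},
    {"issue": "Search results ignore filters", "severity": "HIGH"},
    {"issue": "Breadcrumbs missing on checkout pages", "severity": "MED"},
    {"issue": "Consider a larger body font", "severity": "LOW"},
    {"issue": "Login link hidden behind hover menu", "severity": "MED"},
]

def triage(findings):
    """Return findings sorted most-severe-first, plus a count per rating."""
    rank = {name: i for i, name in enumerate(SEVERITY_ORDER)}
    ordered = sorted(findings, key=lambda f: rank[f["severity"]])
    counts = Counter(f["severity"] for f in findings)
    return ordered, counts

ordered, counts = triage(findings)
```

Sorting by severity gives you a ready-made remediation order, and the per-rating counts make it easy to report, for example, how many CRITICAL issues block release.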
Once you have received feedback from your evaluators, it is important that you carefully consider how to remedy the various issues that were encountered. Remember, when it comes to such evaluations, "there is no such thing as user error." If one evaluator found an element of your site confusing, it is highly likely that others will too.

1/01/2010 04:27:00 PM