Heuristic Evaluation, User Testing: Campus Rec Website

In a group of four, I performed heuristic evaluations and user testing of the workout booking process available on CampusRec.unc.edu. This project fulfilled requirements for a Usability Testing graduate course at UNC-Chapel Hill.


Purpose:

This project aims to evaluate the features of CampusRec.unc.edu that allow users to book workout sessions. Using this site is an essential step in accessing UNC's Campus Rec facilities, and the site experienced frequent glitches and changes throughout the pandemic.


Process:

Our group outlined the tasks we wanted users to perform. That led us to examine the website in two ways:

  1. Heuristic Review:

    Our group of four each performed our chosen tasks, taking note of features that followed or violated the Nielsen Norman Group’s usability heuristics for web design.

  2. User Testing:

    We tested the chosen tasks with four testers, and I served as the interviewer for two of the four sessions. We measured time on task and number of errors, and we conducted post-task interviews. One task showed consistent issues (a high number of errors and high reported frustration), which led us to suggest design improvements for that task in particular. The data table I made showing the number of errors appears below.

[Data table: tasks 1 and 4 show consistently low error counts, while tasks 2 and 3 show widely variable error counts across participants]

This table is a sample of how I reported data in our presentation; it shows how many errors participants made while completing their tasks. Tasks 2 and 3 showed varying levels of error, whereas tasks 1 and 4 did not. In retrospect, I should have labeled the table more clearly to remind viewers what each task was.

We analyzed these results, including calculating confidence intervals for some measures, as demonstrated below.
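As a sketch of the kind of calculation involved, the snippet below computes a 95% confidence interval for a mean task rating using the t-distribution, which is appropriate for small samples like our four participants. The ratings here are hypothetical, not our actual study data.

```python
import math
from statistics import mean, stdev

# Hypothetical post-task difficulty ratings (1-7 scale) from four participants.
# These illustrate the method only; they are not the study's real data.
ratings = [6, 3, 7, 4]

n = len(ratings)
m = mean(ratings)
s = stdev(ratings)  # sample standard deviation
t_crit = 3.182      # t critical value for 95% confidence, df = n - 1 = 3

# Margin of error: wide when the sample is small or the ratings vary a lot
margin = t_crit * s / math.sqrt(n)

print(f"mean = {m:.2f}, 95% CI = ({m - margin:.2f}, {m + margin:.2f})")
```

With only four raters and variable responses, the interval spans most of the rating scale, which is why small-sample usability metrics are usually reported with these wide intervals attached.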

[Bar graph: task 2 received a significantly high rating for "difficult logic of task," with a very wide confidence interval]

This graph is from the presentation we created for our project. The confidence interval for task 2 is very wide; together with its high rating, this reinforces how difficult the task was.

Note: the graph was made with data I collected, but I did not make the graph.


Outcomes:

Our findings let us tell a clear story: the primary action users take on the Campus Rec website (task 2) stood out for both the number of errors and the level of frustration our participants exhibited. We presented the findings from our two testing methods, along with suggestions for design improvements.


Lessons Learned:

This was an elaborate project among four people, and it would have benefited from a single project manager leading the group. That said, we still completed our work thoroughly and on time. We also learned that our presentation could have explained its visuals more clearly.

Our professor specifically praised how we presented results: I began by describing the errors participants made before moving on to their survey responses and interview comments. That order let me communicate a single story (that task 2 was hard) as a thread viewers could follow through the data tables, graphs, and presentation as a whole.
