Thursday, September 4, 2014

On-Campus Reconnaissance

I recently participated in user testing for one of the departments on campus revamping its heavily used website.  I won’t identify the department to protect its study and work, but I will say that the site is an important resource and instructional tool on campus.  I’m a frequent user of the site, and as someone involved in the next usability study for the Purdue OWL, I wanted to provide what I hoped would be helpful feedback and learn about the usability work happening elsewhere on campus. 

I like participating as a user and giving feedback.  My first introduction to usability was as an early adopter and usability participant for the TOPIC project at Texas Tech University.  Since then, I’ve become an advocate and a learner who knows a few things, though not an expert.  I’m generally the type who will respond to improvement and marketing surveys or provide user reviews, but usability is something else and something more.  When I have a chance to do more than respond to a site’s color scheme or some low-level feature, I will take the opportunity because it’s a learning experience for me. 

So I agreed to give an hour of my time to meet with the researcher and provide information about the department’s existing site and upcoming changes.  I was given a short demographic survey and then asked to perform a series of tasks, mostly related to locating specific pieces of information or resources on the site.  I relied heavily on the search tool (which makes sense given this particular site), and I was introduced to features I didn’t know about.  The researcher made an audio recording of my responses and also recorded my movements on the screen.  The tasks themselves were the most interesting part, and it became clear to me that the study was aimed at the site’s search functionality and menu structure. 

I tried to be as clear and detailed as possible in my responses because I know how difficult and yet valuable it is to learn what users find helpful or not, what features they want and need, and what parts of the site have simply been overlooked.  Being presented with users’ feedback is a vulnerable moment for site administrators, webmasters, designers, and other staff.  Data can offer surprising results, including the discovery that a feature lovingly created and nurtured by a designer is not helpful to—or worse, vehemently hated by—users.  Feedback can result in the need for significant revisions.  These are some of the reasons why usability research often gets just a passing, suspicious glance from designers. 

Yet usability is extremely important, and because I valued the campus website I was asked to test, I wanted my feedback to lead to beneficial changes.  Nevertheless, I was very much aware that I had two purposes for participating in this study:  as a legitimate user and as someone doing reconnaissance for her own usability project. 

One of the most important things I learned from this experience is that test subjects can find usability studies as difficult as researchers do.  It’s not always possible to articulate a reason for a preference or a process, so researchers need to be clear and offer alternative explanations without leading the research subject toward a certain (preferred) response. Also, recording subjects’ responses can be helpful, since researchers cannot take detailed notes while paying close attention to what users are doing and saying, what their eyes focus on, or where the mouse clicks.  And tracking users’ clicks and mouse movements with software can reveal information that may not be easily detectable by researchers.

And finally, usability is time-consuming work, even with technologies that can make tracking, recording, card sorting, and other activities easier.  The researcher spent nearly an hour with me just administering the tests.  He and his co-researchers will then have to compile data gathered from me and other participants.  They will have to wade through all that information to see what it yields about site users and their preferences and needs.  They will have to give that information to site designers and programmers.  Someone, somewhere will have to decide what revisions should be made to the site.  And then someone else has to make those changes. 

The amount of work, time, and resources involved is not surprising, especially after the previous OWL usability study.  Like the campus website I tested, the OWL is a heavily trafficked and complicated resource with many different stakeholders to consider.  I know from experience that we’re in for several semesters’ worth of work, and with that in mind, I eagerly but patiently await any changes to the site I tested, knowing that results can take time.  And I look forward to figuring out what this next OWL usability project will be like.