Web guide – June project update

Author: Jacinta - Web Guide Team
Category: The Department of Finance Archive

The content on this page and other Finance archive pages is provided to assist research and may contain references to activities or policies that have no current application. See the full archive disclaimer.

We’ve been fairly busy in the past month.  You’ve seen some of the work we’ve been discussing on the blog regarding information architecture, metadata and content, as well as our current thinking on blogging itself.

But behind the scenes, we've been working on several other areas as well.

Reporting

Reporting is an interesting area for a website. Websites can easily report the number of page views or visits they receive (although what this figure actually measures is another question). But the main questions we have in this area – did the user a) find the information they sought? and b) was it any use to them? – are harder to answer than simply looking at a list of the top ten pages. A page might appear in the top ten simply because of where it sits on the site, or because it has a catchy title. Users may get to that page and then realise it didn't meet their needs at all.

So, we're looking at implementing both quantitative and qualitative measurement systems. Australia.gov.au uses a '60 second survey' to gather qualitative information about a user's experience, and we are thinking of doing something similar. We'd be interested in your experiences with this style of measurement – what have you done?
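
To make the idea concrete, here is a minimal sketch of the kind of short exit survey we have in mind. The two questions mirror the ones above; the response shape and the "/feedback" endpoint are illustrative assumptions, not the actual australia.gov.au survey.

```typescript
// A minimal sketch of a short exit survey. The response shape and the
// "/feedback" endpoint are illustrative assumptions, not the actual
// australia.gov.au survey.

interface SurveyResponse {
  foundInformation: boolean; // a) did you find the information you sought?
  wasUseful: boolean;        // b) was it any use to you?
  comment?: string;          // optional free-text feedback
  page: string;              // the page the survey was answered on
}

async function submitSurvey(response: SurveyResponse): Promise<void> {
  // Post the response to a (hypothetical) collection endpoint.
  await fetch("/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(response),
  });
}

// Example: the user found the page via its catchy title, but it didn't
// meet their needs. This is exactly the case that raw page views miss.
submitSurvey({
  foundInformation: true,
  wasUseful: false,
  page: window.location.pathname,
});
```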

User Acceptance Testing (UAT)

While UAT is normally one of the last stages of any project, in the last month we've updated the acceptance criteria and written the UAT strategy. The acceptance criteria and the requirements document form the basis of UAT: we need to test that the system functions in the way we expect it to. (A sketch of what one such check might look like follows the list below.)

This UAT strategy defines:

  • The scope of testing to be performed in UAT (and any required regression testing);
  • The process for testing, including:
    • The entry and exit criteria for each testing phase;
    • The resources required to conduct each testing phase, their roles and responsibilities;
    • The methods of testing applicable to each phase of testing; and
    • Overall management of the testing phases.
  • The procedures used during the preparation and execution of each phase of testing.
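
As promised above, here is a minimal sketch of how one acceptance criterion might be written as an executable UAT check. It uses Jest-style test syntax, and the criterion shown (site search returns results for a known term) is a made-up example rather than one of our actual acceptance criteria.

```typescript
// A minimal sketch of one acceptance criterion expressed as an
// executable UAT check, in Jest-style test syntax. The criterion shown
// is a made-up example, not one of our actual acceptance criteria.
import { describe, expect, test } from "@jest/globals";

// Hypothetical helper that queries the system under test.
async function searchSite(term: string): Promise<string[]> {
  const res = await fetch(`/search?q=${encodeURIComponent(term)}`);
  const body = await res.json();
  return body.results as string[];
}

describe("Acceptance criterion: site search", () => {
  test("searching for a known term returns at least one result", async () => {
    const results = await searchSite("metadata");
    expect(results.length).toBeGreaterThan(0);
  });
});
```
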
User Centred Design

We've mentioned once or twice in previous posts that we've been planning to take our design work to user testing. This month, that work pretty much overtook everything else.

We've had two sets of card sorting exercises, followed by three rounds of one-on-one testing. The plan was to do this iteratively, meaning we used feedback from each round to develop new versions for the next. With only three days between rounds and two days of testing, the technical development team were pushed fairly hard to incorporate the feedback and put a new version up for testing.
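
For readers curious how card-sort results turn into design decisions, one common way to aggregate them is a simple co-occurrence count: how often participants place the same pair of cards in one group. The sketch below illustrates that general technique only; it is not our actual analysis tooling, and the card labels are made up.

```typescript
// A generic sketch of one common way to aggregate open card-sort
// results: count how often each pair of cards is placed in the same
// group across participants. The card labels are made up.

// One participant's sort: an array of groups, each a list of card labels.
type CardSort = string[][];

function coOccurrenceCounts(sorts: CardSort[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const sort of sorts) {
    for (const group of sort) {
      for (let i = 0; i < group.length; i++) {
        for (let j = i + 1; j < group.length; j++) {
          // Normalise the pair order so ("A","B") and ("B","A") match.
          const key = [group[i], group[j]].sort().join(" + ");
          counts.set(key, (counts.get(key) ?? 0) + 1);
        }
      }
    }
  }
  return counts;
}

// Example: two participants sorting three cards.
const sorts: CardSort[] = [
  [["Accessibility", "Standards"], ["Blogging"]],
  [["Accessibility", "Blogging", "Standards"]],
];
console.log(coOccurrenceCounts(sorts));
// Pairs that more participants group together score higher, suggesting
// candidate groupings for the site's information architecture.
```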

We were really pleased by the number of people willing to help out with this. Given that our audience is web developers and the like, requesting help on the various web communities meant we were easily able to reach our audience. We then discovered people were willing to forward the message on to those who weren't in the community groups we targeted. Thanks to the online community for your help!

Comments (2)

Reporting IS an interesting one, and I am convinced it will be more and more important in the future. Surveys generally need to be extremely short and sweet if you want people to take them, almost like polls. On transaction sites, you can watch the dropout rate (few people make it past step n). And don't forget e-mail, which will give you the richest feedback. But always beware people's tendency to complain rather than praise!

Comments on this post are now closed. Please let us know if you would like to discuss this post.

Last updated: 27 July 2016