Matthias, thank you for sharing this with the community! Very helpful to see 
not just the summary of findings, but also your entire protocol.

Two major issue groups stand out to me as needing improvement:

1) Navigation to/from widget store + workflow for adding gadgets
In our internal testing, we've seen quite a wide variety of mental models that 
people approach gadget-based interfaces with. Even folks with experience in 
other portal systems may not instinctively know which parts of the page are 
modifiable, and how to adjust the page contents vs page layout. We've tried 
video tutorials and first-time-use hints, but can't quite claim a silver bullet 
just yet.

Another challenge is the separation between the widget store and the actual 
layout; it simply puts a barrier in the way of direct interface manipulation. 
I've been thinking through some solutions that show the user what's available 
without taking them off the page, but screen real estate becomes an issue. If 
you or anyone else has seen innovative solutions, please do share with the 
list.

2) Feedback for actions
Yes, no feedback (or feedback that the user can't see) is a usability death 
knell. Same goes for long loading times. Thank you for noting that, and I agree 
with your recommendations 100%.

A couple of unsolicited thoughts on the report itself - in case you're 
interested in what worked/didn't work for us at MITRE, or if there are other 
usability nerds on the list:
- We find it very helpful to sort findings by severity (saving the per-task 
findings for the protocol section). Attaching recommendations, as you have 
done, adds a lot of value.
- At least in the US, everyone is very sensitive about protecting the identity 
of participants. We try to take any personally identifiable information - even 
gender - out of all parts of the report, and refer to participants as P1, P2, 
and so on.
- For studies with small to medium numbers of participants, we prefer to use 
ratios (3 out of 10 users) in reporting, instead of percentages (30%). If anyone 
still has questions about sample sizes compared to traditional marketing 
approaches - well, there's an educational opportunity.

Your notes and time-on-task measurements are very detailed - what recording 
methods do you use? We used to rely heavily on TechSmith Morae, but with the 
time demands of agile projects we find ourselves moving more and more towards 
discount usability methods...

Stan Drozdetski
MITRE

-----Original Message-----
From: Matthias Niederhausen [mailto:[email protected]] 
Sent: Monday, November 12, 2012 10:53 AM
To: [email protected]
Subject: Usability evaluation report

Hello,

 

I'm Matthias from T-Systems Germany, part of the Omelette project, which
uses Rave to develop an open platform for building mashups for the telco
domain. In this project, we have recently conducted a usability evaluation,
and I want to share the results with you here to help improve the Rave
portal. Hopefully, I will also be able to participate more directly in the
Rave project in the future, but for now, I can only leave you with our
findings.

 

The evaluation report has been posted on Google Docs, so you can easily view
it online at [1]. Most interesting for you will be the overall report, which
contains the condensed data, while the evaluation protocol contains detailed
transcripts of the evaluation runs we did with individual users. The report
contains problems and suggestions for the Rave portal from an end-user
perspective on PCs as well as on smartphones and tablets, but also some data
on the administrative view.

 

Hope you find this helpful! If you have any further questions, do not
hesitate to ask me.

Best regards,

Matthias

 

[1] https://docs.google.com/folder/d/0B-a9jhuDMF3tYlFZT1pQQ25FM0k/edit

 

---

Dipl.-Medieninf. Matthias Niederhausen

T-Systems Multimedia Solutions GmbH

Innovation Services

 

Phone: +49 351 2820 2099

E-Mail: [email protected]

 
