On Fri, Apr 15, 2011 at 1:48 AM, Bryce Harrington <br...@canonical.com> wrote:
> On Fri, Apr 15, 2011 at 03:00:31AM +0100, Matthew Paul Thomas wrote:
>> * 8/10 people could find a window's menus, but 7/8 of them learned to
>> * Only 4/11 worked out how to change the background picture.
>> * 6/10 could easily find and launch a game that wasn't in the
>> * Only 1/9 (P4) easily added that game to the launcher.
>> * 9/11 people could easily close a window.
>> * 8/9 easily copied text from one document into another.
>> * Only 5/10 could easily delete a document
>
> These seven items in particular seem like really basic tasks that ought
> to be testing at >90%, so these stats seem a lot lower than I'd expect.
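(One caveat when comparing these rates against a 90% target: with only 9-11 participants per task, the uncertainty around each observed rate is wide. A minimal sketch using the standard Wilson score interval, not part of the original thread, illustrates that an observed 8/10 is still statistically compatible with a true rate above 90%:)

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# e.g. "8/10 people could find a window's menus"
lo, hi = wilson_interval(8, 10)
# the interval spans roughly 0.49 to 0.94, so a true rate of 90%
# cannot be ruled out with a sample this small
```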
What would be telling is whether the people who didn't figure out a task the first time remembered how to complete it the next time. That shows the task is learnable, which is acceptable if the learning curve matches the difficulty of the functionality. Unfortunately, you very rarely use the same participants for multiple tests; in fact it is usually discouraged outside of longitudinal studies.

> I'm curious whether these stats are higher/lower/same-as with Classic
> Desktop. IOW this needs a control group so we can tell if the new
> design brings improvement or regression.
>
> Also, these tests measure usability, but not their overall impression.
> Did they like it? Find it frustrating/confusing?

Do people in the UK use the System Usability Scale, NASA TLX, or the Modified Cooper-Harper scale? They're assessment surveys that measure satisfaction and cognitive load, but I've only seen human factors engineers use them.

> Bryce
>
> --
> ubuntu-desktop mailing list
> firstname.lastname@example.org
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-desktop

--
Celeste Lyn Paul
KDE Usability Project
KDE e.V. Board of Directors
www.kde.org

--
ubuntu-desktop mailing list
email@example.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-desktop
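(For anyone unfamiliar with the System Usability Scale mentioned above: it is a ten-item questionnaire with 5-point Likert responses and a fixed scoring rule that maps the answers onto a 0-100 scale. A minimal sketch of that standard scoring rule, not part of the original thread:)

```python
def sus_score(responses):
    """Score a System Usability Scale (SUS) questionnaire.

    responses: ten Likert answers, each 1 (strongly disagree) to 5
    (strongly agree). Odd-numbered items are positively worded and
    contribute (answer - 1); even-numbered items are negatively worded
    and contribute (5 - answer).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum onto 0-100

print(sus_score([3] * 10))  # all-neutral answers score 50.0
```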