Wow, I just want to say thank you to everyone who weighed in on this.
This is why I love IxDA :)

I always struggle to explain these sorts of situations without giving
away details about the actual application, but let me make an attempt.
Katie, it's not quite as dire as you might think, ha! The dependencies
aren't even *that* complex, it's just that I'd like to make the test
situation as realistic as possible. (I am wondering if this is simply
overly ambitious.)

Basically, in Section A, you define a widget. You give all sorts of
information about the widget, and are offered a bunch of different
widget options based on the information you enter.

In Section B, you're managing your widgets and various sub-widgets. (Oh
man, "sub-widgets", sorry, sigh.) What I'm struggling with is that in
almost all cases, the person who DEFINES the widget is the person
MANAGING the widget. And in the process of defining the widget, you get
all sorts of important background information about how widgets work. It
feels unrealistic to throw someone into the management of the widget
totally blind. I worry that they might struggle on certain tasks because
they don't have proper background information, background they would
almost certainly have if they defined the widget.

(Perhaps this points to other flaws in my design, ha. I will give it a
once-over and see if there are any other ways I can introduce this
background information.)

Anyway, they might very well define the widget and come back several
days later to manage it -- so option (b) I described is reasonable,
although our recruiting company might hate us :), and I fully expect
some participant drop-off (the 1/3 over-recruitment strategy is a great
recommendation).

I am also strongly considering (a): letting them walk through the
widget definition flow by themselves for 15 minutes before the
60-minute test of widget management. I worry that if we prepare a
prerequisite-knowledge document, or have a facilitator walk them
through a scenario, we may artificially emphasize important
information -- information they might actually have missed if they had
just read it through on their own.

Anyway, you've all given me lots of good food for thought. I will
continue pondering this. Thanks so much for your perspectives. And Paul,
whatever we do, an early pilot is DEFINITELY in order, thanks!

Meredith

> -----Original Message-----
> From: Melvin Jay Kumar [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 19, 2008 10:45 PM
> To: Meredith Noble
> Cc: [EMAIL PROTECTED]
> Subject: Re: [IxDA Discuss] advice on usability testing for complex sites
> 
> Hi Meredith,
> 
> Agree on B.
> 
> But would like to add that, if you know what the dependencies are from
> A to B, that would help a great deal.
> 
> So from understanding the dependencies, and based on the resources and
> time available (which I rarely have enough of), I would narrow down the
> most important tasks/scenarios that go from A to B.
> 
> Once that is done, you would have one session in which users get to do
> A and then continue on to do B.
> 
> Also, if you have to test various scenarios, then you would have to
> split the resources accordingly.
> 
> My 2 cents, based on a simplistic understanding of your requirements.
> Hope it helps.
> 
> Regards,
> 
> Jay Kumar
> 
> 
> 
> On 3/20/08, Meredith Noble <[EMAIL PROTECTED]> wrote:
> > Can anyone recommend methods for performing usability tests on large,
> > complex applications with lots of conceptual dependencies?
> >
> > We're running into issues in our design of a test. We want to test
> > "Section B" of our application, but "Section B" doesn't make a lot of
> > sense unless you've already been exposed to "Section A". The trouble
> > is, Section A is pretty complicated in itself. They're definitely too
> > big to test together in a single 60-minute test.
> >
> > What to do, what to do...! So far I've thought of (with drawbacks in
> > parentheses):
> >
> > a) Have a facilitator walk them through Section A for 15 minutes
> > before they do the 60-minute Section B test (perhaps a bit
> > overwhelming, hard to digest)
> >
> > b) Ensure the participants who test Section A come back and test
> > Section B (good in theory, but difficult to schedule)
> >
> > c) Test the two back-to-back in a 120-minute-long test (participants
> > might fade)
> >
> > d) Pretend the dependencies don't exist and have them test Section B
> > with no background knowledge (not realistic, but hey, maybe the others
> > are too ambitious)
> >
> > Surely other people have had experience with this sort of thing - any
> > recommendations on what has and hasn't worked well? Am I approaching
> > it all wrong?
> >
> > Thanks so much,
> >
> > Meredith
> >
> > ________________________________________________________________
> > Welcome to the Interaction Design Association (IxDA)!
> > To post to this list ....... [EMAIL PROTECTED]
> > Unsubscribe ................ http://www.ixda.org/unsubscribe
> > List Guidelines ............ http://www.ixda.org/guidelines
> > List Help .................. http://www.ixda.org/help
> >
> 
