Re: [IxDA Discuss] Looking for Ethnographic Researcher in Japan

2009-10-02 Thread Arthur Moan
Hi James,

Try http://www.humaninterface.co.jp/english/index.html 

I have used them on many occasions over the years and never been
disappointed!

Good luck,
Arthur


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46211



Welcome to the Interaction Design Association (IxDA)!
To post to this list ... disc...@ixda.org
Unsubscribe  http://www.ixda.org/unsubscribe
List Guidelines  http://www.ixda.org/guidelines
List Help .. http://www.ixda.org/help


Re: [IxDA Discuss] Wizard/Form Design Inspiration

2009-10-02 Thread Jiri Mocicka
Hi Donna,

I would suggest starting with UIE.com, Jared M. Spool's site, and his
podcast episode Luke Wroblewski on Strategy By Design.
Also have a look at http://www.lukew.com/, which has some of the best
form advice I've ever come across.

Hope it helps.
Jiri


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46202





Re: [IxDA Discuss] What Other Fields Could UXers Steal From?

2009-10-02 Thread pdxWebDr
Yes, I would definitely say Psychology.

Also, I'm reading Designing for People by industrial designer Henry
Dreyfuss and have found a wealth of insightful parallels.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46168





Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread James Page
It depends on how many issues there are, the cultural variance of your
user base, and the margin of error you are happy with. Five users, or even
10, are not enough on a modern, well-designed website.

An easy way to think of a usability test is as a treasure hunt. If the
treasure is very obvious, you will need fewer people; if it is less
obvious, you will need more. If you increase the area of the hunt, you
will also need more people. Most advocates of testing only 5 to 10 users
draw their experience from one country, yet behaviour changes
significantly country by country, even in Western Europe. See my blog
post here:
http://blog.feralabs.com/2009/01/does-culture-effect-online-behaviour/

If your client website has 1 million visitors a year, a usability issue
that affects 10% of the users would be unlikely to be discovered in a test
of only 5 to 10 users, but would give 100,000 people a bad experience when
they visit the site.

Can you find treasure with only five or ten users? Of course you can. But
how sure can you be that you have found even the significant issues?

A very good argument for why 10 is not enough is Woolrych and Cockton
(2001). They point out a flaw in Nielsen's formula: it does not take into
account the visibility of an issue. They show that using only 5 users can
significantly undercount even significant usability issues.

The following PowerPoint from an eyetracking study demonstrates the issue
with using only a few users:
http://docs.realeyes.it/why50.ppt

You may also want to look at the margin of error for the test that you are
doing.
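As a side note, the arithmetic behind the treasure-hunt picture is easy to
check yourself. A quick sketch (a rough model, assuming each issue
independently affects a fixed fraction of the user population):

```python
def p_detect(p_issue: float, n_users: int) -> float:
    """Probability that at least one of n_users participants
    hits an issue affecting a fraction p_issue of all users."""
    return 1 - (1 - p_issue) ** n_users

# An issue affecting 10% of users:
print(round(p_detect(0.10, 5), 2))   # 0.41 with 5 participants
print(round(p_detect(0.10, 10), 2))  # 0.65 with 10 participants
```

In other words, even a 10%-visibility issue has a decent chance of
slipping past a 5-user test entirely.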

All the best

James
blog.feralabs.com

2009/10/1 Will Hacker willhac...@sbcglobal.net

 Chris,

 There is not any statistical formula or method that will tell you the
 correct number of people to test. In my experience it depends on the
 functions you are testing, how many test scenarios you want to run
 and how many of those can be done by one participant in one session,
 and how many different levels of expertise you need (e.g. novice,
 intermediate, and/or expert) to really exercise your application.

 I have gotten valuable insight from testing 6-10 people for ecommerce
 sites with fairly common functionality that people are generally
 familiar with, but have used more participants for more complex
 applications where there are different levels of features that some
 users rely on heavily and others never use.

 I do believe that any testing is better than none, and realize you
 are likely limited by time and budget. I think you can usually get
 fairly effective results with 10 or fewer people.

 Will






Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Steve Baty
If your client website has 1 million visitors a year, a usability issue
that affects 10% of the users would be unlikely to be discovered in a test
of only 5 to 10 users, but would give 100,000 people a bad experience when
they visit the site.

Actually, that's not true. You'd be fairly likely to discover it with only
5-10 users - in the 65%+ range of 'likely'. Manufacturing quality-control
systems and product quality testing have used such statistical methods
since the 1920s, and they went through heavy refinement and sophistication
in the 1960s, '70s and '80s.

It's also worth repeating the message both Jakob Nielsen and Jared Spool
are constantly talking about: test iteratively with a group of 5-10
participants. You'll find that the 65%+ figure above rises to 99%+ in that
case.

Again, doesn't change your basic points about cultural diversity and
behaviour affecting the test parameters, but your above point is not
entirely accurate.
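A quick sketch of that arithmetic (assuming Nielsen's average issue
visibility of 0.31, and that issues found in one round are fixed before
the next, so rounds simply accumulate participants):

```python
def p_detect(p_issue, n_users):
    # chance that at least one participant encounters the issue
    return 1 - (1 - p_issue) ** n_users

# One round of 5 users vs. three iterative rounds of 5:
print(round(p_detect(0.31, 5), 2))   # 0.84
print(round(p_detect(0.31, 15), 3))  # 0.996
```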

Cheers
Steve




-- 
Steve 'Doc' Baty | Principal | Meld Consulting | P: +61 417 061 292 | E:
steveb...@meld.com.au | Twitter: docbaty | Skype: steve_baty | LinkedIn:
www.linkedin.com/in/stevebaty


Re: [IxDA Discuss] Unusable things

2009-10-02 Thread fecsx
I believe in dynamic products that have multiple touchpoints, where
tactility is compatible and synchronized with the interface to serve
multiple scenarios. Eventually everything connects, as Eames said.

1. Elevator buttons are weird, aren't they? You have to push buttons
whenever you want to travel vertically.
2. Door handles could be self-cleaning.
3. If it burns you, it's bad design. You might redesign it by adding a
cup holder; hacking objects can be cool.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46113





Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread William Hudson
Chris -

I wrote an article on this topic for the SIGCHI Bulletin (while it was
still a printed publication sent to all SIGCHI members). It's at
http://www.syntagm.co.uk/design/articles/howmany.htm

Regards,

William Hudson
Syntagm Ltd
Design for Usability
UK 01235-522859
World +44-1235-522859
US Toll Free 1-866-SYNTAGM
mailto:william.hud...@syntagm.co.uk
http://www.syntagm.co.uk
skype:williamhudsonskype 

Syntagm is a limited company registered in England and Wales (1985).
Registered number: 1895345. Registered office: 10 Oxford Road, Abingdon
OX14 2DS.

Confused about dates in interaction design? See our new study (free):
http://www.syntagm.co.uk/design/datesstudy.htm

12 UK mobile phone e-commerce sites compared! Buy the report:
http://www.syntagm.co.uk/design/uxbench.shtml

Courses in card sorting and Ajax interaction design - Las Vegas and
Berlin:
http://www.nngroup.com/events/

 -Original Message-
 From: discuss-boun...@lists.interactiondesigners.com [mailto:discuss-
 boun...@lists.interactiondesigners.com] On Behalf Of Chris Ryan
 Sent: 01 October 2009 7:06 PM
 To: disc...@ixda.org
 Subject: [IxDA Discuss] Article on Number of Usability Test
 Participants
 
 I have been looking, unsuccessfully, through back issues of
 interactions magazine for an article, published a few years back,




Re: [IxDA Discuss] Microsoft Courier ...

2009-10-02 Thread James Haliburton
I also think mid 2010 is unlikely. There are some major issues to
overcome, not the least of which is technical implementation.

The slickness of the video serves a purpose beyond validating the design,
however. It becomes a probe: a kind of artefact that elicits feedback from
people like us, from competitors, and from the market and consumers at
large.

I wholeheartedly agree with the Buxton approach to sketching user
experience, and making things disposable. But when you're reaching
out/creating visions for other stakeholders who aren't familiar with
participating in the roughness of sketches, polish goes a long way.
This will just test the high level concept, of course.

Internally, I hope their team has heaps of paper and lo-fi prototypes;
otherwise there are a lot of low-level usability issues they won't
address.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=45951





Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread James Page
Steve,

The real issue is that the example I gave is oversimplistic. It depends
on sterile lab conditions, on the user population being the same in the
lab as in the real world, and on there being only one issue, affecting
10% of the user population. One of the great beauties of the world is the
complexity and diversity of people. In a sterile lab, people are tested
on the same machine (we have found that machine configuration, such as
screen size, has a bearing on behaviour), and they don't have the
distractions that normally affect the user in the real world.

Actually, that's not true. You'd be fairly likely to discover it with only
 5-10 users - in the 65%+ range of 'likely'.

For 5 users that is only 41% (1-(1-0.1)^5), and for 10 it is 65%
(1-(1-0.1)^10). This is far off from Nielsen's figure that 5 users will
find 84% of the issues (1-(1-0.31)^5).

If I were manufacturing and there was a 45% chance that 10% of my cars
would leave the production line with a fault, there is a high chance that
consumers would stop buying my product, the company would go bust, and I
would be out of a job. From my experience of production lines, a sample
size of 10 for a production run of one million units would be considered
extremely low.

We have moved a long way since 1993, when Nielsen and Landauer's paper
was published. The web was not around, and the profile of users was very
different. We will need to test with more people as website traffic
increases and we get better at website design. For example, if we assume
that the designers of a website have been using good design principles,
and an issue therefore affects only 2.5% of users, then 10 users in a
test will discover that issue only 22% of the time. But in our 1 million
visitors a year example, that issue will mean 25,000 people experience
problems.
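Turned around, the same 1-(1-p)^n formula tells you how many participants
a given detection target requires (a sketch; the 85% target here is just
an illustrative choice):

```python
import math

def users_needed(p_issue: float, target: float) -> int:
    """Smallest n such that 1 - (1 - p_issue)**n >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_issue))

# To find a 2.5%-visibility issue with 85% confidence:
print(users_needed(0.025, 0.85))  # 75 participants
```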

But we do agree that each population needs its own test. And I totally
agree that testing iteratively is a good idea.

@William -- Woolrych and Cockton's 2001 argument applies to simple
task-based tests. See
http://osiris.sunderland.ac.uk/~cs0awo/hci%202001%20short.pdf

All the best

James
blog.feralabs.com

PS (*Disclaimer*) Because I believe usability testing needs not just to
be more statistically sound but also to cover a wide range of users from
different cultures, I co-founded www.webnographer.com, a remote usability
testing tool. So I am an advocate for testing with more geographically
diverse users than normal lab tests reach.


Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Steve Baty
James,

Excellent points.

Nielsen argues that 5 users will discover 84% of the issues, not that the
likelihood of finding a particular issue is 84% - hence the discrepancy in
our figures (41% and 65% respectively).

(And I can't believe I'm defending Nielsen's figures, but this is one of
his better studies.) The results from '93 were re-evaluated more recently
for Web-based systems, with similar results. There's also some good theory
on this from sociology and cultural anthropology - but I think we're
moving far afield from the original question.

Regarding the manufacturing reference - which I introduced, granted - units
tend to be tested in batches for the reason you mention. The presence of
defects in a batch signals a problem and further testing is carried out.

I also like the approach Amazon (and others) take in response to your last
point, which is to release new features to small (for them) numbers of users
- 1,000, then 5,000 etc - so that these low-incidence problems can surface.
When the potential impact is high, this is a really solid approach to take.

Regards
Steve


Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Thomas Petersen
It's also worth repeating the message both Jakob & Jared Spool are
constantly talking about: test iteratively with a group of 5-10
participants. You'll find that 65%+ figure above rises to 99%+ in
that case

I find this an absurd statement. The above can only have some merit if we
are talking about the actual product being tested.

If we are talking about wireframes or any other replacement for the real
thing, whatever you find will have very little, if anything, to do with
what you find in the end.

The real issues arise after the launch, not before, and the real question
is not how many participants but at what point participants should be
used.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46278





Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Steve Baty
I'm not sure I understand your line of reasoning, Thomas. What issues are
we identifying in the wireframes if not the same issues that might
otherwise make it through into the final product? Certainly at a different
level of detail, and definitely our early tests can't surface everything;
but that hardly makes it an absurd statement.





-- 
Steve 'Doc' Baty | Principal | Meld Consulting | P: +61 417 061 292 | E:
steveb...@meld.com.au | Twitter: docbaty | Skype: steve_baty | LinkedIn:
www.linkedin.com/in/stevebaty



Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread James Page
Steve,

Woolrych and Cockton argue that the discrepancy lies in Nielsen's
constant of 0.31: Nielsen assumes all issues have the same visibility. We
have not even added the extra dimension of the evaluator effect :-)

Do you have a reference for the more recent paper? I would be interested
in reading it.

On the manufacturing side, most of the metrics use a margin of error.
With just 10 users your margin of error will be about +/-35% (a very
rough calculation). That is far better than no test, but it would still
be considered extremely low in a manufacturing process.
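For what it's worth, that rough +/-35% can be sanity-checked with the
standard normal approximation for a proportion (a sketch; the worst case
is p = 0.5, and n = 10 is really too small for the approximation to be
precise):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    # half-width of a ~95% confidence interval for a proportion
    return z * math.sqrt(p * (1 - p) / n)

# Worst-case task success rate (p = 0.5) with 10 participants:
print(round(margin_of_error(0.5, 10), 2))  # 0.31, i.e. about +/-31%
```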

In anthropology, most of the papers I have read use far greater sample
sizes than a population of 10. Yes, it depends on the subject matter. The
anthropologist will use techniques such as informants, which increases
the number of participants. And the anthropologist studies the population
over months if not years, so there are far more observations.

@Thomas -- testing the wireframe will only show up what is already
visible. But if a feature has an issue, and the feature is implemented in
the wireframe, then a test will show it up. Discovering an issue early is
surely better than later. I think your statement reinforces the idea that
testing frequently is a good idea.

All the best

James
blog.feralabs.com



Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Steve Baty
James,

More good points. I did some calculations a while back on the confidence
intervals for pass/fail user tests -
http://www.meld.com.au/2006/05/when-100-isnt-really-100-updated - the more
interesting part being the link to a paper on estimators of expected values.
Worth a read if you haven't seen it.

I'll try to dig up the more recent paper - working from memory on that one.

Regarding the anthropology & sociology references - I was referring more to
the notion of uncovering societal norms rather than the specific 'supporting
a sample size of x'.

Coming back to your first point: yeah, the use of the .31 is a
simplification for the sake of one of his free articles; it's a modal figure
based on (his words) a large number of projects. So, looking at a range of
figures, you would have some projects where more users were needed (to your
earlier point), and in some cases - a few - you could get away with fewer
(although I admit that the use of fewer than 5 participants causes me some
concern).

Anyway, enjoying the discussion, and I still think we're violently in
agreement on the basic point :)

Cheers
Steve

2009/10/2 James Page jamesp...@gmail.com

 Steve,

 Woolrych and Cockton argue that the discrepancy is Nielsen's constant of
 .31. Nielsen assumes all issues have the same visibility. We have not even
 added the extra dimension of evaluator effect :-)

 Do you have a reference for the more recent paper? I would be interested in
 reading it.

 On the manufacturing side most of the metrics use a margin of error. With
 just 10 users your margin of error will be about +/-35% (very rough
 calculation). That is far better than no test, but still would be considered
 extremely low in a manufacturing process.
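
The "+/-35%" claim can be roughly checked with the worst-case 95% margin of error for a pass/fail proportion at n = 10, using the normal approximation (a simplification -- exact binomial intervals are wider still at this sample size):

```python
import math

n = 10
# Worst case occurs at an observed proportion of 0.5.
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(round(moe, 2))  # 0.31, i.e. roughly +/-31% at 95% confidence
```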

 In Anthropology most of the papers I have read use far greater sample sizes
 than just a population of 10. Yes, it depends on the subject matter. The
 Anthropologist will use techniques like working with informants, which
 increases the number of participants. And the Anthropologist is studying the
 population over months if not years, so there are far more observations.

 @Thomas: testing the wireframe will only show up what is already visible in
 it. But if a feature has an issue, and it is implemented in the wireframe,
 then a test will show it up. Discovering an issue early is surely better
 than later. I think your statement reinforces the idea that testing
 frequently is a good idea.

 All the best

 James
 blog.feralabs.com


 2009/10/2 Steve Baty steveb...@gmail.com

 James,

 Excellent points.

 Nielsen argues that 5 users will discover 84% of the issues, not that the
 likelihood of finding a particular issue is 84% - thus the discrepancy in
 our figures (41% & 65% respectively).

 (And I can't believe I'm defending Nielsen's figures, but this is one of
 his better studies) The results from '93 were re-evaluated more recently for
 Web-based systems with similar results. There's also some good theory on
 this from sociology and cultural anthropology - but I think we're moving far
 afield from the original question.

 Regarding the manufacturing reference - which I introduced, granted -
 units tend to be tested in batches for the reason you mention. The presence
 of defects in a batch signals a problem and further testing is carried out.

 I also like the approach Amazon (and others) take in response to your last
 point, which is to release new features to small (for them) numbers of users
 - 1,000, then 5,000 etc - so that these low-incidence problems can surface.
 When the potential impact is high, this is a really solid approach to take.
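
The staged-release idea can be sketched with the same 1-(1-p)^n discovery model; the 0.25% incidence rate below is an illustrative assumption, not a figure from the thread:

```python
# Chance that at least one user in a rollout stage of size n hits an
# issue affecting 0.25% of users, versus a 10-person lab test.
def p_surfaced(p, n):
    return 1 - (1 - p) ** n

p = 0.0025
for n in (10, 1_000, 5_000):
    print(n, round(p_surfaced(p, n), 3))
```

Under these assumptions a 10-person test almost never sees such an issue, while a 1,000-user release surfaces it over 90% of the time.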

 Regards

 Steve

 2009/10/2 James Page jamesp...@gmail.com

 Steve,

 The real issue is that the example I have given is oversimplistic. It is
 dependent on sterile lab conditions, and on the user population being the
 same in the lab and in the real world. And on there only being one issue,
 which affects 10% of the user population. One of the great beauties of the
 world is the complexity and diversity of people. In the sterile lab people
 are tested on the same machine (we have found machine configuration such as
 screen size has a bearing on behaviour), and they don't have the
 distractions that normally affect the user in the real world.

 Actually, that's not true. You'd be fairly likely to discover it with
 only 5-10 users - in the 65%+ range of 'likely'.

 For 5 users that is only 41% (1-(1-0.1)^5), and for 10 it is 65%. This is
 far off from Nielsen's figure that 5 users will find 84% of the issues
 (1-(1-0.31)^5).

 If I were manufacturing cars and there was a 45% chance that 10% of them
 left the production line with a fault, there is a high chance that
 consumers would stop buying my product, the company would go bust, and I
 would be out of a job. From my experience of production lines, a sample size
 of 10 for a run of one million units would be considered extremely low.

 We have moved a long way since 1993, when Nielsen and Landauer's paper was
 published. The web 

Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Will Sansbury
Thomas Petersen said:
 If we are talking about wireframes or any other replacements for the real
 thing, whatever you find will have very little, if anything, to do with
 what you find in the end.

Hi, Thomas,

Are we talking about design issues or defects? Apologies if I totally
misread you, but it sounds like you're talking about defects.

I've run into that misconception a few times lately--that usability testing
is an extension of quality assurance, intended to surface bugs or defects in
the product. In reality, usability testing is best suited to sussing out
problems at the strategic level of the design: are mental models
appropriate and intuitive enough that people can easily complete the
principal tasks associated with the product? And testing wireframes or
prototypes is a fantastic way to flush out mental-model problems at an early
enough stage that course correction is financially feasible (not so if the
first usability test occurs when the product is thought to be complete).

To find the tactical level issues of implementation--the kind that you'd
find after launch--you need a robust QA process. Usability testing is a poor
substitute for quality assurance.

Will Sansbury



[IxDA Discuss] Leaving Las Vegas...I mean the website site.

2009-10-02 Thread Brian
So I have seen a bunch of redesigns come up recently, like
http://www.uxbooth.com/ (a site I love), and I wanted to
bring this up as something simple that just struck me as
rather strange. Maybe I am missing the boat or just not seeing
something, but I have been working in web design for a little over 10
years now, and I have learned that when you have an internal link,
you have that link open within the same window. On the other
hand, I was taught that any link that references
an external website should open in a new
window, or tab if the user has that set up. This allows the user
to not actually leave your site and allows your site to remain
open in the background. It is a simple target="_blank" attribute on
the anchor tag that accomplishes this.

In my past experience I would hear the term "stickiness" from marketing
people, and this would be something that applies here.
Now, I don't mean to call sites out, but I really see this happening
quite a bit on high-profile sites now, where they are not paying
attention to this rule anymore, including the site we are on
right now.

Like I said earlier, did I miss something? 



Brian Durkin
#company {nickelodeon:parentsconnect.com;}
#company .job {developer:web-monkey;}






Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread James Page
Totally agree with your article

 So you can get a much narrower range for your estimate, but 30+ users is a
 significant undertaking for a usability test.

One of our own findings from a study was that people got bored when testing
more than about 8 users.

James

2009/10/2 Steve Baty steveb...@gmail.com

 James,

 More good points. I did some calculations a while back on the confidence
 intervals for pass/fail user tests -
 http://www.meld.com.au/2006/05/when-100-isnt-really-100-updated - the more
 interesting part being the link to a paper on estimators of expected values.
 Worth a read if you haven't seen it.

 I'll try to dig up the more recent paper - working from memory on that one.

 Regarding the anthropology & sociology references - I was referring more to
 the notion of uncovering societal norms rather than the specific 'supporting
 a sample size of x'.

 Coming back to your first point: yeah, the use of the .31 is a
 simplification for the sake of one of his free articles; it's a modal figure
 based on (his words) a large number of projects. So, looking at a range of
 figures, you would have some projects where more users were needed (to your
 earlier point), and in some cases - a few - you could get away with fewer
 (although I admit that the use of fewer than 5 participants causes me some
 concern).

 Anyway, enjoying the discussion, and I still think we're violently in
 agreement on the basic point :)


 Cheers
 Steve

 2009/10/2 James Page jamesp...@gmail.com

 Steve,

 Woolrych and Cockton argue that the discrepancy is Nielsen's constant of
 .31. Nielsen assumes all issues have the same visibility. We have not even
 added the extra dimension of evaluator effect :-)

 Do you have a reference for the more recent paper? I would be interested
 in reading it.

 On the manufacturing side most of the metrics use a margin of error. With
 just 10 users your margin of error will be about +/-35% (very rough
 calculation). That is far better than no test, but still would be considered
 extremely low in a manufacturing process.

 In Anthropology most of the papers I have read use far greater sample sizes
 than just a population of 10. Yes, it depends on the subject matter. The
 Anthropologist will use techniques like working with informants, which
 increases the number of participants. And the Anthropologist is studying the
 population over months if not years, so there are far more observations.

 @Thomas: testing the wireframe will only show up what is already visible in
 it. But if a feature has an issue, and it is implemented in the wireframe,
 then a test will show it up. Discovering an issue early is surely better
 than later. I think your statement reinforces the idea that testing
 frequently is a good idea.

 All the best

 James
 blog.feralabs.com


 2009/10/2 Steve Baty steveb...@gmail.com

 James,

 Excellent points.

 Nielsen argues that 5 users will discover 84% of the issues, not that the
 likelihood of finding a particular issue is 84% - thus the discrepancy in
 our figures (41% & 65% respectively).

 (And I can't believe I'm defending Nielsen's figures, but this is one of
 his better studies) The results from '93 were re-evaluated more recently for
 Web-based systems with similar results. There's also some good theory on
 this from sociology and cultural anthropology - but I think we're moving far
 afield from the original question.

 Regarding the manufacturing reference - which I introduced, granted -
 units tend to be tested in batches for the reason you mention. The presence
 of defects in a batch signals a problem and further testing is carried out.

 I also like the approach Amazon (and others) take in response to your
 last point, which is to release new features to small (for them) numbers of
 users - 1,000, then 5,000 etc - so that these low-incidence problems can
 surface. When the potential impact is high, this is a really solid approach
 to take.

 Regards

 Steve

 2009/10/2 James Page jamesp...@gmail.com

 Steve,

 The real issue is that the example I have given is oversimplistic. It is
 dependent on sterile lab conditions, and on the user population being the
 same in the lab and in the real world. And on there only being one issue,
 which affects 10% of the user population. One of the great beauties of the
 world is the complexity and diversity of people. In the sterile lab people
 are tested on the same machine (we have found machine configuration such as
 screen size has a bearing on behaviour), and they don't have the
 distractions that normally affect the user in the real world.

 Actually, that's not true. You'd be fairly likely to discover it with
 only 5-10 users - in the 65%+ range of 'likely'.

 For 5 users that is only 41% (1-(1-0.1)^5), and for 10 it is 65%. This
 is far off from Nielsen's figure that 5 users will find 84% of the issues
 (1-(1-0.31)^5).

 If I were manufacturing cars and there was a 45% chance that 10% of them
 left the production line with a fault, there is a high 

Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Dana Chisnell


On Oct 2, 2009, at 9:34 AM, James Page wrote:


Totally agree with [Steve's] article

 So you can get a much narrower range for your estimate, but 30+ users
 is a significant undertaking for a usability test.

One of our own findings from a study was that people got bored when testing
more than about 8 users.

James

I've found this with teams, too. Jared Spool calls this reaching the
"point of least astonishment," and I think he's right. After you start
seeing similar problems repeat a few times, it's enough to know you
have a problem to solve, you've learned a ton about users, and it's
time to go make some inferences about what the issues are and iterate the
design. For most formative usability tests -- that is, usability tests
early in the design cycle where the team is still testing out ideas --
having more than 5-10 participants is just punishing for the team.


Instead, learn about users, see what they do with your design, and  
move on to learn more on another round.


Dana

:: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: :: ::
Dana Chisnell
415.519.1148

dana AT usabilityworks DOT net

www.usabilityworks.net
http://usabilitytestinghowto.blogspot.com/








[IxDA Discuss] REMINDER: AIGA | PhillyCHI Joint Meeting - Nathan Shedroff: Sustainable Design Strategy, Wednesday, October 7, 2009

2009-10-02 Thread Dave Cooksey
AIGA | PhillyCHI Joint Meeting - Nathan Shedroff: Sustainable Design  
Strategy, Wednesday, October 7, 2009


Please join AIGA Philadelphia and PhillyCHI in welcoming Nathan  
Shedroff to Philadelphia. Nathan will help kick off DesignPhiladelphia  
by presenting a discussion of design, business, and sustainability,  
and the connections between them, in a clear and easy way that will  
inspire you to integrate these perspectives, techniques, and tools  
into your practice. This event is free and open to the public.


Date: Wednesday, October 7, 2009
Time: 6:00PM - 8:00PM (Social time from 6:00-6:30PM)
Location: Tyler School of Art, Room B004
Temple University
2001 N. 13th Street
Philadelphia, PA 19122
Public transportation: Tyler is conveniently located a few blocks from  
both the Temple University SEPTA station and the Susquehanna-Dauphin  
subway stop

Parking & Parking Map: http://www.temple.edu/parking/main.html and 
http://www.temple.edu/parking/pdfdocuments/2007%20Main%20Campus%20Map.pdf
RSVP: philly...@gmail.com


About the Presentation
Sustainable Design Strategy
The worlds of design (and design thinking), business (and strategy),
and sustainability are drawing ever more tightly together. Approaching
any one of these without the others is becoming less possible, and
integrating them more important. Nathan Shedroff will make these domains,
and the connections between them, clear and easy to follow, and will
inspire you to integrate these perspectives, techniques, and tools into
your practice.



About the Presenter
Nathan Shedroff is one of the pioneers in Experience Design, an  
approach to design that encompasses multiple senses and requirements  
and explores common characteristics in all media that make experiences  
successful, as well as related fields, Interaction Design and  
Information Design. He speaks and teaches internationally and has  
written extensively on design and business issues, including,  
“Experience Design 1” and “Making Meaning.” Nathan is a serial  
entrepreneur, works in several media, and consults strategically for  
companies to build better, more meaningful experiences for their  
customers. His latest book, “Design is the Problem” from Rosenfeld  
Media, is an exhaustive primer on sustainable design.


Lately, Nathan has spent a lot of time building strategies for start-ups
and established firms using a variety of online technologies to
deliver new solutions that enhance and evolve their online brand,
develop new offerings, and enhance their customer relationships. He
also consults on corporate strategies for sustainability, having earned
his MBA in Sustainable Management.


You can learn more about Nathan at: http://www.nathan.com/.


About Our Sponsors
Graphic and Interactive Design Program at Tyler School of Art
GAID at Tyler School of Art is known for the quality and rigor of its  
BFA and MFA programs. GAID’s mission is to educate designers who  
aspire to practice design for communication at the most creative  
levels of the profession. Tyler’s graduates are working in all areas  
of the design industry.


The GAID facilities in Tyler’s new home on Temple’s main campus are
spacious and state-of-the-art. Tyler students have ready access to a
rich array of academic courses. The proximity to students from  
Temple’s many programs in academics and the arts offers the breadth of  
experience that is essential to the designer’s education. To learn  
more about GAID, visit: http://www.temple.edu/tyler/gaid.html.



BresslerGroup
Bresslergroup develops innovative product solutions via integrated
user research, interaction and industrial design, and production
engineering. Since its founding in 1970, Bresslergroup has won more
than 80 major design awards including IDEA, MDEA, iF Hanover and  
Appliance Design. Bresslergroup has also authored more than 120  
patents. For more information, please visit: http://bresslergroup.com/.



About AIGA Philadelphia
AIGA, the professional association for design, is the premier place  
for design—to discover it, discuss it, understand it, appreciate it,  
be inspired by it.


AIGA’s mission is to advance designing as a professional craft,  
strategic tool and vital cultural force. AIGA stimulates thinking  
about design through journals, conferences, competitions and  
exhibitions; demonstrates the value of design to business, the public  
and government officials; and empowers the success of designers at  
each stage of their careers by providing invaluable educational and  
social resources.


Founded in 1914, AIGA remains the oldest and largest professional  
membership organization for design. AIGA now represents more than  
22,000 design professionals, educators and students through national  
activities and local programs developed by 64 chapters and 240 student  
groups. AIGA is a nonprofit, 501(c)(3) educational institution.



About PhillyCHI
PhillyCHI is the Philadelphia region’s chapter of the ACM SIGCHI, an  

[IxDA Discuss] REMINDER: PhillyCHI Social - Tech & Design Quizzo!, Thursday, October 8, 2009

2009-10-02 Thread Dave Cooksey

PhillyCHI Social - Tech & Design Quizzo!, Thursday, October 8, 2009

In celebration of DesignPhiladelphia, PhillyCHI is pleased to host a  
technology and design-themed Quizzo night. Trivia questions will be  
based on technology, design, architecture, and other related topics.  
Prizes will be awarded to the first and second place teams. So come  
socialize with design and user experience professionals, try your hand  
at trivia, and (hopefully) learn a thing or two, all over tasty food  
and drinks.


Date: Thursday, October 8, 2009
Time: 8:00PM - 10:00PM (Play starts at 8PM, so come early to get a  
table)

Location: National Mechanics
Address: 22 S. 3rd Street Philadelphia, PA 19106
Web: http://www.nationalmechanics.com/
Map  Directions: http://nationalmechanics.com/philadelphia-hours-directions
RSVP: Not necessary but nice: philly...@gmail.com


About DesignPhiladelphia
Moving into its fifth year, DesignPhiladelphia is the largest national  
celebration of its kind. This city-wide cultural event spotlights all  
things design from architecture to interior design, fashion to product  
design, textile to graphic design. It's a journey exploring  
exhibitions, workshops, studio tours, lectures, special events and  
product roll-outs that inspire, engage, excite and delight.


Check the website regularly as the calendar continues to evolve, calls  
for entry are posted, and news items are shared: http://designphiladelphia.org/ 
.



About PhillyCHI
PhillyCHI is the Philadelphia region’s chapter of the ACM SIGCHI, an  
interdisciplinary academic and professional group interested in Human- 
Computer Interaction, User Experience, Usability, and other related  
disciplines. PhillyCHI holds monthly meetings and socials to network  
and discuss current topics in HCI. Learn more at http://phillychi.acm.org 
.





..

Dave Cooksey
Founder  Principal
saturdave
information architecture, taxonomy, user research, usability

713 Pine Street 1R
Philadelphia, PA 19106

email: d...@saturdave.com
phone: +1.215.219.8960
web: http://saturdave.com




Re: [IxDA Discuss] Leaving Las Vegas...I mean the website site.

2009-10-02 Thread Robert Hoekman Jr

 [...] and I have learned that when you have an internal link,
 you have that link open within the same window. On the other
 hand, I was taught that any link that references
 an external website should open in a new
 window, or tab if the user has that set up.


This practice has been debated for years. There are pros and cons to either
solution, and like every rule, there are exceptions even if the practice
works well for you. The context, as usual, matters most. The best bet is to
come up with a standard for your site and then keep your eye out for
exceptions.

The usability of window management is always a concern, and until there's a
target="_tab" attribute built into HTML and into every browser, and likely
even beyond that point, this will continue to be a source of contention.



Re: [IxDA Discuss] Leaving Las Vegas...I mean the website site.

2009-10-02 Thread Bryan Minihan
Just wishful thinking...wouldn't it be nice to say target="_iphone" or
target="_pda" or target="_maps"?

Bryan Minihan


-Original Message-
From: discuss-boun...@lists.interactiondesigners.com
[mailto:discuss-boun...@lists.interactiondesigners.com] On Behalf Of Robert
Hoekman Jr
Sent: Friday, October 02, 2009 10:55 AM
To: Brian
Cc: disc...@ixda.org
Subject: Re: [IxDA Discuss] Leaving Las Vegas...I mean the website site.

The usability of window management is always a concern, and until there's a
target="_tab" attribute built into HTML and into every browser, and likely
even beyond that point, this will continue to be a source of contention.




Re: [IxDA Discuss] Leaving Las Vegas...I mean the website site.

2009-10-02 Thread John Yuda
Personally, I tend to get really irritated when a site tries to force
behavior on me like that. If I want to keep their page open and
follow the link in a new window or tab, I'll do that. Otherwise,
stop cluttering up my desktop.

I generally try to avoid imposing my personal preferences on
users, but this is one case where I simply cannot. That said, since
it is always possible for a user to open a page in a new tab or
window, but not possible to suppress this behavior if they don't want
it, I think avoiding target="_blank" is the better way to go anyway.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46314





[IxDA Discuss] Defining a UX vision

2009-10-02 Thread Brian Mila
Does anyone have any samples of a good UX vision statement?  Everyone
talks about how important it is to have a good UX vision, but I
can't find anything that defines exactly what a good UX vision
is.  I've been trying to develop one from scratch and I'm not sure
how it should be structured or what it should contain.  Should it
contain specifics like "we need a dedicated UX group in our company
that all our design teams can use"?  I don't want it to be
product-centered, because we build a whole range of applications.
And I want something more tangible than just "We want to deliver
high-quality, responsive applications that are developed with a
user-centered focus."

Brian



[IxDA Discuss] [Event] Agile East 2009 Conference

2009-10-02 Thread Alla Zollers
Interested in Agile? Want to hear Martin Fowler speak?

Join me at the end of October for Agile East 2009, where a variety of
speakers will fill you with insights on agile. 

What: Agile East 2009 Conference
When:
October 29 - Philadelphia
October 30 - New York 

Website: http://connect.thoughtworks.com/agileeast/



Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Adam Korman
I just re-read this and want to clarify what I meant by "usability
testing isn't a good way to measure (or improve) product quality." I
meant this in the sense that it's an inefficient way to find defects
in the execution, but a good way to find defects in the decision
making ("it's broken" vs. "it doesn't make sense").


On Oct 2, 2009, at 10:02 AM, Adam Korman wrote:

There are a couple of points I wanted to follow up on in this  
discussion:


Will Sansbury talked about how usability testing is not meant as a
replacement for QA. I think this is a really important point --
usability testing isn't a good way to measure (or improve) product
quality, but it is a good way to find out if you built the wrong
thing. In this context, using terms like "sample size" and "margin
of error" is just not that meaningful.


My practical experience has been that usability testing just a few  
participants usually uncovers enough issues to keep the development  
team plenty busy. If you test with 5 people, 80% of them encounter a  
bunch of the same issues, and it takes the team several weeks to fix  
those issues, what good does it do to keep running the same test on  
another 25+ people to identify additional issues that only 10% will  
encounter that the team doesn't have the capacity to work on? As  
Steve Baty said, it's much more effective to test iteratively with  
small numbers than run big, infrequent studies.


On Oct 2, 2009, at 4:51 AM, Thomas Petersen wrote:


If we are talking about wireframes or any other replacements for the real
thing, whatever you find will have very little, if anything, to do with
what you find in the end.


I basically agree with this, except I would say that testing  
wireframes isn't really usability testing.


-Adam



[IxDA Discuss] Interactive web charting tools?

2009-10-02 Thread dnp607

Hi All,

Apologies if this question has been asked and answered before - I've
been searching for a canned toolkit or a complete package that I can
recommend to our development team for building an interactive web graph.
The interaction would be specific to a zoom feature, allowing a user
to increase or decrease the span of time and thus the data within the
graph. Data for the graph is provided through a home-rolled database
(it's not static).


We've looked at JFreeChart and some of Google's public GWT tools,
but from what we've seen they aren't sufficient for this type of
interaction.


Must this be coded from scratch, in Flash, Java, or similar - or does
something exist to speed our efforts?


Many thanks,
-Dan


Dan Peknik
NASA Ames Research Center
Moffett Field, California




[IxDA Discuss] [EVENT] Agile Experience Design Meetup NYC: Improving Drupal Usability with Buzzr

2009-10-02 Thread Anders Ramsay
Hi Everyone,

We've got a great event lined up for our next Agile Experience Design
Meetup on Wed. Oct. 14.  Ed Sussman, CEO of Buzzr, and Karen McGrane,
co-founder and partner at Bond Art & Science, will talk about their
collaboration on the Buzzr project and improving Drupal usability.

Event Details at http://is.gd/3SIIw

Space is limited, so if you'd like to attend, pls RSVP soon.

Looking forward to seeing you at the next Meetup!

-Anders
Agile Experience Design Meetup Organizer



[IxDA Discuss] Invitations to our new survey product

2009-10-02 Thread Alok Jain
Hi All,

I am happy to introduce our new offering - Insightify. It's a new
generation of survey product, and we are bringing it out as an
invitation-only beta right now. You can read about us at
http://insightify.com/home.html

Here are some invites for this group
http://insightify.com/signup/new/ixdastart

We will be releasing more in the coming weeks, so if you miss the above, please just request one at: http://insightify.com/signup.html


For any questions/comments, please write to me at supp...@insightify.com; we read every comment and will get back to you quickly.


Thanks
Alok Jain
---
http://twitter.com/insightify





Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Thomas Petersen
I have made this point before.

In general, I really don't see the use of testing during the design process.

I see great benefit in testing before the actual design process starts, in order to figure out what kinds of problems, issues and tasks users have. But testing usability in an environment that is not final is, in my opinion, a waste of both time and money. Only if we are dealing with entirely new paradigms do I see any reason to test.

Most people who call themselves information architects, UXers or designers should be able to deliver their part without needing to involve users once the problems, tasks and purpose have been established.

It is my claim that you can't really test usability before you launch the final product, and that you should factor this in instead. I find the current state of UCD troubling, to say the least.

To me, Jakob Nielsen is someone to read to get an understanding of users in general. But I need only look at his website, and then at other sites and applications, to understand that his work, great as it is, is only a fraction of the whole story.




. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46278





Re: [IxDA Discuss] Interactive web charting tools?

2009-10-02 Thread dnp607


Tia, Jason,

Thank you! These look like decent solutions. There's actually an example close to what we want in the Flot gallery. FusionCharts doesn't have a similar example, but it seems robust, so I'll keep looking.
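Since Flot came up as a candidate, here is a minimal sketch of the zoom-to-selection logic such a time-span graph typically needs. Flot's `jquery.flot.selection` plugin and its `plotselected` event exist; the helper function name, the minimum-span threshold and the element IDs below are illustrative assumptions, not part of Flot's API.

```javascript
// Compute the new x-axis window from a drag selection, widening
// very narrow selections to a minimum span so the user cannot
// zoom into an empty sliver of the timeline.
function zoomRange(selection, minSpan) {
  var from = Math.min(selection.from, selection.to);
  var to = Math.max(selection.from, selection.to);
  if (to - from < minSpan) {          // selection too narrow: center and widen it
    var mid = (from + to) / 2;
    from = mid - minSpan / 2;
    to = mid + minSpan / 2;
  }
  return { min: from, max: to };
}

// Wired to Flot it would look roughly like this (hypothetical IDs and data):
// $("#graph").bind("plotselected", function (event, ranges) {
//   var r = zoomRange(ranges.xaxis, 60 * 1000); // e.g. one-minute minimum span
//   $.plot("#graph", data, { xaxis: { min: r.min, max: r.max } });
// });
```

Re-plotting with new `min`/`max` (rather than slicing the data client-side) also suits a database-backed graph: the same handler can re-query the server for just the selected interval.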


All the best,
-Dan

Dan Peknik
NASA Ames Research Center
Moffett Field, California


On Oct 2, 2009, at 10:22 AM, dnp607 wrote:


Hi All,

Apologies if this question has been asked and answered before. I've been searching for a canned toolkit or a complete package that I can recommend to our development team for building an interactive web graph. The interaction would be specific to a zoom feature, allowing a user to increase or decrease the span of time, and thus the data within the graph. Data for the graph is provided through a home-rolled database (it's not static).


We've looked at JFreeChart and some of Google's public GWT tools, but from what we've seen they aren't sufficient for this type of interaction.


Must this be coded from scratch, in Flash, Java or similar, or does something already exist to speed our efforts?


Many thanks,
-Dan


Dan Peknik
NASA Ames Research Center
Moffett Field, California








Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Katie Albers

My comments are interleaved...

Katie Albers
ka...@firstthought.com



On Oct 2, 2009, at 12:42 PM, Thomas Petersen wrote:


I have made this point before.

In general, I really don't see the use of testing during the design process.


Well, that's unfortunate.


I see great benefit in testing before the actual design process starts, in order to figure out what kinds of problems, issues and tasks users have. But testing usability in an environment that is not final is, in my opinion, a waste of both time and money. Only if we are dealing with entirely new paradigms do I see any reason to test.


I'm not entirely sure what you mean by paradigms in this context.  
Perhaps you mean a function we've never seen before? In any case, you  
will generally find that very few users want problems or issues. They  
want functions. They want to be able to find those functions, and  
perform them with minimal exertion. And that's why we test.



Most people who call themselves information architects, UXers or designers should be able to deliver their part without needing to involve users once the problems, tasks and purpose have been established.


Of course they can, as long as they have the users' input. What appears to be a completely reasonable process, an obvious button or a clear name to someone working on the creation of an interface is likely to turn out to be obscure, hard to follow or incomprehensible when you put it in front of actual users. I suspect that everyone who tests throughout the process has had the experience of a test in which the perfect element turns out to be something that *none* of the users gets.



It is my claim that you can't really test usability before you launch the final product, and that you should factor this in instead. I find the current state of UCD troubling, to say the least.


Can you test the usability of the product? No; you don't have a finished product. But you can test all the elements that are going into the product. If no one notices the critical button on the second step, even though your visual designer went to great lengths to position and color it precisely to make it obvious, it's better to know that before you've built an entire product that relies on users pressing that button.



To me, Jakob Nielsen is someone to read to get an understanding of users in general. But I need only look at his website, and then at other sites and applications, to understand that his work, great as it is, is only a fraction of the whole story.


Jakob's site is built to highlight Jakob's group's expertise. It does  
so admirably. To generalize from that very particular example to what  
Jakob thinks all sites should be like is foolish in the extreme.


As for the rest of your statement here: of course it's only a fraction of the story. But it is a piece of the story. Testing as you go is a central tenet of all aspects of development. Software developers test pieces of their code to make sure they do the right thing. Design engineers test screens to make sure that everything shows up properly and in the correct space. UXers test the aspects and versions of the product to make sure they are producing the desired results.


In each of these cases the goal is the same: it's a lot cheaper to find something wrong in a piece, early in the process, and correct it then, than it is to go back and redevelop the whole product to set right things you should have corrected months ago. It's like building a house on an improperly laid foundation: it's cheaper to fix the foundation alone than it is to fix the whole house.






Re: [IxDA Discuss] Defining a UX vision

2009-10-02 Thread Thomas Petersen
Then you need to define principles for what constitutes good product
design from a UX point of view.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46323





Re: [IxDA Discuss] Defining a UX vision

2009-10-02 Thread Nathaniel Flick
Not much out there, it's true, probably because Interaction Design as a discipline is relatively new. However, all is not lost. Two ideas:

1. This site has a good explanation and a basis for a vision:
http://www.ixda.org/en/about_ixdg/what_is_interaction_design.shtml

2. Go to http://www.cooper.com and digest all the content there. 

Where I work, we've created a vision statement, but it's particular to our situation, which is medical software. I think that's why there aren't many sources for what you're looking for: like a business plan, it's unique to your design group and unique to your company.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46323





Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Thomas Petersen
Well, that's unfortunate. 

Not really.

I'm not entirely sure what you mean by paradigms in this context.
Perhaps you mean a function we've never seen before? In any case,
you will generally find that very few users want problems or issues.
They want functions. They want to be able to find those functions,
and perform them with minimal exertion. And that's why we test.

Who talks about wanting problems? They HAVE problems/issues and you
need to understand what those are.

Of course, they can, as long as they have the users' input. What
appears to be a completely reasonable process, or an obvious button,
or a clear name to someone working on the creation of an interface is
likely to turn out to be obscure, hard to follow or incomprehensible
when you put it in front of actual users. I suspect that everyone who
tests throughout the process has had the experience of a test in which
the perfect element turns out to be something that none of the
users gets. 

That might just as well be a problem of testing an unfinished product. Nonetheless, I have personally found much better value in testing the actual product/service rather than a pseudo-scenario.

It seems that many UCD proponents completely ignore how big an impact the real environment has on the experience of usability, and are more interested in the process leading up to the design and development.

A button might not make sense when you experience it on a screen in isolation, but if it's experienced in the actual context, things often change quite drastically. A rollover, other choreography or a well-designed layout can make all the difference.

But you can test all the elements that are going into the product.
If no one notices the critical button on the second step even though
your visual designer went to great lengths to position it and color
it and so forth, precisely to make it obvious, it's better to know
that before you've built an entire product that relies on users
pressing that button.

You are assuming that when the visual designer goes to great lengths, they don't understand anything about usability in general; otherwise the above example is absurd.

Why should the user know better where the button should be
positioned? 

Obviously, if you really were in a situation where a button you went to great lengths to position and highlight still doesn't do the trick, you are dealing with a completely different problem, one that has nothing to do with asking the users but rather with running A/B tests to figure out where you have the most success.

Jakob's site is built to highlight Jakob's group's expertise. It
does so admirably. To generalize from that very particular example to
what Jakob thinks all sites should be like is foolish in the
extreme.

When did I say that Jakob Nielsen said anything about how all sites should look? Please respond to what I actually write instead of inventing claims I never made.

In each of these cases the goal is the same: it's a lot cheaper to find something wrong in a piece, early in the process, and correct it then, than it is to go back and redevelop the whole product to set right things you should have corrected months ago.

All that would make sense if testing rid us of bad products/services. Yet what often happens is that the process becomes such a piece of committee work that it loses clarity and focus. UCD is by no means insurance against bad feature decisions; it's not even insurance against bad usability.

It's like building a house on an improperly laid foundation. It's
cheaper to fix the foundation alone than it is to fix the whole
house.

It's nothing at all like building a house, since building a house doesn't mean having the users of the house test the foundation. They wouldn't know the difference most of the time. That is why you have experts with experience who know what they are doing.


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46278





[IxDA Discuss] [EVENT] National Design Week in NYC is October 18-24, Objectified Film Screenings & The Biz of Design w/ Bill Moggridge (FREE), who's in?

2009-10-02 Thread Fritz Desir
Just received this in an email newsletter and it looks quite interesting. I'd like to rally up some IxDA'ers to go, not necessarily as an official meeting, but to represent and possibly chat afterwards.


THE BUSINESS OF DESIGN
October 20, 2009, 8:30 – 10:00am

FREE EVENT
 
Business leaders discuss how design impacts their overall strategy
and affects their bottom line. Moderated by Daniel H. Pink, the
panelists will discuss the role of design thinking in the workplace
as it relates to consumer goods as well as overall company vision.

At The Times Center (242 W 41st Street, NY, NY). Continental breakfast buffet from 8:00 – 8:30am.

Moderator: Daniel H. Pink, author of A Whole New Mind: Why
Right-Brainers Will Rule the Future

Panelists include:
Bill Moggridge, Lifetime Achievement Award Winner, co-founder of
IDEO
Sam Lucente, Vice President of Design, Hewlett-Packard Company
Jeanne Liedtka, Professor, University of Virginia's Darden Graduate
School of Business


OBJECTIFIED: Film Screenings
October 21, 2009, 1:30 – 3:30pm


Anybody in NYC interested? Let me know, looking forward to hearing
from you guys. Thx.  - Fritz

More info & registration here: http://j.mp/Po84W




Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Todd Zaki Warfel


On Oct 2, 2009, at 12:42 PM, Thomas Petersen wrote:

In general, I really don't see the use of testing during the design process.


Whoa! Red flag alert!

Usability testing helps evaluate a design concept that tries to address a design problem. That testing can be a baseline test, something you do on a production system, or it can be used as a validation mechanism for a newly proposed design/prototype.


To think that usability testing is only useful for finding problems or holes in a current production system, but not in your proposed design solution, is short-sighted. Any given problem has multiple design solutions. How do you know you've selected the right one?


I see great benefit in testing before the actual design process starts, in order to figure out what kinds of problems, issues and tasks users have. But testing usability in an environment that is not final is, in my opinion, a waste of both time and money. Only if we are dealing with entirely new paradigms do I see any reason to test.


It's not April fools and this isn't the Onion, but... ;)

It can be an exploration technique (this is one of the ways we use it) to find out what users/consumers want, but that's really more exploratory research than usability testing. Usability is more about identifying whether or not the product/service meets the needs of the user/consumer, enables or impedes them, and gives them a satisfying experience. Those measures apply to any system, production or prototype.


Most people who call themselves information architects, UXers or designers should be able to deliver their part without needing to involve users once the problems, tasks and purpose have been established.


This is a big mistake. It's how we got into the problem in the first place: someone designed the system without inviting users to kick it around for a test drive. And how do you know it wasn't a designer who did it in the first place?


We do usability testing as part of our design process and as a  
separate service offering to our clients. I can say that in both  
cases, when we've designed something or our clients have designed  
something, we find opportunities for improvement through testing.


Thinking that because you're a designer you know the right design, you  
have the right decision, and it doesn't need validation is arrogant,  
short-sighted, and ignorant. The best designers and the best systems  
use a validation and feedback loop. Usability testing is one of those  
feedback loops that's really important.


It is my claim that you can't really test usability before you launch the final product, and that you should factor this in instead. I find the current state of UCD troubling, to say the least.


The current state of UCD is troubling, I'll agree with that, but it's  
because so many people in charge of designing systems are leaving out  
validation. The attitude that it's only good for finding problems on  
existing production systems and not validating your proposed solution  
is only going to make that worse. I'm a bit shocked, frankly, that you  
don't see the flaw in the claim that you can't really test usability  
before you launch the final product.


Perhaps your definition of usability testing needs to be tested?


Cheers!

Todd Zaki Warfel
Principal Design Researcher
Messagefirst | Designing Information. Beautifully.
--
Contact Info
Voice:  (215) 825-7423
Email:  t...@messagefirst.com
AIM:twar...@mac.com
Blog:   http://toddwarfel.com
Twitter:zakiwarfel
--
In theory, theory and practice are the same.
In practice, they are not.







[IxDA Discuss] CFP: CHI2010 Workshop on Researcher-Practitioner Interaction

2009-10-02 Thread Elizabeth Buie
The CHI2010 Workshop on Researcher-Practitioner Interaction now has a date!  

This one-day workshop will be held on Sunday, April 11, 2010, at the beginning 
of the CHI 2010 conference.  It will bring together researchers and 
practitioners of HCI to explore whether and to what extent difficulties exist 
between them -- and, if so, will endeavor to identify the dimensions of the 
problems and propose possible solutions. On the one hand, we will work to 
articulate factors that may render the research literature inaccessible or 
irrelevant to practitioners and to suggest potential improvements and 
approaches. On the other hand, we will also strive to learn from researchers 
how their research could benefit from practitioner input. 

We invite practitioners and researchers to submit a position statement and a 
short bio. Send contributions by email to me, by 5pm EDT on 30 October 2009.

Please forward this message to anyone you think may be interested.  Please 
refer them to the workshop web page (see below) and do NOT post my email 
address on the web unless you disguise it the way the workshop page does.  
Thanks!

http://bit.ly/CFP-CHI2010-RPI

See you at CHI 2010!

Elizabeth
-- 
Elizabeth Buie
Luminanze Consulting, LLC
tel: +1.301.943.4168
www.luminanze.com
@ebuie



Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread Jared Spool


On Oct 2, 2009, at 12:42 PM, Thomas Petersen wrote:


I have made this point before.

In general, I really don't see the use of testing during the design process.


Yah.

It didn't make any sense then. Still doesn't.





Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread mark schraad

I am dumbfounded... wow.


On Oct 2, 2009, at 12:42 PM, Thomas Petersen wrote:


In general, I really don't see the use of testing during the design process.





Re: [IxDA Discuss] Article on Number of Usability Test Participants

2009-10-02 Thread David Drucker
Talking to users, testing prototypes (paper, screen, etc.) and  
analyzing their feedback teaches a designer what they don't know about  
the problem at hand. To ignore these is to proceed at your own peril.


The wise man knows he doesn't know.
 - Lao Tzu





On Oct 2, 2009, at 12:42 PM, Thomas Petersen wrote:


In general, I really don't see the use of testing during the design process.





