A great discussion appears to be happening under the original topic. I wanted 
to further focus it into something I have been thinking a lot about recently. 
So I have hijacked it and created this new thread. My apologies to Oleh.

I find all of the new applications for quick, real-time testing of alternate 
design solutions to be a great addition to our toolbox. But they are just 
another tool, not a replacement. I see them as a complement to (not a 
substitute for) traditional user research and testing.

If you were to use just these tools to iterate and improve your design, I am 
sure you would end up with the best design possible given the point you 
started from. However, if you had started with a very different design, your 
final iteration might perform even better. So, if the only tool you used was a 
micro-design evolution approach, you may end up not with the best design 
possible, but with the best design given your starting point. (In 
mathematical terms, a local maximum: 
http://en.wikipedia.org/wiki/Local_maximum).
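To make the local-maximum point concrete, here is a minimal sketch (my own illustrative code, not anything from the tools being discussed) of greedy hill climbing on a made-up "design score" surface with two peaks. The function names and the score surface are hypothetical; the point is only that greedy iteration converges to whichever peak is nearest the starting design:

```python
import math
import random

def score(x):
    """A hypothetical design-quality surface with two peaks:
    a shorter local maximum near x = -1 and a taller global
    maximum near x = 2."""
    return math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: try a small random tweak and keep it
    only if it scores higher -- analogous to micro-iterating on
    one design."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

# Starting near the shorter peak, greedy iteration converges
# toward x ~ -1 and never discovers the taller peak near x ~ 2.
stuck = hill_climb(score, -1.5)

# Starting from a very different design finds the taller peak.
best = hill_climb(score, 2.5)
```

Because each step only accepts improvements, the first run can never cross the valley between the peaks, no matter how many iterations you give it. That is the argument for exploring very different starting designs before you start micro-iterating.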

So how do you get the very best design possible?
If you had all the time and money you wanted, you would build the best designs 
you could dream of, covering as much of the design space as possible. You 
would send small but significant traffic to each and keep tweaking each design 
until it was the best it could be. Eventually designs would begin to drop out 
because other designs were performing better. Finally, when you are left with 
a single design performing better than the rest, you know you have the best 
design for that design space.

Of course, we have the real-world issue of making bad first impressions on 
users (not to mention the unlimited-money part), so we cannot follow a model 
like this.

Instead, what we can do is develop prototypes of the best design possibilities 
we can dream up. We run small samples of users through these prototypes and 
see which performs the best, which appears to have the best potential upside, 
and which offers the most design flexibility. It may not be the design all the 
users said they liked best. (What a user says and what they actually think 
are very different things.)

My opinion (in summary)
These analytical tools that are coming out to do micro-evaluations of design 
options are great. They will help you climb the hill to get your design to the 
best it can be! But they will not help you pick the tallest hill to climb. It 
is too expensive to try to climb them all, so you need a low-cost method to 
explore and evaluate the potential of each... a method like low-fidelity 
prototypes with usability-test evaluations.

We worked with one of our clients, Orbitz, on a presentation on this subject 
at Gartner's Web Innovation Summit: 
http://agendabuilder.gartner.com/WEB1/WebPages/SessionDetail.aspx?EventSessionId=833.
 Its focus was more on the business side of the issue and less on the design 
and design-process side of things.

Nick Iozzo
Principal User Experience Architect

tandemseven

847.452.7442 mobile

[EMAIL PROTECTED]
http://www.tandemseven.com/
________________________________________________________________
*Come to IxDA Interaction08 | Savannah*
February 8-10, 2008 in Savannah, GA, USA
Register today: http://interaction08.ixda.org/
