Chris,
You hit the nail on the head with what I was trying to say. There are the
qualitative sciences and the quantitative sciences. With UCD we are mixing the
two, and I have no problem with the two forms being mixed.

The issue, as you put it, is people being "sloppy" in application.

Your argument was far clearer than the one I was making.

Yes, I was spurred on by the debate in the Marketing world about the poor
performance of certain research methods. For those who have not been following
the argument, it is that certain research methods restrict creative thought,
and a good idea can be thrown out because research shows it will not work.
Examples given for this argument include the "Dove Campaign for Real Beauty"
(which shows real women as beautiful, instead of models), the Walkman, and so
on. The proponents of the argument include some of the largest names in
Marketing, including Unilever and the vice chairman of Ogilvy.

They are not making an argument that research is bad, just that some forms
are harmful. For a good posting on the subject see:
http://lbtoronto.typepad.com/lbto/2007/05/pretesting.html

I was not trying to make a foundationalist argument, just to defend Popper a
tiny bit: he was not against the social sciences. What he was showing was that
truth cannot be proven, only that something can be shown not to be true. There
is only negative truth, and no way to prove something positively.

What all science is trying to do is build the evidence up, and it is never
complete. Both approaches can help build that evidence, but they must be used
in a safe manner, otherwise we will throw away the next Dyson vacuum cleaner
or the next Walkman.

Your journalism example shows that what gets reported is the findings without
the caveats. There are those two nasty errors in statistics, Type I and Type
II. What you read in the newspaper is that the turkey gets fed every day; you
don't read that there is, say, a 1 in 365 chance of that finding being wrong.
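
To make that Type I point concrete, here is a rough sketch in Python (my own
toy illustration, not taken from any of the studies mentioned). Even when the
effect being tested does not exist at all, a standard 5% significance test
will still "find" something about one time in twenty, and that caveat is
exactly what never makes it into the newspaper:

    import random
    import statistics

    random.seed(1)

    def false_positive_rate(trials=10000, n=30, cutoff=1.96):
        # Run many experiments where the true effect is zero, and count how
        # often a naive z-test still declares a "finding" (a Type I error).
        hits = 0
        for _ in range(trials):
            sample = [random.gauss(0, 1) for _ in range(n)]
            mean = statistics.mean(sample)
            se = statistics.stdev(sample) / n ** 0.5
            if abs(mean / se) > cutoff:   # roughly the 5% significance cut-off
                hits += 1
        return hits / trials

    print(false_positive_rate())   # comes out near 0.05: about 1 "finding" in 20 is noise

None of the figures in that sketch are real data; they are simulated purely to
show how an error rate behaves.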

From the foundationalist view personas are unsafe, and I think you are making
an argument that they are unsafe from the non-foundationalist view as well.

> I don't believe you are going there, from what you reference at the very
> end, but that makes me want to push on where you are going.


I am all for good research, and it can help inform the development of a
product. The challenge is that all the methods have limitations that need to
be known. If the theory side of UCD can be developed, it would strengthen the
industry and reduce the risk of bad research creating bad products.

With ethnography (and my only qualification here is that my father was an
anthropologist) I believe we need to look more closely at how we present the
rich data collected from the participants (not fictionalised), use more
participant observers (immersion with the participants), and use informants.
Keep studying the participants while the product is being developed; do not
think of it as a waterfall process but as an ongoing one. Ethnography is about
how people relate to each other and to objects. If we are developing a new
object, then people's interaction with that new object will change. How do you
change the interaction between the personas, and how do you know that you have
got it correct?

Is there a way for designers to immerse themselves with the participants while
they are building the product?

James


On Wed, Nov 19, 2008 at 10:37 PM, Christine Boese <[EMAIL PROTECTED]> wrote:

> James,
>
> I love where you're going here, and recourse to rigor (and Popper) seems
> like a neat antidote to a lot of research methods (I'm thinking more about
> the field of marketing, than anything) that get more than a bit sloppy in
> their applied versions.
>
> So I'm just nailing down some stuff at the end, cuz I wasn't sure exactly
> where you were going, so forgive me if I've misunderstood what you wrote.
> Might be good to tease it out a bit more, cuz that's where it gets
> interesting.
>
> Too often, reference to the "sciences" and properly rigorous research
> methods equates (in some people's minds) to overly foundationalist
> assumptions that require generalizable, quantitative data only. I don't
> believe you are going there, from what you reference at the very end, but
> that makes me want to push on where you are going.
>
> Descriptive, rich, qualitative methods are by definition NOT generalizable.
> That would be the whole point. One can inductively triangulate data, amass
> evidence that reinforces emerging categories of data, develop heuristics,
> and even conduct parallel studies and discover points of intersection
> between similar qualitative or ethnographic-type studies.
>
> So replicate to some extent, but generalize, never. True, people doing
> multi-modal studies are trying to work with qualitative and quantitative
> methods in tandem, so you may get some cross pollination there. Content
> analysis, linguistics, these are rich areas for combining methods, again, to
> triangulate, or to use emerging qualitative data to develop quantitative
> hypotheses.
>
> Quantitative heads tend toward more restrictive, or limited definitions of
> what is "real" research, which methods are most rigorous, yield the best
> data, and so on. They like to tout generalizability as some kind of Holy
> Grail that only they can claim, like it gives them some kind of
> foundationalist claim to capital T Truth. Blah.
>
> Not all of them think like this, however. You tend to get that kind of POV
> more often in journalists who write stories about "science," and bias their
> coverage toward methods they more readily grasp or can easily convert into
> sound bites. (you'd be amazed at how widespread this POV is among
> journalists at CNN, for instance--I can speak from experience)
>
> To this end, then, journalists unconsciously tend to reinforce
> misconceptions about real research, real science, which mushes up the whole
> pseudo-science problem more, as they often tout as authoritative
> quantitative studies that are so severely limited and short-sighted in their
> hypothesis development or baseline assumptions as to render their so-called
> valid and generalizable data utterly worthless.
>
> Which is worse: to have a method that is rigorously generalizable, but
> doesn't actually fit what we find in the world, or a rich data set which
> hews closely to the actual behaviors of actual people with deep insight and
> understanding, but should not ever be generalized beyond that level of
> detail?
>
> Which is closer to real small-t truths?
>
> Chris
>
> On Wed, Nov 19, 2008 at 9:49 AM, James Page <[EMAIL PROTECTED]> wrote:
>
>> Hi All,
>> I think it may help people here if I inject some theory into
>> this discussion.
>>
>> The first point is that people keep making claims that the method has some
>> scientific validity. For example Liz says that "Hopefully my quick
>> elucidation about the original persona creation methodology helps you to
>> see
>> that the mapping of individuals to dimensions of interest is a relatively
>> scientific method"
>>
>> Either a method is scientific or it is pseudoscientific. There is no middle
>> ground. Is the distinction only important as an academic argument? The
>> answer is no, and a small bit of history helps to show why.
>>
>> The distinction between science and pseudoscience came about because there
>> were two political movements that claimed that they were scientific, and
>> that following them would lead to improvement for everybody in society. The
>> two political movements were communism and national socialism.
>>
>> Karl Popper, whose early life you could say was upturned by both movements,
>> thought it important to define what is scientific and what is
>> pseudoscientific. [DISCLAIMER: some members of my family were slightly put
>> out by these movements as well.] He came up with the idea that unless a
>> theory has a negative hypothesis and is replicable, it is not scientific.
>> In one sweep he had disqualified both Marxism and communism from the claim
>> of being scientific.
>>
>> One of the sciences wiped out by this definition was eugenics, which was
>> one of the academic justifications of Nazism. Another science to disappear
>> was biotype: the idea that you could predict whether somebody was a
>> criminal from their body measurements.
>>
>> Back to personas and Liz's presentation. She gives an example about Tom. By
>> my count there are at least 21 data points about Tom. Using the example
>> given by Chapman and Milham (which again uses at least 21 data points):
>>
>> http://cnchapman.files.wordpress.com/2007/03/chapman-milham-personas-hfes2006-0139-0330.pdf
>>
>> That would show that "Under those assumptions, the composite data for
>> "Patrick" would represent (0.5)^21 * 100% = 0.000048% of the population, or
>> approximately 134 people in the United States." [If your target population
>> (let's say car mechanics) is smaller than that, then the number of people
>> your persona could represent approaches zero.]
>>
>> They go on further to say that the key point is: "there is essentially no
>> way to generalize from a well-specified persona to a population of
>> interest, and thus no way to say anything about the users of interest.
>> There is no way to distinguish which characteristics of a given persona are
>> indicative of users and which are irrelevant."
>>
>> The point is that unless you can show that you are designing for users, and
>> not for something fictional, it is hard to call it User Centred Design.
>>
>> Is there a way out of the theory trap? I think yes, there is. Idea one is
>> to treat a design as a hypothesis and test it. Idea two is to go back to
>> the sciences that have contributed methods to UCD, like anthropology, and
>> see how they overcome some of the theory challenges. For example, many
>> people on the list complain about the time that it takes to go through the
>> research and to distil the ideas. Ethnography was developed as a
>> descriptive language, and Activity Theory is another descriptive process.
>> If you use either the language of ethnography or the methods of AT it will
>> save you time. Forget about trying to get the data to jump out at you: that
>> is called Grounded Theory, and it is time consuming and very hard to follow
>> correctly. Also, come up with some ideas before the research and then test
>> them (in a negative way, i.e. "my theory is not true if ..."); again, this
>> will save you time and can be quite reliable.
>>
>> All the best
>>
>> James
>>
>> On Tue, Nov 18, 2008 at 6:05 AM, Jarod Tang <[EMAIL PROTECTED]> wrote:
>>
>> > Hi James,
>> >
>> > > We are told so many times not to use us for our designs, ourselves, or
>> > our
>> > > mothers as the target for a design. But surely this is better than
>> > something
>> > > that is purely fictional.
>> > A persona is/should be based on user research data underneath (at least
>> > for design). This was defined by early practitioners like Alan Cooper
>> > (and he showed in his books why the instantiation of a persona should be
>> > based on concrete user research). To say it is fictional may miss the
>> > point of persona usage for design.
>> >
>> > Regards,
>> > Jarod
>> >
>> > --
>> > http://designforuse.blogspot.com/
>> >
>>
>
>
________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... [EMAIL PROTECTED]
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help
