He has published on it. This paper, and papers that have subsequently
referenced it, might be a good starting point (although he's not
looking at programmers).

http://portal.acm.org/citation.cfm?id=1357054.1357127

A journal version of the paper can be found here:
http://brynnevans.com/papers/elaborated-model-of-social-search.pdf



----
School of Interactive Computing
Georgia Institute of Technology
www.cc.gatech.edu/~yardi



On Thu, Dec 24, 2009 at 6:02 PM, Robin Jeffries <ro...@jeffries.org> wrote:
>
>
> On Thu, Dec 24, 2009 at 6:43 AM, Derek M Jones <de...@knosof.co.uk> wrote:
>>
>> Robin,
>>
>>> be gamed.  The Turker wants to minimize the work for the money (but if
>>> you
>>> pay more, they still  do the minimal work but get more money).  So
>>
>> My experience of running experiments with software developers is
>> that they generally tend to try and minimize the amount of work
>> they have to do, even though they know that it is a 30 minute
>> experiment (or xx minutes).  I think it is part of developer make-up
>> to try and do things the easy way and in many ways this is
>> desirable behavior; who wants to hire a developer who tries to
>> do things the hard way?
>
> Yes, but if they can answer the question without doing the (easy) work you
> want, Turkers will do that. You need to think about the easiest way to get
> to the answer (which might be "flip a coin") and ask if you are OK with that
> strategy (it's ideal if this is the strategy you want them to use).  They
> seem to have some pride in their work, or perhaps that is maintained by
> contracts that don't pay if you have too many wrong answers, so if the
> strategy you want is not much more work than the coin-flip strategy, they
> will do the cognitive processing you want.
>>
>> For instance, an experiment I ran this year asked developers to
>> write some code that involved them using either if or switch
>> (I was looking to duplicate the findings in figure 2 of
>> www.knosof.co.uk/cbook/accu09a.pdf) and to remember some unrelated
>> information.
>> It looks like they used a fixed strategy for selecting if/switch
>> (all but one always used one or the other), concentrating their
>> effort on remembering the unrelated information.
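>>
>> To give a flavour of the kind of choice involved (an invented sketch,
>> not the actual experimental material), the same selection can be
>> written with either construct:
>>
>>    #include <stdio.h>
>>
>>    /* selection written using if/else */
>>    void classify_if(int code)
>>    {
>>       if (code == 1)
>>          printf("added\n");
>>       else if (code == 2)
>>          printf("deleted\n");
>>       else
>>          printf("unchanged\n");
>>    }
>>
>>    /* the same selection written using switch */
>>    void classify_switch(int code)
>>    {
>>       switch (code)
>>       {
>>          case 1:  printf("added\n");     break;
>>          case 2:  printf("deleted\n");   break;
>>          default: printf("unchanged\n"); break;
>>       }
>>    }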
>>
>>> judgements of "which is better" (that are on the order of a sentence or a
>>> paragraph) work well. But even those need some sort of quality control
>>> (questions that let you judge whether people are even reading the task or
>>> just selecting one answer -- you can refuse to pay people who can't
>>> answer
>>> those questions right).   Multiple choice quizzes work with that caveat.
>>>  You
>>> might be able to do "find the bug" with simple, short code snippets.
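>>> For example (a made-up snippet, just to show the scale I have in
>>> mind), something as short as:
>>>
>>>    /* find the bug: count how many elements of a are zero */
>>>    int count_zeros(int *a, int len)
>>>    {
>>>       int i, zeros = 0;
>>>       for (i = 0; i < len; i++)
>>>          if (a[i] = 0)   /* bug: assignment where == was intended */
>>>             zeros++;
>>>       return zeros;
>>>    }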
>>
>> I am wondering whether non-programmers would attempt the problems and
>> just guess the answers.  The monetary rewards are so small that they
>> are obviously not the primary motivation, or are we dealing with
>> third-world people here?
>
> I honestly don't know where they come from, but given that the typical price
> is about $.05/task, I'm guessing that a lot of them are 3rd world. Or maybe
> teenagers.  Ed Chi does say that some of the motivation is curiosity and
> pride of accomplishment, but that can only go so far.
> If you have some way to make sure the programmers aren't "cheating" (not
> sure that concept really exists in this domain -- by cheating I mean not
> doing the processing you need), wouldn't that work as well for
> non-programmers?  I think that you
> can "qualify" people for your study, by asking background questions (which
> you pay them to answer, but if they answer the way you want, they get to be
> in later studies), but I have no idea how likely they are to lie about their
> background.  Maybe your background questions would be knowledge questions
> ("what does this function return?").
>>
>>> So the real question is, can you design your study so that you can get at
>>> your research questions with these sorts of tasks?
>>
>> My interest is in cognitive processes that take less than 10 seconds,
>> so the Mechanical Turk looks like it might be applicable.
>
> Sounds plausible to me.
>
>>
>>> For someone who has used Mechanical Turk for research purposes, you might
>>> look at Ed Chi's work.
>>
>> Thanks.  Do you mean this guy?
>> http://www2.parc.com/istl/groups/uir/people/ed/ed.htm
>
> Yep.  That publication list is way out of date.  I don't know how many of
> his Turker studies have been published (or whether he has published anything
> on the methodology).  He gave a talk at my work this week and talked a bit
> about how he has done the studies. He'd likely be responsive to an email
> asking for more information. He's a big advocate of Mechanical Turk as a
> research method.
>>
>>>
>>> Robin
>>>
>>> On Wed, Dec 23, 2009 at 6:31 PM, Derek M Jones <de...@knosof.co.uk>
>>> wrote:
>>>
>>>> All,
>>>>
>>>> Has anybody on this list used Amazon's Mechanical Turk
>>>> aws.amazon.com/mturk/
>>>> to run psychology of programming experiments?
>>>>
>>>> I have no idea how many programmers might be members of this
>>>> service.  The list of tasks does not look that technical.
>>>>
>>>> An interesting blog by somebody who has been following this
>>>> service:
>>>> behind-the-enemy-lines.blogspot.com/
>>>>
>>>> --
>>>> Derek M. Jones                         tel: +44 (0) 1252 520 667
>>>> Knowledge Software Ltd                 mailto:de...@knosof.co.uk
>>>> Source code analysis                   http://www.knosof.co.uk
>>>>
>>>
>>
>> --
>> Derek M. Jones                         tel: +44 (0) 1252 520 667
>> Knowledge Software Ltd                 mailto:de...@knosof.co.uk
>> Source code analysis                   http://www.knosof.co.uk
>
>
