On 15/06/2015, at 10:55 pm, Derek M Jones <de...@knosof.co.uk> wrote:

> Richard,
>> Concerning student subjects in SE experiments:
>> people use the subjects they can afford.  If you
> Then perhaps research based on these papers should come
> with a health warning that the work only exists to increase
> the authors paper count so they can get promoted and has no
> connection with industrial practice.

I should start by saying that a couple of years ago I *tried*
to do a software engineering experiment, yes, using students,
but failed.  Mind you, the experiment taught me something.
The students claimed to be
(a) unable to read a 2-page program presented as black
    marks on white paper with bold styling for keywords
    and italic styling for comments.  No, the minimal
    requirement was a syntax-colouring IDE with search.
(b) unable to detect a mistake in a program without
    running it.
This was (rather horrifying) news to me, so in a sense the
project succeeded; it just didn't answer my research question.
And for what it's worth, *experienced* practitioners would
have been quite unsuitable as subjects because of the "what
I know is the best possible" bias.

Had the results been usable, I'd have offered them for
publication.  Would there have been a health warning?
Yes/no.  The paper would have been explicit about what
kind of subjects I had and what their background was.
Readers would have been trusted to be intelligent enough
to draw their own conclusions from that.

As for connection with industrial practice, you might as
well say that experiments on mice have no connection with
cancer in humans.  The connection is not *simple*, true.
>> use experienced software engineers, they expect
>> to be paid for their time.
> My experience is that it is possible to get professionals for
> short periods of time for free, i.e., they will volunteer.

I have seen a couple of papers that did that: taking small
amounts of time, getting the professionals to do utterly
unrealistic things with no connection to any kind of
programming practice, and producing no results of value.

> Large amounts of funding are obtained for all sorts of
> experiments. Software academics need to be more ambitious.

You did notice my e-mail address, right?
Ends with ".nz"?  Country with about 4.6 million people,
no heavy industry, major government IT projects outsourced
to other countries, innovative companies started here
regularly migrate to Australia or the USA.

Let's see.  Suppose I wanted 30 people for 1 hour.
Taking NZD 100/hour as a typical sort of rate (from
Hudson's 2014 figures), there's NZD 3000.  Hey, that's
my research budget for the whole flipping year.
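The back-of-envelope cost above scales linearly with subjects, hours, and rate, so it quickly swamps a small budget; a sketch using the illustrative figures from the text:

```python
# Back-of-envelope cost of paying practitioners as experimental subjects.
# The figures are the illustrative ones above (NZD 100/hour, Hudson 2014).

def study_cost(n_subjects: int, hours_each: float, hourly_rate: float) -> float:
    """Total payment needed to compensate all subjects."""
    return n_subjects * hours_each * hourly_rate

cost = study_cost(n_subjects=30, hours_each=1, hourly_rate=100)
print(f"NZD {cost:.0f}")  # NZD 3000
```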

If you know where *I* can get "large amounts of funding"
to do research with practitioners, please tell me.
>> The money to work with professionals certainly isn't
>> going to come from central government.
> If governments are serious about improving software then
> they need to start paying for proper experiments to be run.

As the Spartans said to Philip II, "if".

I commend the book "Dangerous Enthusiasms: E-government,
Computer Failure and Information System Development".

If my government were serious about improving software,
they'd be *begging* researchers to crawl all over the
software and records of things like INCIS and NovoPay.

>> It also has to be said that if you want to tell how
>> readable alternative notations are, you don't WANT
>> experienced professionals, because then you will get
>> "how much does this look like Blub."
> Readability research in software has not yet started.
> With few exceptions 'readability' papers are examples of
> incompetent research.
> A real experiment:
> http://shape-of-code.coding-guidelines.com/2012/10/13/agreement-between-code-readability-ratings-given-by-students/

I've seen that.  I do not see any connection whatever
between "snippets" and what I mean by readability or
understandability.  I have that data set.  The main
effect is "mean readability decreases with length".

And that tells me right away that it wasn't measuring
what I mean by readability.  I've spoken to a psychologist
about this who said that yes, of course, "longer = harder
to read" is well known.  But what we need to know is
given that a program has to be roughly a certain size,
how do we make it easier to understand.

And this is a very different question from "how readable
is the context-less and syntactically incomplete fragment
containing 10 identifiers?"

I do not feel that the Buse and Weimer paper tells me
anything at all worth knowing.

> Perhaps this work will one day be considered the start of
> readability research in software:
> http://emipws.org/

Looks like good stuff.  "Upon signing for this event,
we will schedule an optimal time-frame for which you will
receive a loan eye tracking system, we have four systems
available."  I haven't had the cheek to ask if they would
ship one to New Zealand...

Frustratingly, both "report" links on http://emipws.org/resources/
point to the same "Novice's Gaze" paper.  The "Expert's Gaze" one
is at 

> Yes, but you are one of those weird academics who sometimes
> knows what he is talking about ;-)

Thank you, and I accept "sometimes".
>> The key point again is society's reward structure.
>> Universities reward *publications*, not working code.
> The problem is what low quality work software journals are
> willing to publish.
That is a problem too.  I have to break off now.
How close is "Empirical Software Engineering with R" to

You received this message because you are subscribed to the Google Groups "PPIG 
Discuss" group.