Or something like MISRA C, the set of software development guidelines for the C
programming language developed by MISRA (the Motor Industry Software Reliability
Association):

https://en.wikipedia.org/wiki/MISRA_C
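To give a flavour of the kind of rule MISRA C codifies, here is a minimal sketch
(the status codes and function below are made up for illustration, and the exact
rule numbers differ between MISRA editions): fixed-width integer types, braces on
every branch, a default clause in every switch, explicit discarding of unused
return values, and a single return point per function.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical status codes, for illustration only. */
    typedef enum { STATUS_OK, STATUS_WARN, STATUS_FAIL } status_t;

    /* MISRA-style conventions: fixed-width types, a default clause in
     * every switch, unused return values explicitly discarded, and a
     * single return point. */
    static int32_t describe(status_t s)
    {
        int32_t rc = 0;

        switch (s) {
        case STATUS_OK:
            (void)printf("ok\n");
            break;
        case STATUS_WARN:
            (void)printf("warning\n");
            break;
        case STATUS_FAIL:
            (void)printf("failure\n");
            rc = -1;
            break;
        default:            /* required even when the enum looks exhaustive */
            rc = -1;
            break;
        }

        return rc;
    }

    int main(void)
    {
        return (describe(STATUS_OK) == 0) ? 0 : 1;
    }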

________________________________
From: Discuss <[email protected]> on behalf of 
Jonathan Strootman <[email protected]>
Sent: Monday, August 3, 2015 1:27 PM
To: Greg Wilson; Software Carpentry Discussion
Subject: Re: [Discuss] code review in the sciences

The specification for qualifying airworthy software components is DO-178B
(https://en.wikipedia.org/wiki/DO-178B).
This specification MUST be followed if any piece of software is to run on an
aircraft.

A huge part of this qualification is the qualification of the 
processes/methodologies themselves.

I understand that we are talking about academia, but my point is that the
benefits of reviewing (and qualifying) development methodologies are well
documented in the aerospace realm.

This may sound more or less simple on the surface, but there are many difficulties
in getting a process like this up and running.

-= Jonathan Strootman

On Mon, Aug 3, 2015 at 12:24 PM Greg Wilson
<[email protected]> wrote:
Neil Chue Hong:
>>> I've talked to the SoftwareX editors previously, and I think we agree
>>> that actually the tricky thing here is providing the right tools to
>>> make reviewing software easier, and that's something where publishers
>>> can certainly make improvements.

Greg Wilson:
>> I believe that if we want scientists to start doing code reviews, we
>> have to persuade them to do those reviews *as the code is being
>> written*, in the same way that most open source projects do it - i.e.,
>> we have to get them to review small incremental patches as they're
>> written, so that (a) authors can fix problems before they waste time
>> using the code, and (b) the effort required is as small as possible.  If
>> this is right, changes to tooling alone aren't going to help - instead,
>> publishers should focus their efforts on changing the review process so
>> that it runs in parallel with coding and analysis, rather than afterward.

C. Titus Brown:
> I agree with everything up until the last sentence, which I don't see
> possibly working ever no way no how are you kidding what?
>
> But, rather than leaving things at that unhelpful statement, here's a helpful
> suggestion :)
>
> What about drawing an analogy to the preregistered study model --
>
> https://osf.io/8mpji/wiki/home/
>
> and basically saying that software publications are virtually guaranteed
> if the *methodology* of software development is reviewed as part of an
> initial submission? Then sometime later (after the software has reached
> most of its specified milestones) the publication can happen with only
> grammatical review of the writeup.

In other words, check the way the software was developed, rather than
the software itself?  Interesting - what pilot study could we do in 2016
to see if this actually achieves what we want?

> It doesn't necessarily work for early-stage software projects where the
> success or failure of the basic idea is in question, but I'd certainly do
> that for my lab's current software effort (khmer & screed) and my next
> project (tentatively named buoy).
>
Sure - we don't check every bottle of aspirin that comes off the
assembly line, but rather the assembly line itself.  I like it.

- Greg

--
Dr. Greg Wilson    | [email protected]
Software Carpentry | http://software-carpentry.org


_______________________________________________
Discuss mailing list
[email protected]<mailto:[email protected]>
http://lists.software-carpentry.org/mailman/listinfo/discuss_lists.software-carpentry.org