Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Paul Homer
My take is that crafting software is essentially creating a formal system. No 
doubt there is some clean, normalized way to construct each one, but given that 
the work is being done by humans, a large number of less than optimal elements 
find their way into the system. Since everyone is basically distinct, their own 
form of semi-normalization is unique. Get a bunch of these together in the same 
system and there will be inevitable clashes. But given that it's often one
variant of weirdness vs another, there is no basis for rational arguments, thus
tempers and frustration flare. In the long run however, it's best to pick
one weirdness and stick with it (as far as it goes). We don't yet have the
knowledge or skills to harmonize these types of systems.

Paul.


Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread John Carlson
I would hope that something like
http://en.wikipedia.org/wiki/Z_notation could be used to specify
computer systems.  No, I've never used Z
notation.  It looks like WSDL 2.0 contains it.  Cucumber seems more
practical, but along the same lines.  I've never used Cucumber either.  The
problem is that specification languages seem to include extra overhead.
One has to prove productivity.  That is the question: how do we improve
productivity while decreasing bugs?  "Don't do things twice" seems to be a
good rationale, if the language/system can support it.
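
For a rough sense of what specifying a system in Z looks like (the standard
textbook example, not anything taken from WSDL or Cucumber): a Z specification
is a schema pairing declared state with an invariant any implementation must
maintain. Here is the classic "birthday book" schema, approximated in plain
LaTeX rather than the usual Z markup packages:

  % Minimal sketch: the classic Z birthday-book schema, approximated in
  % plain LaTeX (real Z sources normally use the zed/fuzz packages).
  \documentclass{article}
  \usepackage{amssymb}          % for \mathbb
  \begin{document}
  \[
  \begin{array}{l}
  \textbf{BirthdayBook} \\ \hline
  \mathit{known} : \mathbb{P}\,\mathit{NAME} \\
  \mathit{birthday} : \mathit{NAME} \rightharpoonup \mathit{DATE} \\ \hline
  \mathit{known} = \mathrm{dom}\,\mathit{birthday}
  \end{array}
  \]
  \end{document}

The point is that the invariant (known is exactly the domain of birthday) is
part of the specification itself, which is the kind of statement a checker
could hold an implementation to.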

John


On Mon, Dec 31, 2012 at 9:51 AM, Paul Homer paul_ho...@yahoo.ca wrote:

 My take is that crafting software is essentially creating a formal system.
 No doubt there is some clean, normalized way to construct each one, but
 given that the work is being done by humans, a large number of less than
 optimal elements find their way into the system. Since everyone is
 basically distinct, their own form of semi-normalization is unique. Get a
 bunch of these together in the same system and there will be inevitable
 clashes. But given that it's often one variant of weirdness vs another,
 there is no basis for rational arguments, thus tempers and frustration
 flare. In the long run however, it's best to pick one weirdness and stick
 with it (as far as it goes). We don't yet have the knowledge or skills to
 harmonize these types of systems.

 Paul.

  --
 * From: * BGB cr88...@gmail.com;
 * To: * fonc@vpri.org;
 * Subject: * Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing
 Userspace Bug - Slashdot
 * Sent: * Mon, Dec 31, 2012 7:32:31 AM

   On 12/30/2012 10:49 PM, Paul D. Fernhout wrote:
  Some people here might find of interest my comments on the situation in
 the title, posted in this comment here:
  http://slashdot.org/comments.pl?sid=3346421&cid=42430475
 
  After citing Alan Kay's OOPSLA 1997 "The Computer Revolution Has Not
 Happened Yet" speech, the key point I made there is:
  "Yet, I can't help but feel that the reason Linus is angry, and fearful,
 and shouting when people try to help maintain the kernel and fix it and
 change it and grow it is ultimately because Alan Kay is right. As Alan Kay
 said, you never have to take a baby down for maintenance -- so why do you
 have to take a Linux system down for maintenance?"
 
  Another comment I made in that thread cited Andrew Tanenbaum's 1992
 comment that it is now all over but the shoutin':
 
 http://developers.slashdot.org/comments.pl?sid=3346421&threshold=0&commentsort=0&mode=thread&cid=42426755
 
  So, perhaps now, twenty years later, we finally see the shouting begin as the
 monolithic Linux kernel reaches its limits as a community process? :-)
 Still, even if true, it was a good run.
 
  The main article can be read here:
 
 http://developers.slashdot.org/story/12/12/29/018234/linus-chews-up-kernel-maintainer-for-introducing-userspace-bug
 
  This is not to focus on personalities or the specifics of that mailing
 list interaction -- we all make mistakes (whether as leaders or followers
 or collaborators), and I don't fully understand the culture of the Linux
 Kernel community. I'm mainly raising an issue about how software design
 affects our emotions -- in this case, making someone angry probably about
 something they fear -- and how that may point the way to better software
 systems like FONC aspired to.
 

 dunno...

 in this case, I think Torvalds was right, however, he could have handled
 it a little more gracefully.

 code breaking changes are generally something to be avoided wherever
 possible, which seems to be the main issue here.

 sometimes it is necessary though, but usually this needs to be for a damn
 good reason.
 more often though this leads to a shim, such that new functionality can be
 provided, while keeping whatever exists still working.
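
 A minimal sketch of what such a shim can look like in C (hypothetical names,
 just to illustrate the pattern): the old entry point survives as a thin
 wrapper over the redesigned interface, so existing callers keep working
 while new code gets the extra capability.

   #include <stddef.h>

   /* new, extensible interface (the shiny redesigned one) */
   int dev_read_ex(int fd, void *buf, size_t len, unsigned flags)
   {
       (void)fd; (void)buf; (void)len; (void)flags;
       return 0;                 /* stub: real I/O would happen here */
   }

   /* old interface, kept alive as a shim over the new one */
   int dev_read(int fd, void *buf, size_t len)
   {
       return dev_read_ex(fd, buf, len, 0);  /* flags == 0 reproduces old behavior */
   }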

 once a limit is hit, then often there will be a clean break, with a new
 shiny whatever provided, which is not backwards compatible with the old
 interface (and will generally be redesigned to address prior deficiencies
 and open up routes for future extension).

 then usually, both will coexist for a while, usually until one or the
 other dies off (either people switch to the new interface, or people rebel
 and stick to the old one).

 in a few cases in history, this has instead led to forks, with the old
 and new versions developing in different directions, and becoming separate
 and independent pieces of technology.

 for example, seemingly unrelated file formats that have a common ancestor,
 or different CPU ISA's that were once a single ISA, ...

 likewise, at each step, backwards compatibility may be maintained, but
 this doesn't necessarily mean that things will remain static. sometimes,
 there may still be a common-subset, buried off in there somewhere, or in
 other cases the loss of occasional archaic details will cause what
 remains of this common subset to gradually fade away.



 as for design and emotions:
 I 

Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Paul Homer
I don't think a more formalized language really gets around the problem. If 
that were true, we'd have already fallen back to the most consistent, yet 
simple languages available, such as assembler. But on top of these we build 
significantly more complex systems, bent by our own internal variations on 
logic. It's that layer that causes the problems. What seems like it might 
be successful is to pair our constructions with many languages that more 
closely match how people think. Now I know that sounds weird, but not if one 
accepts that a clunky, ugly language like COBOL was actually very successful. 
Lots of stuff was written, much of it still running. Its own excessive 
verbosity helps in making it fixable by a broader group of people. 

Of course there is still a huge problem with that idea. Once written, if the 
author is no longer available, the work effectively becomes frozen. It can be 
built upon, but it is hard to truly expand. Thus we get to what we have now, a 
rather massive house of cards that becomes ever more perilous to build upon. If 
we want to break the cycle, we have to choose a new option. The size of the 
work is beyond any individual's capacity, combining different people's 
work is prone to clashes, the more individualized we make the languages the 
harder they are to modify, and the more normalized we make them the harder 
they are to use.

Paul.


Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Marcus G. Daniels

On 12/31/12 10:30 AM, Paul Homer wrote:
Now I know that sounds weird, but not if one accepts that a clunky, 
ugly language like COBOL was actually very successful. Lots of stuff 
was written, much of it still running. Its own excessive verbosity 
helps in making it fixable by a broader group of people.



For the waterfall approach used by designers behind COBOL 
implementations, the philosophy of "measure twice; cut once" is 
consistent with a formal approach.  But if there are so many bugs that 
last for decades, and bugs that can just be fixed by superficial 
analysis as facilitated by COBOL's verbosity, then there is something 
seriously wrong with the design and/or verification in the process.


I think design without automated code generation and proof-checking 
support is passing the buck.  Ideally, the principles underlying a 
program should be in that program, and not just as comments.   If there 
are contradictions in the design, the program shouldn't compile.   As 
the design is fleshed out to make a useful program, the programmer (now 
also a designer) should have the tools to continue to prove all of the 
pieces.  At the end of the day, all bugs should be considered the 
responsibility of highest level designers.   There should be no `cut' at 
all.
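
As a minimal sketch of the "contradictions shouldn't compile" idea in plain C 
(made-up constants, nothing from a real project): if two stated design 
assumptions cannot both hold, the build fails instead of producing a program.

  /* two design assumptions, recorded as compile-time facts */
  #define MAX_SESSIONS  64   /* at most 64 concurrent sessions */
  #define SESSION_BITS  6    /* session ids must fit in 6 bits */

  /* the design is only consistent if 2^SESSION_BITS can count them all */
  _Static_assert((1u << SESSION_BITS) >= MAX_SESSIONS,
                 "design contradiction: SESSION_BITS too small for MAX_SESSIONS");

  int main(void) { return 0; }

Languages like Agda or Coq carry this much further, but even this degenerate 
form turns one design constraint into something machine-checked rather than a 
comment.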


Of course, there is rarely the time or incentive structure to do any of 
this.  Productive programmers are the ones that get results and are fast 
at fixing (and creating) bugs.  In critical systems, at least, that's 
the wrong incentive structure.  In these situations, it's more important 
to reward people that create tests, create internal proofs, and refactor 
and simplify code.  Having very dense code that requires investment to 
change is a good thing in these situations.  A `bad change' should be 
trivial in practice to identify:  The compiler and/or test suite would 
not let it through -- an objective fact, not an opinion of `experts'.


Marcus


Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Paul Homer
Most programs are models of our irrational world. Reflections of rather 
informal systems that are inherently ambiguous and contradictory, just like our 
species. Nothing short of 'intelligence' could validate that those 
types of rules match their intended usage in the real world. If we don't 
build our internal systems models with this in mind, then they'd be too 
fragile to solve real problems for us. Like it or not, intelligence is a 
necessary ingredient, and we don't yet have any alternatives but ourselves 
to fill it.

Paul.


Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Carl Gundel
+1

 

From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of Paul
Homer
Sent: Monday, December 31, 2012 3:09 PM
To: fonc@vpri.org
Subject: Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing
Userspace Bug - Slashdot

 


Most programs are models of our irrational world. Reflections of rather
informal systems that are inherently ambiguous and contradictory, just like
our species. Nothing short of 'intelligence' could validate that those types
of rules match their intended usage in the real world. If we don't build our
internal systems models with this in mind, then they'd be too fragile to
solve real problems for us. Like it or not, intelligence is a necessary
ingredient, and we don't yet have any alternatives but ourselves to fill it.

Paul.

 

  _  

From: Marcus G. Daniels mar...@snoutfarm.com; 
To: fonc@vpri.org; 
Subject: Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing
Userspace Bug - Slashdot 
Sent: Mon, Dec 31, 2012 7:50:13 PM 


On 12/31/12 12:25 PM, Carl Gundel wrote:

If there are contradictions in the design, the program shouldn't compile.

 

How can a compiler know how to make sense of domain specific contradictions?
I can only imagine the challenges we would face if compilers operated in
this way.

 

In the case of numerical method development, the math is represented in
Mathematica (Maple, Sage, Macsyma, etc.) and simulations are done using the
same or Matlab (Octave, Scipy, R, etc.).  The foundational work is already a
program.   One practical way to advance the state of the art would be to
ensure that the symbolic math packages had compilers that created
executables that performed as well as Fortran. 

In general, I'm imagining more programmers adopting languages like Agda,
Coq, and ATS, and elaborating their compilers and runtimes to be practical
for programming in the large. 

Marcus



 



Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Alan Moore
I agree with Paul...

As programmers we have too many degrees of freedom, too many chances for
random variation that needs to be negotiated at every interface between
components, etc.

Imagine building a car with randomly varying bolt and nut sizes, with no
two cars following the same pattern.

Craziness I say... make it stop, make it stop! :-/

I think these things are better off simplified in the extreme and carried
out by automatons, get the humans out of the picture...

Humans should focus on higher level requirements not the nuts and bolts of
software construction.

Or perhaps we should use a more stochastic process, like biology, and let
order emerge from chaos - maybe the logic will be less brittle that way...
But that may be a ways off yet.

Clearly whatever we are doing isn't working - it is pure madness to
continue in this way.

Alan Moore


On Dec 31, 2012, at 12:09 PM, Paul Homer paul_ho...@yahoo.ca wrote:

Most programs are models of our irrational world. Reflections of rather
informal systems that are inherently ambiguous and contradictory, just like
our species. Nothing short of 'intelligence' could validate that those
types of rules match their intended usage in the real world. If we don't
build our internal systems models with this in mind, then they'd be too
fragile to solve real problems for us. Like it or not, intelligence is a
necessary ingredient, and we don't yet have any alternatives but ourselves
to fill it.

Paul.

 --
* From: * Marcus G. Daniels mar...@snoutfarm.com;
* To: * fonc@vpri.org;
* Subject: * Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing
Userspace Bug - Slashdot
* Sent: * Mon, Dec 31, 2012 7:50:13 PM

  On 12/31/12 12:25 PM, Carl Gundel wrote:

 “If there are contradictions in the design, the program shouldn't compile.”



How can a compiler know how to make sense of domain specific
contradictions?  I can only imagine the challenges we would face if
compilers operated in this way.



In the case of numerical method development, the math is represented in
Mathematica (Maple, Sage, Macsyma, etc.) and simulations are done using the
same or Matlab (Octave, Scipy, R, etc.).  The foundational work is already
a program.   One practical way to advance the state of the art would be to
ensure that the symbolic math packages had compilers that created
executables that performed as well as Fortran.

In general, I'm imagining more programmers adopting languages like Agda,
Coq, and ATS, and elaborating their compilers and runtimes to be practical
for programming in the large.

Marcus




Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Paul D. Fernhout

On 12/31/12 2:32 AM, BGB wrote:

in this case, I think Torvalds was right, however, he could have handled
it a little more gracefully.

code breaking changes are generally something to be avoided wherever
possible, which seems to be the main issue here.


While many people posting in the slashdot thread would agree with both 
your points, it seems that this particular kernel patch had problems for 
other reasons, and that distinction has been ignored by most commenters. 
According to the submitter of the patch being criticized, it seems like 
the patch was intended to move towards unifying the error codes issued 
by a set of related drivers. What seems to have happened was that the 
patch submitter by (IMHO poor) design used an error code internally to 
indicate an error condition that was not meant to leak back out to the 
user of the driver. The error code was supposed to be transformed as it 
passed the driver boundary, but was not in this case. So the error code 
leaked out of the driver (where the meaning of the specific numerical 
value would thus change as it passed the driver boundary), and then 
Linus interpreted this situation as an intent by the programmer or 
maintainer to provide unexpected and inappropriate error codes 
(thus the strong language). But it was more like there was some sloppy 
programming on top of a problematical design choice (problematical 
because exactly this sort of thing could happen) on top of the 
maintainer not seeing that. Granted, that provides plenty of room for 
complaint, but it is questionable if Linus saw this when he replied. 
More specifics on that in a comment I posted with a link to supporting 
posts on the kernel list:

http://slashdot.org/comments.pl?sid=3346421&cid=42417029

But here are direct links to related kernel mailing list posts by the 
patch submitter and the kernel maintainer:

https://lkml.org/lkml/2012/12/23/89
https://lkml.org/lkml/2012/12/24/125
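
As a purely hypothetical C illustration of that failure shape (made-up names 
and values, not the actual patch): an internal status code is meaningful 
inside the driver, but one return path forgets to translate it at the 
boundary, so userspace sees a number whose meaning has silently changed.

  #include <errno.h>

  #define DRV_RETRY_LATER 600   /* internal-only code; value is made up */

  static int drv_do_io(int want_retry)
  {
      if (want_retry)
          return -DRV_RETRY_LATER;   /* fine inside the driver */
      return 0;
  }

  /* boundary function: internal codes should be mapped to POSIX errno here */
  int drv_ioctl(int want_retry)
  {
      int ret = drv_do_io(want_retry);
      /* BUG: missing a translation such as
       *   if (ret == -DRV_RETRY_LATER) return -EAGAIN;
       * so the internal value leaks out to the caller in user space. */
      return ret;
  }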

So, the human emotions are on top of an issue that seemed incompletely 
understood. Psychological studies, like those of novice vs. expert fire 
fighters, have shown that where the novice generally tries to use reason 
to solve a problem ("What is going on here based on thinking about the 
details? What will be the consequences of various choices?"), the expert 
tends to use pattern matching ("I've seen this situation before and here 
is what we should do."). So, that is why experts can make quick 
judgements and prescribe successful strategies. The experts work in 
bigger chunks. Still, a downside to that sort of expert reasoning 
through pattern matching is that it is easy to leap to a conclusion and 
incorrect prescription when the underlying situation is very close to a 
common one but has some unexpected twist (as seems to be the case here).


Anyway, good to see this sparked some interesting discussion.

--Paul Fernhout
http://www.pdfernhout.net/

The biggest challenge of the 21st century is the irony of technologies 
of abundance in the hands of those thinking in terms of scarcity.



Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Simon Forman
On Mon, Dec 31, 2012 at 12:24 PM, Paul D. Fernhout
pdfernh...@kurtz-fernhout.com wrote:
 On 12/31/12 2:32 AM, BGB wrote:

 in this case, I think Torvalds was right, however, he could have handled
 it a little more gracefully.

 code breaking changes are generally something to be avoided wherever
 possible, which seems to be the main issue here.


 While many people posting in the slashdot thread would agree with both your
 points, it seems that this particular kernel patch had problems for other
 reasons, and that distinction has been ignored by most commenters. According
 to the submitter of the patch being criticized, it seems like the patch was
 intended to move towards unifying the error codes issued by a set of related
 drivers. What seems to have happened was that the patch submitter by (IMHO
 poor) design used an error code internally to indicate an error condition
 that was not meant to leak back out to the user of the driver. The error
 code was supposed to be transformed as it passed the driver boundary, but
 was not in this case. So the error code leaked out of the driver (where the
 meaning of the specific numerical value would thus change as it passed the
 driver boundary), and then Linus interpreted this situation as an intent by
 the programmer or maintainer to provide unexpected and inappropriate
 error codes (thus the strong language). But it was more like there was some
 sloppy programming on top of a problematical design choice (problematical
 because exactly this sort of thing could happen) on top of the maintainer
 not seeing that. Granted, that provides plenty of room for complaint, but it
 is questionable if Linus saw this when he replied. More specifics on that in
 a comment I posted with a link to supporting posts on the kernel list:
 http://slashdot.org/comments.pl?sid=3346421&cid=42417029

 But here are direct links to related kernel mailing list posts by the patch
 submitter and the kernel maintainer:
 https://lkml.org/lkml/2012/12/23/89
 https://lkml.org/lkml/2012/12/24/125

 So, the human emotions are on top of an issue that seemed incompletely
 understood. Psychological studies, like of novice vs. expert fire fighters,
 have shown that where the novice generally tries to use reason to solve a
 problem (What is going on here based on thinking about the details? What
 will be the consequences of various choices?), the expert tends to use
 pattern matching (I've seen this situation before and here is what we
 should do.) So, that is why experts can make quick judgements and prescribe
 successful strategies. The experts work in bigger chunks. Still, a
 downside to that sort of expert reasoning through pattern matching is that
 it is easy to leap to a conclusion and incorrect prescription when the
 underlying situation is very close to a common one but has some unexpected
 twist (as seems to be the case here).

 Anyway, good to see this sparked some interesting discussion.


 --Paul Fernhout


Thank you for elucidating the situation like you have. Your evaluation
agrees with my own, which I find gratifying, but even if it didn't I
am excited to see the social process happen publicly and
transparently.  (As opposed to the kind of primate politics that an
incident like this might trigger in, for example, a traditional
corporation or other institution.)

I see it as a win for open methods even if Torvalds might have a bit
of egg on his face.

~Simon Forman

-- 
http://twitter.com/SimonForman
http://www.dendritenetwork.com/

The history of mankind for the last four centuries is rather like
that of an imprisoned sleeper, stirring clumsily and uneasily while
the prison that restrains and shelters him catches fire, not waking
but incorporating the crackling and warmth of the fire with ancient
and incongruous dreams, than like that of a man consciously awake to
danger and opportunity.  --H. G. Wells, A Short History of the
World


Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Pascal J. Bourguignon
Carl Gundel ca...@psychesystems.com writes:

 “If there are contradictions in the design, the program shouldn't
 compile.”

  

 How can a compiler know how to make sense of domain specific
 contradictions?  I can only imagine the challenges we would face if
 compilers operated in this way.

Contradictions are often not really contradictions.

It's a question of representation, that is, of mapping of the domain,
to some other domain, usually a formal system.

Now we know that a sufficiently powerful formal system cannot be at the
same time complete and consistent, but nothing prevents an automatic
system from working with an incomplete system or an inconsistent system
(or even a system that's both incomplete and inconsistent).

The only thing is that sometimes you may reach conclusions such as 1=2,
but if you expect them, you can deal with them.  We do every day.

Notably, by modifying the mapping between the domain and the formal
system: for different parts of the domain, you can use different formal
systems, or avoid some axioms or theorems leading to a contradiction, to
find some usable conclusion.


Daily, we use formal rules that are valid just in some context.  The
conclusions we reach can easily be invalidated if the context is wrong
for the application of those rules.   If we tried to take into account
all the possible rules, we'd soon enough get inconsistencies.  But by
restricting the mapping of the domain to some contextual rules, we can
reach usable conclusions (most of the time).  When the conclusion doesn't
match the domain, we may ask where the error is, and often it's just the
context that was wrong, not the rules.


We will have to embrace Artificial Intelligence, even in compilers, eventually.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


[fonc] Incentives and Metrics for Infrastructure vs. Functionality (was Re: Linus Chews Up Kernel Maintainer...)

2012-12-31 Thread Paul D. Fernhout

On 12/31/12 1:39 PM, Marcus G. Daniels wrote:

Of course, there is rarely the time or incentive structure to do any of
this.  Productive programmers are the ones that get results and are fast
at fixing (and creating) bugs.  In critical systems, at least, that's
the wrong incentive structure.  In these situations, it's more important
to reward people that create tests, create internal proofs, and refactor
and simplify code.  Having very dense code that requires investment to
change is a good thing in these situations.


Programming for the broadcasting industry right now (where a few seconds 
downtime might cost millions of dollars), I especially liked your point, 
Marcus. I live within this tension every day, as I imagine, to an even 
higher degree, do aircraft software designers, medical system 
designers, automotive software designers, and so on, where many lives are 
at risk from a bug. Certainly the more unit tests that code has, the 
more dense the code might feel, and the more resistant to casual change 
it can become, even as one may be ever more assured that the code is 
probably doing what you expect most of the time. And the argument goes 
that such denseness in terms of unit tests may actually give you more 
confidence in refactoring. But I can't say I started out feeling or 
programming that way.


The movie The Seven Samurai begins with the villagers having a big 
conceptual problem. How do the agriculturalists know how to hire 
competent Samurai, not being Samurai themselves? The villagers would 
most likely be able to know the difference in a short time between an 
effective and ineffective farm hand they might hire (based on their 
agricultural domain knowledge) -- but what do farmers know about 
evaluating swordsmanship or military planning? Likewise, an end user may 
know lots about their problem domain, but how can users tell the 
difference between effective and ineffective coding in the short term? 
How can users distinguish between software that just barely works at 
handling current needs and, by contrast, software that could handle a 
broad variety of input data, which could be easily expandable, and which 
would detect through unit tests unintended consequences of coding 
changes? That is meant mostly rhetorically -- although maybe a more 
on-topic question for this list would be how do we create software 
systems that somehow help people more easily appreciate or understand or 
visualize that difference?


Unless you know what to look for (and even sometimes if you do), it is 
hard to tell whether a programmer spending a month or two refactoring or 
writing tests is making the system better, or making the system worse, 
or maybe just is not doing much at all. Even worse from a bean counter 
perspective, what about the programmer who claims to be spending time 
(weeks or months) just trying to *understand* what is going on? And 
then, what if after apparently doing nothing for weeks, the programmer 
then removes lots of code? How does one measure that level of apparent 
non-productivity or even negative-productivity? A related bit of history:

  http://c2.com/cgi/wiki?NegativeLinesOfCode
A division of AppleComputer started having developers report 
LinesOfCode written as a ProductivityMetric?. The guru, BillAtkinson, 
happened to be refactoring and tuning a graphics library at the time, 
and ended up with a six-fold speedup and a much smaller library. When 
asked to fill in the form, he wrote in NegativeLinesOfCode. Management 
got the point and stopped using those forms soon afterwards.


If there is a systematic answer, part of it might be in having lots of 
different sorts of metrics for code, like in the direction of projects 
like Sonar. I don't see Sonar mentioned on this list, at least in the 
past six or so years. Here is a link:

  http://www.sonarsource.org/
Sonar is an open platform to manage code quality. As such, it covers 
the 7 axes of code quality: Architecture & Design, Comments, Coding 
rules, Potential bugs, Complexity, Duplications, and Unit tests


We tend to get what we measure. So, are these the sorts of things new 
computing efforts should be measuring?


Obviously, users can generally see the value of new functionality, 
especially if they asked for it. And thus there is this tension between 
infrastructure and functionality. This tension is especially strong in 
the context of black swan situations where chances are some rare thing 
will never happen, and if it does, someone else will be maintaining the 
code by then. How does one create incentives (and supporting metrics) 
related to that? In practice, this tension may sometimes get resolved by 
spending some time on refactoring and tests that the user will not 
appreciate directly and some time on obvious enhancements users will 
appreciate. Of course, this will make the enhancements seem to take 
longer than they might otherwise in the short term. And it can be an 
organizational and personal challenge 

Re: [fonc] Incentives and Metrics for Infrastructure vs. Functionality (was Re: Linus Chews Up Kernel Maintainer...)

2012-12-31 Thread Marcus G. Daniels

On 12/31/12 2:58 PM, Paul D. Fernhout wrote:
Unless you know what to look for (and even sometimes if you do), it is 
hard to tell whether a programmer spending a month or two refactoring 
or writing tests is making the system better, or making the system 
worse, or maybe just is not doing much at all.
Sometimes I see codes that have data structure traversal procedures cut 
and pasted into several modules without any apparent effort to abstract 
them into a map/operation higher-order form or a parameterized macro 
expansion.


I think for reasons like:

1. The logic to do the traversal is intermingled with the operation to 
be done at each site.  The programmer knows the outcome of the whole 
operation from the context they cribbed the code, but they don't want to 
think about the details.  And why should they `reduce their 
productivity' (e.g. lines of code per unit time) by refactoring the 
cribbed code if the original author couldn't be bothered?


2. The programmer has a belief or preference that the code is easier to 
work with if it isn't abstracted.
It's all right in front of them in the context they want it. Perhaps 
they are copying the code from foreign modules they don't maintain and 
don't want to.  They don't think in terms of global redundancy and the 
cost for the project but just their personal ease of maintenance after 
they assimilate the code.  If they have to study any indirect 
mechanisms, they become agitated or lose their momentum.


3. The programmer thinks they are smarter than the compiler and that the 
code will be faster if they inline everything and avoid function calls.
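
For reference, what the map/operation refactoring mentioned above amounts to, 
as a minimal C sketch (hypothetical list type and names): the traversal is 
written once, and each module supplies only its per-node operation.

  struct node { int value; struct node *next; };

  /* the one shared traversal, instead of a copy pasted into every module */
  static void list_for_each(struct node *head,
                            void (*op)(struct node *, void *), void *ctx)
  {
      for (struct node *n = head; n != NULL; n = n->next)
          op(n, ctx);
  }

  /* a module-specific operation: accumulate the values */
  static void add_value(struct node *n, void *ctx)
  {
      *(long *)ctx += n->value;
  }

  long list_sum(struct node *head)
  {
      long total = 0;
      list_for_each(head, add_value, &total);
      return total;
  }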


In any case, a manager confronted with this situation can choose to 
reward individuals who slow this type of proliferation.  An objective 
justification would be an increase in the percentage of lines touched by 
coverage analysis profiles across test suites, unit tests, etc.


Marcus


Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread J. Andrew Rogers

On Dec 31, 2012, at 1:21 PM, Pascal J. Bourguignon p...@informatimago.com 
wrote:
 
 Now we know that a given formal system cannot be at the same time
 complete and consistent, but nothing prevents an automatic system to
 work with an incomplete system or an inconsistent system (or even a
 system that's both incomplete and inconsistent).
 
 The only thing, is that sometimes you may reach conclusions such as 1=2,
 but if you expect them, you can deal with them.  We do everyday.
 
 Notably, by modifying the mapping between the domain and the formal
 system: for different parts of the domain, you can use different formal
 systems, or avoid some axioms or theorems leading to a contradiction, to
 find some usable conclusion.


In chemical engineering, you design complex, dynamic systems that look like 
distributed computing systems in software, except that you replace bits with 
molecules. In the abstract, they are incredibly similar. I've often stated that 
my chemical engineering education was a valuable foundation for my later work 
in distributed and parallel systems.

Chemical engineering systems commonly have an interesting property: despite 
being built from a system of physics equations and carefully measured facts 
about reality, you can reduce that single set of inputs to multiple mutually 
inconsistent models of system behavior with material differences. The 
discipline has a rich set of heuristics and methods for dealing with complex 
distributed system problems that have significant internal contradictions, and 
with obviously good results, but you rarely see explicit analogues applied in 
software systems.

Computer science, perhaps due to its direct mathematical derivation, is not 
comfortable with an equivalent state of affairs where 1=2. Nonetheless, 
models of real complex systems tend to have this property.



Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Marcus G. Daniels

On 12/31/12 1:44 PM, Paul D. Fernhout wrote:
So, it was a meta-bug in that sense about an unexpected meaning shift 
when a number leaked beyond a boundary that was supposed to contain it. 

[..]
I'm not sure what sort of automated systems could deal with that kind 
of unexpected semantic shift? Still, for that one case, probably one 
could come up with a way of defining symbols that could not leak 
across boundaries because of compiler checks or using public/private 
typed aspects of languages to do that (like for example even in Java 
where you had a public enum for error codes to return to user space, 
but a different private enum for internal state). In practice the C 
language the Linux kernel is written in may not make that easy to 
enforce programmatically though.
Yup. Add more opaque types in the kernel implementation so that a type 
conversion (to the POSIX semantics) _must_ occur.  GCC recently 
converted to compiling itself in stricter C++ mode, and the world did 
not end.  In spite of advice such as..


http://thread.gmane.org/gmane.comp.version-control.git/57643/focus=57918
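
A minimal sketch of that opaque-type idea in C (an assumption about how it 
might look, not actual kernel code): give the internal status its own struct 
type, so it cannot be handed back as a plain int without going through one 
explicit translation step at the boundary.

  #include <errno.h>

  struct drv_status { int code; };                       /* opaque internal status */
  #define DRV_OK     ((struct drv_status){ .code = 0 })
  #define DRV_RETRY  ((struct drv_status){ .code = 1 })  /* internal meaning only */

  static struct drv_status drv_do_io(int want_retry)
  {
      return want_retry ? DRV_RETRY : DRV_OK;
  }

  /* the only way across the boundary: convert to POSIX semantics */
  static int drv_status_to_errno(struct drv_status s)
  {
      switch (s.code) {
      case 0:  return 0;
      case 1:  return -EAGAIN;
      default: return -EIO;
      }
  }

  int drv_ioctl(int want_retry)
  {
      /* returning drv_do_io(...) directly would be a type error */
      return drv_status_to_errno(drv_do_io(want_retry));
  }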

Marcus


Re: [fonc] Incentives and Metrics for Infrastructure vs. Functionality effective abstraction

2012-12-31 Thread Paul D. Fernhout

On 12/31/12 6:36 PM, Marcus G. Daniels wrote:

2. The programmer has a belief or preference that the code is easier to
work with if it isn't abstracted.
It's all right in front of them in the context they want it. Perhaps
they are copying the code from foreign modules they don't maintain and
don't want to.  They don't think in terms of global redundancy and the
cost for the project but just their personal ease of maintenance after
they assimilate the code.  If they have to study any indirect
mechanisms, they become agitated or lose their momentum.


A lot of programmers don't seem to get abstraction (I say this having 
taught programming at the college level), and that is probably a major 
reason code has more duplication and less conciseness than it should. 
Nonetheless, a lot of programmers who don't really get abstraction 
(including ideas like recursion) can still get a lot of useful-to-them 
stuff done by writing programs within their own comfort zone. My guess 
is that 80% to 90% or more of people who actually make their living 
programming these days fall into this category of not really being 
able to deal well with abstraction to any great degree much beyond being 
able to write a class to represent some problem domain object. So, such 
developers would not be very comfortable thinking about writing the code 
to compile those classes etc. -- even if many of them might understand a 
problem domain very well. These are the people who love Visual Basic. 
:-) And probably SQL. :-) And they love both of them precisely for all 
the reasons that many people on this list may feel very 
constrained and unhappy when using either of those two -- stuff like 
abstraction or recursion in the case of SQL is hard to do in them. So, 
code in such languages in practice tends to stay within the abstraction 
limits of what such programmers can understand without a lot of effort.


Still, those who do get abstraction eventually realize that every layer 
of abstraction imposes its own cost -- both on the original developer 
and any subsequent maintainer. One can hope that new computer languages 
(whatever?) and new programming tools (like for refactoring) and new 
cultural habits (unit tests) could reduce those costs, but it does not 
seem like we are quite there yet -- although there are some signs of 
progress. Often invention is about tradeoffs, you can have some of this 
(reduced redundancy) if you give up some of that (the support code being 
straightforward). True breakthroughs come when someone figures out how 
to have lots of both (reduced redundancy and the supporting code being 
straightforward).


Related:
  http://en.wikipedia.org/wiki/Indirection
A famous aphorism of David Wheeler goes: "All problems in computer 
science can be solved by another level of indirection";[1] this is often 
deliberately mis-quoted with "abstraction layer" substituted for "level 
of indirection". Kevlin Henney's corollary to this is, "...except for 
the problem of too many layers of indirection."


My programming these days is a lot less abstract (clever?) than it used 
to be, and I don't think that is mostly because I am less sharp having 
programmed for about thirty years (although it is true that I am indeed 
somewhat less sharp than I used to be). I'm willing to tolerate somewhat 
more duplication than I used to because I know the price that needs to 
be paid for introducing abstraction. Abstractions need to be maintained. 
They may be incomplete. They may leak. They are often harder to debug. 
They can be implemented badly because a couple examples of something are 
often not enough to generalize well from. And so on (much of this known 
from personal experience). So, these days, I tend more towards the 
programming language equivalent of the Principle of least privilege, 
where I would like a computer language where, somewhat like DrScheme, I 
could specify a language level for every section of code to reduce the 
amount of cleverness allowed in that code. :-) That way, the Visual 
Basic like parts might be 90% of the code, and when I got into sections 
that did a lot of recursion or implemented parsers or whatever, it would 
be clear I was in code working at a higher level of abstraction. 
Considering how you can write VMs in Fortran, maybe that idea would not 
work in practice, but it is at least a thing to think about. Related:

http://en.wikipedia.org/wiki/Principle_of_least_privilege
http://c2.com/cgi/wiki?PrincipleOfLeastPrivilege

And on top of that, it is quite likely that future maintainers will have 
more problems with abstractions. So, the code in practice is likely to 
be less easily maintained by a random programmer. So, I now try to apply 
cleverness in other directions. For example, I might make tools that may 
stand outside the system and help with maintaining or understanding it 
(where even if the tool was not maintained, it would have served its 
purpose and given a return on time invested). Or I 

Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Marcus G. Daniels

On 12/31/12 8:30 PM, Paul D. Fernhout wrote:
So, I guess another meta-level bug in the Linux Kernel is that it is 
written in C, which does not support certain complexity management 
features, and there is no clear upgrade path from that because C++ has 
always had serious linking problems.
But the ABIs aren't specified in terms of language interfaces; they are 
architecture-specific.  POSIX kernel interfaces don't need C++ link-level 
compatibility, or even extern "C" compatibility interfaces.  
Similarly on the device side, that's packing command blocks and such, 
byte by byte.  Until a few years ago, GCC was the only compiler ever 
used (or able) to compile the Linux kernel.  It is a feature that it all 
can be compiled with one open source toolchain.  Every aspect can be 
improved.


From that thread I read that those in the Linus camp are fine with 
abstraction, but it has to be their abstraction on their terms.   And 
later in the thread, Theodore Ts'o gave an example of opacity in the 
programming model:


 a = b + "/share/" + c + serial_num;

arguing that you can have absolutely no idea how many memory allocations 
are done, due to type coercions and overloaded operators.

Well, I'd say just write the code in concise notation.  If there are 
memory allocations they'll show up in valgrind runs, for example. Then 
disassemble that function and understand what the memory allocations 
actually are.  If there is a better way to do it, then either change 
abstractions, or improve the compiler to do it more efficiently.   Yes, 
there can be an investment in a lot of stuff. But just defining any 
programming model with a non-obvious performance model as a bad 
programming model is shortsighted advice, especially for developers 
outside of the world of operating systems.   That something is 
non-obvious is not necessarily a bad thing.   It just means a bit more 
depth-first investigation.   At least one can _learn_ something from the 
diversion.


Marcus