Re: [fonc] Spec-Driven Self-Testing Code

2010-10-14 Thread Julian Leviston

On 14/10/2010, at 4:50 PM, K. K. Subramaniam wrote:

 On Monday 11 Oct 2010 7:56:02 am Julian Leviston wrote:
 I think this is better off baked in because it would encourage programmers
 (users of the language) to write down what they intend to do before they
 do it. Something most people do whenever they're going to do something
 complicated anyway. It would encourage people to take care and to not code
 things quickly, but it would also provide regression tests.
 This is why asserts were introduced. What you call a spec is nothing but the 
 sequence:
  self assert: self inv and: self p.
 self doSomething.
  self assert: self inv and: self q.
 
 Where inv is the state invariant and p and q are pre- and post-conditions on 
 the parts of the state undergoing modifications. Researchers are interested in 
 inv, p and q, while most practitioners depend on doSomething. assert strikes a 
 balance between the two.
 
 Subbu
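
(For concreteness, a minimal sketch of that inv/p/q sequence as an ordinary method; Account, balance, withdraw: and invariantHolds are hypothetical names, and assert: is assumed to behave like Squeak's Object>>assert:.)

    Account >> withdraw: anAmount
        "inv and precondition p"
        self assert: (self invariantHolds and: [anAmount > 0]).
        "doSomething"
        balance := balance - anAmount.
        "inv and postcondition q"
        self assert: (self invariantHolds and: [balance >= 0])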

I'm well aware of testing frameworks. I think you may have missed what I was 
trying to say.

Sorry about that!

Executable documentation coupled with behavioural testing baked in is what I'm 
after. ie the code won't actually execute without a checksum existing first 
that indicates that the test suite has been run across this code.
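
(As a rough sketch of the kind of gate being described, not a definitive design: TestLedger and runGated: are hypothetical names, and a real mechanism would presumably live in the compiler or image rather than in a wrapper method.)

    Object >> runGated: aSelector
        "Refuse to run aSelector unless a recorded checksum shows the
         test suite was run against the current source of the method."
        | source |
        source := (self class >> aSelector) getSource asString.
        (TestLedger checksumFor: self class selector: aSelector) = source hash
            ifFalse: [self error: 'test suite has not been run against this code'].
        ^ self perform: aSelector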

Julian.


Re: [fonc] Spec-Driven Self-Testing Code

2010-10-14 Thread BGB


- Original Message - 
From: Julian Leviston jul...@leviston.net

To: Fundamentals of New Computing fonc@vpri.org
Sent: Wednesday, October 13, 2010 9:13 PM
Subject: Re: [fonc] Spec-Driven Self-Testing Code



On 11/10/2010, at 12:24 PM, BGB wrote:


--
Does anyone know about a language (possibly something in smalltalk) that 
involves spec-driven development?


A language that stipulates a spec (in code - ie a test) must be written 
before any code can be written, and that can also do self-checks (sort of 
like a checksum) before the code is executed?


Julian.
--

possibly relevant questions:
what would be the merit of imposing such a restriction?
how would one enforce such a restriction?
how could this be done without compromising the ability to use the 
language to get much useful done?

...

IMO, any such restrictions are better left under the control of the 
development process rather than trying to enforce them at the language 
level.




--
The merit of imposing that users write specs first assumes that programmers 
have a plan before they start writing code, and then encourages them to 
become aware of their plan and requirements, explicitly laying them out in a 
series of tests of behaviour. (See http://behaviour-driven.org/ possibly for 
a good explanation of where I'm coming from here).

--

quick skim of some of it...

ok, there seems to be a conflict here...

writing specs/plans and testing quickly/early are very different approaches 
to software design.



in a typical spec or plan-based process, a person will try to work out 
everything in advance, and delay coding until after the spec is relatively 
complete. this is what is usually termed the waterfall method, or BDUF (Big 
Design Up Front).


waterfall: first spec; then code; then test and debug; then deploy.


in a test-driven setup, the tests and code are usually written early and 
alongside each other, and planning is minimal (often just writing out basic 
ideas for what the thing should do).


in this latter form, specs are usually written *after* the code, either to 
document the code or to help identify which directions to go in, ...


this would seem to be more the style advocated by the posted link.


it seems like the posted link jumps between advocating several different 
methodologies.


admittedly, I tend to combine both strategies, usually writing specs for 
individual components or features, but using an incremental process and 
regular testing for large scale features.



--
I'm not advocating enforcement, rather encouragement. As most of us know, 
you cannot happily enforce things on people, let alone programmers - for 
example, Java's nightmarish type system hinders more than it helps (in my 
humble personal opinion). One *can* encourage certain things through the FORM 
of the language/IDE/workspace/etc., though (for example, Smalltalk's class 
browser encourages a programmer to think in terms of classes)... 
Unfortunately, composition might be a better practice most of the time, and it 
*may* be that having a class browser as a default tool encouraged the 
over-use of inheritance that we have seen become quite a dominant 
practice/idea in the OOP world.

--

enforcement also usually only works when there is something well defined to 
enforce, or when something can be enforced by taking away some specific 
feature.



--
What I *am* advocating, though, is a checksum that is written by testing 
suites at various levels that would mean that code wouldn't run unless it 
had at least the most basic testing suite run against it. I'm advocating 
that behaviour-tests should be a mandatory part of development (any 
development process - be it writing a song, programming a program or 
building a house). The best of us as developers in any realm do these kinds 
of requirement-conformance tests when building whatever it is we're 
building ANYWAY, so I see it as simply a formalisation of a hitherto 
unexpressed best-practice.

--

IMO, checksum is also a bad choice of term to use in reference to unit 
tests or similar.
typically, what a checksum does and what a unit test does are very different, 
and it would create unneeded confusion to use one term to describe the 
other.


the problem with the proposed idea is that, until both code and tests can be 
written, one is almost inevitably going to write a test which always passes, 
just so that the code can be run, and in turn tested and debugged.
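
(i.e. the degenerate SUnit-style test someone would write just to unlock the code; FooTest here is a hypothetical TestCase subclass:)

    FooTest >> testPlaceholder
        "Always passes, so any gate on 'tests were run' is satisfied
         without actually exercising the code."
        self assert: true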


traditionally, code will be run regardless of whether it is known to work, 
and the main point of unit testing then is to start throwing up red flags 
that something may need to be looked at or fixed, or to track new 
functionality which should work but doesn't as of yet (if adding a test for a 
new feature may make that test fail, which may break the build, then people 
are going to delay adding the test until after the feature is generally 
known to work, partly defeating some of the purpose of 

Re: [fonc] Spec-Driven Self-Testing Code

2010-10-14 Thread K. K. Subramaniam
On Thursday 14 Oct 2010 11:30:44 am Julian Leviston wrote:
 Executable documentation coupled with behavioural testing baked in is what
 I'm after. ie the code won't actually execute without a checksum existing
 first that indicates that the test suite has been run across this code.
Your requirement is recursive. You seek a verifier machine to gate execution 
of code in a lower-level machine. The distinction you make between 'testing' 
and 'execution' involves intention - something which cannot be coded in 
general. Writing an executable spec is just another form of coding. How would 
you test whether the written spec captured the intention of the tester?

A computing machine that executes documentation is just another machine; 
possibly virtual but still a machine and subject to the same frailties. Alan 
Kay called an object "a recursion on the entire possibilities of the computer" 
- warts and all :-).

Subbu



Re: [fonc] Spec-Driven Self-Testing Code

2010-10-14 Thread Julian Leviston
On 14/10/2010, at 5:56 PM, K. K. Subramaniam wrote:

 On Thursday 14 Oct 2010 11:30:44 am Julian Leviston wrote:
 Executable documentation coupled with behavioural testing baked in is what
 I'm after. ie the code won't actually execute without a checksum existing
 first that indicates that the test suite has been run across this code.
 Your requirement is recursive. You seek a verifier machine to gate execution 
 of code in a lower-level machine. The distinction you make between 'testing' 
 and 'execution' involves intention - something which cannot be coded in 
 general. Writing an executable spec is just another form of coding. How would 
 you test whether the written spec captured the intention of the tester?
 
 A computing machine that executes documentation is just another machine; 
 possibly virtual but still a machine and subject to the same frailties. Alan 
 Kay called an object "a recursion on the entire possibilities of the computer" 
 - warts and all :-).
 
 Subbu


Ah, you jest! :)

Yes, it is recursive, and like all good recursion, it has an out. That out 
is that tests are a special case. The simple case is just that if it breaks in 
any way, it's broken; otherwise it's good. This is always the case for 
testing frameworks.

Humans have to check their own tests, and they have to not break. sigh. :P

Julian.




Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Steve Dekorte

I have to wonder how things might be different if someone had made a tiny, 
free, scriptable Smalltalk for unix before Perl appeared...

BTW, there were rumors that Sun considered using Smalltalk in browsers instead 
of Java but the license fees from the vendors were too high. Anyone know if 
that's true?

On 2010-10-08 Fri, at 11:28 AM, John Zabroski wrote:
 Why are we stuck with such poor architecture?




Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread John Zabroski
I saw Paul Fernhout mention this once on /.
http://developers.slashdot.org/comments.pl?sid=1578224&cid=31429692

He linked to: http://fargoagile.com/joomla/content/view/15/26/

which references:

http://lists.squeakfoundation.org/pipermail/squeak-dev/2006-December/112337.html

which states:

When I became V.P. of Development at ParcPlace-Digitalk in 1996, Bill
 Lyons (then CEO) told me the same story about Sun and VW. According
 to Bill, at some point in the early '90's when Adele was still CEO,
 Sun approached ParcPlace for a license to use VW (probably
 ObjectWorks at the time) in some set top box project they were
 working on. Sun wanted to use a commercially viable OO language with
 a proven track record. At the time ParcPlace was licensing Smalltalk
 for $100 a copy. Given the volume that Sun was quoting, PP gave Sun
 a firm quote on the order of $100/copy. Sun was willing to pay at
 most $9-10/copy for the Smalltalk licenses. Sun was not willing to go
 higher and PP was unwilling to go lower, so nothing ever happened and
 Sun went its own way with its own internally developed language
 (Oak...Java). The initial development of Oak might well have predated
 the discussions between Sun and PP, but it was PP's unwillingness to
 go lower on the price of Smalltalk that gave Oak its green light
 within Sun (according to Bill anyway). Bill went on to lament that
 had PP played its cards right, Smalltalk would have been the language
 used by Sun and the language that would have ruled the Internet.
 Obviously, you can take that with a grain of salt. I don't know if
 Bill's story to me was true (he certainly seemed to think it was),
 but it might be confirmable by Adele. If it is true, it is merely
 another sad story of what might have been and how close Smalltalk
 might have come to universal acceptance.

 -Eric Clayberg


That being said, I have no idea why people think Smalltalk-80 would have
been uniformly better than Java.  I am not saying this to be negative.  In
my view, many of the biggest mistakes with Java were requiring insane legacy
compatibility, and doing it in really bad ways.  Swing should never have
been forced to reuse AWT, for example.  And AWT should never have had a
concrete component model, thus forcing Swing to inherit it (dropping the
rabbit ears, because I see no good explanation for why it had to inherit
AWT's component model via implementation inheritance).  It's hard for me
to even gauge if the Swing developers were good programmers or not, given
that ridiculously stupid constraint.  It's not like Swing even supported
phones; it was never in J2ME.  The best I can conclude is that they were not
domain experts, but who really was at the time?

On Thu, Oct 14, 2010 at 6:14 PM, Steve Dekorte st...@dekorte.com wrote:


 I have to wonder how things might be different if someone had made a tiny,
 free, scriptable Smalltalk for unix before Perl appeared...

 BTW, there were rumors that Sun considered using Smalltalk in browsers
 instead of Java but the license fees from the vendors were too high. Anyone
 know if that's true?

 On 2010-10-08 Fri, at 11:28 AM, John Zabroski wrote:
  Why are we stuck with such poor architecture?




Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Pascal J. Bourguignon


On 2010/10/15, at 00:14 , Steve Dekorte wrote:



I have to wonder how things might be different if someone had made a  
tiny, free, scriptable Smalltalk for unix before Perl appeared...


There has been GNU Smalltalk for a long time, AFAIR since before Perl, and it 
was quite well adapted to the unix environment.


It would certainly qualify as tiny since it lacked any big GUI 
framework; it is obviously free in all meanings of the word, and it 
is at its best for writing scripts.



My point is that it hasn't changed anything and nothing else would have.


BTW, there were rumors that Sun considered using Smalltalk in  
browsers instead of Java but the license fees from the vendors were  
too high. Anyone know if that's true?


No idea, but since they invented Java, they could have written their own 
implementation of Smalltalk at a much lower cost.


--
__Pascal Bourguignon__
http://www.informatimago.com






Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Duncan Mak
On Thu, Oct 14, 2010 at 6:51 PM, John Zabroski johnzabro...@gmail.com wrote:

 That being said, I have no idea why people think Smalltalk-80 would have
 been uniformly better than Java.  I am not saying this to be negative.  In
 my view, many of the biggest mistakes with Java were requiring insane legacy
 compatibility, and doing it in really bad ways.  Swing should never have
 been forced to reuse AWT, for example.  And AWT should never have had a
 concrete component model, thus forcing Swing to inherit it (dropping the
 rabbit ears, because I see no good explanation for why it had to inherit
 AWT's component model via implementation inheritance).  It's hard for me
 to even gauge if the Swing developers were good programmers or not, given
 that ridiculously stupid constraint.  It's not like Swing even supported
 phones; it was never in J2ME.  The best I can conclude is that they were not
 domain experts, but who really was at the time?


I started programming Swing a year ago and spent a little time learning its
history when I first started. I was able to gather a few anecdotes, and they
have fascinated me.

There were two working next-generation Java GUI toolkits at the time of
Swing's conception - Netscape's IFC and Lighthouse Design's LFC - both
toolkits were developed by ex-NeXT developers and borrowed heavily from
AppKit's design. IFC even had a design tool that mimicked Interface Builder
(which still lives on today in Cocoa).

Sun first acquired Lighthouse Design, then decided to join forces with
Netscape - with two proven(?) toolkits, the politics worked out such that
all the AWT people at Sun ended up leading the newly-joined team, and the
working code from the other parties was discarded, and from this, Swing was
born.

http://talblog.info/archives/2007/01/sundown.html
http://www.noodlesoft.com/blog/2007/01/23/the-sun-also-sets/

-- 
Duncan.


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Jecel Assumpcao Jr.
Pascal J. Bourguignon wrote:
 No idea, but since they invented Java, they could have at a much lower  
 cost written their own implementation of Smalltalk.

or two (Self and Strongtalk).

Of course, Self had to be killed in favor of Java since Java ran in just
a few kilobytes while Self needed a 24MB workstation and most of Sun's
clients still had only 8MB (PC users were even worse, at 4MB and under).

-- Jecel




Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread John Zabroski
Wow!  Thanks for that amazing nugget of Internet history.

Fun fact: Tony Duarte wrote the book Writing NeXT Programs under the
pseudonym Ann Weintz because supposedly Steve Jobs was so secretive that he
told employees not to write books about the ideas in NeXT's GUI.  See:
http://www.amazon.com/Writing-Next-Programs-Introduction-Nextstep/dp/0963190105/ where
Tony comments on it.



On Thu, Oct 14, 2010 at 7:53 PM, Duncan Mak duncan...@gmail.com wrote:

 On Thu, Oct 14, 2010 at 6:51 PM, John Zabroski johnzabro...@gmail.com wrote:

 That being said, I have no idea why people think Smalltalk-80 would have
 been uniformly better than Java.  I am not saying this to be negative.  In
 my view, many of the biggest mistakes with Java were requiring insane legacy
 compatibility, and doing it in really bad ways.  Swing should never have
 been forced to reuse AWT, for example.  And AWT should never have had a
 concrete component model, thus forcing Swing to inherit it (dropping the
 rabbit ears, because I see no good explanation for why it had to inherit
 AWT's component model via implementation inheritance).  It's hard for me
 to even gauge if the Swing developers were good programmers or not, given
 that ridiculously stupid constraint.  It's not like Swing even supported
 phones; it was never in J2ME.  The best I can conclude is that they were not
 domain experts, but who really was at the time?


 I started programming Swing a year ago and spent a little time learning its
 history when I first started. I was able to gather a few anecdotes, and they
 have fascinated me.

 There were two working next-generation Java GUI toolkits at the time of
 Swing's conception - Netscape's IFC and Lighthouse Design's LFC - both
 toolkits were developed by ex-NeXT developers and borrowed heavily from
 AppKit's design. IFC even had a design tool that mimicked Interface Builder
 (which still lives on today in Cocoa).

 Sun first acquired Lighthouse Design, then decided to join forces with
 Netscape - with two proven(?) toolkits, the politics worked out such that
 all the AWT people at Sun ended up leading the newly-joined team, and the
 working code from the other parties was discarded, and from this, Swing was
 born.

 http://talblog.info/archives/2007/01/sundown.html
 http://www.noodlesoft.com/blog/2007/01/23/the-sun-also-sets/

 --
 Duncan.



Re: [fonc] Growing Objects?

2010-10-14 Thread Josh McDonald
I'd say the biggest problem is more in the selection than the generation /
mutation. In the natural world, it's easy to determine the winner - he passes on
more of his genes. But if we've got two potential solutions, neither of
which actually passes the test, how do we select which to continue mutating,
and which to let die? And when you've got two that *do* pass the test, how
do you select between them? Size? Running time?
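
(One hedged way to break those ties, assuming a hypothetical Candidate object that knows how many tests it passes, its code size and its running time: prefer tests passed, then smaller, then faster.)

    Candidate >> isFitterThan: other
        "Lexicographic fitness: tests passed, then code size, then run time."
        self testsPassed ~= other testsPassed
            ifTrue: [^ self testsPassed > other testsPassed].
        self codeSize ~= other codeSize
            ifTrue: [^ self codeSize < other codeSize].
        ^ self runTime < other runTime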

It would make an interesting project for a year or two, though :)

-Josh

On 15 October 2010 11:20, Casey Ransberger casey.obrie...@gmail.com wrote:

 The previous thread about testing got me thinking about this again. One of
 the biggest problems I have in the large with getting developers to write
 tests is the burden of maintaining the tests when the code changes.

 I have this wacky idea that we need the tests more than the dev code; it
 makes me wish I had some time to study prolog.

 I wonder: what if all we did was write the tests? What if we threw some
 kind of genetic algorithm or neural network at the task of making the tests
 pass?

 I realize that there are some challenges with the idea: what's the DNA of a
 computer program look like? Compiled methods? Pure functions? Abstract
 syntax trees? Objects? Classes? Prototypes? Source code fragments? How are
 these things composed, inherited, and mutated?

 I've pitched the idea over beer before; the only objections I've heard have
 been of the form that's computationally expensive and no one knows how to
 do that.

 Computational expense is usually less expensive than developer time these
 days, so without knowing exactly *how* expensive, it's hard to buy that. And
 if no one knows how to do it, it could be that there aren't enough of us
 trying:)

 Does anyone know of any cool research in this area?




-- 
Therefore, send not to know For whom the bell tolls. It tolls for thee.

Josh 'G-Funk' McDonald
   -  j...@joshmcdonald.info
   -  http://twitter.com/sophistifunk
   -  http://flex.joshmcdonald.info/


Re: [fonc] Growing Objects?

2010-10-14 Thread Julian Leviston

On 15/10/2010, at 12:20 PM, Casey Ransberger wrote:

 The previous thread about testing got me thinking about this again. One of 
 the biggest problems I have in the large with getting developers to write 
 tests is the burden of maintaining the tests when the code changes. 
 
 I have this wacky idea that we need the tests more than the dev code; it 
 makes me wish I had some time to study prolog. 
 
 I wonder: what if all we did was write the tests? What if we threw some kind 
 of genetic algorithm or neural network at the task of making the tests pass?
 
 I realize that there are some challenges with the idea: what's the DNA of a 
 computer program look like? Compiled methods? Pure functions? Abstract syntax 
 trees? Objects? Classes? Prototypes? Source code fragments? How are these 
 things composed, inherited, and mutated?
 
 I've pitched the idea over beer before; the only objections I've heard have 
 been of the form that's computationally expensive and no one knows how to 
 do that.
 
 Computational expense is usually less expensive than developer time these 
 days, so without knowing exactly *how* expensive, it's hard to buy that. And 
 if no one knows how to do it, it could be that there aren't enough of us 
 trying:)
 
 Does anyone know of any cool research in this area?


This is quite interesting to me, too, because if you think of how we've built 
programming libraries and frameworks - in granular, small-chunked ways - why not 
have libraries of requirements and tests and such? If we match these two up, 
then we don't even need any form of neural network to build code for us - the 
corresponding and matching dev code has already been written many times before 
(this is what libraries and frameworks ARE, is it not?).

Surely if the point of a library is to build a set of reproducible, similar code 
that we can map to any problem domain of a similar nature, then the 
corresponding requirements suites would have to come along for the ride, in 
the form of similar tests - if it were baked in at the compiler or 
interpreter level.

I have a similar problem with maintenance... and this is the biggest drawback 
to behaviour-driven or test-driven development... it requires writing more 
code... the bit that is really good, though, is that you don't spend as much 
time hunting down and fixing bugs, most importantly regression errors. This is a 
big win, but it's preventative and therefore at least somewhat un-measurable, 
and therefore un-marketable. As we are in a marketing-based society (which I am 
noticing is slowly changing), anything that can't be measured often takes a 
sad-faced back-seat. It's a pity, because I'd wager that the most important 
things that we have as humans more often than not can't be measured.

However - the trouble with starting with requirements first is that it doesn't 
provide humans with that instant or near-instant feedback that gives a happy 
emotional reaction which says "I'm making progress". I'm wagering that the 
reason for this is that we start too late. If writing behaviour requirements 
and testing before writing dev code is simply part of the programming process, 
then I wager it will be much easier... if learnt from the ground up.

For example, when learning C, one has to decide what type a variable is before 
one uses it. This isn't a requirement in Smalltalk, where you merely have to 
know about behaviours (i.e. if it responds to the message printToScreen, then 
it's fine). Thus, one of the constraints of writing C is that you are limited 
in this way by the language forcing you to make decisions about type. 
This has fairly obvious refactoring ramifications, and yet because it's part of 
the language, it's simply taken on board. If you want to change the type of some 
fairly ubiquitous variable, it requires a LOT of work. People don't see 
these differences very clearly or cleanly.
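
(A small sketch of the Smalltalk side of that point; Reporter and report: are hypothetical names, and printToScreen is the hypothetical message from the paragraph above:)

    Reporter >> report: anObject
        "Only the behaviour matters: anything that responds to
         printToScreen is acceptable, regardless of its class."
        (anObject respondsTo: #printToScreen)
            ifTrue: [anObject printToScreen]
            ifFalse: [Transcript show: anObject printString; cr]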

Julian.


Re: [fonc] Growing Objects?

2010-10-14 Thread Faré
On 14 October 2010 21:20, Casey Ransberger casey.obrie...@gmail.com wrote:
 The previous thread about testing got me thinking about this again. One of 
 the biggest problems I have in the large with getting developers to write 
 tests is the burden of maintaining the tests when the code changes.

 I have this wacky idea that we need the tests more than the dev code; it 
 makes me wish I had some time to study prolog.

 I wonder: what if all we did was write the tests? What if we threw some kind 
 of genetic algorithm or neural network at the task of making the tests pass?

 I realize that there are some challenges with the idea: what's the DNA of a 
 computer program look like? Compiled methods? Pure functions? Abstract syntax 
 trees? Objects? Classes? Prototypes? Source code fragments? How are these 
 things composed, inherited, and mutated?

 I've pitched the idea over beer before; the only objections I've heard have 
 been of the form that's computationally expensive and no one knows how to 
 do that.

 Computational expense is usually less expensive than developer time these 
 days, so without knowing exactly *how* expensive, it's hard to buy that. And 
 if no one knows how to do it, it could be that there aren't enough of us 
 trying:)

 Does anyone know of any cool research in this area?

For the low-hanging fruit, see "programming by example".

For the general case, search for (Solomonoff) Induction, and see
notably recent work by Noah Goodman.

If you're going to explore this area, be sure to come back to tell
your experiences.

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
I discovered a few years ago that happiness was something you put into life,
not something you get out of it — and I was transformed.
