Re: [fonc] Stephen Wolfram on the Wolfram Language

2014-09-24 Thread David Leibs
I think Stephen is misrepresenting the Wolfram Language when he says it is a 
big language. He is really talking about the built-in library, which is indeed 
huge.  The language proper is actually simple, powerful, and lispy.
-David

On Sep 24, 2014, at 3:32 PM, Reuben Thomas r...@sc3d.org wrote:

 On 24 September 2014 23:20, Tim Olson tim_ol...@att.net wrote:
 Interesting talk by Stephen Wolfram at the Strange Loop conference:
 
 https://www.youtube.com/watch?v=EjCWdsrVcBM
 
 He goes in the direction of creating a “big” language, rather than a small 
 kernel that can be built upon, like Smalltalk, Maru, etc.
 
 Smalltalk and Maru are rather different: Ian Piumarta would argue, I suspect, 
 that the distinction between "small" and "large" languages is an artificial 
 one imposed by most languages' inability to change their syntax. Smalltalk 
 can't, but Maru can. Here we see Ian making Maru understand Smalltalk, ASCII 
 state diagrams, and other things:
 
 https://www.youtube.com/watch?v=EGeN2IC7N0Q
 
 That's the sort of small kernel you could build Wolfram on.
 
 Racket is a production-quality example of the same thing: 
 http://racket-lang.org
 
 -- 
 http://rrt.sc3d.org
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Task management in a world without apps.

2013-10-31 Thread David Leibs
Hi Chris,
I get your point but I have really grown to dislike that phrase "Worse is 
Better."  Worse is never better.  Worse is always worse, and worse never 
reduces to better under any set of natural rewrite rules. Yes, there are 
advantages in the short term to being first to market, and things that are 
worse can have more mindshare in the arena of public opinion.

"Worse is Better" sounds like some kind of apology to me.

cheers,
-David Leibs

On Oct 31, 2013, at 10:37 AM, Chris Warburton chriswa...@googlemail.com wrote:

 Unfortunately, a big factor is also the first-to-market pressure,
 otherwise known as 'Worse Is Better': you can reduce the effort required
 to implement a system by increasing the effort required to use it. The
 classic example is C vs LISP, but a common one these days is
 multithreading vs actors, coroutines, etc.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Task management in a world without apps.

2013-10-31 Thread David Leibs
In the spirit of equivocation: when I look at the world we live in and note 
the trends, I feel worse, not better.

-David Leibs

On Oct 31, 2013, at 11:10 AM, David Barbour dmbarb...@gmail.com wrote:

 The phrase "worse is better" involves an equivocation - the 'worse' and 
 'better' properties are applied in completely different domains (technical 
 quality vs. market success). But, hate it or not, it is undeniable that the 
 "worse is better" philosophy has been historically successful. 
 
 
 On Thu, Oct 31, 2013 at 12:50 PM, David Leibs david.le...@oracle.com wrote:
 Hi Chris,
 I get your point but I have really grown to dislike that phrase "Worse is 
 Better."  Worse is never better.  Worse is always worse, and worse never 
 reduces to better under any set of natural rewrite rules. Yes, there are 
 advantages in the short term to being first to market, and things that are 
 worse can have more mindshare in the arena of public opinion.
 
 "Worse is Better" sounds like some kind of apology to me.
 
 cheers,
 -David Leibs
 
 On Oct 31, 2013, at 10:37 AM, Chris Warburton chriswa...@googlemail.com 
 wrote:
 
 Unfortunately, a big factor is also the first-to-market pressure,
 otherwise known as 'Worse Is Better': you can reduce the effort required
 to implement a system by increasing the effort required to use it. The
 classic example is C vs LISP, but a common one these days is
 multithreading vs actors, coroutines, etc.
 
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc
 
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-17 Thread David Leibs
I really like your observation about debugging.  The error you see was bad 
copying from another workspace. Totally botched. My email proofreading skills 
are totally lacking as well.  In general I will get everything I try to do 
initially wrong, and if I don't get something very wrong every 30 minutes then 
I am not doing anything.

-David Leibs

On Jun 17, 2012, at 9:49 AM, GrrrWaaa wrote:

 
 On Jun 15, 2012, at 12:17 PM, David Leibs wrote:
 
 As children we spend a lot of time practicing adding up numbers. Humans are 
 very bad at this, if you count making a silly error as bad. Take for example:
 
    365
 +  366
 ------
 
 This requires you to add 5 and 6, write down 1, and carry 1 to the next 
 column; then add 6, 6, and that carried 1, write down 2, and carry a 1 to the 
 next column; finally add 3, 3, and the carried 1, and write down 7.
 This gives you 721 - oops, the wrong answer.  In step 2 I made a totally 
 dyslexic mistake and should have written down a 3.
 
 Ken proposed learning to see things a bit differently: remember that the 
 digits are a vector times another vector of powers.
 Ken would have you see this as a two-step problem with the digits spread out.
 
   3   6   5
 +  3   6   6
 
 
 Then you just add the digits. Don't think about the carries.
 
   3   6   5
 +  3   6   6
 
   6  12  11
 
 
 Now we normalize by dealing with the carry part, moving from right to 
 left in fine APL style. You can almost see the implied loop using residue 
 and n-residue.
 6  12 11
 6  13  0
 7   3  0
 
 Ken believed that this two-stage technique was much easier for people to get 
 right.
 
 I'm not sure the argument holds: the answer should be 731. :-)
 
 But, to be fair, spreading out the calculation like this makes it easier to 
 debug and find the place where it went awry. Ha - I never thought of that 
 before - writing out proofs in math problems is as much debugging as it is 
 verifying! Maybe programming interfaces could help us debug by more readily 
 showing the 'reasoning' behind a particular value or state, the particular 
 data/control-flows that led to it. Like picking up the program-mesh by 
 holding the result value we are interested in, and seeing the connected 
 inputs draping away to the floor.
 
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-17 Thread David Leibs
Thanks for the link.  This thread has had me thinking quite a bit about the 
Central Limit Theorem from probability.

http://en.wikipedia.org/wiki/Central_limit_theorem

It explains why so many of our measurements result in normal distributions.

-David Leibs

On Jun 17, 2012, at 9:36 AM, GrrrWaaa wrote:

 
 On Jun 16, 2012, at 12:07 PM, Miles Fidelman wrote:
 
 Wesley Smith wrote:
 If things are expanding then they have to get more complex, they encompass
 more.
 Aside from intuition, what evidence do you have to back this statement
 up?  I've seen no justification for this statement so far.
 
 As I recall, there was a recent Nobel prize that boiled down to: Increase 
 the energy flowing into a system, and new, more complex, behaviors arise.
 
 Are you thinking of Prigogine's dissipative structures? Nobel laureate in 
 1977.
 http://en.wikipedia.org/wiki/Ilya_Prigogine
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread David Leibs
I have kinda lost track of this thread, so forgive me if I wander off in a 
perpendicular direction.

I believe that things do not have to continually get more and more complex.  
The way out for me is to go back to the beginning and start over (which is what 
this mailing list is all about).
I constantly go back to the beginnings in math and/or physics and try to 
re-understand from first principles.  Of course, every time I do this I get 
less and less far along the material continuum, because the beginnings are so 
darn interesting.

Let me give an example from arithmetic which I learned from Ken Iverson's 
writings years ago.

As children we spend a lot of time practicing adding up numbers. Humans are 
very bad at this, if you count making a silly error as bad. Take for example:

   365
+  366
------

This requires you to add 5 and 6, write down 1, and carry 1 to the next 
column; then add 6, 6, and that carried 1, write down 2, and carry a 1 to the 
next column; finally add 3, 3, and the carried 1, and write down 7.
This gives you 721 - oops, the wrong answer.  In step 2 I made a totally 
dyslexic mistake and should have written down a 3.

Ken proposed learning to see things a bit differently: remember that the digits 
are a vector times another vector of powers.
Ken would have you see this as a two-step problem with the digits spread out.

   3   6   5
+  3   6   6


Then you just add the digits. Don't think about the carries.

   3   6   5
+  3   6   6

   6  12  11


Now we normalize by dealing with the carry part, moving from right to left 
in fine APL style. You can almost see the implied loop using residue and 
n-residue.
6  12 11
6  13  0
7   3  0

Ken believed that this two-stage technique was much easier for people to get 
right.  I adopted it for when I do addition by hand, and it works very well for 
me. What would it be like if we changed the education establishment and used 
this technique?  One could argue that this sort of hand adding of columns of 
numbers is also dated. Let's not go there; I am just using this as an example 
of going back and looking at a beginning that is hard to see because it is 
just too darn fundamental.
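
The two-stage scheme is easy to mechanize. Here is a minimal Python sketch (the 
function names are mine, not Iverson's): stage one adds the digit vectors 
columnwise with no carries, and stage two normalizes the carries from right to 
left, as in the 365 + 366 example above.

```python
def digit_add(a, b):
    """Stage one: add two equal-length digit vectors columnwise, no carries."""
    return [x + y for x, y in zip(a, b)]

def normalize(digits):
    """Stage two: resolve carries right to left, growing the vector if needed."""
    digits = list(digits)
    carry = 0
    for i in range(len(digits) - 1, -1, -1):
        total = digits[i] + carry
        digits[i] = total % 10        # residue: the digit that stays
        carry = total // 10           # the part that moves one column left
    while carry:                      # spill any remaining carry into new columns
        digits.insert(0, carry % 10)
        carry //= 10
    return digits

raw = digit_add([3, 6, 5], [3, 6, 6])
print(raw)             # [6, 12, 11]
print(normalize(raw))  # [7, 3, 1], i.e. 365 + 366 = 731
```

Keeping the two stages separate is what makes the intermediate state 
inspectable: the un-normalized vector [6, 12, 11] is where a slip would show up.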

We need to reduce complexity at all levels and that includes the culture we 
swim in.

cheers,
-David Leibs

On Jun 15, 2012, at 10:58 AM, BGB wrote:

 On 6/15/2012 12:27 PM, Paul Homer wrote:
 
 I wouldn't describe complexity as a problem, but rather an attribute of the 
 universe we exist in, effecting everything from how we organize our 
 societies to how the various solar systems interact with each other.
 
 Each time you conquer the current complexity, your approach adds to it. 
 Eventually all that conquering needs to be conquered itself ...
 
 
 yep.
 
 the world of software is layers upon layers of stuff.
 one thing is made, and made easier, at the cost of adding a fair amount of 
 complexity somewhere else.
 
 this is generally considered a good tradeoff, because the reduction of 
 complexity in things that are seen is perceptually more important than the 
 increase in internal complexity in the things not seen.
 
 although it may be possible to reduce complexity, say by finding ways to do 
 the same things with less total complexity, this will not actually change the 
 underlying issue (or in other cases may come with costs worse than internal 
 complexity, such as poor performance or drastically higher memory use, ...).
 
 
 Paul.
 
 From: Loup Vaillant l...@loup-vaillant.fr
 To: fonc@vpri.org 
 Sent: Friday, June 15, 2012 1:54:04 PM
 Subject: Re: [fonc] The Web Will Die When OOP Dies
 
 Paul Homer wrote:
  It is far more than obvious that OO opened the door to allow massive
  systems. Theoretically they were possible before, but it gave us a way
  to manage the complexity of these beasts. Still, like all technologies,
  it comes with a built-in 'threshold' that imposes a limit on what we can
  build. If we are too exceed that, then I think we are in the hunt for
  the next philosophy and as Zed points out the ramification of finding it
  will cause yet another technological wave to overtake the last one.
 
 I find that a bit depressing: if each tool that tackles complexity
 better than the previous ones leads us to increase complexity (just
 because we can), we're kinda doomed.
 
 Can't we recognize complexity as a problem, instead of an unavoidable
 law of nature?  Thank goodness we have the STEPS project to shed some light.
 
 Loup.
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc
 
 
 
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread David Leibs
Speaking of multiplication: Ken Iverson teaches us to do multiplication by 
using an outer product to build a times table for the digits involved.
+-+--------+
| | 3  6  6|
+-+--------+
|3| 9 18 18|
|6|18 36 36|
|5|15 30 30|
+-+--------+

Now you sum each diagonal:
   (9) (18+18) (18+36+15) (36+30) (30)
    9     36       69       66    30
And just normalize as usual:

    9 36 69 66 30
    9 36 69 69  0
    9 36 75  9  0
    9 43  5  9  0
   13  3  5  9  0
 1  3  3  5  9  0

The multiplication table is easy and just continued practice for your 
multiplication facts.
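
A quick Python sketch of the same procedure (the function names are mine): 
build the times table as an outer product, sum the anti-diagonals, then 
normalize the carries exactly as with the two-stage addition.

```python
def outer_times(a, b):
    """The digit times table: entry (i, j) holds a[i] * b[j]."""
    return [[x * y for y in b] for x in a]

def diagonal_sums(table):
    """Sum each anti-diagonal; diagonal i+j collects products of equal place value."""
    rows, cols = len(table), len(table[0])
    sums = [0] * (rows + cols - 1)
    for i in range(rows):
        for j in range(cols):
            sums[i + j] += table[i][j]
    return sums

def normalize(digits):
    """Resolve carries right to left, growing the vector if needed."""
    digits = list(digits)
    carry = 0
    for i in range(len(digits) - 1, -1, -1):
        total = digits[i] + carry
        digits[i] = total % 10
        carry = total // 10
    while carry:
        digits.insert(0, carry % 10)
        carry //= 10
    return digits

sums = diagonal_sums(outer_times([3, 6, 5], [3, 6, 6]))
print(sums)             # [9, 36, 69, 66, 30]
print(normalize(sums))  # [1, 3, 3, 5, 9, 0], i.e. 365 * 366 = 133590
```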

You don't need much more machinery before you have the kids doing Cannon's 
order-n systolic array algorithm for matrix multiply on the gym floor, with 
their bodies.  This assumes that the dance teacher is coordinating with the 
algorithms teacher. Of course, if there isn't something relevant going on that 
warrants matrix multiply then all is lost. I guess that's a job for the 
motivation teacher. :-)

-David Leibs

On Jun 15, 2012, at 12:57 PM, Pascal J. Bourguignon wrote:

 David Leibs david.le...@oracle.com writes:
 
 I have kinda lost track of this thread so forgive me if I wander off
 in a perpendicular direction.
 
 I believe that things do not have to continually get more and more
 complex.  The way out for me is to go back to the beginning and start
 over (which is what this mailing list is all about).  I constantly go
 back to the beginnings in math and/or physics and try to re-understand
 from first principles.  Of course, every time I do this I get less and
 less far along the material continuum, because the beginnings are
 so darn interesting.
 
 Let me give an example from arithmetic which I learned from Ken
 Iverson's writings years ago.
 
 As children we spend a lot of time practicing adding up
 numbers. Humans are very bad at this, if you count making a silly
 error as bad. Take for example:
 
    365
 +  366
 ------
 
 This requires you to add 5 and 6, write down 1, and carry 1 to the next
 column; then add 6, 6, and that carried 1, write down 2, and carry a
 1 to the next column; finally add 3, 3, and the carried 1, and write down
 7. This gives you 721 - oops, the wrong answer.  In step 2 I made a
 totally dyslexic mistake and should have written down a 3.
 
 Ken proposed learning to see things a bit differently: remember that the
 digits are a vector times another vector of powers.  Ken would have
 you see this as a two-step problem with the digits spread out.
 
   3   6   5
 +  3   6   6
 
 
 Then you just add the digits. Don't think about the carries.
 
   3   6   5
 +  3   6   6
 
   6  12  11
 
 Now we normalize by dealing with the carry part, moving from right
 to left in fine APL style. You can almost see the implied loop using
 residue and n-residue.
 
 6  12 11
 6  13  0
 7   3  0
 
 Ken believed that this two-stage technique was much easier for people
 to get right.  I adopted it for when I do addition by hand, and it works
 very well for me. What would it be like if we changed the education
 establishment and used this technique?  One could argue that this sort
 of hand adding of columns of numbers is also dated. Let's not go
 there; I am just using this as an example of going back and looking at
 a beginning that is hard to see because it is just too darn
 fundamental.
 
 It's a nice way to do additions indeed.
 
 When doing additions mentally, I tend to do them from right to left,
 predicting whether we need a carry or not by looking ahead the next
 column.  Usually carries don't carry over more than one column, but
 even if it does, you only have to remember a single digit at a time.
 
 There are several ways to do additions :-)
 
 
 Your way works as well for subtractions:
 
     3  6  5
  -  3  7  1
  ----------
     0 -1  4
     0 - 10 + 4 = -6
 
     3  7  1
  -  3  6  5
  ----------
     0  1 -4
        10 - 4 = 6
 
 and of course, it's already how we do multiplications too.
 
 
 
 We need to reduce complexity at all levels and that includes the
 culture we swim in.
 
 Otherwise, you can always apply the KISS principle 
 (Keep It Simple Stupid).
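
The columnwise subtraction above can be sketched in Python too (the function 
names are mine): subtract digit by digit without borrowing, then evaluate the 
signed digit vector as a polynomial in 10, which also handles a negative 
overall result like 365 - 371.

```python
def digit_sub(a, b):
    """Subtract digit vectors columnwise; entries may go negative, no borrows."""
    return [x - y for x, y in zip(a, b)]

def evaluate(digits):
    """Interpret a (possibly signed) digit vector as a polynomial in 10."""
    value = 0
    for d in digits:
        value = value * 10 + d    # Horner's rule over the digit vector
    return value

print(digit_sub([3, 6, 5], [3, 7, 1]))            # [0, -1, 4]
print(evaluate([0, -1, 4]))                       # -10 + 4 = -6
print(evaluate(digit_sub([3, 7, 1], [3, 6, 5])))  # 10 - 4 = 6
```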
 
 
 -- 
 __Pascal Bourguignon__ http://www.informatimago.com/
 A bad day in () is better than a good day in {}.
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Extending object oriented programming in Smalltalk

2011-08-18 Thread David Leibs
Your point about politics is so true.
Check out a great classic paper by Mel Conway at:
http://www.melconway.com/Home/Committees_Paper.html

"Any organization that designs a system (defined broadly) will produce a design 
whose structure is a copy of the organization's communication structure."

It's been called Conway's Law.

cheers,
-David


On Aug 18, 2011, at 4:35 AM, karl ramberg wrote:

 The fact that a very powerful idea can be captured in so few lines of code is 
 really mind-blowing.
 Making complex but manageable systems out of it is another subject.
 I find that the bigger and more complex a system grows, the more it gets to be 
 about politics than about the powerful idea.
 
 Thanks for the reading tip
 
 Karl
 
 On Thu, Aug 18, 2011 at 3:41 AM, Alan Kay alan.n...@yahoo.com wrote:
 Take a look at Landin's papers and especially ISWIM ("The Next 700 
 Programming Languages").
 
 You don't so much want to learn Lisp as to learn the idea of Lisp
 
 Cheers,
 
 Alan
 
 From: karl ramberg karlramb...@gmail.com
 
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Wednesday, August 17, 2011 12:00 PM
 
 Subject: Re: [fonc] Extending object oriented programming in Smalltalk
 
 Hi,
 Just reading a Lisp book myself. 
 Lisp seems to be very pure at the bottom level.
 The nesting in parentheses is hard to read and comprehend / debug.
 Things get not so pretty when all sorts of DSLs are made to make it more 
 powerful. 
 The REPL gives it a kind of wing-clipped aura; there is more to computing than 
 text I/O.
 
 Karl
 
 
 On Wed, Aug 17, 2011 at 8:00 PM, DeNigris Sean s...@clipperadams.com wrote:
 Alan,
 
 While we're on the subject, you finally got to me and I started learning 
 LISP, but I'm finding an entire world, rather than a cohesive language or 
 philosophy (Scheme - which itself has many variants, Common LISP, etc). What 
 would you recommend to get it in the way that changes your thinking? What 
 should I be reading, downloading, coding, etc.
 
 Thanks.
 Sean DeNigris
 You wouldn't say that Lisp 1.5 Programmer's Manual is outdated would you?  
 :-)
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc
 
 
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc
 
 
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc
 
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Extending object oriented programming in Smalltalk

2011-08-18 Thread David Leibs
Old Timer Alert!  Ah, 1956. I was seven years old and Robby the Robot from the 
science fiction movie "Forbidden Planet" had just leaped into popular culture. 
Robby was an awesome autonomous AI. The movie was really quite something for 
1956: faster-than-light travel, a cool space ship, 3D printers, an alien 
super-brain race that had disappeared (the Krell), monsters from the Id.

To me Lisp is like something created by the Krell. "As though my ape's brain 
could contain the secrets of the Krell."

I asked John if he had seen the movie and he had. John is "Krell Smart".

-David Leibs

On Aug 18, 2011, at 10:15 AM, Alan Kay wrote:

 One way to try to think about "the idea of Lisp" and the larger interesting 
 issues is to read "the Advice Taker" paper by John McCarthy (ca. 1956-58, 
 "Programs With Common Sense"), which is what got him thinking about 
 interactive intelligent agents, and got him to start thinking about creating 
 a programming language that such agents could be built in.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Extending object oriented programming in Smalltalk

2011-08-17 Thread David Leibs

On Aug 17, 2011, at 8:32 AM, Bert Freudenberg wrote:
 
 There is a paper on PIE (and many other interesting systems) in 
 Barstow/Shrobe/Sandewall's Interactive Programming Environments. Used 
 copies for 1 cent (like many outdated computer books):
 
 http://www.amazon.com/dp/0070038856
 

Outdated!  It's a classic. 
You wouldn't say that the Lisp 1.5 Programmer's Manual is outdated, would you?  :-)

-David

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Extending object oriented programming in Smalltalk

2011-08-17 Thread David Leibs
Hi Sean,
Two books that I like quite a lot are:
Anatomy of Lisp by John Allen.  It's a classic from the golden age.
Lisp in Small Pieces by Christian Queinnec.  It's a modern classic.

-David





On Aug 17, 2011, at 11:00 AM, DeNigris Sean wrote:

 Alan,
 
 While we're on the subject, you finally got to me and I started learning 
 LISP, but I'm finding an entire world, rather than a cohesive language or 
 philosophy (Scheme - which itself has many variants, Common LISP, etc). What 
 would you recommend to get it in the way that changes your thinking? What 
 should I be reading, downloading, coding, etc.
 
 Thanks.
 Sean DeNigris
 You wouldn't say that Lisp 1.5 Programmer's Manual is outdated would you?  
 :-)
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Last programming language

2011-07-17 Thread David Leibs
I couldn't handle his condescending attitude towards goto statements.
I might not use them very often, but when you need one there is nothing better.

-David Leibs

On Jul 17, 2011, at 2:33 PM, Craig Latta wrote:

 
 That talk would have been a whole lot better if he had grounded it
 with a discussion of how constraints are good for creativity. It's how
 he should have spent the time where he went on about memorizing Pi for
 no good reason...
 
 
 -C
 
 --
 Craig Latta
 www.netjam.org/resume
 +31   6 2757 7177
 + 1 415  287 3547
 
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: Terseness, precedence, deprogramming (was Re: [fonc] languages)

2011-06-05 Thread David Leibs
I love APL!  Learning APL is really all about learning the idioms and how to 
apply them.  This takes quite a lot of training time.  Doing this kind of 
training will change the way you think.

Alan Perlis put it well: "A language that doesn't affect the way you think 
about programming is not worth knowing."

There is some old analysis out there indicating that APL is naturally very 
parallel.  Willhoft (1991) claimed that 94 of the 101 primitive operations in 
APL2 could be implemented in parallel, and that 40-50% of APL code in real 
applications was naturally parallel.

R. G. Willhoft, "Parallel expression in the APL2 language," IBM Systems 
Journal 30 (1991), no. 4, 498-512.


-David Leibs

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] What Should The Code Look Like? (was: Show Us The Code!)

2010-12-20 Thread David Leibs
To see how far you can scale visual node programming I recommend looking at 
Pure Data, Quartz Composer, and LabView. Also interesting is Little Big Planet.

On Dec 20, 2010, at 11:07 AM, Brian Gilman wrote:

 
 Clearly there are some gaps in the programming models of this new era.
 How can people express themselves in a mathematical notation that
 isn't bound to 19th century keyboard technology?
 
 I think that the fundamental problem is that keyboards are good for entering 
 text, and text scales very well. 
 
 Artists and musicians tend to heavily favor visual node based programming, 
 which is a better fit for mobile platforms.  Just drag nodes out, and draw 
 connections.  For non-programmers, being able to see the relationships 
 between visual blocks of code is much more intuitive than text.  The problem 
 is, that it doesn't scale very well.  Once a program reaches even a moderate 
 level of complexity, the graph of nodes end up looking like a pile of 
 spaghetti.  If you want to rearrange your program, you end up having to 
 disconnect and reconnect tons of nodes. 
 
 For systems without keyboards, spatial representation of code seems like the 
 intuitive direction to go, and would work regardless of whether the user is 
 using a multitouch tablet, or is wearing a pair of AR glasses.  Getting that 
 to scale however, seems like a very difficult problem. 
 
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] goals

2010-07-09 Thread David Leibs






for example, is a lot of this added code because:
the programmer has little idea what he was doing, and so just wildly  
copy-pasted everywhere and made a big mess?...
or because it has lots of code which is actually beneficial, such as doing  
error checking and building abstractions?


similarly, is a piece of code smaller because:
the programmer is good at getting work done in less code?
or because the code is essentially a tangled mess of hacks?




It isn't that the programmer has little idea of what he is doing.  
Things just take time to be transformed into an optimal form.
There is a good example from the history of math and physics that  
illustrates the point.  "Maxwell's equations" originally referred to a set  
of eight equations published by Maxwell in 1865.  After that the  
number of equations escalated to twenty equations in twenty unknowns  
as people struggled with the implications.  Maxwell wrestled with  
recasting the equations in quaternion form.  Time passed. It was all  
very ugly.  Finally, in 1884, Oliver Heaviside recast Maxwell's math  
from the then-cumbersome form to its modern vector calculus notation,  
thereby reducing the twenty equations in twenty unknowns down to the  
four differential equations in two unknowns that we all love and call  
Maxwell's equations. Heaviside invented the modern notation, giving us  
the tools to make sense of something very profound and useful.  Good  
work on hard things takes time, plus a lot of good people who care.


cheers,
-David Leibs




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] goals

2010-07-09 Thread David Leibs
I am somewhat dyslexic and I don't always read things in the right  
order, so I read

   SLOC/day/programmer

as

   SHLOCK/day/programmer

It fits, in a negative-metric kinda way.  Maybe it is a meme we should  
unleash on our overlings.


-djl

On Jul 9, 2010, at 12:16 PM, John Zabroski wrote:


Just to be clear,

The foremost experts and definitive source on software metrics --  
Fenton and Pfleeger [1] -- do not really support SLOC/day/programmer  
as a good metric for productivity.  It seems to me (from hearing  
reports by others) that most people do not actually read books on  
metrics and instead gravitate towards the simplest ones, regardless  
of effectiveness.


Usually SLOC/day/programmer is a good way, though, to convince your  
boss that a project predicted to be 300,000 lines of brute force  
coding cannot be done in a weekend.  The argument being you  
literally cannot type that fast.


Cheers,
Z-Bo

[1] http://www.amazon.com/Software-Metrics-Norman-E-Fenton/dp/0534956009

On Fri, Jul 9, 2010 at 2:47 PM, Max OrHai max.or...@gmail.com wrote:
Just to clarify, I'm a bit uncomfortable with productivity talk  
here because it seems too narrow and ill-defined. Productivity of  
what exactly? By whom? For whom? To what end? To a specific manager  
of a specific project in a specific development phase, these  
questions may have specific, meaningful answers. When it comes to  
fundamentally rethinking basic tools and practices, I'm not so sure.


Of course, core values must be somewhat vague, to allow them to mesh  
with constantly changing circumstances. Personally, I'd rather  
strive for quality than productivity. I'm generally suspicious  
of premature quantification: just because you can measure something  
doesn't make it meaningful!


It seems to me that, as crufty, haphazard, hidebound, etc. as  
software engineering is today, software engineering  
management (with its productivity metrics such as source lines  
of code per programmer per day) are even worse. We all know code  
quality varies wildly between programmers using the exact same sets  
of tools. Talent and training contribute enormously. However, I  
imagine that everyone on this list can agree that the tools  
themselves matter too, even if it's difficult to quantify that  
difference precisely.


Keep it simple is a widely applicable and successful heuristic. I  
see this project as (largely) an experiment in applying that  
heuristic to the fairly well-defined category of creating the  
personal computing experience, with a depth and breadth impossible  
in the productivity/profit-bound world of commercial software, and a  
consistency and quality level impossible in the traditional  
open-source project. It's just an experiment, though. It's research (or,  
if you prefer, intellectual masturbation). If we already knew the  
outcome, it wouldn't be research, would it?


-- Max


On Fri, Jul 9, 2010 at 10:33 AM, David Leibs  
david.le...@oracle.com wrote:






for example, is a lot of this added code because:
the programmer has little idea what he was doing, and so just  
wildly copy-pasted everywhere and made a big mess?...
or because it has lots of code which is actually beneficial, such as  
doing error checking and building abstractions?


similarly, is a piece of code smaller because:
the programmer is good at getting work done in less code?
or because the code is essentially a tangled mess of hacks?




It isn't that the programmer has little idea of what he is doing.  
Things just take time to be transformed into an optimal form.
There is a good example from the history of math and physics that  
illustrates the point.  "Maxwell's equations" originally referred to a  
set of eight equations published by Maxwell in 1865.  After that the  
number of equations escalated to twenty equations in twenty unknowns  
as people struggled with the implications.  Maxwell wrestled with  
recasting the equations in quaternion form.  Time passed. It was all  
very ugly.  Finally, in 1884, Oliver Heaviside recast Maxwell's math  
from the then-cumbersome form to its modern vector calculus  
notation, thereby reducing the twenty equations in twenty unknowns  
down to the four differential equations in two unknowns that we all  
love and call Maxwell's equations. Heaviside invented the modern  
notation, giving us the tools to make sense of something very  
profound and useful.  Good work on hard things takes time, plus a lot  
of good people who care.


cheers,
-David Leibs





___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman