Re: [fonc] Unsolved problem in computer science? Fixing shortcuts.

2014-10-09 Thread Pascal J. Bourguignon
Daniel W Gelder daniel.w.gel...@gmail.com writes:

 The original question seems to be how to maintain links when the file
 is moved or renamed. Perhaps the file could have a unique ID in the
 file system, and the link would try the given pathname, but if it's
 not there, try the unique ID. Would that work? 

That's about what is done by MacOS and MacOSX (at least on HFS, I don't
know if they do the same on UFS).

This lets aliases keep working when you move the target file.  So
aliases, which in one way are like symbolic links (if you delete the
target file and recreate it at the same path, then the alias will refer
to the new file), actually behave more like hard links in that you can
move the target file, and the alias will still refer to it.
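The pathname-then-unique-ID fallback described in the question can be sketched as follows.  This is only an illustration: the `id_registry`, `create_file`, `move_file` and `resolve` names are invented, and a real file system would keep the ID-to-path mapping in its own metadata, updating it on every rename.

```python
import os, tempfile

# Hypothetical registry mapping a stable file ID to the file's current
# path; stands in for metadata a real file system would maintain.
id_registry = {}

def create_file(path, file_id, text):
    with open(path, "w") as f:
        f.write(text)
    id_registry[file_id] = path

def move_file(file_id, new_path):
    os.rename(id_registry[file_id], new_path)
    id_registry[file_id] = new_path     # the file system tracks the move

def resolve(link):
    """Try the stored pathname first; fall back to the unique ID."""
    if os.path.exists(link["path"]):
        return link["path"]
    return id_registry.get(link["id"])  # may be None if truly gone

d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
create_file(a, file_id=42, text="hello")
link = {"path": a, "id": 42}

move_file(42, os.path.join(d, "b.txt"))
print(resolve(link))   # the link survives the move, found via the ID
```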

Now the question is: what happens if you move away the original target,
and recreate a new one at the old path?

As a MacOS and then MacOSX user, I DO NOT KNOW!

I would have to try it.

The point is that the semantics of symbolic links and of hard links are
clear and easy to understand and manipulate.
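Those semantics are easy to demonstrate with a small experiment on a unix system: a hard link survives a move of the target (it names the inode), while a symbolic link dangles and then picks up whatever file is recreated at the old path (it names the path).

```python
import os, tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "orig.txt")
with open(orig, "w") as f:
    f.write("v1")

sym = os.path.join(d, "sym")
hard = os.path.join(d, "hard")
os.symlink(orig, sym)   # a symbolic link refers to the *path*
os.link(orig, hard)     # a hard link refers to the *inode*

# Move the target away: the symlink dangles, the hard link still works.
moved = os.path.join(d, "moved.txt")
os.rename(orig, moved)
symlink_ok = os.path.exists(sym)        # False: the path no longer resolves
hard_ok = open(hard).read() == "v1"     # True: same inode, same content

# Recreate a new file at the old path: the symlink now refers to it.
with open(orig, "w") as f:
    f.write("v2")
print(open(sym).read())   # "v2" -- the symlink follows the path
print(open(hard).read())  # "v1" -- the hard link kept the old inode
```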

The semantics of MacOS and MacOSX aliases are not.

And I don't know anything about links in MS-Windows (I'd expect them to
work like broken Mac aliases).


There's some level of DWIM in that.


The question should be whether we want a system that DWIMs, or whether
we want a system that implements simple and composable operations that
can easily be understood and used.


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
“The factory of the future will have only two employees, a man and a
dog. The man will be there to feed the dog. The dog will be there to
keep the man from touching the equipment.” -- Carl Bass CEO Autodesk
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Unsolved problem in computer science? Fixing shortcuts.

2014-10-09 Thread Pascal J. Bourguignon

Josh McDonald j...@joshmcdonald.info writes:

 Why should links be in the filesystem, rather than an application /
 UI construct? 

For a lot of reasons.  But the question is justified.

- because it would be more modular, and represent more code reuse, to
  factor out the management of links into the file system module,
  rather than having it in each application.

- because that would provide the user with a more consistent user
  experience (both look and feel), than when each application implements
  its own semantics and user interface to manage files (as we can see on
  the horrible iOS and Android systems).

- because providing a generic tool to manipulate files lets users
  control their data, contrary to putting that control in the
  application, which (unless it's a GPL application that is readable,
  understandable, modifiable and compilable by the user) is actually a
  proxy for the commercial or political interests that provide the
  application to the slave^W user.  Notice the qualifiers; the GPL is
  not enough: how many users can read, understand, modify and compile
  the source of the GPL applications they use (take for example
  LibreOffice or emacs)?  The points here are that:
  
1- applications are not written in languages that are readable by the
   (non programmer) users.

2- applications are not written in a way that is understandable by
   their users.

3- and therefore, applications are not modifiable and compilable by
   users.

  Therefore, embedding important features such as file system management
  into applications is strongly restrictive of users' freedom, taking
  control over users' data.

  The GPL idea would be that users could contract programmers to help
  them with those points.  How many times have users contracted you to
  read GPL programs, or to modify them? (cf. the recent openssl and bash bugs).

  Until there are a lot of changes in the way applications are written,
  and in the way they are licensed and distributed, embedding file
  system management features in applications (or in the frameworks
  used by applications, as Cocoa and iOS try to do) is very bad.


 Why should there be 1 filesystem, rather than 0, or
 thousands? 

Indeed.  There can be as many file systems as you wish, and a system
should provide a way to manage as many file systems as you wish.

Unix systems let you do that, with the mount(2)/umount(2) and
chroot(2) primitives.  Linux helps even more with bind, move, and
shared mounts.


But otherwise, this question evokes capability-based systems, which
often provide explicitly set-up per-process directories to give access
explicitly to authorized files.  Indeed, this seems to be a good way to
avoid a lot of security problems.  The question is how to design a user
interface letting us do that conveniently enough.  (And again, there's
the question of trusting the programs that perform those tasks anyway:
can those programs be read and modified by the users, given the
required capabilities?)
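On unix, the openat(2) family already gives a taste of per-process directory capabilities: a directory file descriptor acts as the capability, and opens are made relative to it.  A small Python sketch of the idea (the `sandbox` name is invented; note that absolute paths still bypass `dir_fd`, so this illustrates the mechanism but is not a real sandbox):

```python
import os, tempfile

# A per-process "directory capability": a file descriptor for one
# directory; subsequent opens are made relative to it (openat-style).
sandbox = tempfile.mkdtemp()
with open(os.path.join(sandbox, "allowed.txt"), "w") as f:
    f.write("ok")

dirfd = os.open(sandbox, os.O_RDONLY)            # the capability
fd = os.open("allowed.txt", os.O_RDONLY, dir_fd=dirfd)
data = os.read(fd, 10)                           # read via the capability
print(data)
os.close(fd)
os.close(dirfd)
```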




 Why must a filesystem be a DAG + links, rather than some
 other graph, or a soup with a lot of queries? 

See my previous answer.  Again, you're right to ask the question.  Trees
are but a simple way to manage files where the human user keeps control
of them.

Granted, when managing libraries of millions of pictures or mp3 songs,
other organization schemes, more automatic and more multidimensional,
may be required.

This becomes more a database than a (classic) file system.  Should the
system integrate database systems to manage files (like old mainframe
systems), or should unix stay a uni-user simple system (unix) instead of
becoming a multi-user complex system (multics)?  Just joking anyway;
I'm on the Lisp Machine side of things.


My point here is that it is a difficult problem to solve satisfactorily:

- you need to provide a generic file system module, independent of
  applications, to let users keep ownership of their files despite
  using proprietary applications,

- you now need to provide a database system that works for all kinds of
  different applications with different requirements,

- nowadays, you might also need to integrate remote files (cloud
  storage, etc.) behind this database API.
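To illustrate the database view ("a soup with a lot of queries"), here is a toy "file soup" in which files are records with attributes, found by query rather than by path.  All names and attributes are invented for illustration:

```python
# A toy "file soup": files are records with attributes, found by query
# rather than by walking a directory tree.
files = [
    {"name": "a.mp3", "type": "audio", "artist": "X", "year": 1999},
    {"name": "b.jpg", "type": "image", "year": 2003},
    {"name": "c.mp3", "type": "audio", "artist": "Y", "year": 2003},
]

def query(**attrs):
    """Return every file whose attributes match all the given ones."""
    return [f for f in files
            if all(f.get(k) == v for k, v in attrs.items())]

print([f["name"] for f in query(type="audio")])   # the audio files
print([f["name"] for f in query(year=2003)])      # everything from 2003
```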


 Why should a link simply be a file pretending to be in two
 directories at once, rather than a data-carrying relationship between
 two nodes in the graph?


Well, in the current situation, links do carry information: their name,
and a few file system attributes that are link-specific.  An application
can take advantage of this.  An example is scripts that are called
through different symbolic link names and perform different actions
accordingly.
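That argv[0] dispatch (the classic trick used by busybox, or by gzip/gunzip) can be sketched as follows.  The `squeeze` and `unsqueeze` names are hypothetical: they are the names you would give two symbolic links pointing at the same script.

```python
import os
import sys

def compress():
    return "compressing"

def decompress():
    return "decompressing"

# Map of invocation names to behaviors; create symlinks named "squeeze"
# and "unsqueeze" pointing at this one script to select between them.
ACTIONS = {"squeeze": compress, "unsqueeze": decompress}

def dispatch(argv0):
    """Pick the behavior from the name the program was invoked under."""
    return ACTIONS.get(os.path.basename(argv0), lambda: "unknown")()

if __name__ == "__main__":
    print(dispatch(sys.argv[0]))
```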

But granted, in a database view of things, you might want to add
other attributes.



 I think these are much more interesting questions :)

Yes.

Re: [fonc] About the reduce of complexity in educating children to program

2014-09-19 Thread Pascal J. Bourguignon
Iliya Georgiev ikgeorg...@gmail.com writes:

 Hello,
 I am addressing this letter mainly to Mr. Alan Kay and his fellows at
 VPRI. I have an idea how to reduce complexity in educating children
 to program. This seems to be a part of a goal of the VPRI to improve
 powerful ideas education for the world's children.

 But in case my idea turns into success, a moral hazard emerges. If
 the children (6-14 years old) understand things better and can even
 program, can they become a victim of labor exploitation? Up to now
 they could be exploited physically. From now on they could be
 exploited mentally. OK, in the north in so-called developed countries
 they may be protected, but in the south...

 On the other side, don't we owe to the tomorrow people the
 possibility to understand the world we leave to them? Or they will be
 savages that use tools, but do not know how they work.

 So if you want to wear the burden of the moral hazard, I will send
 the description of my idea to you and help with what I can. You will
 judge if it is worth doing.  It would be easier if people
 worked cooperatively. That is a lesson children should learn too. The
 software could be made by one person, but there may be
 more challenges than one thinks. In case you agree to do it I will
 want you to publish online the results of the experiment. And if
 possible to make the program run in a web browser and
 to release it freely too, just as you did in some of your recent
 experiments.

 It is strange that, unlike most scientists, I will be equally happy
 with the success or failure of my idea.

This is a choice only you can make (or a trusted friend who could keep
it secret, but anything known by more than one person is not a secret
anymore).

In my experience, even dangerous ideas that you refrain from
communicating are soon discovered and published by others.

Or rather, there are some people whose sole purpose is to find and
exploit anything and any idea they can, and they will have outguessed
you already.

So if your idea can do some good, and if there can be some good people
that may use it for good, instead of using it as an evil weapon, perhaps
it would be worth sharing it.



Re: [fonc] 90% glue code

2013-04-18 Thread Pascal J. Bourguignon
Alan Kay alan.n...@yahoo.com writes:

 Hi David

 This is an interesting slant on a 50+ year old paramount problem (and
 one that is even more important today).

 Licklider called it the communicating with aliens problem. He said
 50 years ago this month that if we succeed in constructing the
 'intergalactic network' then our main problem will be learning how to
 'communicate with aliens'. He meant not just humans to humans but
 software to software and humans to software. 

 (We gave him his intergalactic network but did not solve the
 communicating with aliens problem.)

 I think a key to finding better solutions is to -- as he did --
 really push the scale beyond our imaginations -- intergalactic --
 and then ask how can we *still* establish workable communications of
 overlapping meanings?.

 Another way to look at this is to ask: What kinds of prep *can* you
 do *beforehand* to facilitate communications with alien modules?

I don't think that in this universe, intergalactic communication
(assuming the message is transmitted) would be more difficult than
intragalactic communication.  I mean, I don't expect more variability in
intelligent forms in a different galaxy than in the same galaxy,
because we assume the same physical laws apply to the whole universe,
and some overall homogeneity in the universe's composition.

On the other hand, I'd expect software, i.e. AI, to be more alien than
most intelligent life forms we'll ever encounter.  We will probably
have to work hard to make artificial intelligence close enough to
ourselves, or to make it stay that way.

And if you really want to push beyond our imaginations, inter-universe
communication would be a real challenge (again, assuming the messages
can go through).

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)

2013-04-15 Thread Pascal J. Bourguignon
David Barbour dmbarb...@gmail.com writes:

 On Sun, Apr 14, 2013 at 1:23 PM, Pascal J. Bourguignon 
 p...@informatimago.com wrote:

 David Barbour dmbarb...@gmail.com writes:

  On Apr 14, 2013 9:46 AM, Tristan Slominski tristan.slomin...@gmail.com wrote:
 
  A mechanic is a poor example because frame of reference is almost
  irrelevant in Newtonian view of physics.
 
  The vast majority of information processing technologies allow you to
  place, with fair precision, every bit in the aether at any given
  instant. The so-called Newtonian view will serve more precisely and
  accurately than dubious metaphors to light cones.

 What are you talking about???


 I don't know how to answer that without repeating myself, and in this
 case it's a written conversation. Do you have a more specific
 question? Hmm. At a guess, I'll provide an answer that might or might
 not be to the real question you intended: The air-quotes around
 Newtonian are because (if we step back in context a bit) the
 context is Tristan is claiming that any knowledge of synchronization
 is somehow 'privileged'. (Despite the fact nearly all our technology
 relies on this knowledge, and it's readily available at a glance, and
 does not depend on Newtonian anything.)

 And I've seen Grace Hopper's video on nanoseconds before. If you
 carry a piece of wire of the right length, it isn't difficult to say
 where light carrying information will be after a few nanoseconds. :D

I think that one place where light cone considerations are involved is
with caches in multi-processor systems.  If all processors could have
instantaneous knowledge of what the views of the other processors are
about memory, there wouldn't be any cache coherence problem.  But light
speed, or rather information transmission speed, is not infinite, hence
the appearance of light cones, or light-cone-like phenomena.
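A toy model of that point (all names invented, and real coherence protocols such as MESI are far more involved): each "processor" keeps a private cache of shared memory, and only learns of another processor's write once an invalidation message has "arrived".  Until then, it reads a stale value, exactly because the information is not transmitted instantaneously.

```python
shared = {"x": 0}

class Processor:
    """Toy processor with a private cache of the shared memory."""
    def __init__(self):
        self.cache = {}
        self.inbox = []                 # invalidations not yet applied
    def read(self, key):
        self.drain()                    # apply arrived invalidations
        if key not in self.cache:
            self.cache[key] = shared[key]   # cache miss: fetch
        return self.cache[key]
    def write(self, key, value, others):
        shared[key] = value
        self.cache[key] = value
        for p in others:
            p.inbox.append(key)         # send an invalidation message
    def drain(self):
        for key in self.inbox:
            self.cache.pop(key, None)
        self.inbox.clear()

p1, p2 = Processor(), Processor()
p2.read("x")                    # p2 caches x == 0
p1.write("x", 1, others=[])     # invalidation still "in flight" to p2
stale = p2.read("x")            # p2 still sees 0: no message arrived yet
p1.write("x", 2, others=[p2])   # now the invalidation reaches p2
fresh = p2.read("x")            # p2 re-fetches and sees 2
print(stale, fresh)
```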




Re: [fonc] holy grail of FONC?

2013-04-14 Thread Pascal J. Bourguignon
John Carlson yottz...@gmail.com writes:

 My coworker actually delivered a system with programmer's undo; it
 was called a reversible debugger in 1993--before IDEs were popular. 

There's also the more recent debugging backward in time:
http://www.youtube.com/watch?v=xpI8hIgOyko

With the capabilities of current machines, this can be done rather
easily.  The challenge was to do it on machines where programs were as
big as memory, if not bigger.
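The snapshot-before-each-step approach behind such tools is easy to sketch on today's machines.  This is a deliberately naive illustration (real back-in-time debuggers record far more cheaply than copying the whole state):

```python
class ReversibleVM:
    """Record a snapshot of the state before every step, so execution
    can also be stepped backward in time."""
    def __init__(self, state):
        self.state = dict(state)
        self.history = []
    def step(self, fn):
        self.history.append(dict(self.state))  # snapshot, then mutate
        fn(self.state)
    def back(self):
        self.state = self.history.pop()        # undo the last step

vm = ReversibleVM({"x": 0})
vm.step(lambda s: s.update(x=1))
vm.step(lambda s: s.update(x=2))
vm.back()                    # step backward in time
print(vm.state["x"])         # back to 1
```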



Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-31 Thread Pascal J. Bourguignon
Carl Gundel ca...@psychesystems.com writes:

 “If there are contradictions in the design, the program shouldn't
 compile.”

  

 How can a compiler know how to make sense of domain specific
 contradictions?  I can only imagine the challenges we would face if
 compilers operated in this way.

Contradictions are often not really contradictions.

It's a question of representation, that is, of mapping the domain
to some other domain, usually a formal system.

Now we know that a formal system (expressive enough to contain
arithmetic) cannot be at the same time complete and consistent, but
nothing prevents an automatic system from working with an incomplete
system or an inconsistent one (or even a system that's both incomplete
and inconsistent).

The only thing is that sometimes you may reach conclusions such as 1=2,
but if you expect them, you can deal with them.  We do every day.

Notably, by modifying the mapping between the domain and the formal
system: for different parts of the domain, you can use different formal
systems, or avoid some axioms or theorems leading to a contradiction, to
find some usable conclusion.


Daily, we use formal rules that are valid only in some context.  The
conclusions we reach can easily be invalidated if the context is wrong
for the application of those rules.  If we tried to take into account
all the possible rules, we'd soon get inconsistencies.  But by
restricting the mapping of the domain to some contextual rules, we can
reach usable conclusions (most of the time).  When a conclusion doesn't
match the domain, we may ask where the error is, and often it's just
the context that was wrong, not the rules.


We will have to embrace Artificial Intelligence, even in compilers, eventually.



Re: [fonc] [talk] Cool Code - Kevlin Henney

2012-12-02 Thread Pascal J. Bourguignon
John Nilsson j...@milsson.nu writes:

 Isn't the pattern language literature exactly that? An effort to
 typeset and edit interesting design artifacts.

Unless you're programming in Lisp(*), reading a program written with
patterns is like looking at the waveform of Hello world! said aloud.


(*) See:
http://groups.google.com/group/comp.lang.lisp/msg/ee09f8475bc7b2a0
http://groups.google.com/group/comp.programming/msg/9e7b8aaec1794126



Re: [fonc] How it is

2012-10-03 Thread Pascal J. Bourguignon
Loup Vaillant l...@loup-vaillant.fr writes:

 Pascal J. Bourguignon writes:
 The problem is not the sources of the message.  It's the receivers.

 Even if it's true, it doesn't help.  Unless you see that as an advice
 to just give up, that is.

 Assuming we _don't_ give up, how can we reach even those that won't
 listen?  I only have two answers: trick them, or force them.  Most
 probably a killer-something, followed by the revelation that it uses
 some alien technology.  Now the biggest roadblock is making the alien
 tech not scary (alien technology is already bad in this respect).

 An example of a killer-something might be a Raspberry-Pi shipped with a
 self-documented Frank-like image.  By self-documented, I mean something
 more than emacs.  I mean something filled with tutorials about how to
 implement, re-implement, and customise every part of the system.

 And it must be aimed at children.  Unlike most adults, they can get
 past C-like syntax.

Agreed.




Re: [fonc] How it is

2012-10-03 Thread Pascal J. Bourguignon
Paul Homer paul_ho...@yahoo.ca writes:

 The on-going work to enhance the system would consist of modeling
 data, and creating transformations. In comparison to modern software
 development, these would be very little pieces, and if they were
 shared they are intrinsically reusable (and recombinable).

Yes, that gives 4GLs.  Eventually (when we'll have programmed
everything) all computing will be done only with 4GLs: managers
specifying their data flows.

But strangely enough, users are always asking for new programs…  Is it
because we haven't programmed every function already, or because we
will never have them all programmed?




Re: [fonc] How it is

2012-10-03 Thread Pascal J. Bourguignon
John Nilsson j...@milsson.nu writes:

 I read that post about constraints and kept thinking that it should be
 the infrastructure for the next generation of systems development, not
 art assets :)

 In my mind it should be possible to input really fuzzy constraints
 like It should have a good looking, blog-like design
 A search engine would find a set of implications from that statement
 created by designers and vetted by their peers. Some browsing and
 light tweaking and there, I have a full front-end design provided for
 the system.

 Then I add further constraints. Available via http://blahblah.com/
 and be really cheap, again the search engine will find the implied
 constraints and provide options among the cheaper cloud providers. I
 pick one of them and there provisioning is taken care of.

 I guess the problem is to come up with a way to formalize all this
 knowledge experts are sitting on into a representation usable by that
 search engine. But could this not be done implicitly from the act of
 selecting a match after a search?

 Say some solution S derived from constraints A,B,C is selected in my
 search. I have constraint A,B and D as input. By implication the
 system now knows that S is a solution to D.

Right.  Just a simple application of AI and all the algorithms
developed so far.  You just need to integrate them into a working
system.

And who has the resources to do this work?  It seems to me to be a big
endeavour: collecting the research prototypes developed during the
last 50 years, and developing such a product.

Even Watson or Siri would represent only a small part of it.




Re: [fonc] How it is

2012-10-02 Thread Pascal J. Bourguignon
Reuben Thomas r...@sc3d.org writes:

 On 2 October 2012 16:21, John Pratt jpra...@gmail.com wrote:
 Basically, Alan Kay is too polite to say what
 we all know to be the case, which is that things
 are far inferior to where they could have been
 if people had listened to what he was saying in the 1970's.

 He's also not very good at dissemination, or doesn't work at it
 enough. It's all very well saying I told you so when, at least in
 the internet age, he's done the equivalent of writing I told you so
 on a disposable napkin which he then locked in the bottom drawer of a
 filing cabinet in a basement room of a condemned building on a locked
 site with a sign outside saying BEWARE OF THE LEOPARD, when he
 could've easily put it on an enormous poster on a main street.

I don't think you can say that.  He worked at Apple, where he did all
the evangelizing he could.  The iPad is basically the hardware outcome;
Hypercard, Dylan, etc., the software outcome.

There are a lot of developments around Smalltalk and Squeak too.  
Alice http://www.alice.org/
Scratch   http://scratch.mit.edu/
Croquet   http://opencroquet.org/
etc.

The problem is not the sources of the message.  It's the receivers.
Before 2000, one could give them the excuse that hardware was slow and
dynamic programming languages were not good enough to do fancy things.
But not since.  And indeed, there are more and more dynamic programming
languages available (Ruby, Python, etc.).

Of course those new languages and systems are not refinements; they're
just yet another try at the same target, so they don't reach it or even
aim any better.  Again, the problem is more with the receivers, who
just prefer to reinvent new stuff rather than learn history, read old
papers, and use and refine old programming languages and old systems.


Well, at least, in 2012, C and C++ have closures…  Perhaps in 35 years,
they'll be sexp-based too!



Re: [fonc] Deployment by virus

2012-07-19 Thread Pascal J. Bourguignon
John Nilsson j...@milsson.nu writes:
 On Thu, Jul 19, 2012 at 3:55 AM, Pascal J. Bourguignon
 p...@informatimago.com wrote:
 Joking apart, people still resist stochastic software a lot.
 One problem with random spreading of updates is that it's random.

 Random as in where it's applied or random in what's applied?

Both.

The problem is that it would give more work to system administrators
everywhere, trying to explain to users why this software on this
computer has this bug, while the same software on another computer
hasn't.

It would also complicate bug reporting.


 I was thinking that the viral part was a means to counter the seeming
 randomness in an otherwise chaotic system. Similar in spirit in how
 gardening creates some amount of order and predictability, a gardener
 who can apply DNA tweaks as well as pruning.

 As I understand it CFEngine does something like this wile limited to
 simple configuration.

 BR,
 John




Re: [fonc] Deployment by virus

2012-07-19 Thread Pascal J. Bourguignon
Eugen Leitl eu...@leitl.org writes:

 On Thu, Jul 19, 2012 at 02:28:18PM +0200, John Nilsson wrote:
 More work relative to an approach where full specification and
 control is feasible. I was thinking that in a not too distant future
 we'll want to build systems of such complexity that we need to let go
 of such dreams.

 One system could be enough. How do you evolve a system that has
 emerged from some initial condition directed by user input? Even with
 only one instance of it running you might have no way to recreate it,
 so you must patch it, and given sufficient complexity you might have
 no way to know how a binary diff should be created.

 It seems a great idea for evolutionary computation (GA/GP) but an
 awful idea for human engineering.

Perhaps not.  The idea would be that we design our systems not with
hard boundaries and functionalities, but the way living organisms are
designed.  Notice that the whole program of an organism is contained in
each of its cells.  Cells behave differently depending on the
environment (organ, tissue) they are in.  If you migrate a cell to a
different organ, it may start behaving differently.  It can, because it
has all the programs.

On the other hand, if you move a library to another place, it will just
break, because it doesn't have the programs needed in that other place.
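As a loose sketch of that idea (all names invented for illustration): every "cell" below carries the whole program and selects its behavior from its environment, whereas a conventional library carries only one behavior and breaks anywhere else.

```python
# Every cell carries the complete program; the environment selects
# which part of it is expressed.
PROGRAM = {
    "liver": lambda: "filtering",
    "muscle": lambda: "contracting",
}

class Cell:
    def __init__(self, environment):
        self.environment = environment
    def behave(self):
        # Same program in every cell; behavior depends on where it is.
        return PROGRAM[self.environment]()

print(Cell("liver").behave())    # a cell "migrated" to another organ
print(Cell("muscle").behave())   # would simply express another part
```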



Re: [fonc] Deployment by virus

2012-07-18 Thread Pascal J. Bourguignon
John Nilsson j...@milsson.nu writes:

 I just had a weird though, maybe there is some precedence?

 If we were to do software development in a more organic manner,
 accepting the nature of complex systems as being... complex. In such a
 setting we might have no blue-print (static source code) to usable for
 instantiating new live systems ex nihilo, or the option to take down
 existing systems to deploy an upgrade. The code running the nodes
 can be the result of wild mutation or complex generative algorithms.

 A mode of development could be to work on prototypes in a lab, a clone
 or an isolated node from the production system. When the desired
 properties are created in the prototype they would then spread through
 the production system by means of a virus which would adapt the new
 properties to the running instances individually according to their
 unique configuration.

That's exactly what's happening with most big software publishers:
Apple, Microsoft, Adobe, Firefox, etc.

They develop new strains in their laboratories, which then spread
virally over all the computers of the world through the Internet,
automatically.  Well, sometimes you have to pay for big changes, but
they let the small changes spread for $free.


 Is it feasible? Would it provide new options? Any research done in
 this direction?

Joking apart, people still resist stochastic software a lot.
One problem with random spreading of updates is that it's random.



Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-17 Thread Pascal J. Bourguignon
BGB cr88...@gmail.com writes:

 but you can't really afford a house without a job, and can't have a
 job without a car (so that the person can travel between their job and
 their house).

A job is an invention of the industrial era.  AFAIK, our
great-great-grandparents had houses.


 I don't really think it is about gender role or stereotypes, but
 rather it is more basic:
 people mostly operate in terms of the pursuit of their best personal
 interests.

Ok.

 so, typically, males work towards having a job, getting lots money,
 ... and will choose females based mostly how useful they are to
 themselves (will they be faithful, would they make a good parent,
 ...).

Well, it's clear that it's not in their best interest to do that: only
about 40% of males reproduce in this setup.


 in this case, then society works as a sort of sorting algorithm, with
 better mates generally ending up together (rich business man with
 trophy wife), and worse mates ending up together (poor loser with a
 promiscuous or otherwise undesirable wife).

And this is also the problem, not only for persons but for society: the
sorting is done on criteria that are bad.  Perhaps they were good for
surviving in the savanna, but they're clearly an impediment to
developing a safe technological society.




 Well, perhaps.  This is not my way to learn how to program (once really)
 or to learn a new programming language.

 dunno, I learned originally partly by hacking on pre-existing
 codebases, and by cobbling things together and seeing what all did and
 did not work (and was later partly followed by looking at code and
 writing functionally similar mock-ups, ...).

 some years later, I started writing a lot more of my own code, which
 largely displaced the use of cobbled-together code.

 from what I have seen in code written by others, this sort of cobbling
 seems to be a fairly common development process for newbies.


I learn programming languages basically by reading the reference, and by
exploring the construction of programs from the language rules.




Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-17 Thread Pascal J. Bourguignon
David-Sarah Hopwood david-sa...@jacaranda.org writes:

 On 17/07/12 02:15, BGB wrote:
 so, typically, males work towards having a job, getting lots money,
 ... and will choose females based mostly how useful they are to
 themselves (will they be faithful, would they make a good parent, ...).

 meanwhile, females would judge a male based primarily on their income,
 possessions, assurance of continued support, ...

 not that it is necessarily that way, as roles could be reversed (the
 female holds a job), or mutual (both hold jobs). at least one person
 needs to hold a job though, and by default, this is the social role
 for a male (in the alternate case, usually the female is considerably
 older, which has a secondary limiting factor in that females have a
 viable reproductive span that is considerably shorter than that for
 males, meaning that the older-working-female scenario is much less
 likely to result in offspring, ...).

 in this case, then society works as a sort of sorting algorithm, with
 better mates generally ending up together (rich business man with
 trophy wife), and worse mates ending up together (poor loser with a
 promiscuous or otherwise undesirable wife).

 Way to go combining sexist, classist, ageist, heteronormative,
 cisnormative, ableist (re: fertility) and polyphobic (equating
 multiple partners with undesirability) assumptions, all in the space
 of four paragraphs. I'm not going to explain in detail why these are
 offensive assumptions, because that is not why I read a mailing list
 that is supposed to be about the Fundamentals of New Computing.
 Please stick to that topic.

It is, but it is the reality, and the reason for most of our problems
too.  And it's not by putting an onus on the expression of these choices
that you will suppress them: they come from the deepest level, our genes
and the genetic selection that has been applied to them for millennia.

My point here being that what's needed is a change in how selection of
reproductive partners is done, and obviously, I'm not considering doing
it based on money or political power.   Of course, I have none of either
:-) 

And yes, it's perfectly on-topic, if you consider how science and
technology developments are directed.  Most of our computing technology
has been created for war.


Or to put it otherwise: why do you think this kind of refoundation
project doesn't have the same kind of resources allocated to it as
commercial or military projects?


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread Pascal J. Bourguignon
Iian Neill iian.d.ne...@gmail.com writes:

 And I suspect the fact that BASIC was an interpreted language had a
 lot to do with fostering experimentation  play.

BASIC wasn't always interpreted.  What matters is not interpreter
vs. compiler, but having an INTERACTIVE environment vs. a BATCH
environment.


As for education, Python probably makes a good BASIC, even if I'd prefer
that people be taught Scheme.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread Pascal J. Bourguignon
Loup Vaillant l...@loup-vaillant.fr writes:

 Pascal J. Bourguignon a écrit :
 Unfortunately, [CS is] not generalized yet, like mathematics of history.

 Did you mean history of mathematics?  Or something like this?
 http://www.ted.com/talks/jean_baptiste_michel_the_mathematics_of_history.html

Oops, I meant OR, not of.  Sorry for the confusion.

(But both mathematics of history and history of mathematics are
interesting too :-)).
-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread Pascal J. Bourguignon
Miles Fidelman mfidel...@meetinghouse.net writes:

 Pascal J. Bourguignon wrote:
 Miles Fidelman mfidel...@meetinghouse.net writes:
 And seems to have turned into something about needing to recreate the
 homebrew computing milieu, and everyone learning to program - and
 perhaps why don't more people know how to program?

 My response (to the original question) is that folks who want to
 write, may want something more flexible (programmable) than Word, but
 somehow turning everone into c coders doesn't seem to be the answer.
 Of course not.  That's why there are languages like Python or Logo.


 More flexible tools (e.g., HyperCard, spreadsheets) are more of an
 answer -  and that's a challenge to those of us who develop tools.
 Turning writers, or mathematicians, or artists into coders is simply a
 recipe for bad content AND bad code.
 But everyone learns mathematics, and even if they don't turn out
 professionnal mathematicians, they at least know how to make a simple
 demonstration (or at least we all did when I was in high school, so it's
 possible).

 Similarly, everyone should learn CS and programming, and even if they
 won't be able to manage software complexity at the same level as
 professionnal programmers (ought to be able to), they should be able to
 write simple programs, at the level of emacs commands, for their own
 needs, and foremost, they should understand enough of CS and programming
 to be able to have meaningful expectations from the computer industry
 and from programmers.

 Ok... but that begs the real question: What are the core concepts that
 matter?

 There's a serious distinction between computer science, computer
 engineering, and programming.  CS is theory, CE is architecture and
 design, programming is carpentry.

 In math, we start with arithmetic, geometry, algebra, maybe some set
 theory, and go on to trigonometry, statistics, calculus, .. and
 pick up some techniques along the way (addition, multiplication, etc.)

 In science, it's physics, chemistry, biology,  and we learn some
 lab skills along the way.

 What are the core concepts of CS/CE that everyone should learn in
 order to be considered educated?  What lab skills?  Note that there
 still long debates on this when it comes to college curricula.

Indeed.  The French National Education answers that question with its
educational programme and the newly published manual.

https://wiki.inria.fr/sciencinfolycee/TexteOfficielProgrammeISN

https://wiki.inria.fr/wikis/sciencinfolycee/images/7/73/Informatique_et_Sciences_du_Num%C3%A9rique_-_Sp%C3%A9cialit%C3%A9_ISN_en_Terminale_S.pdf



 Some of us greybeards (or fuddy duddies if you wish) argue for
 starting with fundamentals:
 - boolean logic
 - information theory
 - theory of computing
 - hardware design
 - machine language programming (play with microcontrollers in the lab)
 - operating systems
 - language design
 - analysis
 - algorithms

Yes, some of all of that.

 On the other hand, an awful lot of classes, and college degree
 programs seem to think that coding in Java is all there is, and we're
 seeing degrees in game design (not that game design is simple,
 particularly if one goes into things like physics modeling, image
 processing, massive concurrency, and so forth).

Indeed.  The French manual mentions only languages in the Algol family.
It would be better if it also covered Prolog, Haskell, and of course
Lisp too.  But this can easily be corrected by the teachers, if they're
good enough.


 And then there's the school of thought that all you need to know is
 how to use things - turn on a computer, use common programs, maybe
 write some Excel macros, and customize their operating
 environment. (After all, most of us learn to drive, but how many
 people take an auto shop class anymore.)

 Now me... I kind of think that high school should focus more on
 computational thinking than on programming.  Yes, kids should write
 a few programs along the way, but that's the lab component.  A more
 interesting question becomes: is this a separate discipline, or is it
 something to be incorporated into math and science?

Indeed, I find that in the French manual, algorithms are stressed more
than the programming language itself (Java).  It's definitely not a Java
manual.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread Pascal J. Bourguignon
Miles Fidelman mfidel...@meetinghouse.net writes:

 Pascal J. Bourguignon wrote:
 Indeed.  The French National Education is answering to that question
 with its educational programme, and the newly edited manual.

 https://wiki.inria.fr/sciencinfolycee/TexteOfficielProgrammeISN

 https://wiki.inria.fr/wikis/sciencinfolycee/images/7/73/Informatique_et_Sciences_du_Num%C3%A9rique_-_Sp%C3%A9cialit%C3%A9_ISN_en_Terminale_S.pdf



 Any idea if there's an English translation floating around?

I doubt it.  It has just been published, and it's really only useful in
France, starting with the next school year.

Try Google Translate on the table of contents?


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread Pascal J. Bourguignon
BGB cr88...@gmail.com writes:

 general programming probably doesn't need much more than pre-algebra
 or maybe algebra level stuff anyways, but maybe touching on other
 things that are useful to computing: matrices, vectors, sin/cos/...,
 the big sigma notation, ...

Definitely.  Programming needs discrete mathematics and statistics much
more than the mathematics that is usually taught (which is more useful,
e.g., in physics).


 but, a person can get along pretty well provided they get basic
 literacy down fairly solidly (can read and write, and maybe perform
 basic arithmetic, ...).

 most other stuff is mostly optional, and wont tend to matter much in
 daily life for most people (and most will probably soon enough forget
 anyways once they no longer have a school trying to force it down
 their throats and/or needing to cram for tests).

No, no, no.  That's the point of our discussion.  There's a need to
increase the computer literacy, actually the programming literacy, of
the general public.

The situation where everybody would be able (culturally, with some basic
know-how, and with the help of the right software tools and system) to
program their own applications (i.e., something totally contrary to the
current Apple philosophy) would be a better situation than the one
where people are dumbed down and allowed to use only canned software
that they cannot inspect and adapt to their needs.

Furthermore, besides the need the general public has to be able to do
some programming, non-CS professionals also need to be able to write
programs.  Technicians and scientists in various domains such as
biology, physics, etc., need to know enough programming to write honest
programs for their needs.  Sure, they won't have to know how to write a
device driver or a Unix memory management subsystem.  But they should be
able to design and implement algorithms to process their experiments and
their data (and again, with the right software tools, something like
Python sounds good enough for this kind of user; I kind of agree with
http://danweinreb.org/blog/why-did-mit-switch-from-scheme-to-python).


 so, the main goal in life is basically finding employment and basic
 job competence, mostly with education being as a means to an end:
 getting higher paying job, ...

Who said that?


 (so, person pays colleges, goes through a lot of pain and hassle, gets
 a degree, and employer pays them more).

You wish!




 probably focusing more on the useful parts though.

No, that's certainly not the purpose of high-school education.



 On the other hand, an awful lot of classes, and college degree
 programs seem to think that coding in Java is all there is, and we're
 seeing degrees in game design (not that game design is simple,
 particularly if one goes into things like physics modeling, image
 processing, massive concurrency, and so forth).
 Indeed.  In the French manual, it's made mention only of languages in
 the Algol family.  It would be better if they also spoke of Prolog,
 Haskell, and of course Lisp too.  But this can be easily corrected by
 the teachers, if they're good enough.

 yes, but you can still do a lot with Java (even if hardly my favorite
 language personally).

 throw some C, C++, or C# on there, and it is better still.

No.  Java is good enough to show off the Algol/procedural and OO
paradigms.  There's no need to talk about C, C++, or C# (those languages
are only useful to professional CS people, not to the general public).
(And yes, I'd tend to think Python would be better for the general
public than Java.)

What you could throw in is some Lisp, some Prolog, and some
Haskell.  Haskell could even be taught in Maths instead of in CS ;-)

The point here is to teach the general public (e.g., your future
customers and managers) that there are other languages than the
currently popular Algol-like ones, and that languages in the Lisp,
logic, or functional families are also useful tools.


 a problem with most other further reaching languages is:
 it is often harder to do much useful with them (smaller communities,
 often deficiencies regarding implementation maturity and library
 support, ... 1);

This is irrelevant.


 it is harder still for people looking at finding a job, since few jobs
 want these more obscure languages;

This is totally irrelevant to the question of educating the general
public and giving them a CS/programming culture.


 a person trying to just get it done may have a much harder time
 finding code to just copy/paste off the internet (or may have to go
 through considerably more work translating it from one language to
 another, 2);

This is irrelevant.  The question is for them to know what CS can do for
them, and to know that they can hire a professional CS programmer to do
the hard work.



 1: it is not a good sign when one of the first major questions usually
 asked is how do I use OpenGL / sound / GUI / ... with this thing?,
 which then either results in people looking for 3rd party packages to
 do it, 

Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread Pascal J. Bourguignon
Miles Fidelman mfidel...@meetinghouse.net writes:

 Pascal J. Bourguignon wrote:
 No, no, no.  That's the point of our discussion.  There's a need to
 increase computer-literacy, actually programming-literacy of the
 general public.

 The situation where everybody would be able (culturally, with a basic
 knowing-how, an with the help of the right software tools and system) to
 program their applications (ie. something totally contrary to the
 current Apple philosophy), would be a better situation than the one
 where people are dumbed-down and are allowed to use only canned software
 that they cannot inspect and adapt to their needs.

 As fond as I am of the days of Heathkits and homebrew computers, do we
 really expect people to build their computers, or cars, or houses, or
 even bicycles?  Specify and evaluate, maybe repair, but build?
 (Though the new DIY movement is refreshing!).

This is a totally different and unrelated question.



 Furthermore, beside the need the general public has of being able to do
 some programming, non-CS professionals also need to be able to write
 programs.

 I guess the question for me is what do you/we mean by programming?
 To me, it's about analyzing a problem, designing and algorithm, then
 reducing that algorithm to running code.  Being facile in one language
 or another seems less important.

We agree.


 Or put another way, what's important in math are word problems, not
 the multiplication tables.


Agreed too.


 It's about thinking mathematically, or algorithmically.

Yes.


 Just one man's opinion, though.

Two men.

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread Pascal J. Bourguignon
BGB cr88...@gmail.com writes:

 and, one can ask: does your usual programmer actually even need to
 know who the past US presidents were and what things they were known
 for? or the differences between Ruminant and Equine digestive systems
 regarding their ability to metabolize cellulose?

 maybe some people have some reason to know, most others don't, and for
 them it is just the educational system eating their money.

My answer is that it depends on what civilization you want.  If you want
a feudal civilization with classes, then indeed some people don't have
to know.  Let's reserve weapon knowledge for the lords, letters and
cheese knowledge for the monks, and agriculture knowledge for the
peasants.

Now if you prefer a technological civilization including things like
nuclear power (but a lot of other science applications are similarly
delicate), then I argue that you need widespread scientific, technical
and general culture (history et al) knowledge. 

Typically, the problems the Japanese have with their nuclear power
plants, and not only since Fukushima, are due to the lack of general and
scientific knowledge, not in the nuclear power plant engineers, but in
the general population, including politicians.


 so, the barrier to entry is fairly high, often requiring people who
 want to be contributors to a project to have the same vision as the
 project leader. sometimes leading to an inner circle of yes-men, and
 making the core developers often not accepting of, and sometimes
 adversarial to, the positions held by groups of fringe users.

This concerns only CS/programming professionals.  This is not the
discussion I was having.



 so, the main goal in life is basically finding employment and basic
 job competence, mostly with education being as a means to an end:
 getting higher paying job, ...
 Who said that?

 I think this is a given.

 people need to live their lives, and to do this, they need a job and
 money (and a house, car, ...).

No.  In what you cite, the only thing needed is a house.

What people need is food, water, shelter, clothes, and some energy for a
few appliances.  All the rest is not NEEDED, but may be convenient.

Now, specific activities or persons may require additional specific
things.  E.g., we programmers need an internet connection and a
computer.  Other people may have other specific needs.  But a job or
money as such is of use to nobody (unless you want to run some rat
race).



 likewise goes for finding a mate: often, potential mates may make
 decisions based largely on how much money and social status a person
 has, so a person who is less well off will be overlooked (well, except
 by those looking for short-term hook-ups and flings, who usually more
 care about looks and similar, and typically just go from one
 relationship to the next).

This is something to be considered too, but even if it's greatly
influenced by genes,
http://www.psy.fsu.edu/~baumeistertice/goodaboutmen.htm
I'm of the opinion that humans are not beasts, and we can also run a
cultural program superseding our genetic programming to a certain
extent.  (E.g., women don't necessarily have to send 2/3 of men to war
or prison and reproduce with, i.e. select, only 1/3 of psychopathic
males.)  Now of course we're not on the way to any kind of improvement
there.  But this is not the topic of this thread either.




 probably focusing more on the useful parts though.
 No, that's certainly not the purpose of high-school education.

 usually it seems more about a combination of:
 keeping students in control and under supervision;
 preparing them for general worker drone tasks, by giving them lots
 of busywork (gotta strive for that A = be a busy little worker bee
 in the office);

Yes, and in designing a new educational program I see no reason to
continue in this way.


 now, how many types of jobs will a person actually need to be able to
 recite all 50 states and their respective capital cities? or the names
 of the presidents and what they were most known for during their terms
 in office?

 probably not all that many...

This kind of background cultural knowledge could make you avoid costly
errors, all the more so in the information age.  Some geographic
knowledge can keep you from buying an airplane ticket to Sydney and
arriving in a tropical shirt and shorts in North Dakota under 50 cm of
snow.  And some basic chemical or nuclear knowledge can let a janitor
avoid leaking radioactive gases from a Japanese nuclear plant, as
occurred some years ago.




 1: it is not a good sign when one of the first major questions usually
 asked is how do I use OpenGL / sound / GUI / ... with this thing?,
 which then either results in people looking for 3rd party packages to
 do it, or having to write a lot of wrapper boilerplate, or having to
 fall back to writing all these parts in C or similar.
 This is something that is solved in two ways:

 - socially: letting the general public have some consciousness of what
CS is and what it 

Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread Pascal J. Bourguignon
Miles Fidelman mfidel...@meetinghouse.net writes:

 Pascal J. Bourguignon wrote:
 Miles Fidelman mfidel...@meetinghouse.net writes:

 Pascal J. Bourguignon wrote:
 No, no, no.  That's the point of our discussion.  There's a need to
 increase computer-literacy, actually programming-literacy of the
 general public.

 The situation where everybody would be able (culturally, with a basic
 knowing-how, an with the help of the right software tools and system) to
 program their applications (ie. something totally contrary to the
 current Apple philosophy), would be a better situation than the one
 where people are dumbed-down and are allowed to use only canned software
 that they cannot inspect and adapt to their needs.
 As fond as I am of the days of Heathkits and homebrew computers, do we
 really expect people to build their computers, or cars, or houses, or
 even bicycles?  Specify and evaluate, maybe repair, but build?
 (Though the new DIY movement is refreshing!).
 This is a totally different and unrelated question.

 Not at all.  The topic is historical precedents for technical literacy.

Well, I don't think the analogy is valid.  Historically, those
activities were done by hackers.

Nowadays, everybody has a computer in their pocket, and in their car.

I'd rather make an analogy with books: everybody can read and write,
almost everybody has books, and anyone can write in their margins.  But
the analogy can go only so far, because computers and programming are
radically different from everything we had until now.


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread Pascal J. Bourguignon
John Zabroski johnzabro...@gmail.com writes:

 On Jun 15, 2012 2:39 PM, Pascal J. Bourguignon p...@informatimago.com 
 wrote:

 John Zabroski johnzabro...@gmail.com writes:


  Sorry, you did not answer my question, but instead presented excuses
  for why programmers misunderstand people.  (Can I paraphrase your
  thoughts as, Because people are not programmers!) 

 No, you misunderstood my answer:
 Because people don't pay programmers enough.

 In the words of comedian Spike Milligan, All I ask is for the chance to 
 prove money can't make me happy.

 But my motto comes from pianist Glenn Gould: the ideal ratio of performers to 
 audience is one. I have never seen a software team produce better results 
 with better pay, but most of
 the great advances in software came from somebody doing something differently 
 because any other way was simply wrong.

 Having seen millionaires throw their money around to build their dream app 
 (the Chandler project featured in Scott Rosenberg's book Dreaming in Code and 
 all of Sandy Klausner's
 vaporware graphical programming ideas), and seeing what road blocks still 
 remained, I disbelieve your answer.

 Who invented the spreadsheet? One person.
 Who invented pivot tables? One person.
 Who invented modeless text editing? One person.

 How much money is enough, anyway?  In the words of John D. Rockefeller, A 
 little bit more?

I wasn't speaking of the works of art programmers would create anyway.

I was speaking of what the customers want.  If they want the same kind
of service as offered by plumbers (you don't hand the spanner to a
plumber, and you don't bring your own pipes; you don't get wet; you just
call him and let him deal with the leak: a simple and nice user
interface, a good end result, including the hefty bill), then you'll
have to pay the same hourly rates as you pay plumbers.  Just google
some statistics.


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Pascal J. Bourguignon
John Zabroski johnzabro...@gmail.com writes:

 Folks,

 Arguing technical details here misses the point. For example, a
 different conversation can be started by asking Why does my web
 hosting provider say I need an FTP client? Already technology is way
 too much in my face and I hate seeing programmers blame their tools
 rather than their misunderstanding of people.

 Start by asking yourself how would you build these needs from scratch
 to bootstrap something like the Internet.

 What would a web browser look like if the user didn't need a separate
 program to put data somewhere on their web server and could just use
 one uniform mechanism? Note I am not getting into nice to have
 features like resumption of paused uploads due to weak or episodic
 connectivity, because that too is basically a technical problem -- and
 it is not regarded as academically difficult either. I am simply
 taking one example of how users are forced to work today and asking
 why not something less technical. All I want to do is upload a file
 and yet I have all these knobs to tune and things to install and
 none of it takes my work context into consideration.


There are different problems.

About the tools and mechanisms, and their multiplicity: it's normal to
have a full toolbox.  Even with evolving technologies, where some tools
are used less often, each has its specific use and they're all useful.

Also, the point of discrete tools is that they're modular and can be
combined to great effect by a competent professional.  You wouldn't
want to dig every hole with the same tool, be it a spoon or a
caterpillar.


Now for the other problem, the users: one cause of that problem is the
accessibility and openness of computer and software technology, which
doesn't put clear boundaries between the professionals and the
customers.  There are all shades of gray in between: amateurs, students,
and do-it-yourselfers.

But you're perfectly entitled to have expectations of good service and
ease of use.  You only need to realize that this will come with a cost,
and it won't be cheap.  

Basically, your choice is between:

- here, we have a toolbox, we will gladly lend it to you so you can have
  fun hacking your own stuff.

- tell us what you want, we'll work hard to provide you the easy
  service, and we'll send you the bill.

(OK, there are intermediate choices, but you can basically place each
offer between a do-it-yourself solution and an everything-is-done-for-you
one.)


However, the difficulty with the latter option is that things evolve so
fast that we may not have the time to develop affordable, fine-tuned,
customer-oriented solutions before they become obsolete.  Developing and
refining such services takes time, and money.


And in general, programmers are not paid well enough. 


Just compare the hourly wages of a plumber and a computer programmer,
and you'll understand why you don't get the same easy service from
programmers as you get from plumbers.  But this is a problem easily
solved: just put the money on the table, and you'll find competent
programmers to implement your easy solution.


But it seems customers prefer crappy service as long as it's cheap (or
free).

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Pascal J. Bourguignon
David Leibs david.le...@oracle.com writes:

 I have kinda lost track of this thread so forgive me if I wander off
 in a perpendicular direction.

 I believe that things do not have to continually get more and more
 complex.  The way out for me is to go back to the beginning and start
 over (which is what this mailing list is all about).  I constantly go
 back to the beginnings in math and/or physics and try to re-understand
 from first principles.  Of course every time I do this I get less and
 less further along the material continuum because the beginnings are
 so darn interesting.

 Let me give an example from arithmetic which I learned from Ken
 Iverson's writings years ago.

 As children we spend a lot of time practicing adding up
 numbers. Humans are very bad at this if you measure making a silly
 error as bad. Take for example:

365
 +  366
 --

 this requires you to add 5  6, write down 1 and carry 1 to the next
 column then add 6, 6, and that carried 1 and write down 2 and carry a
 1 to the next column finally add 3, 3 and the carried 1 and write down
 7 this gives you 721, oops, the wrong answer.  In step 2 I made a
 totally dyslexic mistake and should have written down a 3.

 Ken proposed learning to see things a bit differently and remember the
 digits are a vector times another vector of powers.  Ken would have
 you see this as a two step problem with the digits spread out.

3   6   5
 +  3   6   6
 

 Then you just add the digits. Don't think about the carries.

3   6   5
 +  3   6   6
 
6  12  11

 Now we normalize the by dealing with the carry part moving from right
 to left in fine APL style. You can almost see the implied loop using
 residue and n-residue.

 6  12 11
 6  13  1
 7   3  1

 Ken believed that this two stage technique was much easier for people
 to get right.  I adopted it for when I do addition by had and it works
 very well for me. What would it be like if we changed the education
 establishment and used this technique?  One could argue that this sort
 of hand adding of columns of numbers is also dated. Let's don't go
 there I am just using this as an example of going back and looking at
 a beginning that is hard to see because it is just too darn
 fundamental. 

It's a nice way to do additions indeed.

When doing additions mentally, I tend to do them from right to left,
predicting whether we need a carry or not by looking ahead at the next
column.  Usually carries don't carry over more than one column, but
even if they do, you only have to remember a single digit at a time.

There are several ways to do additions :-)


Your way works as well for subtractions:

    3  6  5
 -  3  7  1
 ----------
    0 -1  4
    0 -10 + 4 = -6

    3  7  1
 -  3  6  5
 ----------
    0  1 -4
       10 - 4 = 6

and of course, it's already how we do multiplications too.
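The two-stage technique described above is mechanical enough to sketch in a few lines of code.  This is just an illustration under assumed conventions (base 10, equal-length digit vectors, most significant digit first); the function names are made up:

```python
def add_digits(a, b):
    """First stage: add the digit vectors column by column, no carrying."""
    return [x + y for x, y in zip(a, b)]

def normalize(digits, base=10):
    """Second stage: propagate carries from right to left until every
    column holds a single digit.  (The leftmost column may still exceed
    the base; a full implementation would grow the vector.)"""
    digits = list(digits)
    for i in range(len(digits) - 1, 0, -1):
        carry, digits[i] = divmod(digits[i], base)
        digits[i - 1] += carry
    return digits

# 365 + 366: the raw column sums are [6, 12, 11],
# which normalize to [7, 3, 1], i.e. 731.
print(normalize(add_digits([3, 6, 5], [3, 6, 6])))  # [7, 3, 1]
```

Splitting the work this way mirrors the APL-style view Ken Iverson proposed: one uniform vector operation, then one uniform carry-normalization pass, with no interleaved bookkeeping for the human (or the code) to get wrong.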



 We need to reduce complexity at all levels and that includes the
 culture we swim in.

Otherwise, you can always apply the KISS principle 
(Keep It Simple Stupid).


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-09 Thread Pascal J. Bourguignon
Toby Schachman t...@alum.mit.edu writes:

 This half hour talk from Zed Shaw is making rounds,
 https://vimeo.com/43380467

 The first half is typical complaints about broken w3 standards and
 processes. The second half is his own observations on the difficulties
 of teaching OOP. He then suggests that OOP is an unnatural programming
 paradigm and that the problems of the web stem from the problems of
 OOP.

 My take:

 I agree with Zed that the dominant OOP view of reality no longer
 serves us for creating the systems of the future. I imagine that this
 will be a hard pill to swallow because we (programmers) have
 internalized this way of looking at the world and it is hard to *see*
 anything in the world that doesn't fit our patterns.

 The hint for me comes at 22 minutes in to the video. Zed mentions
 OOP's mismatch with relational databases and its emphasis on
 request-response modes of communication. Philosophically, OOP
 encourages hierarchy. Its unidirectional references encourage trees.
 Request-response encourages centralized control (the programmer has to
 choose which object is in charge). Ted Nelson also complains about
 hierarchical vs. relational topologies with respect to the web's
 historical development, particularly unidirectional links.

This is wrong.

The request-response mode comes from the mapping of the notion of
message sending to the low-level notion of calling a subroutine.
Unidirectional references come from the mapping of the notion of
association to the low-level notion of a pointer.

But those mappings are not inherent to OOP, and aren't even necessarily
promoted by an OO programming language.

And even if they may seem natural at first in common OO programming
languages, it's easy to avoid using them.

For instance, request-response is NOT the paradigm used in OpenStep
(Cocoa) programming.  When a GUI object is edited, it may send a
message to a controller object, but it does not send the new value.
Instead, it sends itself, and lets the controller ask for its new
value later.
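That delegation style can be sketched like this (my own toy example in
Python rather than Objective-C; the class and method names are
hypothetical, not OpenStep API):

```python
class Controller:
    """Receives change notifications and pulls values when it wants them."""
    def __init__(self):
        self.last_seen = None

    def control_did_change(self, sender):
        # The controller asks the sender for its value; the value was
        # never pushed in the notification itself.
        self.last_seen = sender.value

class TextField:
    """A widget that notifies its controller by passing itself."""
    def __init__(self, controller):
        self.controller = controller
        self.value = ""

    def user_typed(self, text):
        self.value = text
        # Notify with *self*, not with the new value:
        self.controller.control_did_change(self)

controller = Controller()
field = TextField(controller)
field.user_typed("hello")
print(controller.last_seen)  # hello
```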

Some languages implement message sending as an asynchronous operation
natively.



So it seems to me it's a case of mistaking the tree for the forest.



 I've been reading (and rereading) Sutherland's 1963 Sketchpad thesis (
 http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf ) and it
 strikes me that philosophically it is founded on *relationships*
 rather than *hierarchy*. Internally, references are always stored
 bi-directionally. It presents the user with a conceptual model based
 on creating constraints (i.e. relationships) between shapes.

Indeed.


 Chapter 7 has been particularly hard for me to grok because his
 recursive merging has no good analogue in OOP inheritance strategies
 as far as I know. Here he takes a structure A (a network of things and
 their relationships) and merges them onto another structure B by
 specifically associating certain things in A with things in B. This
 operation creates new relationships in structure B, corresponding to
 the analogous relationships in structure A. Inheritance by analogy.

 He claims to get quite a bit of leverage from this strategy.

 Toby

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] Presenting my educational language (and a possibly interesting concept)

2012-04-30 Thread Pascal J. Bourguignon
Mohamed Samy samy2...@gmail.com writes:

 Hi everyone,

 I've created an educational programming language (Arabic based) and
 now attempting to test it on some children in my family, talking with
 some schools...etc

 One of the major design directions for it was the concept of nested
 sub-languages; the first level looks like 80s basic, then structured
 programming is added on top, then OOP...etc.  I'm not sure, but this
 might fit in with Piaget's stages of development model; where as a
 child masters the lower-level concepts they can move to the higher
 level; instead of treating the situation as one language must fit
 all it's conceptually a ladder of languages.

 I've written about it here (with English-ized code samples) if
 anyone's interested...

 http://iamsamy.blogspot.com/2012/04/educational-tower-of-programming.html

Congratulations!  AFAIK it's the first programming language in Arabic.

I would suggest teaching Lisp (or Scheme) to children.  You could make
an Arabic Lisp.  The only thing needed (you already have it implemented
for Kalima) is the right-to-left editor.

Google translate displays arabic lisp forms nicely:

http://translate.google.com/#en|ar|%28%28lambda%20%28a%20b%29%20%28if%20%28%3C%20a%200%29%20b%20%28-%20b%20a%29%29%29%2042%205%29

You could base it on Common Lisp (defining a package العربية لثغة
that would export aliases of the COMMON-LISP symbols); with a 99%
solution, there would be very little English leaking (NIL would have to
remain CL:NIL, since any other symbol would be true; for the rest, some
level of wrapping would be able to translate keywords and keyword
arguments to CL operators).  I'd start with clisp, which already has
some internationalization provisions.
http://clisp.podval.org/impnotes/i18n.html

My guess is that once they've learned Arabic Lisp they would more easily
switch to Common Lisp ;-)

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] OT? Polish syntax

2012-04-30 Thread Pascal J. Bourguignon
Martin Baldan martino...@gmail.com writes:

 I have a little off-topic question.
 Why are there so few programming languages with true Polish syntax? I
 mean, prefix notation, fixed arity, no parens (except, maybe, for
 lists, sequences or similar). And of course, higher order functions.
 The only example I can think of is REBOL, but it has other features I
 don't like so much, or at least are not essential to the idea. Now
 there are some open-source clones, such as Boron, and now Red, but
 what about very different languages with the same concept?

 I like pure Polish notation because it seems as conceptually elegant
 as Lisp notation, but much closer to the way spoken language works.
 Why is it that this simple idea is so often conflated with ugly or
 superfluous features such as native support for infix notation, or a
 complex type system?

Parentheses allow you to read code with less cognitive load than pure
Polish notation.

The problem, even with fixed arity, is that you need to know the arity
of each operator!

With parentheses you don't need to know the arity (and you get variable
arity for free).  This is good for the programmer, who can reassign the
brain cells freed by parentheses to something else (like solving the
programming problem at hand, instead of remembering a dictionary of
operators and arities, or a table of operator precedence), and it is
good for the program, i.e., for macros, which can process the code
without knowing anything about the operators (the only things macros
need to know about operators are:

- are they special operators? (there are a few of them, and for them the
  macro needs to know how to deal with them);

- are they macros? (then the macro can call macroexpand to resolve
  them);

- otherwise they are functions, and nothing needs to be known about
  them, since the arity is implicitly given by the parentheses).


So parenthesis-less Polish notation is fine when all your operators are
binary, but in programming there are a lot of different arities, so
parenthesized Polish notation is better.
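The difference can be made concrete with two toy parsers (my own
illustration, not from the thread; the operators in ARITIES are
hypothetical).  The paren-less reader cannot even split the input into
forms without an arity table, while the parenthesized reader needs no
knowledge of the operators at all:

```python
ARITIES = {"+": 2, "neg": 1}  # hypothetical operator/arity table

def parse_polish(tokens):
    """Parse paren-less prefix notation; needs ARITIES to know where
    each operator's arguments end."""
    tok = tokens.pop(0)
    if tok in ARITIES:
        return [tok] + [parse_polish(tokens) for _ in range(ARITIES[tok])]
    return int(tok)

def parse_sexp(tokens):
    """Parse a parenthesized form; arity is implied by the parentheses,
    so no operator table is needed."""
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(parse_sexp(tokens))
        tokens.pop(0)  # drop the closing ")"
        return form
    return int(tok) if tok.lstrip("-").isdigit() else tok

print(parse_polish("+ 1 neg 2".split()))        # ['+', 1, ['neg', 2]]
print(parse_sexp("( + 1 ( neg 2 ) )".split()))  # ['+', 1, ['neg', 2]]
```

Both yield the same tree, but only the second would keep working if a
macro introduced an operator the reader had never heard of.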



-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Pascal J. Bourguignon


On 2010/10/15, at 00:14 , Steve Dekorte wrote:



I have to wonder how things might be different if someone had made a  
tiny, free, scriptable Smalltalk for unix before Perl appeared...


There has been GNU Smalltalk for a long time, AFAIR before Perl, which
was quite adapted to the unix environment.


It would certainly qualify as tiny since it lacked any big GUI
framework; obviously it is free in all meanings of the word, and it is
well suited to writing scripts.



My point is that it hasn't changed anything and nothing else would have.


BTW, there were rumors that Sun considered using Smalltalk in  
browsers instead of Java but the license fees from the vendors were  
too high. Anyone know if that's true?


No idea, but since they invented Java, they could have at a much lower  
cost written their own implementation of Smalltalk.


--
__Pascal Bourguignon__
http://www.informatimago.com






Re: [fonc] Program representation

2010-05-10 Thread Pascal J. Bourguignon


On 2010/05/10, at 18:21 , John Nilsson wrote:

 When reading about the TCP/IP implementation in OMeta it strikes me  
that parsing the
 ASCII-art is still text. Isn't it kind of silly to spend all that  
syntax on representing

 something as fundamental as a table?

ASCII-art is not so bad.  If NASA uses it!  http://en.wikipedia.org/wiki/HAL/S


So I was wondering, have you, at vpri, been contemplating  
alternative program representations besides text? It seems to me  
that if you had a richer model of representation it would be easier  
to make readable code.


The stuff that Jonathan Edwards has been working on at
http://subtextual.org/ has been very inspiring for my own thinking on
these matters.  Representing conditionals as tables seems like a
brilliant approach.


For my self I was thinking that maybe one could embed a programming  
language on top of a more graphical language, like HTML and see what  
comes out of it.


HTML sounds to me somewhat limited.  Try TeX.  But in any case,
contrary to ASCII-art, these are serialized representations, and it
would be better to use Lisp sexps.


Googling for "2d parser" or "2d expression parsing":
http://www.eecs.berkeley.edu/~fateman/papers/pres.pdf
http://www.moralfiber.org/eylon/berkeley/cs282/report.pdf



--
__Pascal Bourguignon__
http://www.informatimago.com






Re: [fonc] Systems and artifacts

2010-05-03 Thread Pascal J. Bourguignon


On 2010-05-02, at 2:51, Gerry J wrote:

At Andrey's reference (2),there was an example that TCP/IP could be  
modelled in less than a hundred LOC, whereas a C code version might  
be more than an order of magnitude larger.

Is that model available?


I've not read it closely, but it seems we have here
http://fresh.homeunix.net/~luke/misc/repo/slitch/src/tcpip.lisp
an implementation of TCP/IP in lisp in less than one thousand lines  
including comments.


The core of TCP/IP is indeed not big.  Mind you, it had to run on  
computers of 40 years ago, so it just COULD NOT be big!


--
__Pascal J. Bourguignon__
http://www.informatimago.com/





Re: [fonc] Reading Maxwell's Equations

2010-03-11 Thread Pascal J. Bourguignon


On 2010/03/06, at 03:34 , John Zabroski wrote:




On Sun, Feb 28, 2010 at 3:19 PM, Kurt Stephens k...@kurtstephens.com  
wrote:

Alejandro F. Reimondo wrote:
John,
 Where else should I look?
 In my opinion what is missing in the languages
 formulations is sustainability of the system. [*]
In case of formula/abstract based declaration of systems
 all alternatives make people put on the idea(L) side
 and not in the system itself (the natural side).
Smalltalk is the only alternative of sustainable system
 development used commertially today.

Smalltalk did not spawn an entire industry of specialized hardware  
like Lisp.  However Lisp hardware is a collector's item now. :)


There are plenty of commercial projects using Common Lisp today and  
from what I can tell, there has been renewed, grassroots interest in  
Lisp (CL and Scheme) over the last 5 years.  Smalltalk is not the  
only alternative.  Both have ANSI standard specifications.


KAS


Have you read the Lisp Lore [1] book for a history of Lisp machines?

I am personally just 25 years old, and have been trying to buy a  
Symbolics Genera machine on eBay for a year now, and just can't get  
one at a reasonable price.


Believe me, these are reasonable prices!  (And don't forget to add the
shipping and handling costs; these are heavy machines.)


The reason why they're so expensive is that so few of them were made.
You know, supply and demand...


For a time it was possible to buy instead alpha hardware and the  
Genera VM running on alpha.  Unfortunately, since the symbolics.com  
domain has been sold to a blogger, I don't know where you could obtain  
it from.




However, what I read in [1] is that the systems inherently were  
unstable in terms of dynamic reconfiguration (Ale's main point about  
openness).  I personally believe any system should inherently  
support superstabilization in its core.  Superstabilization is  
generalization of Dijkstra's definition of system stability.


It was definitely possible to break it, but then you can also break
your linux kernel and try to reboot.  Or just write to /dev/kmem as
root...


However, I've been told that they had as good uptimes as unix systems  
if not better, and that their network services weren't as susceptible  
to external attack as on unix systems.




If necessary. I can quote pages from this book that mentions the  
instability of reconfiguring the system.


[1]  LISP Lore: A Guide to Programming the LISP Machine by  H.  
Bromley, Richard Lamson ISBN-13: 978-0898382280


--
__Pascal Bourguignon__
http://www.informatimago.com






Re: [fonc] System A vs B, what?

2010-03-07 Thread Pascal J. Bourguignon


On 2010-03-04, at 18:12, Alejandro Garcia wrote:


Now given those rules:
If in system A I set one of the nodes to TRUE I don't know the state  
of the whole system.

This system is harder to know it has 16 possible states (2^4)


If in system B I set bottom node to TRUE then turns out all the  
nodes have a value of TRUE.

If I set that node to FALSE then all the nodes have a value of FALSE
so in essence it has 2 possible states.

And that is it. It is easier to know what the whole system B is  
going to be in any given time

than it is to know system A. Therefore simplier.




Correct.

However, this depends on the processor.

We haven't identified an absolute complexity.

The complexity of a system is a function of that system (the program)  
and the underlying system (the processor).


When we give the asymptotic complexities of algorithms (time or
space), we give them in units relative to the underlying processor.
For example, we count the number of comparisons, or the number of
sums, assuming that there is a processor in which these operations are
taken as the basis (in the vector-space sense).


Another example is when you compile a program for two processors, one
complex and one simple (CISC vs. RISC).  The binary generated for the
CISC processor will be simpler than the binary generated for the RISC
processor.  However, the RISC processor is simpler than the CISC
processor, in terms of their own underlying processor: the silicon
circuits.


At the other end, when a customer gives you a very simple
specification: make a program to compute the pay of each employee of
my company, he relies on a very complex processor: the human brain of
the analyst (or even the brains of a whole software development
organization).  The work of the programmers will consist in expanding
the specification by making the complexity of the problem explicit, so
that eventually a processor simple enough to be implemented in silicon
is able to work on it.



So the problem is that when you give your two systems, A and B, you  
don't identify which processor you're considering.


If you consider as processor a machine that must determine the state  
of an automata instance of the graphs A or B, then indeed, system A +  
that processor is more complex, because system A has more states.


But if you consider as processor a machine that must reproduce the
graphs A or B, then obviously graph B, containing more information, is
more complex to reproduce, and will require a more complex processor.




Once you consider the systems composed of a program along with a
processor, perhaps we can elaborate an absolute complexity measure for
them.  (I would use Kolmogorov complexity, or take the size of the
descriptions of both parts together, compressed, as an approximation.)
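A crude sketch of that approximation (my own illustration, not from the
thread): take the compressed size of the full program+processor
description as a stand-in for its Kolmogorov complexity.  A highly
regular description, like system B's single repeated rule, then
measures as far less complex than an incompressible one of the same raw
length:

```python
import os
import zlib

def approx_complexity(description: bytes) -> int:
    """Compressed size of a (program + processor) description, as a
    crude stand-in for its Kolmogorov complexity."""
    return len(zlib.compress(description, 9))

# System B in spirit: one rule, repeated -- compresses to almost nothing.
regular = b"every node copies the bottom node; " * 60
# Random bytes of the same raw length barely compress at all.
incompressible = os.urandom(len(regular))

print(approx_complexity(regular) < approx_complexity(incompressible))  # True
```

This is of course only an upper bound on the true Kolmogorov
complexity, which is uncomputable in general.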





Now, an interesting fact is that the Lisp processor is itself
described in one page of Lisp code, so it doesn't add any essential
complexity to any Lisp program (well, that's true of Scheme and LISP;
Common Lisp is slightly more complex).


To compare hw.c with hw.lisp, you would have to add the sources of a C
interpreter, or a C compiler along with the sources of a virtual
machine for its target, etc.



There's also a caveat: if you consider a very powerful processor (a
human brain), you can either over-complexify or under-complexify a
program (a specification).  But a good human processor will be able to
extract or expand the complexity of the specification to produce a
program-and-processor system closer to the real complexity of the
problem (if only for economic reasons).



--
__Pascal Bourguignon__
http://www.informatimago.com/







Re: [fonc] my two cents

2010-03-07 Thread Pascal J. Bourguignon


On 2010-03-05, at 00:06, Michael Arnoldus wrote:
So my suggestions was to use complexity in the context of improving  
programmer (FSE) productivity. And I hinted at some possible  
measurements that might be useful for this. I however do not in any  
way pretend this is clear enough to work as a clear definition of  
complexity or even metrics. And - for me at least - is not clear to  
me that a single metric will be sufficient with the chosen context  
and purpose (I'm aware we're not even clear on purpose yet).


I'm not able to pick a single definition of complexity that fits my  
(maybe our?) context and purpose. I suspect that finding the right  
meaning and definition of complexity in this context is more than  
half the solution - as it is with most really interesting problems.


If you have a suggestion I'm all ears :-)


I'm afraid that you cannot ask to improve programmer productivity
without specifying a target processor.


Taking again the example of the (compute the pay of each employee of
my company + programmer) system, the productivity of programmer A
could be infinite, if the target processor is programmer B and A can
say to B: "Compute the pay of each employee of my company."


But if we consider as processor in the target (program+processor)
system a programmer C who doesn't know anything about pay, then
programmer A will have more work, and his productivity will be finite:
he will have to specify to programmer C a program where all the
pay-related algorithms are made explicit.


And if the target processor is a 6502, then the productivity of
programmer A will be abysmal, without tools.  If you add tools, e.g. a
C compiler, then it means the target processor is not a 6502 but a C
machine, and the productivity of programmer A increases accordingly.



Well, this is not new; it has been known since the 60's that the
productivity of programmers is in direct proportion to the level of
the target machine, that is, the programming language used, what I
call the processor in my (program+processor) systems.


--
__Pascal Bourguignon__
http://www.informatimago.com/




