Re: [fonc] How it is

2012-10-03 Thread Loup Vaillant

Pascal J. Bourguignon a écrit :

The problem is not the sources of the message.  It's the receiptors.


Even if it's true, it doesn't help.  Unless you see that as an advice
to just give up, that is.

Assuming we _don't_ give up, who can we reach even those that won't
listen?  I only have two answers: trick them, or force them.  Most
probably a killer-something, followed by the revelation that it uses
some alien technology.  Now the biggest roadblock is making the alien
tech not scary (alien technology is already bad in this respect).

An example of a killer-something might be a Raspberry-Pi shipped with a
self-documented Frank-like image.  By self-documented, I mean something
more than emacs.  I mean something filled with tutorials about how to
implement, re-implement, and customise every part of the system.

And it must be aimed at children.  Unlike most adults, they can get
past C-like syntax.

Loup.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Pascal J. Bourguignon
Loup Vaillant l...@loup-vaillant.fr writes:

 Pascal J. Bourguignon a écrit :
 The problem is not the sources of the message.  It's the receiptors.

 Even if it's true, it doesn't help.  Unless you see that as an advice
 to just give up, that is.

 Assuming we _don't_ give up, who can we reach even those that won't
 listen?  I only have two answers: trick them, or force them.  Most
 probably a killer-something, followed by the revelation that it uses
 some alien technology.  Now the biggest roadblock is making the alien
 tech not scary (alien technology is already bad in this respect).

 An example of a killer-something might be a Raspberry-Pi shipped with a
 self-documented Frank-like image.  By self-documented, I mean something
 more than emacs.  I mean something filled with tutorials about how to
 implement, re-implement, and customise every part of the system.

 And it must be aimed at children.  Unlike most adults, they can get
 past C-like syntax.

Agreed.


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Ryan Mitchley

On 03/10/2012 10:39, Loup Vaillant wrote:


An example of a killer-something might be a Raspberry-Pi shipped with a
self-documented Frank-like image.  By self-documented, I mean something
more than emacs.  I mean something filled with tutorials about how to
implement, re-implement, and customise every part of the system.

And it must be aimed at children.  Unlike most adults, they can get
past C-like syntax.



Can I also add my vote for this idea?

Another comment - I have decided that I learned the most as a child by 
typing in program listings from books / magazines. I know this probably 
sounds ridiculous - especially given the attraction of a 
self-documenting, dynamic, inspectable system. However, I think the 
process and tedium gave a real feeling for syntax, allowing one's mind 
to work in the background and mull over the ideas being presented. I 
think the idea of a build your own computer, magazine partwork style - 
with both hardware and software being built up piece by piece - is 
possibly the way to go.


Ryan

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Loup Vaillant

Ryan Mitchley a écrit :

On 03/10/2012 10:39, Loup Vaillant wrote:


An example of a killer-something might be a Raspberry-Pi shipped with a
self-documented Frank-like image.  By self-documented, I mean something
more than emacs.  I mean something filled with tutorials about how to
implement, re-implement, and customise every part of the system.

And it must be aimed at children.  Unlike most adults, they can get
past C-like syntax.



Can I also add my vote for this idea?


You can, thanks.  Though I recall it has already been mentioned here?



Another comment - I have decided that I learned the most as a child by
typing in program listings from books / magazines.
[…]
I think the idea of a build your own computer, magazine partwork style
- with both hardware and software being built up piece by piece - is
possibly the way to go.


Possibly.  However, you still need a working computer to be able to
write such a system.  I see 2 obvious paths:

 1 Ship a full computer system, and the magazine will explain how to
   redo (a subset of) it piece by piece.
 2 Ship a bare-bones computer system (say, a Forth console), and the
   magazine will explain how to _bootstrap_ from there.

I think path 2 is best for curious hackers.  For children, path 1 is
probably better.  First, only path 1 can implement Bret Victor's ideas
about learn-able programming¹.  Second, it is probably best for the
child to be able to make something tangible right away, like a cat
chasing a mouse.

There's also a catch: device drivers.  Most computers have very
complicated hardware whose interfaces are not easy to program to.
The Raspberry-Pi itself has some proprietary parts.  Maybe a decent
kernel isn't that hard to write, but if it is, we may want to start
by sweeping a layer of virtualization under the rug.

Loup.

[1]: http://worrydream.com/LearnableProgramming/
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Not just clear of mind

2012-10-03 Thread Casey Ransberger
I like a nice Benedict with a mimosa on a Saturday brunch, but lots of my
friends in Seattle think it's gross and/or immoral that I eat chicken eggs
and pig meat (actually I've quit the pig part, so I don't get the Bennie
anymore.) I think kids do need some guidance, but the things they're really
going to glom onto are the things they figure out that they like all on
their own. So: why not take the kid to brunch and get it a Benedict, then,
instead of being focused on stopping it from having ice cream? If she likes
the Benedict, win. If not, try something else. Bonus points for getting
something other than a Benedict for yourself and letting her pick off your
plate in the event that the Benedict doesn't suit her fancy.

In programming: a buffet of language choices is probably a good plan in
general. Most programmers have seen C and something comparable to Perl.
Everyone on this list knows that there are more than two ideas. Keeping the
stuff in the buffet healthy is key, you're right. But young people, more so
than anyone else, desire the freedom to choose. I'd suggest that the real
dodge is to give them a long list of healthy things to choose from.

Okay, it's 7:13am and now I'm gonna eat some ice cream, crack a beer open,
listen to 50 Cent, and do some crimes. When I get back though, Bach is ON.

;)

On Mon, Oct 1, 2012 at 9:24 AM, John Pratt jpra...@gmail.com wrote:

 Children will eat ice cream for breakfast if you don't stop them.
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




-- 
Casey Ransberger
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Paul Homer
people will clean things up so long as they are sufficiently painful, but once 
this is achieved, people no longer care.

The idea I've been itching to try is to go backwards. Right now we use 
programmers to assemble larger and larger pieces of software, but as time goes 
on they get inconsistent and the inconsistencies propagate upwards. The result 
is that each new enhancement adds something, but also degenerates the overall 
stability. Eventually it all hits a ceiling.

If instead, programmers just built little pieces, and it was the computer 
itself that was responsible for assembling it all together into mega-systems, 
then we could reach scales that are unimaginable today. To do this of course, 
the pieces would have to be tightly organized. Contributors wouldn't have the 
freedom they do now, but that's a necessity to move from what is essentially a 
competitive environment to a cooperative one. Some very fixed rules become 
necessary.

One question that arises from this type of idea is whether or not it is even 
possible for a computer to assemble a massive working system from say, a 
billion little code fragments. Would it run straight into NP? In the general 
case I don't think the door could be closed on this, but realistically the 
assembling doesn't have to happen in real-time, most of it can be cached and 
reused. Also, the final system would be composed of many major pieces, each 
with their own breakdown into sub-pieces and each of those being also 
decomposable. Thus getting to a super-system could take a tremendous amount of 
time, but that could be distributed over a wide and mostly independent set of 
work. As it is, we have 1M+ programmers right now, most of whom are essentially 
writing the same things (just slightly different). With that type of man-power 
directed, some pretty cool things could be created.

Paul.






 From: BGB cr88...@gmail.com
To: fonc@vpri.org 
Sent: Tuesday, October 2, 2012 5:48:14 PM
Subject: Re: [fonc] How it is
 

On 10/2/2012 12:19 PM, Paul Homer wrote:

It always seems to be that each new generation of programmers goes straight 
for the low-hanging fruit, ignoring that most of it has already been solved 
many times over. Meanwhile the real problems remain. There has been progress, 
but over the couple of decades I've been working, I've always felt that it was 
'2 steps forward, 1.99 steps back. 



it depends probably on how one measures things, but I don't think it
is quite that bad.

more like, I suspect, a lot has to do with pain-threshold:
people will clean things up so long as they are sufficiently
painful, but once this is achieved, people no longer care.

the rest is people mostly recreating the past, often poorly, usually
under the idea this time we will do it right!, often without
looking into what the past technologies did or did not do well
engineering-wise.

or, they end up trying for something different, but usually this
turns out to be recreating something which already exists and turns
out to typically be a dead-end (IOW: where many have gone before,
and failed). often the people will think why has no one done it
before this way? but, usually they have, and usually it didn't turn
out well.

so, a blind rebuild starting from nothing probably wont achieve
much.
like, it requires taking account of history to improve on it
(classifying various options and design choices, ...).


it is like trying to convince other language/VM
designers/implementers that expecting the end programmer to have to
write piles of boilerplate to interface with C is a problem which
should be addressed, but people just go and use terrible APIs
usually based on registering the C callbacks with the VM (or they
devise something like JNI or JNA and congratulate themselves, rather
than being like this still kind of sucks).

though in a way it sort of makes sense:
many language designers end up thinking like this language will
replace C anyways, why bother to have a half-decent FFI?
whereas it is probably a minority position to design a language and
VM with the attitude C and C++ aren't going away anytime soon.


but, at least I am aware that most of my stuff is poor imitations of
other stuff, and doesn't really do much of anything actually
original, or necessarily even all that well, but at least I can try
to improve on things (like, rip-off and refine).

even, yes, as misguided and wasteful as it all may seem sometimes...


in a way it can be distressing though when one has created something
that is lame and ugly, but at the same time is aware of the various
design tradeoffs that has caused them to design it that way (like, a
cleaner and more elegant design could have been created, but might
have suffered in another way).

in a way, it is a slightly different experience I suspect...



Paul.





Re: [fonc] How it is

2012-10-03 Thread Carl Gundel
+1

Aiming the message at children means that we don't need to trick them, or 
force them.

-Carl Gundel

-Original Message-
From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of Loup 
Vaillant
Sent: Wednesday, October 03, 2012 4:40 AM
To: fonc@vpri.org
Subject: Re: [fonc] How it is

Pascal J. Bourguignon a écrit :
 The problem is not the sources of the message.  It's the receiptors.

Even if it's true, it doesn't help.  Unless you see that as an advice to just 
give up, that is.

Assuming we _don't_ give up, who can we reach even those that won't listen?  I 
only have two answers: trick them, or force them.  Most probably a 
killer-something, followed by the revelation that it uses some alien 
technology.  Now the biggest roadblock is making the alien tech not scary 
(alien technology is already bad in this respect).

An example of a killer-something might be a Raspberry-Pi shipped with a 
self-documented Frank-like image.  By self-documented, I mean something more 
than emacs.  I mean something filled with tutorials about how to implement, 
re-implement, and customise every part of the system.

And it must be aimed at children.  Unlike most adults, they can get past C-like 
syntax.

Loup.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Loup Vaillant

De : Paul Homer paul_ho...@yahoo.ca


If instead, programmers just built little pieces, and it was the
computer itself that was responsible for assembling it all together into
mega-systems, then we could reach scales that are unimaginable today.
[…]


Sounds neat, but I cannot visualize an instantiation of this.  Meaning,
I have no idea what assembling mechanisms could be used.  Could you
sketch a trivial example?

Loup.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Miles Fidelman

Loup Vaillant wrote:

De : Paul Homer paul_ho...@yahoo.ca


If instead, programmers just built little pieces, and it was the
computer itself that was responsible for assembling it all together into
mega-systems, then we could reach scales that are unimaginable today.
[…]


Sounds neat, but I cannot visualize an instantiation of this. Meaning,
I have no idea what assembling mechanisms could be used.  Could you
sketch a trivial example?

You're thinking too small!  The Internet (networks + computers + 
software + users), RESTful services, mashups, email discussion threads, 
 - great examples of emergent behavior.


Miles Fidelman

--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread J. Andrew Rogers

On Oct 3, 2012, at 7:53 AM, Paul Homer paul_ho...@yahoo.ca wrote:
 If instead, programmers just built little pieces, and it was the computer 
 itself that was responsible for assembling it all together into mega-systems, 
 then we could reach scales that are unimaginable today. To do this of course, 
 the pieces would have to be tightly organized.


The missing element is an algorithmic method for decomposing the representation 
of large, distributed systems that is general enough that it does not imply 
significant restrictions on operator implementation at the level of individual 
piece such that the programmer has to be aware of the whole system 
implementation. The details of the local implementation will matter much less 
but it will also be a very different type of interface than programmers are 
used to designing toward. 

Programming environments improperly conflate selecting data models and 
operators with selecting data structures and algorithms.  Reimplementation is 
common because I need sorting, maps, vectors, etc in the abstract to build the 
software but the algorithms someone else may use to optimally implement them 
for their data model is pathological for my data model. The algorithms used to 
operate on a data model are not separable from the data model but most 
programming environments treat them as though they are. Large systems are not 
nicely decomposable because they impose global implementation details beyond 
the interfaces of the pieces.

A canonical example is the decomposition of an ad hoc relational join 
operation. Common data model representations impose the use of algorithms that 
are pathological for this purpose. You can't get there, or even close, with the 
data structures commonly used to represent data models. At the root, it is a 
data structure problem.

--
J. Andrew Rogers


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Loup Vaillant

Miles Fidelman a écrit :

Loup Vaillant wrote:

De : Paul Homer paul_ho...@yahoo.ca


If instead, programmers just built little pieces, and it was the
computer itself that was responsible for assembling it all together into
mega-systems, then we could reach scales that are unimaginable today.
[…]


Sounds neat, but I cannot visualize an instantiation of this. Meaning,
I have no idea what assembling mechanisms could be used.  Could you
sketch a trivial example?


You're thinking too small!  The Internet (networks + computers +
software + users), RESTful services, mashups, email discussion threads,
 - great examples of emergent behavior.


Emergent?  Beware, this words often reads Phlogiston. (It's often
used to explain phenomenons we just don't understand yet.)

The examples you provided are based on static standards (IP, HTTP, SMTP
—I don't know about mashups).  One characteristic of these standards
is, they are _dumb_.  Which is the point, as intelligence is supposed
to lie at the edge of the network (basic Internet principle that is at
risk these times).

Your idea seemed quite different.  I had the impression of something
_smart_, able to lift a significant part of the programming effort.  I
visualised some sort of self-assembling 'glue', whose purpose would be
to assemble various code snippets to do our bidding.

Note that we have already examples of such things.  Compilers, garbage
collectors, inferential engines… even game scripting engines. But those
are highly specialized. You seem to have in mind something more general.
But what, short of a full blown AI?

I see small because I see squat.  What kind of code fragments could be
involved? How the whole system may be specified? You do need to program
the system into doing what you want, eventually.

Loup.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Miles Fidelman

Loup Vaillant wrote:

Miles Fidelman a écrit :

Loup Vaillant wrote:

De : Paul Homer paul_ho...@yahoo.ca


If instead, programmers just built little pieces, and it was the
computer itself that was responsible for assembling it all together 
into

mega-systems, then we could reach scales that are unimaginable today.
[…]


Sounds neat, but I cannot visualize an instantiation of this. Meaning,
I have no idea what assembling mechanisms could be used. Could you
sketch a trivial example?


You're thinking too small!  The Internet (networks + computers +
software + users), RESTful services, mashups, email discussion threads,
 - great examples of emergent behavior.


Emergent?  Beware, this words often reads Phlogiston. (It's often
used to explain phenomenons we just don't understand yet.)


I believe it was Ilya Prigogine who won the Nobel Prize for his work on 
dissipative structures - the gist of which is that if you pour energy 
into a system, order emerges - essentially the inverse of the 2nd law of 
thermodynamics.


I'd also observe that the nature of biological evolution is that various 
kinds of building blocks (e.g., proteins, DNA), and that new levels of 
order emerge from combinations of building blocks.


My observation is that today's large systems are not designed, they're 
complex, adaptive systems (to use the current jargon) - where a lot of 
the observed systemic behavior emerges from the interactions of users 
and technology, when focused on particular applications.  My favorite 
example: walk into an Air Force operations center, and the screens 
aren't covered with windows from fancy command and control systems, 
they're covered with chat windows.  When a new op center is stood up, 
specific collections of persistent chat sessions emerge, over a period 
of weeks, as relationships and information flows emerge in response to 
the specific operational and mission environment.  The result is a 
rather complex person-machine-information_flow system that has emerged, 
on top of a rather simple platform.




The examples you provided are based on static standards (IP, HTTP, SMTP
—I don't know about mashups).  One characteristic of these standards
is, they are _dumb_.  Which is the point, as intelligence is supposed
to lie at the edge of the network (basic Internet principle that is at
risk these times).

Your idea seemed quite different.  I had the impression of something
_smart_, able to lift a significant part of the programming effort.  I
visualised some sort of self-assembling 'glue', whose purpose would be
to assemble various code snippets to do our bidding.


I kind of think that, with the right building blocks (right = a 
combination of useful, and presenting simple, composable interfaces), 
users, environment, and application provide a framework for something 
that looks very close to biological self-assembly.


Note that we have already examples of such things.  Compilers, garbage
collectors, inferential engines… even game scripting engines. But those
are highly specialized. You seem to have in mind something more general.
But what, short of a full blown AI?

I see small because I see squat.  What kind of code fragments could be
involved? How the whole system may be specified? You do need to program
the system into doing what you want, eventually.


That's where we disagree.  Large, complex systems are, by definition, 
more complicated than their component parts - and systems that include 
multiple human beings are intrinsically beyond the comprehension of or 
design by humans (we're components, after all).  Maybe we can grasp 
architectural principles, but details are beyond are cognitive 
abilities.  These days, we build platforms, turn them loose, and things 
emerge on top of them - push information through them (and remember, 
information = energy), and order emerges.  (Another example: an email 
list - the useful aspect are the conversational threads that emerge from 
use, not from design.)


Miles Fidelman



--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread BGB

On 10/3/2012 9:53 AM, Paul Homer wrote:
people will clean things up so long as they are sufficiently painful, 
but once this is achieved, people no longer care.


The idea I've been itching to try is to go backwards. Right now we use 
programmers to assemble larger and larger pieces of software, but as 
time goes on they get inconsistent and the inconsistencies propagate 
upwards. The result is that each new enhancement adds something, but 
also degenerates the overall stability. Eventually it all hits a ceiling.


If instead, programmers just built little pieces, and it was the 
computer itself that was responsible for assembling it all together 
into mega-systems, then we could reach scales that are unimaginable 
today. To do this of course, the pieces would have to be tightly 
organized. Contributors wouldn't have the freedom they do now, but 
that's a necessity to move from what is essentially a competitive 
environment to a cooperative one. Some very fixed rules become necessary.


One question that arises from this type of idea is whether or not it 
is even possible for a computer to assemble a massive working system 
from say, a billion little code fragments. Would it run straight into 
NP? In the general case I don't think the door could be closed on 
this, but realistically the assembling doesn't have to happen in 
real-time, most of it can be cached and reused. Also, the final system 
would be composed of many major pieces, each with their own breakdown 
into sub-pieces and each of those being also decomposable. Thus 
getting to a super-system could take a tremendous amount of time, but 
that could be distributed over a wide and mostly independent set of 
work. As it is, we have 1M+ programmers right now, most of whom are 
essentially writing the same things (just slightly different). With 
that type of man-power directed, some pretty cool things could be created.




as to whether or not this can be applied generally.

but, some of this I think has to do, not so much with code, but with 
data and metadata.



code is much better about executing a particular action, representing a 
particular algorithm, ...
but, otherwise, code isn't all that inherently flexible. you can't 
really reinterpret code.


data then, isn't so much about doing something, but describing something 
in some particular format.
the code then can partly operate by interpreting the data, and 
completing a particular action with it.


some flexibility and scalability can be gained then by taking some 
things which would be otherwise hard-coded, and making them be part of 
the data.


however, trying to make data into code, or code into data, looses much 
of its advantages.
hard-coding things that would otherwise be data, often makes things 
brittle and makes maintenance and expansion harder.


trying to make code into data tends to impair its ability to actually 
do stuff, and may often end up being bulkier and/or less clear than 
actual code would have been, and unlike true data, greatly diminishes 
its ability to be reinterpreted (often, less is more when it comes to 
data representations).



a few semi-effective compromises seems to be:
A, scripting, where code is still code (or, at least, dynamically loaded 
scripts), but which may be embedded in or invoked by data (a major 
practical example of this is the whole HTML+JavaScript thing);
B, plug-board representation of logic within data, where the data will 
indicate which actions invoke which piece of code (without describing 
the code itself).


there is no clear way to scale this up to an everything level though.
(you wouldn't want the whole world to be HTML+JavaScript, just, no...).


metadata is, OTOH, data about code.
so, it is not code, since you can't really run metadata;
but, it isn't really data either, given what it describes is typically 
tightly integrated with the code itself (most often, it is stuff that 
the compiler knows while compiling the code, but is routinely discarded 
once the compiler finishes its work).



so, I guess it can be noted that there are several major types of 
metadata in current use:
symbolic debugging information, whose main purpose is to relate the 
compiled output back to the original source code;
reflective metadata, whose main purpose is generally to allow code to 
work on data-structures and objects;
specialized structures, such as class-definition and exception-unwinding 
tables, which are generally used as a part of the ABI.


on various systems, these may be more-or-less integrated, for example, 
on current Linux systems, all 3 cases are (sort-of) handled by DWARF, 
but on Windows using the MS tool-chain, these parts are more-or-less 
independent.


C and C++ typically lack reflective metadata, but this is commonly used 
in languages like Java and C#. in dynamically-typed languages, the 
object is often itself its own metadata.



meanwhile, in my case, I build and use reflective metadata for C.

the big part of the power of 

Re: [fonc] How it is

2012-10-03 Thread Paul Homer
A bit long, but ...


The way most people think about programming is that they are writing 'code'. As 
a lessor side-effect, that code is slinging 
around data. It grabs it from the user, throws it into memory and then 
if it is interesting data, it writes it to disk so that it can be looked at or 
edited later. The code is the primary thing they are creating, 
while the data is just a side-effect of using that code.


Way back I got introduced to seeing it the other way around. Data is 
everything. It's what the user types in, which is moved into some 
data-structures in memory and then is eventually restructured for 
persistence to be stored for later usage. Data sometimes contains 
'static linkages', that is one datam points to another explicitly. 
Sometimes the linkages are dynamic. A piece of code has to be run to 
make the connection between the data. In this perspective, code is 
nothing more than dynamic linkages or transformations between 
data-structures/formats (one could see the average of a bunch of floats 
for example as a transformation to a more simplified summation of the 
original data). The system is really just a massive flow of data, while 
the code is just what helps it get from place to place.

In the second perspective, an inventory system allows the data to flow 
from the users to the persistence medium. Sometimes the users need the 
data to flow back to them again, possibly summarized, or just for 
re-editing. The core of the system holds very simple data, basically a 
series of physical items, each with many associated properties and 
probably a bunch of cross-relationships. The underlying types, properties and 
relationships form a model of the data. For our modern systems that model might 
be implemented as a relational schema, but it could also be more exotic like 
NoSQL. 


In this sort of system, if the model where stored explicitly in the 
persistence and it is simple enough that the users could do data entry 
directly on a flat representation of it on the screen, then the whole 
system would be as simple as flinging the data back and forth between 
the disks and the screen. However as we all know, systems are never this 
trivial in the real world. 


Users need to navigate to specific data, and they often want the computer to 
fill in any 'global context information' for them as they move around. 
As well, they generally enter data in a simplified format, store the 
data in another, and then want a third way to view it. All of this 
amounts to a series of transformations happening to the data as it flows back 
and forth. Some transformations are simple, such as displaying a 
floating point number as a string truncated to some level of precision. 
Some are very complex, such as displaying a report that cross-checks the 
inventory to determine data or real-life problems. But all of the things on 
the screen are either directly data, or algorithmic transformations of 
the existing data.


As for programming, this type of system could be build by first specifying the 
model. To add to this would be a series of transformations, each 
basically a black box that specifies a set of input and a set of output. With 
the model and the transformations, someone could lay out a series 
of screens for the users (or power users could do it themselves). The 
underlying kernel of the system would then take requests for the screens and 
use that to work out the flow from or to the database. One could 
generalize this a bit further by ignoring any difference between the 
screen and the disks, and just thinking of them as a generalized 'context' of 
some 
type. 

What I like about this idea is that once someone creates a model, it can be 
re-used as is, elsewhere. Gradually industries will build up common models 
(with less being secret). And as they add billions of little 
transformations, these too can be shared. The kernel (if it it possible 
to actually write one :-) only needs to exist once. Then all that 
remains is for people to toss screens together as they need them (this 
part of programming is likely to never be static). As for performance, once a 
flow has been established, it would be possible to store and reuse any 
static data or transformation sequences, and that auto-optimization 
would only exist in the kernel so it could focus precisely on what 
provides the best results.

In a grand sense, you can see everything on the screen -- even little rounded 
corners, images and gadgets -- as just data that has flowed there from the disk 
somewhere (or network :-). The transformations behind something like a 
windowing system can appear daunting, but we know that they all started life as 
data somewhere that moved and bounced through a huge number of different 
data-structures, until finally ending up as a set of bits toggled in a screen 
buffer.

The on-going work to enhance the system would consistent of modeling data, and 
creating transformations. In comparison to modern 

Re: [fonc] How it is

2012-10-03 Thread karl ramberg
On Wed, Oct 3, 2012 at 6:14 PM, Loup Vaillant l...@loup-vaillant.fr wrote:
 Miles Fidelman a écrit :

 Loup Vaillant wrote:

 De : Paul Homer paul_ho...@yahoo.ca

 If instead, programmers just built little pieces, and it was the
 computer itself that was responsible for assembling it all together into
 mega-systems, then we could reach scales that are unimaginable today.
 […]


 Sounds neat, but I cannot visualize an instantiation of this. Meaning,
 I have no idea what assembling mechanisms could be used.  Could you
 sketch a trivial example?

 You're thinking too small!  The Internet (networks + computers +
 software + users), RESTful services, mashups, email discussion threads,
  - great examples of emergent behavior.


 Emergent?  Beware, this words often reads Phlogiston. (It's often
 used to explain phenomenons we just don't understand yet.)

 The examples you provided are based on static standards (IP, HTTP, SMTP
 —I don't know about mashups).  One characteristic of these standards
 is, they are _dumb_.  Which is the point, as intelligence is supposed
 to lie at the edge of the network (basic Internet principle that is at
 risk these times).

 Your idea seemed quite different.  I had the impression of something
 _smart_, able to lift a significant part of the programming effort.  I
 visualised some sort of self-assembling 'glue', whose purpose would be
 to assemble various code snippets to do our bidding.

 Note that we have already examples of such things.  Compilers, garbage
 collectors, inferential engines… even game scripting engines. But those
 are highly specialized. You seem to have in mind something more general.
 But what, short of a full blown AI?

 I see small because I see squat.  What kind of code fragments could be
 involved? How the whole system may be specified? You do need to program
 the system into doing what you want, eventually.

 Loup.


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

Some kind of AI would be necessary to achieve a self organizing
growing and learning system.
And the system would need to be general enough to be used for a wide
variety of applications.

Geoffrey Hinton have shown some interesting results for image recognition:
https://www.coursera.org/course/neuralnets

Karl
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Pascal J. Bourguignon
Paul Homer paul_ho...@yahoo.ca writes:

 The on-going work to enhance the system would consistent of modeling data, 
 and creating
 transformations. In comparison to modern software development, these would be 
 very little
 pieces, and if they were shared are intrinsically reusable (and 
 recombination).

Yes, that gives L4Gs.  Eventually (when we'll have programmed
everything) all computing will be only done with L4Gs: managers
specifying their data flows.  

But strangely enough, users are always asking for new programs…  Is it
because we've not programmed every functions already, or because we will
never have them all programmed?


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Paul Homer
I think it's because that's what we've told them to ask for :-) 

In truth we can't actually program 'everything', I think that's a side-effect 
of Godel's incompleteness theorem. But if you were to take 'everything' as 
being abstract quantity, the more we write, the closer our estimation comes to 
being 'everything'. That perspective lends itself to perhaps measuring the 
current state of our industry by how much code we are writing right now. In the 
early years, we should be writing more and more. In the later years, less and 
less (as we get closer to 'everything'). My sense of the industry right now is 
that pretty much every year (factoring in the economy and the waxing or waning 
of the popularity of programming) we write more code than the year before. Thus 
we are only starting :-)

Paul.





 From: Pascal J. Bourguignon p...@informatimago.com
To: Paul Homer paul_ho...@yahoo.ca 
Cc: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, October 3, 2012 3:32:34 PM
Subject: Re: [fonc] How it is
 
Paul Homer paul_ho...@yahoo.ca writes:

 The on-going work to enhance the system would consistent of modeling data, 
 and creating
 transformations. In comparison to modern software development, these would 
 be very little
 pieces, and if they were shared are intrinsically reusable (and 
 recombination).

Yes, that gives L4Gs.  Eventually (when we'll have programmed
everything) all computing will be only done with L4Gs: managers
specifying their data flows.  

But strangely enough, users are always asking for new programs…  Is it
because we've not programmed every functions already, or because we will
never have them all programmed?


-- 
__Pascal Bourguignon__                    http://www.informatimago.com/
A bad day in () is better than a good day in {}.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Alan Moore
Paul,

This sounds a little like Linda and TupleSpaces... what was that you were
saying about re-inventing the wheel over and over?

LOL...

Alan Moore




On Wed, Oct 3, 2012 at 11:34 AM, Paul Homer paul_ho...@yahoo.ca wrote:

 A bit long, but ...

 The way most people think about programming is that they are writing
 'code'. As a lessor side-effect, that code is slinging around data. It
 grabs it from the user, throws it into memory and then if it is interesting
 data, it writes it to disk so that it can be looked at or edited later. The
 code is the primary thing they are creating, while the data is just a
 side-effect of using that code.

 Way back I got introduced to seeing it the other way around. Data is
 everything. It's what the user types in, which is moved into some
 data-structures in memory and then is eventually restructured for
 persistence to be stored for later usage. Data sometimes contains 'static
 linkages', that is one datam points to another explicitly. Sometimes the
 linkages are dynamic. A piece of code has to be run to make the connection
 between the data. In this perspective, code is nothing more than dynamic
 linkages or transformations between data-structures/formats (one could see
 the average of a bunch of floats for example as a transformation to a more
 simplified summation of the original data). The system is really just a
 massive flow of data, while the code is just what helps it get from place
 to place.

 In the second perspective, an inventory system allows the data to flow
 from the users to the persistence medium. Sometimes the users need the data
 to flow back to them again, possibly summarized, or just for re-editing.
 The core of the system holds very simple data, basically a series of
 physical items, each with many associated properties and probably a bunch
 of cross-relationships. The underlying types, properties and relationships
 form a model of the data. For our modern systems that model might be
 implemented as a relational schema, but it could also be more exotic like
 NoSQL.

 In this sort of system, if the model where stored explicitly in the
 persistence and it is simple enough that the users could do data entry
 directly on a flat representation of it on the screen, then the whole
 system would be as simple as flinging the data back and forth between the
 disks and the screen. However as we all know, systems are never this
 trivial in the real world.

 Users need to navigate to specific data, and they often want the computer
 to fill in any 'global context information' for them as they move around.
 As well, they generally enter data in a simplified format, store the data
 in another, and then want a third way to view it. All of this amounts to a
 series of transformations happening to the data as it flows back and forth.
 Some transformations are simple, such as displaying a floating point number
 as a string truncated to some level of precision. Some are very complex,
 such as displaying a report that cross-checks the inventory to determine
 data or real-life problems. But all of the things on the screen are either
 directly data, or algorithmic transformations of the existing data.

 As for programming, this type of system could be build by first specifying
 the model. To add to this would be a series of transformations, each
 basically a black box that specifies a set of input and a set of output.
 With the model and the transformations, someone could lay out a series of
 screens for the users (or power users could do it themselves). The
 underlying kernel of the system would then take requests for the screens
 and use that to work out the flow from or to the database. One could
 generalize this a bit further by ignoring any difference between the screen
 and the disks, and just thinking of them as a generalized 'context' of some
 type.

 What I like about this idea is that once someone creates a model, it can
 be re-used as is, elsewhere. Gradually industries will build up common
 models (with less being secret). And as they add billions of little
 transformations, these too can be shared. The kernel (if it it possible to
 actually write one :-) only needs to exist once. Then all that remains is
 for people to toss screens together as they need them (this part of
 programming is likely to never be static). As for performance, once a flow
 has been established, it would be possible to store and reuse any static
 data or transformation sequences, and that auto-optimization would only
 exist in the kernel so it could focus precisely on what provides the best
 results.

 In a grand sense, you can see everything on the screen -- even little
 rounded corners, images and gadgets -- as just data that has flowed there
 from the disk somewhere (or network :-). The transformations behind
 something like a windowing system can appear daunting, but we know that
 they all started life as data somewhere that moved and bounced through a
 huge number of 

Re: [fonc] How it is

2012-10-03 Thread BGB

On 10/3/2012 2:46 PM, Paul Homer wrote:

I think it's because that's what we've told them to ask for :-)

In truth we can't actually program 'everything', I think that's a 
side-effect of Godel's incompleteness theorem. But if you were to take 
'everything' as being abstract quantity, the more we write, the closer 
our estimation comes to being 'everything'. That perspective lends 
itself to perhaps measuring the current state of our industry by how 
much code we are writing right now. In the early years, we should be 
writing more and more. In the later years, less and less (as we get 
closer to 'everything'). My sense of the industry right now is that 
pretty much every year (factoring in the economy and the waxing or 
waning of the popularity of programming) we write more code than the 
year before. Thus we are only starting :-)





yeah, this seems about right.

from my own experience, new code being written in any given area tends 
to drop off once that part is reasonably stable or complete, apart from 
occasional tweaks/extensions, ...


but, there is always more to do somewhere else, so on average the code 
gradually gets bigger, as more functionality gets added in various areas.


and, I often have to decide where I will not invest time and effort.

so, yeah, this falls well short of everything...



Paul.


*From:* Pascal J. Bourguignon p...@informatimago.com
*To:* Paul Homer paul_ho...@yahoo.ca
*Cc:* Fundamentals of New Computing fonc@vpri.org
*Sent:* Wednesday, October 3, 2012 3:32:34 PM
*Subject:* Re: [fonc] How it is

Paul Homer paul_ho...@yahoo.ca mailto:paul_ho...@yahoo.ca writes:

 The on-going work to enhance the system would consistent of
modeling data, and creating
 transformations. In comparison to modern software development,
these would be very little
 pieces, and if they were shared are intrinsically reusable (and
recombination).

Yes, that gives L4Gs.  Eventually (when we'll have programmed
everything) all computing will be only done with L4Gs: managers
specifying their data flows.

But strangely enough, users are always asking for new programs... 
Is it

because we've not programmed every functions already, or because
we will
never have them all programmed?


-- 
__Pascal Bourguignon__ http://www.informatimago.com/

A bad day in () is better than a good day in {}.




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread David Barbour
I discuss a similar vision in:

http://awelonblue.wordpress.com/2012/09/12/stone-soup-programming/

My preferred glue is soft stable constraint logics and my reactive
paradigm, RDP. I discuss a particular application of this technique with
regards to game art development:

http://awelonblue.wordpress.com/2012/09/07/stateless-stable-arts-for-game-development/

Regards,

Dave


On Wed, Oct 3, 2012 at 8:10 AM, Loup Vaillant l...@loup-vaillant.fr wrote:

 De : Paul Homer paul_ho...@yahoo.ca

  If instead, programmers just built little pieces, and it was the
 computer itself that was responsible for assembling it all together into
 mega-systems, then we could reach scales that are unimaginable today.
 […]


 Sounds neat, but I cannot visualize an instantiation of this.  Meaning,
 I have no idea what assembling mechanisms could be used.  Could you
 sketch a trivial example?

 Loup.


 __**_
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/**listinfo/fonchttp://vpri.org/mailman/listinfo/fonc




-- 
bringing s-words to a pen fight
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread David Barbour
Your idea of first specifying the model... then adding translations can
be made simpler and more uniform, btw, if you treat acquiring initial data
(the model) as a translation between, say, a URL or query and the result.

If you're interested in modeling computation as continuous synchronization
of bidirectional views between data models, you would probably be
interested in RDP (https://github.com/dmbarbour/Sirea/blob/master/README.md
).

Though, reuse of data models is necessarily more sophisticated than you are
imagining. There are many subtle and challenging issues in any conversion
between data models.  I discuss a few such issues here: (
http://awelonblue.wordpress.com/2011/06/15/data-model-independence/)




On Wed, Oct 3, 2012 at 11:34 AM, Paul Homer paul_ho...@yahoo.ca wrote:

 A bit long, but ...

 The way most people think about programming is that they are writing
 'code'. As a lessor side-effect, that code is slinging around data. It
 grabs it from the user, throws it into memory and then if it is interesting
 data, it writes it to disk so that it can be looked at or edited later. The
 code is the primary thing they are creating, while the data is just a
 side-effect of using that code.

 Way back I got introduced to seeing it the other way around. Data is
 everything. It's what the user types in, which is moved into some
 data-structures in memory and then is eventually restructured for
 persistence to be stored for later usage. Data sometimes contains 'static
 linkages', that is one datam points to another explicitly. Sometimes the
 linkages are dynamic. A piece of code has to be run to make the connection
 between the data. In this perspective, code is nothing more than dynamic
 linkages or transformations between data-structures/formats (one could see
 the average of a bunch of floats for example as a transformation to a more
 simplified summation of the original data). The system is really just a
 massive flow of data, while the code is just what helps it get from place
 to place.

 In the second perspective, an inventory system allows the data to flow
 from the users to the persistence medium. Sometimes the users need the data
 to flow back to them again, possibly summarized, or just for re-editing.
 The core of the system holds very simple data, basically a series of
 physical items, each with many associated properties and probably a bunch
 of cross-relationships. The underlying types, properties and relationships
 form a model of the data. For our modern systems that model might be
 implemented as a relational schema, but it could also be more exotic like
 NoSQL.

 In this sort of system, if the model where stored explicitly in the
 persistence and it is simple enough that the users could do data entry
 directly on a flat representation of it on the screen, then the whole
 system would be as simple as flinging the data back and forth between the
 disks and the screen. However as we all know, systems are never this
 trivial in the real world.

 Users need to navigate to specific data, and they often want the computer
 to fill in any 'global context information' for them as they move around.
 As well, they generally enter data in a simplified format, store the data
 in another, and then want a third way to view it. All of this amounts to a
 series of transformations happening to the data as it flows back and forth.
 Some transformations are simple, such as displaying a floating point number
 as a string truncated to some level of precision. Some are very complex,
 such as displaying a report that cross-checks the inventory to determine
 data or real-life problems. But all of the things on the screen are either
 directly data, or algorithmic transformations of the existing data.

 As for programming, this type of system could be build by first specifying
 the model. To add to this would be a series of transformations, each
 basically a black box that specifies a set of input and a set of output.
 With the model and the transformations, someone could lay out a series of
 screens for the users (or power users could do it themselves). The
 underlying kernel of the system would then take requests for the screens
 and use that to work out the flow from or to the database. One could
 generalize this a bit further by ignoring any difference between the screen
 and the disks, and just thinking of them as a generalized 'context' of some
 type.

 What I like about this idea is that once someone creates a model, it can
 be re-used as is, elsewhere. Gradually industries will build up common
 models (with less being secret). And as they add billions of little
 transformations, these too can be shared. The kernel (if it it possible to
 actually write one :-) only needs to exist once. Then all that remains is
 for people to toss screens together as they need them (this part of
 programming is likely to never be static). As for performance, once a flow
 has been established, it would be possible to 

Re: [fonc] How it is

2012-10-03 Thread David Barbour
AI is not necessary for self-organizing systems. You could use a lot of
independent, small constraint solvers to achieve equivalent effect. But
machine learning could make finding solutions very efficient and more
stable. I've been considering such techniques to help replace use of
stateful programming, since avoiding state where possible will result in
simpler systems. I discuss this here:
http://awelonblue.wordpress.com/2012/03/14/stability-without-state/

On Wed, Oct 3, 2012 at 11:37 AM, karl ramberg karlramb...@gmail.com wrote:

 On Wed, Oct 3, 2012 at 6:14 PM, Loup Vaillant l...@loup-vaillant.fr wrote:
  Miles Fidelman a écrit :
 
  Loup Vaillant wrote:
 
  De : Paul Homer paul_ho...@yahoo.ca
 
  If instead, programmers just built little pieces, and it was the
  computer itself that was responsible for assembling it all together
 into
  mega-systems, then we could reach scales that are unimaginable today.
  […]
 
 
  Sounds neat, but I cannot visualize an instantiation of this. Meaning,
  I have no idea what assembling mechanisms could be used.  Could you
  sketch a trivial example?
 
  You're thinking too small!  The Internet (networks + computers +
  software + users), RESTful services, mashups, email discussion threads,
   - great examples of emergent behavior.
 
 
  Emergent?  Beware, this words often reads Phlogiston. (It's often
  used to explain phenomenons we just don't understand yet.)
 
  The examples you provided are based on static standards (IP, HTTP, SMTP
  —I don't know about mashups).  One characteristic of these standards
  is, they are _dumb_.  Which is the point, as intelligence is supposed
  to lie at the edge of the network (basic Internet principle that is at
  risk these times).
 
  Your idea seemed quite different.  I had the impression of something
  _smart_, able to lift a significant part of the programming effort.  I
  visualised some sort of self-assembling 'glue', whose purpose would be
  to assemble various code snippets to do our bidding.
 
  Note that we have already examples of such things.  Compilers, garbage
  collectors, inferential engines… even game scripting engines. But those
  are highly specialized. You seem to have in mind something more general.
  But what, short of a full blown AI?
 
  I see small because I see squat.  What kind of code fragments could be
  involved? How the whole system may be specified? You do need to program
  the system into doing what you want, eventually.
 
  Loup.
 
 
  ___
  fonc mailing list
  fonc@vpri.org
  http://vpri.org/mailman/listinfo/fonc

 Some kind of AI would be necessary to achieve a self organizing
 growing and learning system.
 And the system would need to be general enough to be used for a wide
 variety of applications.

 Geoffrey Hinton have shown some interesting results for image recognition:
 https://www.coursera.org/course/neuralnets

 Karl
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




-- 
bringing s-words to a pen fight
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread John Nilsson
I read that post about constraints and kept thinking that it should be
the infrastructure for the next generation of systems development, not
art assets :)

In my mind it should be possible to input really fuzzy constraints
like It should have a good looking, blog-like design
A search engine would find a set of implications from that statement
created by designers and vetted by their peers. Some browsing and
light tweaking and there, I have a full front-end design provided for
the system.

Then I add further constraints. Available via http://blahblah.com/
and be really cheap, again the search engine will find the implied
constrains and provide options among the cheaper cloud providers. I
pick one of them and there provisioning is taken care of.

I guess the problem is to come up with a way to formalize all this
knowledge experts are sitting on into a representation usable by that
search engine. But could this not be done implicitly from the act of
selecting a match after a search?

Say some solution S derived from constrains A,B,C is selected in my
search. I have constraint A,B and D as input. By implication the
system now knows that S is a solution to D.

BR,
John


On Wed, Oct 3, 2012 at 11:09 PM, David Barbour dmbarb...@gmail.com wrote:
 I discuss a similar vision in:

 http://awelonblue.wordpress.com/2012/09/12/stone-soup-programming/

 My preferred glue is soft stable constraint logics and my reactive paradigm,
 RDP. I discuss a particular application of this technique with regards to
 game art development:

 http://awelonblue.wordpress.com/2012/09/07/stateless-stable-arts-for-game-development/

 Regards,

 Dave



 On Wed, Oct 3, 2012 at 8:10 AM, Loup Vaillant l...@loup-vaillant.fr wrote:

 De : Paul Homer paul_ho...@yahoo.ca

 If instead, programmers just built little pieces, and it was the
 computer itself that was responsible for assembling it all together into
 mega-systems, then we could reach scales that are unimaginable today.
 […]


 Sounds neat, but I cannot visualize an instantiation of this.  Meaning,
 I have no idea what assembling mechanisms could be used.  Could you
 sketch a trivial example?

 Loup.


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




 --
 bringing s-words to a pen fight

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread David Barbour
On Wed, Oct 3, 2012 at 2:37 PM, John Nilsson j...@milsson.nu wrote:

 I read that post about constraints and kept thinking that it should be
 the infrastructure for the next generation of systems development, not
 art assets :)


Got to start somewhere. The constraint-logic technology could always be
shifted to systems programming after it matures in a low-risk, high-reward
space.




 In my mind it should be possible to input really fuzzy constraints
 like It should have a good looking, blog-like design
 A search engine would find a set of implications from that statement
 created by designers and vetted by their peers. Some browsing and
 light tweaking and there, I have a full front-end design provided for
 the system.

 Then I add further constraints. Available via http://blahblah.com/
 and be really cheap, again the search engine will find the implied
 constrains and provide options among the cheaper cloud providers. I
 pick one of them and there provisioning is taken care of.

 I guess the problem is to come up with a way to formalize all this
 knowledge experts are sitting on into a representation usable by that
 search engine. But could this not be done implicitly from the act of
 selecting a match after a search?

 Say some solution S derived from constrains A,B,C is selected in my
 search. I have constraint A,B and D as input. By implication the
 system now knows that S is a solution to D.

 BR,
 John


 On Wed, Oct 3, 2012 at 11:09 PM, David Barbour dmbarb...@gmail.com
 wrote:
  I discuss a similar vision in:
 
  http://awelonblue.wordpress.com/2012/09/12/stone-soup-programming/
 
  My preferred glue is soft stable constraint logics and my reactive
 paradigm,
  RDP. I discuss a particular application of this technique with regards to
  game art development:
 
 
 http://awelonblue.wordpress.com/2012/09/07/stateless-stable-arts-for-game-development/
 
  Regards,
 
  Dave
 
 
 
  On Wed, Oct 3, 2012 at 8:10 AM, Loup Vaillant l...@loup-vaillant.fr
 wrote:
 
  De : Paul Homer paul_ho...@yahoo.ca
 
  If instead, programmers just built little pieces, and it was the
  computer itself that was responsible for assembling it all together
 into
  mega-systems, then we could reach scales that are unimaginable today.
  […]
 
 
  Sounds neat, but I cannot visualize an instantiation of this.  Meaning,
  I have no idea what assembling mechanisms could be used.  Could you
  sketch a trivial example?
 
  Loup.
 
 
  ___
  fonc mailing list
  fonc@vpri.org
  http://vpri.org/mailman/listinfo/fonc
 
 
 
 
  --
  bringing s-words to a pen fight
 
  ___
  fonc mailing list
  fonc@vpri.org
  http://vpri.org/mailman/listinfo/fonc
 
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




-- 
bringing s-words to a pen fight
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Miles Fidelman

Paul Homer wrote:

I'm in a slightly different head-space with this idea.

A URL for instance, is essentially an encoded set of instructions for 
navigating to somewhere and then if it is a GET, grabbing the 
associated data, lets say an image. If my theoretical user where to 
create a screen (or perhaps we could call it a visual context), they'd 
just drag-and-drop an image-type into the position they desired.

snip


That whole flow wouldn't be constructed by a programmer, just the 
translations, say bitmap-png, bits-compressed and compressed-bits.


Again, not a new concept - c.f., Intel Mashup Maker, Yahoo Pipes, Deri 
Pipes, ...



--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread Pascal J. Bourguignon
John Nilsson j...@milsson.nu writes:

 I read that post about constraints and kept thinking that it should be
 the infrastructure for the next generation of systems development, not
 art assets :)

 In my mind it should be possible to input really fuzzy constraints
 like It should have a good looking, blog-like design
 A search engine would find a set of implications from that statement
 created by designers and vetted by their peers. Some browsing and
 light tweaking and there, I have a full front-end design provided for
 the system.

 Then I add further constraints. Available via http://blahblah.com/
 and be really cheap, again the search engine will find the implied
 constrains and provide options among the cheaper cloud providers. I
 pick one of them and there provisioning is taken care of.

 I guess the problem is to come up with a way to formalize all this
 knowledge experts are sitting on into a representation usable by that
 search engine. But could this not be done implicitly from the act of
 selecting a match after a search?

 Say some solution S derived from constrains A,B,C is selected in my
 search. I have constraint A,B and D as input. By implication the
 system now knows that S is a solution to D.

Right.  Just a simple applicaition of AI and all the algorithms
developed so far.  You just need to integrate them in a working system.


And who has the resources to do this work: it seems to me to be a big
endeavour.  Collecting the research prototype developed during the
last 50 years, and develop a such a product.

Even Watson or Siri would only represent a small part of it.


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread David Barbour
Distilling what you just said to its essence:

   - humans develop miniature dataflows
   - search algorithm automatically glues flows together
   - search goal is a data type

A potential issue is that humans - both engineers and end-users - will
often want a fair amount of control over which translations and data
sources are used, options for those translations, etc.. You need a good way
to handle preferences, policy, configurations.

I tend to favor soft constraints in those roles. I'm actually designing a
module systems around the idea, and an implementation in Haskell (for RDP)
using the plugins system and dynamic types. (Related:
http://awelonblue.wordpress.com/2011/09/29/modularity-without-a-name/ ,
http://awelonblue.wordpress.com/2012/04/12/make-for-haskell-values-part-alpha/
).

Regards,

Dave

On Wed, Oct 3, 2012 at 3:33 PM, Paul Homer paul_ho...@yahoo.ca wrote:

 I'm in a slightly different head-space with this idea.

 A URL for instance, is essentially an encoded set of instructions for
 navigating to somewhere and then if it is a GET, grabbing the associated
 data, lets say an image. If my theoretical user where to create a screen
 (or perhaps we could call it a visual context), they'd just drag-and-drop
 an image-type into the position they desired. They'd have to have some way
 of tying that to 'which image', but for simplicity lets just say that they
 already created something that allows them to search, and then list all of
 the images from a known database context, so that the 'which image' is
 cascaded down from their earlier work. Once they 'made the screen live' and
 searched and selected, the underlying code would essentially get a request
 for a data flow that specified the context (location), some 'type'
 information (an image) and a context-specific instance id (as passed in
 from the search and list). The kernel would then arrange for that data to
 be moved from where-ever it is (local or remote, but lets go with remote)
 and converted (if its base format was something the user's screen couldn't
 handle, say a custom bitmap). So along the way there might be a translation
 from one image format to another, and perhaps a 'compress and decompress'
 if the source is remote.

 That whole flow wouldn't be constructed by a programmer, just the
 translations, say bitmap-png, bits-compressed and compressed-bits. The
 kernel would work backwards, knowing that it needed an image in png format,
 and knowing that there exists base data stored in another context as a
 bitmap, and knowing that for large data it is generally cheaper to
 compress/decompress if the network is involved. The kernel would
 essentially know the absolute minimum about the flow, and thus could
 algorithmically decide on the optimal amount of work.

 For most basic systems, for most data, once the user navigated into
 something it's just a matter of shifting the data. I've done an end-run
 around any of the processing issues, by jumping dumping them into the
 kernel. From your list, scatter-gather, queries and views, etc. are all
 left up the the translations. Incremental is just having the model in the
 context handles updates. ACID is a property of the context.

 I haven't given any real thought to issues like pulls or bi-directional
 but I think that the screen would just send a flow back to the original
 context in an observer style pattern associated with the raw pre-translated
 data. If any of that changed in the context, the screen would redo any
 'dirty' flows, but that might not be a workable approach for millions of
 users watching the same data.

 The crux of this (crazy) idea is really that the full intelligence
 necessary for moving the data about and playing with it is highly
 fragmented. Programmers don't have to write massive intelligent sets of
 instructions, they just have to know how data goes from one format to
 another. They can do their thing in small bits and pieces and be as
 organized or inconsistent as they like. The system comes together from the
 intelligence embedded in the kernel, but the kernel isn't concerned with
 what are essentially domain or data issues. It's all just bits that are on
 their way from one place to another, and translations that are required
 along the way. Most of the code-specific issues like security melt away
 (you have access to a context or you don't) mostly because the linkage
 between the user and data is under control of just one single (distributed)
 program.


 Paul.

   --
 *From:* David Barbour dmbarb...@gmail.com

 *To:* Paul Homer paul_ho...@yahoo.ca; Fundamentals of New Computing 
 fonc@vpri.org
 *Sent:* Wednesday, October 3, 2012 5:27:12 PM

 *Subject:* Re: [fonc] How it is

 Your idea of first specifying the model... then adding translations can
 be made simpler and more uniform, btw, if you treat acquiring initial data
 (the model) as a translation between, say, a URL or query and the result.

 If you're interested in 

Re: [fonc] How it is

2012-10-03 Thread John Nilsson
On Thu, Oct 4, 2012 at 1:06 AM, Pascal J. Bourguignon
p...@informatimago.com wrote:
 And who has the resources to do this work: it seems to me to be a big
 endeavour.  Collecting the research prototype developed during the
 last 50 years, and develop a such a product.

I'm not sure it has to be that big of an effort. Most of the work is
already being performed. From my previous examples we have projects
like twitter bootstrap and htm5 boilerplate being created for the
infrastructure, and surrounding that people are creating templates for
various frameworks. In the middle we have organizations like google to
keep it all accessible to the people gluing it together. Provided even
a remote possibility of being indexed and somewhat intelligently
filtered I'm sure google will solve the search part. So what's needed
could be just the right type of glue for automating scaffolding and
bootstrap tasks in a more general sense and some software to take
advantage of it. Given that this software makes some step easier for
people it could be enough to direct efforts toward formalizing the
required meta data.
And then you iterate :)

BR,
John
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc