Re: [fonc] OT? Polish syntax

2012-03-18 Thread Martin Baldan
Hi Shaun, sorry for the delay.

Ambi is apparently a concatenative, stack-based language, similar to
Cat. Those are interesting for their own reasons (and they also have
their own problems) but it's not exactly what I'm thinking of.

REBOL is much closer, but I would like to have more diversity of
approaches with a similar syntax, not just REBOL clones.

So what do I want?

Let's start with Lisp. It's a great programming language and the
syntax is beautiful, because you can always parse an S-expression,
even if you don't know what's in there. Also, paren matching is a
non-issue with something like Paredit. I want to make it clear: I love
the parens, I hate infix notation (except, perhaps, for RDF, where
everything is made of triples), and I find indentation-based syntax
kind of pointless.

The only problem I see with the parens is when you try to speak in
Lisp. Human language is not like that. People build complex sentences,
with higher-order functions, without using anything like parens. The
main difference is the use of types. Human language is usually typed.
This has been discussed at least since Montague's "English as a Formal
Language". Nowadays the idea of treating spoken language as a typed
formal language is pretty standard in machine translation. For
instance, have a look at GF, a modern open source tool for building
multilingual translation systems:

http://www.grammaticalframework.org/

http://www.grammaticalframework.org/lib/doc/synopsis.html

http://www.grammaticalframework.org/doc/gf-quickstart.html



So it seems clear that a speech-like language should have types: not
types like "integer" or "character", but types like "noun", "noun
phrase" and "proposition". There must also be tools to convert from any
type to any other, as we do in natural speech. For instance, we can
say "Mary walks" but we can also say "walking is healthy".

For programming languages, maybe simply-typed lambda calculus, with
just one base type, would be more appropriate. In any case, I'm
interested in ways to have a terse, concise, regular *and* highly
expressive syntax, with no built-in assumptions about types, but where
types can help you use fewer words.

Here's a little example I made up of the kind of syntax I would like
to see. Don't take it too seriously; it's just to convey the idea of
what I'm looking for, since you asked for examples.

We have a function definition in Kernel Lisp:

($define! print ($lambda (x) (map write-char (string-list x))))

Now, with a pure Polish, simply-typed syntax, I think it might look
something like this:


$type $define! $fun e fun! e e
$type $lambda $fun e e
$type $map $fun e fun e e
$type write-char fun! e e
$type string-list fun e e

$define! print $lambda x $map write-char string-list x


The dollar signs mark the operatives (i.e., the fexprs), so I'm borrowing
some syntax from Kernel. Sorry for mixing up two very different ideas,
but that's how I see it now. Otherwise, I would have to use quotation
instead of operatives.
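
To make the "no parens needed" claim concrete, here is a toy reader for
that flat token stream. It's plain Scheme rather than Kernel, and the
arity table is just my reading of the $type lines above (each n-argument
function type becomes arity n); I give write-char arity 0 because in this
program it only ever appears as an argument to $map, and handling that
case properly is exactly what the operative/type trick is about.

(define arities
  '(($define! . 2) ($lambda . 2) ($map . 2)
    (write-char . 0) (string-list . 1)))

;; parse : token list -> (tree . remaining tokens)
;; each head consumes a fixed number of the expressions that follow it
(define (parse tokens)
  (let* ((head (car tokens))
         (entry (assq head arities))
         (arity (if entry (cdr entry) 0)))
    (let loop ((n arity) (rest (cdr tokens)) (args '()))
      (if (= n 0)
          (cons (if (null? args) head (cons head (reverse args))) rest)
          (let ((sub (parse rest)))
            (loop (- n 1) (cdr sub) (cons (car sub) args)))))))

(car (parse '($define! print $lambda x $map write-char string-list x)))
;; => ($define! print ($lambda x ($map write-char (string-list x))))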


Notice that I'm using pure Polish notation even in the type
signatures. This is not standard practice in type theory, where
they use infix arrows and parens instead, which I find uglier. For
instance, instead of:

$type $map $fun e fun e e

using a more standard notation, it would look like this:

$map :: e $-> e -> e

I had to turn map into an operative (hence $map); otherwise, as I
said, I would have to quote the function. This is because I don't know
the type of the function which will serve as the argument to map. In
Haskell the type signature of map is:

(a -> b) -> [a] -> [b]


Once you are using this kind of trick in the type signature, I wonder
whether the type system is pushing the limits of what your signature
notation can express. So I would like to be able to avoid those
notation tricks, and one way to do so, I think, is to take the symbol
as the argument, instead of the function itself.
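
In Scheme terms, passing the symbol and resolving it on the other side
might look like the toy sketch below. The registry is made up just for
the example; a real Kernel operative would instead receive the symbol
unevaluated and could look it up in the caller's dynamic environment.

(define named-fns
  `((display . ,display) (write-char . ,write-char)))

(define (map-by-name fname lst)      ; takes the *symbol*, not the function
  (map (cdr (assq fname named-fns)) lst))

(map-by-name 'display '(1 2 3))      ; prints 123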

Also, the new version of $lambda only takes one argument (i.e., x)
instead of a list of arguments, so I would need nested lambdas for
several arguments. This is not unusual in some notations. I just
wanted to show you a version with no lists at all, but of course
there's nothing wrong with having some functions take a list as an
argument where it makes sense. I just want to reduce the use of lists
as much as possible, to make the language more concise.
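
By the way, this one-parameter style is just ordinary currying. In plain
Scheme (names made up, nothing to do with the example above) it looks
like this:

(define add (lambda (x) (lambda (y) (+ x y))))

((add 2) 3)   ; => 5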

I hope this helps clarify my previous post.

Best,

 -Martin







On Thu, Mar 15, 2012 at 5:52 PM, shaun gilchrist shaunxc...@gmail.com wrote:
 This looks interesting: https://code.google.com/p/ambi/ - instead of
 supporting infix it supports both polish and reverse polish. Can you give
 some examples of what your ideal syntax would look like which illustrates
 the spoken language aspect you touched on? -Shaun

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] OT? Polish syntax

2012-03-18 Thread Martin Baldan
BGB, please see my answer to Shaun. In short:

_ I'm not looking for stack-based languages. I want a Lisp which got
rid of (most of) the parens by using fixed arity and types,
without any loss of genericity, homoiconicity or other desirable
features. REBOL does just that, but it's not so good regarding
performance, the type system, etc.

_ I *hate* infix notation. It can only make sense where everything has
arity 3, like in RDF.

_ Matching parens is a non-issue. Just use Paredit or similar ;)

_ Umm, "whitespace sensitive" sounds a bit dangerous. I have enough
with Python :p

Thanks for your input.

Best,

 -Martin


On Thu, Mar 15, 2012 at 6:54 PM, BGB cr88...@gmail.com wrote:
 On 3/15/2012 9:21 AM, Martin Baldan wrote:

 I have a little off-topic question.
 Why are there so few programming languages with true Polish syntax? I
 mean, prefix notation, fixed arity, no parens (except, maybe, for
 lists, sequences or similar). And of course, higher order functions.
 The only example I can think of is REBOL, but it has other features I
 don't like so much, or at least are not essential to the idea. Now
 there are some open-source clones, such as Boron, and now Red, but
 what about very different languages with the same concept?

 I like pure Polish notation because it seems as conceptually elegant
 as Lisp notation, but much closer to the way spoken language works.
 Why is it that this simple idea is so often conflated with ugly or
 superfluous features such as native support for infix notation, or a
 complex type system?


 because, maybe?...
 harder to parse than Reverse-Polish;
 less generic than S-Expressions;
 less familiar than more common syntax styles;
 ...

 for example:
 RPN can be parsed very quickly/easily, and/or readily mapped to a stack,
 giving its major merit. this gives it a use-case for things like textual
 representations of bytecode formats and similar. languages along the lines
 of PostScript or Forth can also make reasonable assembler substitutes, but
 with higher portability. downside: typically hard to read.
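
 for instance, a toy RPN evaluator (just a sketch, in Scheme for
 concreteness, with a two-operator table made up for the example): each
 token either pushes a value or pops its operands off the stack.

 (define (rpn-eval tokens)
   (let loop ((ts tokens) (stack '()))
     (cond ((null? ts) (car stack))
           ((number? (car ts))
            (loop (cdr ts) (cons (car ts) stack)))
           (else                          ; operator: pop two, push result
            (let ((b (car stack)) (a (cadr stack))
                  (op (if (eq? (car ts) '+) + *)))
              (loop (cdr ts) (cons (op a b) (cddr stack))))))))

 (rpn-eval '(2 3 + 4 *))   ; => 20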

 S-Expressions, however, can represent a wide variety of structures. nearly
 any tree-structured data can be expressed readily in S-Expressions, and all
 they ask for in return is a few parentheses. among other things, this makes
 them fairly good for compiler ASTs. downside: hard to match parens or type
 correctly.
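
 for example (fragment made up for illustration): the C-ish statement
 "if (x < 0) y = -x; else y = x;" maps directly onto an S-Expression AST,
 and the same few parens cover statements, expressions and declarations
 alike.

 (define ast '(if (< x 0) (set! y (- x)) (set! y x)))
 (car ast)    ; => if              -- the node type
 (caddr ast)  ; => (set! y (- x))  -- the "then" branch subtree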

 common syntax (such as C-style), while typically harder to parse, and
 typically not all that flexible either, has all the usual stuff people
 expect in a language: infix arithmetic, precedence levels, statements and
 expressions, ... and the merit that it works fairly well for expressing most
 common things people will care to try to express with them. some people
 don't like semicolons and others don't like sensitivity to line-breaks or
 indentation, and one generally needs commas to avoid ambiguity, but most
 tend to agree that they would much rather be using this than either
 S-Expressions or RPN.

 (and nevermind some attempts to map programming languages to XML based
 syntax designs...).

 or, at least, this is how it seems to me.


 ironically, IMO, it is much easier to type C-style syntax interactively
 while avoiding typing errors than it is to type S-Expression syntax
 interactively while avoiding typing errors (maybe experience, maybe not,
 dunno). typically, the C-style syntax requires less total characters as
 well.

 I once designed a language syntax specially for the case of being typed
 interactively (for terseness and taking advantage of the keyboard layout),
 but it turned out to be fairly difficult to remember the syntax later.

 some of my syntax designs have partly avoided the need for commas by making
 the parser whitespace sensitive regarding expressions, for example "a -b"
 will parse differently than "a-b" or "a - b". however, there are some common
 formatting quirks which would lead to frequent misparses with such a style:
 "foo (x+1);" will parse as 2 expressions, rather than as a function call.

 a partial downside is that it can lead to visual ambiguity if code is read
 using a variable-width font (as opposed to the good and proper route of
 using fixed-width fonts for everything... yes, this world is filled with
 evils like variable-width fonts and the inability to tell apart certain
 characters, like the "Il1" issue and similar...).

 standard JavaScript also uses a similar trick for implicit semicolon
 insertion, with the drawback that one needs to use care when breaking
 expressions otherwise the parser may do its magic in unintended ways.


 the world likely goes as it does due to lots of many such seemingly trivial
 tradeoffs.

 or such...


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Martin Baldan


 this is possible, but it assumes, essentially, that one doesn't run into
 such a limit.

 if one gets to a point where every fundamental concept is only ever
 expressed once, and everything is built from preceding fundamental concepts,
 then this is a limit, short of dropping fundamental concepts.

Yes, but I don't think any theoretical framework can tell us a priori
how close we are to that limit. The fact that we run out of ideas
doesn't mean there are no more new ideas waiting to be discovered.
Maybe if we change our choice of fundamental concepts, we can further
simplify our systems.

For instance, it was assumed that the holy grail of Lisp would be to
get to the essence of lambda calculus, and then John Shutt did away
with lambda as a fundamental concept: he derived it from vau, doing
away with macros and special forms in the process. I don't know
whether Kernel will live up to its promise, but in any case it was an
innovative line of inquiry.


 theoretically, about the only way to really do much better would be using a
 static schema (say, where the sender and receiver have a predefined set of
 message symbols, predefined layout templates, ...). personally though, I
 really don't like these sorts of compressors (they are very brittle,
 inflexible, and prone to version issues).

 this is essentially what "write a tic-tac-toe player in Scheme" implies:
 both the sender and receiver of the message need to have a common notion of
 both tic-tac-toe player and Scheme. otherwise, the message can't be
 decoded.

But nothing prevents you from reaching this common notion via previous
messages. So, I don't see why this protocol would have to be any more
brittle than a more verbose one.


 a more general strategy is basically to build a model from the ground up,
 where the sender and receiver have only basic knowledge of basic concepts
 (the basic compression format), and most everything else is built on the fly
 based on the data which has been seen thus far (old data is used to build
 new data, ...).

Yes, but, as I said, old data is used to build new data; there's
no need to repeat the old data over and over again. When two people
communicate with each other, they don't introduce themselves and share
their personal details again and again at the beginning of each
conversation.




 and, of course, such a system would likely be, itself, absurdly complex...


The system wouldn't have to be complex. Instead, it would *represent*
complexity through first-class data structures. The aim would be to
make the implicit complexity explicit, so that this simple system can
reason about it. More concretely, the implicit complexity is the
actual use of competing, redundant standards, and the explicit
complexity is an ontology describing those standards, so that a
reasoner can transform, translate and find redundancies with
dramatically less human attention. Developing such an ontology is by
no means trivial; it's hard work, but in the end I think it would be
very much worth the trouble.




 and this is also partly why making everything smaller (while keeping its
 features intact) would likely end up looking a fair amount like data
 compression (it is compression code and semantic space).


Maybe, but I prefer to think of it in terms of machine translation.
There are many different human languages, some of them more expressive
than others (for instance, with a larger lexicon, or a more
fine-grained tense system). If you want to develop an interlingua for
machine translation, you have to take a superset of all features of
the supported languages, and a convenient grammar to encode it (in GF
it would be an abstract syntax). Of course, it may be tricky to
support translation from any language to any other, because you may
need neologisms or long clarifications to express some ideas in the
least expressive languages, but let's leave that aside for the moment.
My point is that, once you do that, you can feed a reasoner with
literature in any language, and the reasoner doesn't have to
understand them all; it only has to understand the interlingua, which
may well be easier to parse than any of the target languages. You
didn't eliminate the complexity of human languages, but now it's
tidily packaged in an ontology, where it doesn't get in the reasoner's
way.
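
To make that concrete, here is a toy sketch of the idea of one abstract
term with several concrete linearizations. It's in Scheme, not GF, and
the two-word lexicons and the single rule are invented purely for
illustration; a real GF grammar also handles word order variation,
agreement and so on.

(define lexicons
  '((en . ((mary . "Mary")  (walk . "walks")))
    (es . ((mary . "María") (walk . "camina")))))

;; linearize an abstract term of the form (verb subject)
(define (linearize lang term)
  (let ((lex (cdr (assq lang lexicons))))
    (string-append (cdr (assq (cadr term) lex))    ; subject word
                   " "
                   (cdr (assq (car term) lex)))))  ; verb word

(linearize 'en '(walk mary))   ; => "Mary walks"
(linearize 'es '(walk mary))   ; => "María camina"

The reasoner, in this picture, only ever sees terms like (walk mary).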



 some of this is also what makes my VM sub-project as complex as it is: it
 deals with a variety of problem cases, and each adds a little complexity,
 and all this adds up. likewise, some things, such as interfacing (directly)
 with C code and data, add more complexity than others (simpler and cleaner
 FFI makes the VM itself much more complex).

Maybe that's because you are trying to support everything by hand,
with all this knowledge and complexity embedded in your code. On the
other hand, it seems that the VPRI team is trying to develop new,
powerful standards with all the combined features of the existing
ones while actually supporting a very small subset of those 

[fonc] Where is the Moshi image?

2012-03-13 Thread Martin Baldan
I've been reading a few more documents, and it seems that the first
step towards having something like Frank at home would be to get hold
of a Moshi Squeak image.

For instance, in "Implementing DBJr with Worlds" we can read:

Try It Yourself!
The following steps will recreate our demo. (Important: this only works in our
Moshi Squeak image. Bring in Worlds2-aw.cs, WWorld-A-tk.1.cs,
WWorld-B-tk.4.cs,
Worlds-Morph-A-tk.5.cs, Worlds-DBJr-B-tk.1.cs, then look at file 'LStack WWorld
workspace') These instructions are here so that we won't lose them.
This demo was
difficult to get working.



But the Mythical Moshi image turned out to be surprisingly elusive.

For instance, all I've found in this list is this email message:

http://www.mail-archive.com/fonc@vpri.org/msg01037.html

The other research projects based on the Moshi image are equally
interesting, but the Moshi image is nowhere to be downloaded, so one
can only read the code and papers about it.

Are we getting into military secret land? :D
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-12 Thread Martin Baldan


 that is a description of random data, which granted, doesn't apply to most
 (compressible) data.
 that wasn't really the point though.

I thought the original point was that there's a clear-cut limit to how
much redundancy can be eliminated from computing environments, and
that thousand-fold (and beyond) reductions in code size per feature
don't seem realistic. Then the analogy from data compression was used.
I think it's a pretty good analogy, but I don't think there's a
clear-cut limit we can estimate in advance, because meaningful data
and computations are not random to begin with. Indeed, there are
islands of stability where you've cut all the visible cruft and you
need new theoretical insights and new powerful techniques to reduce
the code size further.



 for example, I was able to devise a compression scheme which reduced
 S-Expressions to only 5% their original size. now what if I want 3%, or 1%?
 this is not an easy problem. it is much easier to get from 10% to 5% than to
 get from 5% to 3%.

I don't know, but there may be ways to reduce it much further if you
know more about the sexprs themselves. Or maybe you can abstract away
the very fact that you are using sexprs. For instance, if those sexprs
are a Scheme program for a tic-tac-toe player, you can say "write a
tic-tac-toe player in Scheme" and you capture the essence.

I expect much of future progress in code reduction to come from
automated integration of different systems, languages and paradigms,
and this integration to come from widespread development and usage of
ontologies and reasoners. That way, for instance, you could write a
program in BASIC, and then some reasoner would ask you questions such
as "I see you used a GOTO to build a loop. Is that correct?" or "this
array is called 'clients'; do you mean it as in server/client
architecture or in the business sense?". After a few questions like
that, the system would have a highly descriptive model of what your
program is supposed to do and how it is supposed to do it. Then it
would be able to write an equivalent program in any other programming
language. Of course, once you have such a system, there would be much
more powerful user interfaces than some primitive programming
language. Probably you would speak in natural language (or very close)
and use your hands to point at things. I know it sounds like full-on
AI, but I just mean an expert system for programmers.



 although many current programs are, arguably, huge, the vast majority of the
 code is likely still there for a reason, and is unlikely the result of
 programmers just endlessly writing the same stuff over and over again, or
 resulting from other simple patterns. rather, it is more likely piles of
 special case logic and optimizations and similar.


I think one problem is that "not writing the same stuff over and over
again" is easier said than done. To begin with, other people's code
may not even be available (or not under a free license). But even if
it is, it may have used different names, different coding patterns,
different third-party libraries and so on, while still being basically
the same. And this happens even within the same programming language
and environment. Not to speak of all the plethora of competing
platforms, layering schemes, communication protocols, programming
languages, programming paradigms, programming frameworks and so on.
Everyone says "let's do it my way", and then "my system can host
yours", "same here", "let's make a standard", "let's extend the
standard", "let's make a cleaner standard, now for real", "let's be
realistic and use the available standards", "let's not reinvent the
wheel", "we need backwards compatibility", "backwards compatibility is a
drag, let's reinvent the wheel". Half-baked standards become somewhat
popular, and then they have to be supported.

And that's how you get a huge software stack. Redundancy can be
avoided in centralized systems, but in distributed systems with
competing standards that's the normal state. It's not that programmers
are dumb, it's that they can't agree on pretty much anything, and they
can't even keep track of each other's ideas because the community is
so huge.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-11 Thread Martin Baldan
I won't pretend I really know what I'm talking about; I'm just
guessing here. But don't you think the requirement of independent,
identically-distributed random-variable data in Shannon's source
coding theorem may not be applicable to pictures, sounds or frame
sequences normally handled by compression algorithms? I mean, many
compression techniques rely on domain knowledge about the things to be
compressed. For instance, a complex picture or video sequence may
consist of a well-known background with a few characters from a
well-known inventory in well-known positions. If you know those facts,
you can increase the compression dramatically. A practical example may
be Xtranormal stories, where you get a cute 3-D animated dialogue from
a small script.
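
Just so we're pointing at the same quantity: for an i.i.d. source with
symbol probabilities p_i, the source coding theorem says no lossless code
can average fewer than -sum_i p_i log2 p_i bits per symbol. A toy sketch
in Scheme (the example distributions are made up):

(define (entropy ps)   ; ps: list of symbol probabilities, summing to 1
  (apply + (map (lambda (p)
                  (if (zero? p) 0 (- (* p (/ (log p) (log 2))))))
                ps)))

(entropy '(0.5 0.25 0.25))        ; => 1.5 bits/symbol
(entropy '(0.25 0.25 0.25 0.25))  ; => 2.0 bits/symbol

My question is precisely whether that i.i.d. assumption fits this kind
of data.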

Best,

-Martin

On Sun, Mar 11, 2012 at 7:53 PM, BGB cr88...@gmail.com wrote:
 On 3/11/2012 5:28 AM, Jakub Piotr Cłapa wrote:

 On 28.02.12 06:42, BGB wrote:

 but, anyways, here is a link to another article:
 http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


 Shannon's theory applies to lossless transmission. I doubt anybody here
 wants to reproduce everything down to the timings and bugs of the original
 software. Information theory is not thermodynamics.


 Shannon's theory also applies some to lossy transmission, as it also sets a
 lower bound on the size of the data as expressed with a certain degree of
 loss.

 this is why, for example, with JPEGs or MP3s, getting a smaller size tends
 to result in reduced quality. the higher quality can't be expressed in a
 smaller size.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] OT: Hypertext and the e-book

2012-03-09 Thread Martin Baldan
Thanks, interesting link. But I have some questions and comments:

_ How long does an e-reader last?

The article says:

 This means an iPad owner would need to offset 32.4 printed
books during the iPad’s lifetime to break even in terms of the carbon
footprint of reading those books.

But as far as I know, it doesn't say what the iPad's lifetime is, so I
don't know how many books per year that means.
By the way, an iPad is not more of an ebook reader than a desktop is.
I would say that only e-ink devices (or something just as good in
terms of visual comfort) deserve to be called e-book readers.

_ "Hey, we forgot about newspapers and magazines!":

If you are also offsetting printed magazines and printed newspapers
with the iPad then the number of books
you would need to offset to break even could be much lower.

_ Now this is cheating:

If a person would normally share a printed book with others, buy some
used printed books, or borrow many of the printed books from the
library then the numbers would need to be adjusted to account for
that.

That's a bit like saying that physical exercise makes you fat, because
you get hungry and you eat more. Even if people ended up wasting more
because they read *a lot* more, that wouldn't affect the economic and
ecologic impact per book, which was the issue at hand.

For my part, I don't even conceive of e-readers as a replacement for
paper books. In my case, they are a replacement for laptops and
desktops when it comes to reading long texts. I don't buy paper books
any more, because I don't have any spare room for them.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] OT: Hypertext and the e-book

2012-03-08 Thread Martin Baldan
Indeed, now that you mention it, there's a paper factory not too far
from where I live...well, far enough, fortunately. By night, with its
huge vapor clouds and red lights, it looks like the gates of hell. And
you know what, it smells accordingly, tens of miles around.

On Thu, Mar 8, 2012 at 11:12 PM, Mack m...@mackenzieresearch.com wrote:
 Just a reminder that paper-making is one of the more toxic industries in
 this country:

 http://en.wikipedia.org/wiki/Paper_pollution

 Paper itself may be simple and eco-friendly, but the commercial process to
 produce it is rife with chlorine, dioxin, etc., not to mention heavy thermal
 pollution of water sources.

 So there are definitely arguments on both sides of the ledger wrt eBooks.

 -- Mack


 On Mar 8, 2012, at 1:54 PM, BGB wrote:

 On 3/8/2012 12:34 PM, Max Orhai wrote:



 On Thu, Mar 8, 2012 at 7:07 AM, Martin Baldan martino...@gmail.com wrote:

 
  - Print technology is orders of magnitude more environmentally benign
  and affordable.
 

 That seems a pretty strong claim. How do you back it up? Low cost and
 environmental impact are supposed to be some of the strong points of
 ebooks.


 Glad you asked! That was a pretty drastic simplification, and I'm conflating
 'software' with 'hardware' too. Without wasting too much time, hopefully,
 here's what I had in mind.

 I live in a city with some amount of printing industry, still. In the past,
 much more. Anyway, small presses have been part of civic life for centuries
 now, and the old-fashioned presses didn't require much in the way of
 imports, paper mills aside. I used to live in a smaller town with a
 mid-sized paper mill, too. No idea if they're still in business, but I've
 made my own paper, and it's not that hard to do well in small batches. My
 point is just that print technology (specifically the letterpress) can be
 easily found in the real world which is local, nontoxic, and sustainable
 (in the sense of only needing routine maintenance to last indefinitely) in a
 way that I find hard to imagine of modern electronics, at least at this
 point in time. Have you looked into the environmental cost of manufacturing
 and disposing of all our fragile, toxic gadgets which last two years or
 less? It's horrifying.


 I would guess, apart from macro-scale parts/materials reuse (from
 electronics and similar), one could maybe:
 grind them into dust and extract reusable materials via means of mechanical
 separation (magnetism, density, ..., which could likely separate out most
 bulk glass/plastic/metals/silicon/... which could then be refined and
 reused);
 maybe feed whatever is left over into a plasma arc, and maybe use either
 magnetic fields or a centrifuge to separate various raw elements (dunno if
 this could be made practical), or maybe dissolve it with strong acids and
 use chemical means to extract elements (could also be expensive), or lacking
 a better (cost effective) option, simply discard it.


 the idea for a magnetic-field separation could be:
 feed material through a plasma arc, which will basically convert it into
 mostly free atoms;
 a large magnetic coil accelerates the resultant plasma;
 a secondary horizontal magnetic field is applied (similar to the one in a
 CRT), causing elements to deflect based on relative charge (valence
 electrons);
 depending on speed and distance, there is likely to be a gravity based
 separation as well (mostly for elements which have similar charge but differ
 in atomic weight, such as silicon vs carbon, ...);
 eventually, all of them ram into a wall (probably chilled), with a more or
 less 2D distribution of the various elements (say, one spot on the wall has
 a big glob of silicon, and another a big glob of gold, ...). (apart from
 mass separation, one will get mixes of similarly charged elements, such as
 globs of silicon carbide and titanium-zirconium and similar)

 an advantage of a plasma arc is that it will likely result in some amount of
 carbon-monoxide and methane and similar as well, which can be burned as fuel
 (providing electricity needed for the process). this would be similar to a
 traditional gasifier.


 but, it is possible that in the future, maybe some more advanced forms of
 manufacturing may become more readily available at the small scale.

 a particular example is that it is now at least conceivably possible that
 lower-density lower-speed semiconductor electronics (such as polymer
 semiconductors) could be made at much smaller scales and cheaper than with
 traditional manufacturing (silicon wafers and optical lithography), but at
 this point there is little economic incentive for this (companies don't
 care, as they have big expensive fabs to make chips, and individuals and
 communities don't care as they don't have much reason to make their own
 electronics vs just buying those made by said large semiconductor
 manufacturers).

 similarly, few people have much reason to invest much time or money in
 technologies which are likely

Re: [fonc] Sorting the WWW mess

2012-03-02 Thread Martin Baldan
Julian,

I'm not sure I understand your proposal, but I do think what Google
does is not something trivial, straightforward or easy to automate. I
remember reading an article about Google's ranking strategy. IIRC,
they use the patterns of mutual linking between websites. So far, so
good. But then, when Google became popular, some companies started to
build link farms, to make themselves look more important to Google.
When Google finds out about this behavior, they kick the company to
the bottom of the index. I'm sure they have many secret automated
schemes to do this kind of thing, but it's essentially an arms race,
and it takes constant human attention. Local search is much less
problematic, but still you can end up with a huge pile of unstructured
data, or a huge bowl of linked spaghetti mess, so it may well make
sense to ask a third party for help to sort it out.

I don't think there's anything architecturally centralized about using
Google as a search engine, it's just a matter of popularity. You also
have Bing, Duckduckgo, whatever.

 On the other hand, data storage and bandwidth are very centralized.
Dropbox, Google Docs and iCloud are all symptoms of the fact that PC
operating systems were designed for local storage. I've been looking
at possible alternatives. There's distributed fault-tolerant network
filesystems like Xtreemfs (and even the Linux-based XtreemOS), or
Tahoe-LAFS (with object-capabilities!), or maybe a more P2P approach
such as Tribler (a tracker-free bittorrent), and for shared bandwidth
apparently there is a BittorrentLive (P2P streaming). But I don't know
how to put all that together into a usable computing experience. For
instance, Squeak is a single-file image, so I guess it can't benefit
from file-based capabilities, unless the objects were mapped to
files in some way. Oh, well, this is for another thread.


-Best

 Martin

On Fri, Mar 2, 2012 at 6:54 AM, Julian Leviston jul...@leviston.net wrote:
 Right you are. Centralised search seems a bit silly to me.

 Take object orientedism and apply it to search and you get a thing where
 each node searches itself when asked...  apply this to a local-focussed
 topology (ie spider web search out) and utilise intelligent caching (so
 search the localised caches first) and you get a better thing, no?

 Why not do it like that? Or am I limited in my thinking about this?

 Julian

 On 02/03/2012, at 4:26 AM, David Barbour wrote:

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Sorting the WWW mess

2012-03-01 Thread Martin Baldan
Loup,

I agree that the Web is a mess. The original sin was to assume that people
would only want to connect to other computers in order to retrieve a
limited set of static documents. I think the reason for this was that
everyone stuck to the Unix security model, where everything you run has
all the permissions you have. That's why you don't want to run code from
untrusted sources. If they had used a capability-based security model from
the start, this concern would probably not have arisen.

Also, a deeper culprit, in my opinion, is Intellectual Property. There were
several great networking protocols before the internet, but they were
usually proprietary protocols for proprietary operating systems. Don't
forget that, for instance, Plan 9 was not open sourced until 2000 or 2002.
Now there's a lot of talk of open standards, but there was a time when the
main source of open standards was half-baked government projects. The main
reason why the IBM PC architecture dominates is that Compaq managed to
clone it legally. The main reason why Microsoft operating systems got to
dominate is that they were ready from the start to run on those cheap and
widespread IBM PC clones, both technically and legally.

 I also think that the internet, with its silly limited IP numbers and DNS
servers, smacks of premature optimization. I mean, configuring a network
feels a bit like programming in machine code. There's also the issue of
one-way links, which creates the need for complex feedback mechanisms such
as RSS; moreover, regular URLs are so ephemeral that they gave rise to
permalinks. Then again, if it were all based on two-way links, maybe we
would need a complex system for transparent anonymous linking, some kind
of virtual link.

That said, I don't see why you have an issue with search engines and search
services. Even on your own machine, searching files with complex properties
is far from trivial. When outside, untrusted sources are involved, you need
someone to tell you what is relevant, what is not, who is lying, and so on.
Google got to dominate that niche for the right reasons, namely, being much
better than the competition.

Best,

 -Martin
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Can semantic programming eliminate the need for Problem-Oriented Language syntaxes?

2012-03-01 Thread Martin Baldan
Yes, namespaces provide a form of jargon, but that's clearly not enough.
If it were, there wouldn't be so many programming languages. You can't use,
say, Java imports to turn Java into Smalltalk, or Haskell or Nile. They
have different syntax and different semantics. But in the end you describe
the syntax and semantics with natural language. I was wondering about using
a powerful controlled language, with a backend of, say, OWL-DL, and a
suitable syntax defined using some tool like GF (or maybe OMeta?).
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Sorting the WWW mess

2012-03-01 Thread Martin Baldan
Ah, thanks! :)

On Thu, Mar 1, 2012 at 6:26 PM, David Barbour dmbarb...@gmail.com wrote:



 http://www.mail-archive.com/fonc@vpri.org/
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Martin Baldan
David,

Thanks for the link. Indeed, now I see how to run eval with .l example
files. There are also .k files, though I don't know how they differ from
those, except that .k files are called with "./eval filename.k" while
.l files are called with "./eval repl.l filename.l", where "filename" is
the name of the file. Both kinds seem to be made of Maru code.

I still don't know how to go from here to a Frank-like GUI. I'm reading
other replies which seem to point that way. All tips are welcome ;)

-Martin


On Mon, Feb 27, 2012 at 3:54 AM, David Girle davidgi...@gmail.com wrote:

 Take a look at the page:

 http://piumarta.com/software/maru/

 it has the original version you have + current.
 There is a short readme in the current version with some examples that
 will get you going.

 David

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-26 Thread Martin Baldan
Guys, I find these off-topic comments (as in, not strictly about my idst
compilation problem) really interesting. Maybe I should start a new
thread? Something like «how can a newbie start playing with this
technology?». Thanks!
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-26 Thread Martin Baldan
Julian,

Thanks, now I have a much better picture of the overall situation, although
I still have a lot of reading to do. I had already read a couple of Frank
progress reports, and some stuff about worlds, in the publications link you
mention. So I thought, "this sounds great, how can I try this?" Then I went
to the wiki, and there was a section called "Fundamental new computing
technologies", so I said "this is the thing!". But no, the real thing was,
as you said, hidden in plain sight, under inconspicuous titles such as
"Other prototypes and projects related to our work and experiment". I
wonder, is that some kind of prank for the uninitiated? Hehe. By the way,
I've played a little with Squeak, Croquet and other great projects by Alan
and the other wonderful Smalltalk people, so I did have a sense of their
focus on children. I must confess I was a bit annoyed by what seemed to
me like Jedi elitism (as in "He is too old. Yes, too old to begin the
training.") but hey, their project, their code, their rules.

So, to get back on topic,

I've downloaded Maru. The contents are:

boot-eval.c  boot.l  emit.l  eval.l  Makefile

So, what about the .l files? This seems to be the file extension for Maru's
implementation language (does it have a name?).

Sure enough, the very first line of eval.l reads:

;;; -*- coke -*-

This made me smile. Well, actually it was mad laughter.

It compiles beautifully. Yay!

Now there are some .s files. They look like assembler code. I thought it
was Nothing code, but the Maru webpage explains it's just ia-32. Oh, well.
I don't know yet where Nothing enters the picture.

So, this is compiled to .o files and linked to build the eval
executable, which can take .l files and make a new eval
executable, and so on. So far, so good.

But what else can I do with it? Should I use it to run the examples at
http://tinlizzie.org/dbjr/ ? All I see is files with a .lbox file
extension. What are those? Apparently, there are no READMEs. Could you
please give me an example of how to try one of those experiments?

Thanks for your tips and patience ;)




On Sun, Feb 26, 2012 at 3:48 AM, Julian Leviston jul...@leviston.net wrote:

 As I understand it, Frank is an experiment that is an extended version of
 DBJr that sits atop lesserphic, which sits atop gezira which sits atop
 nile, which sits atop maru, all of which utilise ometa and the
 worlds idea.

 If you look at the http://vpri.org/html/writings.php page you can see a
 pattern of progression that has emerged to the point where Frank exists.
 From what I understand, maru is the finalisation of what began as pepsi and
 coke. Maru is a simple s-expression language, in the same way that pepsi
 and coke were. In fact, it looks to have the same syntax. Nothing is the
 layer underneath that is essentially a symbolic computer - sitting between
 maru and the actual machine code (sort of like an LLVM assembler if I've
 understood it correctly).

 They've hidden Frank in plain sight. He's a patch-together of all their
 experiments so far... which I'm sure you could do if you took the time to
 understand each of them and had the inclination. They've been publishing as
 much as they could all along. The point, though, is you have to understand
 each part. It's no good if you don't understand it.

 If you know anything about Alan & VPRI's work, you'd know that their focus
 is on getting this stuff in front of as many children as possible,
 because they have so much more ability to connect to the heart of a problem
 than adults. (Nothing to do with age - talking about minds, not bodies
 here). Adults usually get in the way with their stuff - their knowledge
 sits like a kind of a filter, denying them the ability to see things
 clearly and directly connect to them unless they've had special training in
 relaxing that filter. We don't know how to be simple and direct any more -
 not to say that it's impossible. We need children to teach us meta-stuff,
 mostly this direct way of experiencing and looking, and this project's main
 aim appears to be to provide them (and us, of course, but not as
 importantly) with the tools to do that. Adults will come secondarily - to
 the degree they can't embrace new stuff ;-). This is what we need as an
 entire populace - to increase our general understanding - to reach
 breakthroughs previously not thought possible, and fast. Rather than
 changing the world, they're providing the seed for children to change the
 world themselves.

 This is only as I understand it from my observation. Don't take it as
 gospel or even correct, but maybe you could use it to investigate the parts
 of frank a little more and with in-depth openness :) The entire project is
 an experiment... and that's why they're not coming out and saying "hey guys,
 this is the product of our work" - it's not a linear building process, but
 an intensively creative process, and most of that happens within oneself
 before any results are seen (rather like boiling a 

Re: [fonc] Error trying to compile COLA

2012-02-25 Thread Martin Baldan
Michael,

Thanks for your reply. I'm looking into it.

Best,

 Martin
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-25 Thread Martin Baldan
Is that the case? I'm a bit confused. I've read the fascinating reports
about Frank, and I was wondering what's the closest thing one can download
and run right now. Could you guys please clear it up for me?

Best,

Martin

On Sat, Feb 25, 2012 at 5:23 PM, Julian Leviston jul...@leviston.net wrote:

 Isn't the cola basically irrelevant now? aren't they using maru instead?
 (or rather isn't maru the renamed version of coke?)

 Julian


 On 26/02/2012, at 2:52 AM, Martin Baldan wrote:

  Michael,
 
  Thanks for your reply. I'm looking into it.
 
  Best,
 
   Martin
  ___
  fonc mailing list
  fonc@vpri.org
  http://vpri.org/mailman/listinfo/fonc

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Error trying to compile COLA

2012-02-19 Thread Martin Baldan
Hello,

I'm trying to compile the COLA distribution, just to know what it's like,
but I'm getting errors.

Here's what I did:


[code]

$ cat /etc/issue
Ubuntu 11.10 \n \l

$ svn checkout http://piumarta.com/svn2/idst/tags/idst-376 fonc-stable

$ cd fonc-stable/
$ make

[/code]

I've posted the make output here:

http://tinypaste.com/5e9072a5



Amazing articles at VPRI and discussions in the fonc mailing list. You guys
are doing fascinating work :)
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc