Re: [fonc] Compiling COLA on x86_64

2013-10-20 Thread Ian Piumarta
Simon,

Sorry for the slightly late fixes.  Please svn update your idst sources and 
then make clean in the top-level directory.

On 32-bit Linux make sure you have the packages libreadline-dev and execstack 
installed.
Then type make in the top-level directory.
This will build the st80 libraries for id, the idst compiler 'idc' and Jolt.

On 64-bit Linux make sure you have the packages lib32readline6-dev (or similar) 
and execstack installed.
Then type make TARGET=x86_32-pc-linux in the top-level directory.
This will build the st80 libraries for id, the idst compiler 'idc' and a 32-bit 
version of Jolt that can execute its own dynamic code on a 64-bit Linux system.

If your Linux does not install execstack in /usr/sbin you may have to tweak 
function/jolt-burg/Makefile in the obvious way.

The VPU version of Jolt is no longer built by default, so...
'cd' to function/jolt-burg and then type ./main.
At the prompt, typing (+ 3 4) should give you back a 7.

I've tested this on 32- and 64-bit versions of Linux Mint.

Hope that helps.

(We are certainly not alone in thinking the compiler should be part of many a 
language runtime system. :)

Regards,
Ian

On Oct 18, 2013, at 18:44, Simon Forman wrote:

> On 10/12/2013 at 2:12 PM, Ian Piumarta i...@vpri.org wrote:
> 
>> I will set up a 64-bit Ubuntu VirtualBox and fix whatever is
>> broken.  If I haven't done this by midweek then please do feel
>> free to remind me.
>> 
>> It will all have to run in 32 bits, though, since there is no
>> 64-bit code generator for idst/jolt.  It should be possible to do
>> this within a 64-bit environment.
> 
> No rush. I'm just tinkering. :)  I tried a couple of simple things but I
> don't really know what I'm doing and none of it worked.  It's been a long
> time since I've worked with C.
> 
> From what I understand of how the COLA works, I'm really blown away.  The VPU
> alone is very impressive.  I'm beginning to think that anyone who isn't using
> a COLA-style self-hosted system (with native code compilation!) is just
> punishing themselves.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Compiling COLA on x86_64

2013-10-20 Thread Karl
On Sunday, October 20, 2013 09:37:54 AM Ian Piumarta wrote:
> On 64-bit Linux make sure you have the packages lib32readline6-dev (or
> similar) and execstack installed.
> Then type make TARGET=x86_32-pc-linux in the top-level directory.
> This will build the st80 libraries for id, the idst compiler 'idc' and a
> 32-bit version of Jolt that can execute its own dynamic code on a 64-bit
> Linux system.
> 
> If your Linux does not install execstack in /usr/sbin you may have to tweak
> function/jolt-burg/Makefile in the obvious way.
> 
> The VPU version of Jolt is no longer built by default, so...
> 'cd' to function/jolt-burg and then type ./main.
> At the prompt, typing (+ 3 4) should give you back a 7.
> 
> I've tested this on 32- and 64-bit versions of Linux Mint.
> 
> Hope that helps.
> 
> (We are certainly not alone in thinking the compiler should be part of many a
> language runtime system. :)
> 
> Regards,
> Ian


Thanks for the fix.  It works on Fedora 19 using the readline-devel.i686 
package and changing the execstack path to /usr/bin/.

The wiki mentions a Git mirror at git://fig.org, but this has been unreachable 
over the last 24 hours.  When was the last time anyone used this mirror?


-Karl


Re: [fonc] Hacking Maru

2013-10-20 Thread Ian Piumarta
Dear Faré,

I'm sorry (but also impressed ;) to hear that you have reached the limit of 
procrastination within Common Lisp.

I'm a little preoccupied until next week, but brief answers to some of your 
questions are inline below...

> Is there a mailing list specifically about technical details of maru,
> or is FoNC the thing?

No and not really (at least not for any kind of sustained planning, design and 
development discussion).

Throwing caution to the wind and misplaced trust into the toxic 
smog^H^H^H^H^H^H^H^H^H^H cloud, I created maru-...@googlegroups.com (with 
https://groups.google.com/forum/#!forum/maru-dev possibly being the place to 
sign up, I think, assuming some cookie or other isn't poisoning my view of that 
page compared to everyone else's view of it).

I am tempted to write a big introductory post containing my vision for where 
Maru might go or how it might be reborn, given enough community support, but 
that will not happen before next week.

> * I see the gc reserves 8 bits for flags, but uses only 3. Is that a
> typo? Is that done with the intent of addressing that byte directly?

No and not really (at least not explicitly).  An ancient gcc generated wretched 
code when the boundary was not byte-aligned.  Things may be different now.  
You're right that 16 Mbytes is a stingy limit for certain kinds of application.

> On the other hand, given the opportunity for 5 extra bits, I'd gladly
> grab one for a one-bit reference count (i.e. is the object linear so
> far or not), and another one for a can-we-duplicate capability,
> if/when implementing a linear lisp on top of it.

They're yours for the taking.  I was waiting to have some time to reply 
properly to the linear lisp thread but, since you mention it, one of the 
reasons for having all accesses to object fields go through get() and set() was 
to make it ridiculously easy to add read and write barriers.

> * The source could use comments and documentation.

I suffer from being able to make sense of my own code many years after writing 
it.  Not very conducive to community process, I know.

> Would you merge in patches that provide some?

Of course!  I might not merge them very quickly, though, since I'd be so 
unreasonable as to want to read them all first and elaborate if necessary. :)

> * Speaking of documentation, could you produce an initial README, with
> a roadmap of which files are which and depend on which, and how to run
> the various tests, etc.? Maybe separating things in subdirectories
> could help. Or not. Many Makefile targets are obsolete. A working
> test-suite could help.

You almost answered your own question.  I try to keep README.examples (which is 
a script) working properly.  It exercises a huge amount of the code, but to 
find out what exactly that code is you do need to obtain some kind of 
transitive closure over all the 'load's and 'require's that pull in lots of 
source files.

Many Makefile targets were for temporary experiments, long abandoned, and 
should be removed.  Others are for exercising interesting code (that still 
works and yet is not portable enough to be part of README.examples) and should 
be made more prominent.  (Targets that exercise a statically-typed compiler for 
Maru are hiding in there somewhere, with obscure names.)  Other interesting 
code has no Makefile or README.examples presence at all.  (Minimal SDL bindings 
to open a window, render some text and then paint with the mouse is one 
example, prompted by a recent discussion on this list.)

The whole thing is teetering on the edge of being rewritten in itself (as the 
original three-file version of Maru was written in itself).  My intention was 
always to tidy things up considerably at that time.

FWIW, there is a sketch of how Maru's generalised eval works 
(http://piumarta.com/freeco11/freeco11-piumarta-oecm.pdf) which is entirely 
accurate in intent and approach, if a little different in some implementation 
details.

> * Is the idea that everyone should be doing/forking his own,
> CipherSaber style, or is there an intent to share and build common
> platform?

I'd love to build a common platform.  Maru is in particular trying to be 
malleable at the very lowest levels, so any special interest that cannot be 
accommodated easily within the common platform would be a strong indicator of a 
deficiency within the platform that should be addressed rather than 
disinherited.

Where there is a choice between forking to add a fix or feature and clamouring 
to get that fix or feature adopted into some kind of central repository, I 
believe clamouring always benefits vastly more people in the long run.  I 
intensely dislike the github/gitorious 'clone fest' mindset because it dilutes 
and destroys progress, encouraging trivial imitation rather than radical 
innovation -- which, if there is any at all, finds itself fighting an 
intractably high noise floor.  Forking will always split a community (even if 
one side is only left with a community of one) so