[9fans] lisp

2008-07-07 Thread Bakul Shah
[Questions in the third para below.]
CMUCL initializes its state essentially by loading a
previously dumped core image file.  This is slow the first
time around but once the ~25MB core image is cached,
execution is really fast and you have access to a lot of
goodies.  So a script like

#!/usr/local/bin/cmucl -script
(format t "Hello, World!~%")

can execute in a few milliseconds.  On systems with mmap(2)
or equivalent, the core image is simply copy-on-write mmaped.
This is a win since only the required pages will be loaded
(and not all of 25MB) and COW allows local changes.

From what I understand, to do something equivalent on plan9
would require creating a segment and copying the core file to
it.  Is this correct?  Presumably the reads are cached?  Even
so, there will be the cost of copying to the segment.  Or can
one create multiple text and data segments in some way so
that stuff will be paged in as necessary?  Also, if a shared
segment is created won't the forked processes be able to
modify this segment?  Ideally one would like a private copy
for each child.  Is segattach + read the best (only?) way to
do this?

sbcl too uses a core file like cmucl.  They both compile code
so are generally faster than clisp, which is the third
alternative.  Note: my interest is purely hypothetical at the
moment.

Thanks!



Re: [9fans] cloud computing

2008-07-07 Thread andrey mirtchovski
is your app export-controlled?

On Mon, Jul 7, 2008 at 12:04 PM, ron minnich [EMAIL PROTECTED] wrote:
 I have an application for 10,000 machines. One option is to buy them
 and run them, yuck!



Re: [9fans] cloud computing

2008-07-07 Thread Mark F Rodriguez

Ron,


Anybody worked with these guys? I need to run the 10k machines with
lguest. Any other possible providers anyone out there knows of?


We're using http://www.joyent.com which has worked well for us.

Good luck.

--
Mark F Rodriguez



Re: [9fans] lisp

2008-07-07 Thread erik quanstrom
i'm assuming by core file you don't mean executable.
plan 9 already keeps an executable cache.

 Presumably the reads are cached?  

reads are not cached.  read on plan 9 is synchronous.
there is no block cache.

 Even so, there will be the cost of copying to the segment.  Or can
 one create multiple text and data segments in some way so
 that stuff will be paged in as necessary?  Also, if a shared
 segment is created won't the forked processes be able to
 modify this segment?  Ideally one would like a private copy
 for each child.  Is segattach + read the best (only?) way to
 do this?

why wouldn't you use ramfs?

- erik




Re: [9fans] cloud computing

2008-07-07 Thread ron minnich
On Mon, Jul 7, 2008 at 11:27 AM, andrey mirtchovski
[EMAIL PROTECTED] wrote:
 is your app export-controlled?

No. That was the first thought that hit my mind.

ron



Re: [9fans] lisp

2008-07-07 Thread David Leimbach
On Mon, Jul 7, 2008 at 11:45 AM, erik quanstrom [EMAIL PROTECTED] wrote:

 i'm assuming by core file you don't mean executable.
 plan 9 already keeps an executable cache.


A Lisp core file can be equated to a specially formatted executable.  The
Lisp environment is sort of a loader for these.  When people distribute Lisp
applications, they give you a core file, and it doesn't run on its own
without the Lisp runtime loading it.  The same goes for some Scheme
implementations as well (with the notable exception of Gambit Scheme, which
has a C library runtime and can build standalone binaries as well as more C
code, and it's not even GNU-specific C, thank god).

Remember code is data and data is code in Lisp land.



  Presumably the reads are cached?

 reads are not cached.  read on plan 9 is synchronous.
 there is no block cache.

  Even so, there will be the cost of copying to the segment.  Or can
  one create multiple text and data segments in some way so
  that stuff will be paged in as necessary?  Also, if a shared
  segment is created won't the forked processes be able to
  modify this segment?  Ideally one would like a private copy
  for each child.  Is segattach + read the best (only?) way to
  do this?

 why wouldn't you use ramfs?

 - erik





Re: [9fans] lisp

2008-07-07 Thread Bakul Shah
On Mon, 07 Jul 2008 14:45:53 EDT erik quanstrom [EMAIL PROTECTED]  wrote:
 i'm assuming by core file you don't mean executable.
 plan 9 already keeps an executable cache.

It contains executable code but it is not an executable in
the sense that you don't directly feed it to exec(2).  A lisp
process might add new functions and dump a new core image
file.  A later lisp process will have those functions
available to it.  One can even choose a different core file to
start with.

  Presumably the reads are cached?  
 
 reads are not cached.  read on plan 9 is synchronous.
 there is no block cache.

  Even so, there will be the cost of copying to the segment.  Or can
  one create multiple text and data segments in some way so
  that stuff will be paged in as necessary?  Also, if a shared
  segment is created won't the forked processes be able to
  modify this segment?  Ideally one would like a private copy
  for each child.  Is segattach + read the best (only?) way to
  do this?
 
 why wouldn't you use ramfs?

You mean to cache the core file in memory?  That can
work...

Thanks!



Re: [9fans] lisp

2008-07-07 Thread Bakul Shah
On Mon, 07 Jul 2008 20:55:36 BST Charles Forsyth [EMAIL PROTECTED]  wrote:
  It contains executable code but it is not an executable in
  the sense you don't directly feed it to exec(2).  A lisp
 
 in the script you gave earlier
   #!/usr/local/bin/cmucl -script
   (format t "Hello, World!~%")
 cmucl is directly executable but that's presumably the original
 lisp image; where is the new core image mentioned that makes
 the script run quickly?

cmucl is directly executable but it has only enough
intelligence to load a big lisp.core, which contains all the
smarts.



Re: [9fans] lisp

2008-07-07 Thread Bakul Shah
On Mon, 07 Jul 2008 13:21:33 PDT David Leimbach [EMAIL PROTECTED]  wrote:
 (format t "Hello, World!~%")
 
 basically gets read then compiled then executed right?  (thinking REPL
 here)

 Right?  At least that's how SBCL works based on my understanding.

Well, it is not a read-eval-print-loop -- it is not
interactive or a loop, and the code has to explicitly print
something.  But yes, the expression does get read, compiled
and executed.  SBCL is forked from CMUCL so they share most
of their behavior.

[Aside: there seems to be a lag of about 15 minutes from my
sending an email to 9fans and getting it back]



Re: [9fans] lisp

2008-07-07 Thread David Leimbach
On Mon, Jul 7, 2008 at 1:47 PM, Bakul Shah [EMAIL PROTECTED] wrote:

 On Mon, 07 Jul 2008 13:21:33 PDT David Leimbach [EMAIL PROTECTED]
  wrote:
  (format t "Hello, World!~%")
 
  basically gets read then compiled then executed right?  (thinking REPL
  here)
 
  Right?  At least that's how SBCL works based on my understanding.

 Well, it is not a read-eval-print-loop -- it is not
 interactive or a loop, and the code has to explicitly print
 something.  But yes, the expression does get read, compiled
 and executed.  SBCL is forked from CMUCL so they share most
 of their behavior.


right, I was just referring to REPL because obviously that S-expression is
scripted in.  It's not going through the standard REPL at all, but a sort
of variant of it.  Perhaps just RE :-)

Dave




 [Aside: there seems to be a lag of about 15 minutes from my
 sending an email to 9fans and getting it back]




Re: [9fans] APE printf difference

2008-07-07 Thread Pietro Gagliardi

C89 does have such a requirement, in two places:

Section 5.1.2.3:
...
- The input and output dynamics of interactive devices shall take  
place as specified in 7.9.3. ... [the intent is that] unbuffered or  
line-buffered output appear as soon as possible, to ensure that  
prompting messages actually appear prior to a program waiting for input.

...

Section 7.9.3:
... Furthermore, characters are intended to be transmitted as a block  
to the host environment when ... input is requested on an unbuffered  
stream, or input is requested on a line buffered stream ...


So there you go. I don't know about C99, but I do know POSIX/SUS are  
designed to be aligned with standard C.


Pietro




Re: [9fans] APE printf difference

2008-07-07 Thread erik quanstrom
 So there you go. I don't know about C99, but I do know POSIX/SUS are  
 designed to be aligned with standard C.

i think you're missing the bit where determining what
an interactive device is is not straightforward, as forsyth
pointed out.  think of connecting to a shell on the other
end of a unix socket.  or #|.

if you really depend on the prompt being synchronous,
you'd better either use unbuffered i/o or fflush before prompting.

this all is a good argument for plan 9's decoupling of the print
library from the buffered i/o library.

- erik





Re: [9fans] Isnt it time we have the next bay area meeting?

2008-07-07 Thread David Hendricks
On Sun, Jun 1, 2008 at 3:03 PM, Tharaneedharan Vilwanathan 
[EMAIL PROTECTED] wrote:

 hi,

 it has been a long time since we met.

 any plans to have our next Plan9 Bay Area Users Group Meeting?


Is there any traction on this yet? You all are certainly welcome back to the
Googleplex if it's convenient. I know I can snag a couple more folks from
around here to join in the fun (9vx has generated some good buzz).


Re: [9fans] APE printf difference

2008-07-07 Thread Charles Forsyth
... Furthermore, characters are intended to be transmitted as a block  
to the host environment when ... input is requested on an unbuffered  
stream, or input is requested on a line buffered stream ...

that's true when a stream is established as _IONBF or _IOLBF,
so if you call setvbuf, that will be true.  the bit i quoted earlier dealt
with how the default i/o streams stdin/stdout/stderr were configured,
and that was not clear-cut, being conditional on whether the system
decided they were interactive or not, which seemed to be 
implementation-dependent
behaviour.  now, it might turn out to be straightforward to change APE to 
configure
the streams based on the guess made by isatty(), but that's why i asked
which way APE should face by default?  for import or export?
Trickey's short paper on APE touches on that, and i think an earlier
paper by Hume on Portability in the Research Unix environment was more
explicit.  in the past i relied on APE to check programs for good portability
when I developed on Plan 9 but supplied the programs to others to run on Unix, 
VMS and Windows.
perhaps now inward portability is more common, but should the older pedantry 
not be
available at all?




Re: [9fans] 9vx on OpenBSD-4.3

2008-07-07 Thread sqweek
On Sat, Jul 5, 2008 at 9:38 AM, Iruata Souza [EMAIL PROTECTED] wrote:
 sorry for not reporting until now.
 I´m not at home and have no access to my tree right now, but I can
 already run 9vx.
 console says the kernel is getting 0M of memory. that is surely
 because of my hacks.

 Don't think so. Here on linux I get:
256M memory: 0M kernel data, 256M user, 256M swap
 (and everything works).
-sqweek


Re: [9fans] APE printf difference

2008-07-07 Thread a
This disparity comes up fairly often. As an example, some
folks have done a lot of good (in the sense of being useful)
work getting various GNU things working on Plan 9 as pre-
requisites for things they want. It'd be nice if these were
available as an import package for folks to use who just
want some random linux program.

The complete set of dependencies is obviously gigantic, ever-
growing, and infeasible, but getting some useful subset is
a different story. In fact, if you look through contrib (by
which I mean fgb's very nice packaging system), we already
have a nice subset.

Some folks doing this work - and more watching - have
argued that this should all be dumped into APE. I think this
would be bad. I've used APE for export as much as import
(although, honestly, I've gotten somewhat lazy recently and
have started making p9p a pre-requisite), and being able to
check code against a least common denominator is very
helpful. One might argue that APE's current denominator is
too low (some of the things we require defines for and treat
as extensions have since become incorporated into ANSI and
POSIX), but that's really an independent discussion. There
remains a real difference between ANSI/POSIX standards and
GNU (or whoever) random gunk.

APE's export filter is very useful, and would be missed, but
there's an equally valid import role that it isn't very good at
serving, in modern contexts. I'd rather see that broken out
into a distinct environment, say G(nu)APE.

Of course, back on the question which prompted all this,
there remains an interesting question: do we duplicate the
library, or have it behave differently depending on where
it's used? Adding stuff in for the import environment is
relatively easy (in terms of organization and conflicts), but
should our stdio library really need to know whether it's being
used for import or export?

Anthony