Re: [9fans] Go Plan 9

2011-04-03 Thread Pavel Zholkover
I've set up a Mercurial patch queue with some instructions on building
at https://bitbucket.org/paulzhol/golang-plan9-runtime-patches/
with Andrey and Taru's patches. I'll try to keep it updated :)

Pavel



Re: [9fans] Go Plan 9

2011-04-03 Thread Pavel Zholkover
On Sun, Apr 3, 2011 at 2:52 AM, David Leimbach leim...@gmail.com wrote:
 So wait... We can get the toolchain built on plan 9. Or we can target plan 9 
 via cross compiler?  Either way is pretty awesome!  Nice work!

We are cross-compiling unless someone syncs
http://code.google.com/p/go-plan9/ with mainline.

Pavel



Re: [9fans] Go Plan 9

2011-04-03 Thread Pavel Zholkover
On Apr 3, 2011 12:18 PM, erik quanstrom quans...@quanstro.net wrote:
 okay, i volunteer.  just to make sure, we're talking
 about a plan 9 port, not a cross compile?

 just let me know what i need to get set up.  i can
 easily do 386 and arm at this point.

 - erik

I think Rob meant it would be a cross compile,  at least at first.

Could you comment on your changes at http://code.google.com/p/go-plan9/ ?
Can they be pushed to mainline ?

Pavel


Re: [9fans] Go Plan 9

2011-04-03 Thread erik quanstrom
 Could you comment on your changes at http://code.google.com/p/go-plan9/ ?
 Can they be pushed to mainline ?

i don't think they can in total.  we should push the silly
print format fixes and the added USED() that 8c caught
and gcc didn't.

but there definitely are some difficult bits.  this hacked
inclusion of stdio.h is a problem on plan 9.

http://code.google.com/p/go-plan9/source/diff?spec=svnd6ec95bd4f9b2e9af2d10f08d9869aa2ca49d851&r=d6ec95bd4f9b2e9af2d10f08d9869aa2ca49d851&format=side&path=/src/cmd/8a/a.y

my solution clearly ignored the problems in pushing
back to the main line.  but at least we have the problem identified.

/src/cmd/8l/l.h has another problem.  we need to figure
out how to get the definitions for uint8, etc. from #include <u.h>
the defines i put in are just wrong, but the alternative is passing in -Isomedir
with a u.h that includes the real u.h and then tweaks other stuff.
that's just wronger.  :-)

a real solution would be one of
0  copy u.h; hack to taste
1  add the hacks to the real u.h
2  come to a consensus with go about what the defined-bit-width
types should be called.  change both plan 9 and go to conform.

i'd vote for 2.  it's harder that way, but i'd hate for go to
feel like it was pasted on.  but i'd like to know what everyone
else thinks.
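
for illustration, here is a rough sketch of the -Isomedir wrapper
described above (the real-u.h path, the exact typedef list and the
signedness of plain char under 8c are assumptions, not code from
go-plan9):

/*
 * hypothetical wrapper u.h, found via -Isomedir: pull in the real u.h,
 * then add the fixed-width names the go toolchain source expects.
 * it works, but it is the "wronger" alternative to option 2.
 */
#include "/386/include/u.h"	/* path to the real u.h is illustrative */

typedef uchar	uint8;
typedef ushort	uint16;
typedef uint	uint32;
typedef uvlong	uint64;
typedef char	int8;		/* assumes plain char is signed under 8c */
typedef short	int16;
typedef int	int32;
typedef vlong	int64;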

- erik



[9fans] tr2post ignores \X'anything' troff lines

2011-04-03 Thread Rudolf Sykora
Hello 9fans,

I found that tr2post (at least on p9p) seems to discard [not process]
lines produced with \X'anything' in troff, so an input like

hello
\X'ps: exec 1 0 0 setrgbcolor'
blabla

results in output *without* the setrgbcolor line.
(using lines like this, one can achieve coloured .ps; in this case the
'blabla' should be red)

Best regards
Ruda



Re: [9fans] tr2post ignores \X'anything' troff lines

2011-04-03 Thread Rudolf Sykora
On 3 April 2011 13:21, Rudolf Sykora rudolf.syk...@gmail.com wrote:
 Hello 9fans,

 I found that tr2post (at least on p9p) seems to discard [not process]
 lines produced with \X'anything' in troff, so an input like

 hello
 \X'ps: exec 1 0 0 setrgbcolor'
 blabla

 results in output *without* the setrgbcolor line.
 (using lines like this one can achieve coloured .ps, in this case the
 'blabla' should be red)

 Best regards
 Ruda


Ok, it seems that what I need is a modification of
/src/cmd/postscript/tr2post/devcntl.c
in the case 'X' part.

Is there any reason for this
	/*	else
			error(WARNING, "Unknown string %s %s after x X\n",
				str, buf);
	*/
being commented out?

Thanks
Ruda
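
To make the mechanism concrete, here is a tiny stand-alone sketch (not
tr2post code; the identifiers are illustrative) of passing a \X'ps: ...'
device control line through to the output.  Troff emits \X'anything' as
a line of the form "x X anything" in its intermediate output:

#include <u.h>
#include <libc.h>
#include <bio.h>

/*
 * toy filter: read troff intermediate output on stdin and print the
 * payload of every "x X ps: ..." device control line instead of
 * dropping it.
 */
void
main(void)
{
	Biobuf bin;
	char *p;

	Binit(&bin, 0, OREAD);
	while((p = Brdstr(&bin, '\n', 1)) != nil){
		if(strncmp(p, "x X ps: ", 8) == 0)
			print("%s\n", p+4);	/* emit "ps: exec ..." verbatim */
		free(p);
	}
	exits(nil);
}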



Re: [9fans] Go Plan 9

2011-04-03 Thread Lucio De Re
On Sat, Apr 02, 2011 at 07:48:14PM -0700, Rob Pike wrote:
 
 We'll get the Plan 9 implementation up to scratch. It's not there yet,
 though.  Once things look solid we'll need a volunteer to set up a
 builder so we can automatically make sure the Plan 9 port stays
 current.
 
That's code for "we'll have a build under Linux for the Plan 9
cross-development system" or "we'll have a Plan 9 port of Go"?

I've been thinking, besides my now very dated efforts to port the Go
toolchain to Plan 9 native, that there may be merit in an intermediate
port of the C and Go toolchains to p9p.  But combining the build
environments looked pretty complicated.  I did try, but I got lost
trying to keep the three build environments (Linux, Plan 9 and p9p)
in my head at the same time.

Still, there may be somebody out there who can get this done better than
I would.

++L



Re: [9fans] Go Plan 9

2011-04-03 Thread Lucio De Re
On Sun, Apr 03, 2011 at 06:34:28AM -0400, erik quanstrom wrote:
 
 but there definately are some difficult bits.  this hacked
 inclusion of stdio.h is a problem on plan 9.
 
 http://code.google.com/p/go-plan9/source/diff?spec=svnd6ec95bd4f9b2e9af2d10f08d9869aa2ca49d851&r=d6ec95bd4f9b2e9af2d10f08d9869aa2ca49d851&format=side&path=/src/cmd/8a/a.y
 
As GNU says, GNU is not Unix (or Plan 9).  There is no #ifdef-free
way to satisfy both toolchains unless one wants to pervert the Plan 9
toolchain.  One trivial change to GCC, namely Plan 9's use of empty names
to represent unused arguments, would improve GCC greatly, but is unlikely
to be accepted by the developers.  The alternative is a pain in the butt.

But I agree with Erik, the changes to port the Go toolchain to Plan 9
are quite extensive and would require a great deal of care; I did
a similar job a year ago.  Actually, I think it was two years ago, and
I failed to resurrect my efforts a year later.

I'm not sure whether the compiler, assembler and linker that seemed
to work after my first attempts could be used to bootstrap a fresh
source tree.  I put no effort in place on the Go package side, so that
remains to be tried.

In passing, Erik, you made some changes to Yacc to accept //-comments,
do you still have those at hand?  Do you have some idea why they were
not applied to P9 Yacc?

++L



Re: [9fans] Go Plan 9

2011-04-03 Thread Lucio De Re
On Sun, Apr 03, 2011 at 06:34:28AM -0400, erik quanstrom wrote:
 
 a real solution would be one of
 0  copy u.h; hack to taste
 1  add the hacks to the real u.h
 2  come to a consensus with go about what the defined-bit-width
 types should be called.  change both plan 9 and go to conform.
 
 i'd vote for 2.  it's harder that way, but i'd hate for go to
 feel like it was pasted on.  but i'd like to know what everyone
 else thinks.
 
I don't think anything comes near to 2 as a solution.  And it really
isn't all that invasive either.  Add my vote to yours.

++L



Re: [9fans] Go Plan 9

2011-04-03 Thread erik quanstrom
 As GNU says, GNU is not Unix (or Plan 9).  There is no #ifdef-free
 way to satisfy both toolchains unless one wants to pervert the Plan 9
 toolchain.  One trivial change to GCC, namely Plan 9's use of empty names
 to represent unused arguments, would improve GCC greatly, but is unlikely
 to be accepted by the developers.  The alternative is a pain in the butt.

a sed script in the plan9-specific could do the trick.  ideally, though,
the go source wouldn't redefine getc(), and the include could no longer
be necessary.  i've seen go define cget in other places, that might be a
solution; but i don't know the local customs well.

 In passing, Erik, you made some changes to Yacc to accept //-comments,
 do you still have those at hand?  Do you have some idea why they were
 not applied to P9 Yacc?

they have been applied.  thanks to geoff for integrating the
change: /n/sources/patch/applied/yacccmt

- erik



Re: [9fans] Go Plan 9

2011-04-03 Thread Skip Tavakkolian
Why can't we use linuxemu to run the build?

-Skip

On Apr 3, 2011, at 8:43 AM, erik quanstrom quans...@quanstro.net wrote:

 As GNU says, GNU is not Unix (or Plan 9).  There is no #ifdef-free
 way to satisfy both toolchains unless one wants to pervert the Plan 9
 toolchain.  One trivial change to GCC, namely Plan 9's use of empty names
 to represent unused arguments, would improve GCC greatly, but is unlikely
 to be accepted by the developers.  The alternative is a pain in the butt.
 
 a sed script in the plan9-specific could do the trick.  ideally, though,
 the go source wouldn't redefine getc(), and the include could no longer
 be necessary.  i've seen go define cget in other places, that might be a
 solution; but i don't know the local customs well.
 
 In passing, Erik, you made some changes to Yacc to accept //-comments,
 do you still have those at hand?  Do you have some idea why they were
 not applied to P9 Yacc?
 
 they have been applied.  thanks to geoff for integrating the
 change: /n/sources/patch/applied/yacccmt
 
 - erik
 



Re: [9fans] GSoC Widget library

2011-04-03 Thread pmarin
While browsing in 9fans today I discovered that  some people actually
have written cool user interfaces in Plan9:
http://marc.info/?l=9fansm=111558827311549w=2
http://basalt.cias.osakafu-u.ac.jp/plan9/Tyrrhena.gif
http://basalt.cias.osakafu-u.ac.jp/plan9/Tyrrhena2.gif


On Sat, Apr 2, 2011 at 4:12 AM, erik quanstrom quans...@quanstro.net wrote:
 Hello, I am participating in Google Summer of Code.
 After searching your ideas page, I was interested in widget library. Have
 you heard anything of IMGUI http://www710.univ-lyon1.fr/%7Eexco/ZMW/? I
 personally think it is a very beautiful way of expressing user interfaces.
 What do you think of making Plan 9 widget library an implementation of IMGUI
 philosophy?

 in general, i'm all for ui development.  it's not a solved problem
 in plan 9 at all.  i think the question is, what can you do in a
 summer?

 by the way, take a look at nemo's octopus http://lsub.org/ls/octopus.html

 - erik





Re: [9fans] Go Plan 9

2011-04-03 Thread Pavel Zholkover
What about the old gcc3 port? Is it enough for bootstrapping the compilers?
On Apr 3, 2011 7:28 PM, Skip Tavakkolian skip.tavakkol...@gmail.com
wrote:


Re: [9fans] Go Plan 9

2011-04-03 Thread Lucio De Re
On Sun, Apr 03, 2011 at 07:49:06PM +0300, Pavel Zholkover wrote:
 What about the old gcc3 port? Is it enough for bootstrapping the compilers?
 On Apr 3, 2011 7:28 PM, Skip Tavakkolian skip.tavakkol...@gmail.com
 wrote:

You'd perpetuate an alien binary format, which sounds like a bad idea
to me.  But I'm so muddled up with all the options, I can't really find
my way out of that paperbag.  So perhaps somebody can pick up where Erik
and I independently left off and make something out of it.  I keep trying,
but it keeps getting more and more complicated, at least to me.

I'm happy to donate all the mkfiles I strung together, but even those
may need major surgery.

++L



Re: [9fans] Go Plan 9

2011-04-03 Thread Devon H. O'Dell
Does -fplan9-extensions not do that? It's in the latest gcc for gccgo...
On Apr 3, 2011 11:26 AM, Lucio De Re lu...@proxima.alt.za wrote:
 On Sun, Apr 03, 2011 at 06:34:28AM -0400, erik quanstrom wrote:

 but there definately are some difficult bits. this hacked
 inclusion of stdio.h is a problem on plan 9.


http://code.google.com/p/go-plan9/source/diff?spec=svnd6ec95bd4f9b2e9af2d10f08d9869aa2ca49d851&r=d6ec95bd4f9b2e9af2d10f08d9869aa2ca49d851&format=side&path=/src/cmd/8a/a.y

 As GNU says, GNU is not Unix (or Plan 9). There is no #ifdef-free
 way to satisfy both toolchains unless one wants to pervert the Plan 9
 toolchain. One trivial change to GCC, namely Plan 9's use of empty names
 to represent unused arguments, would improve GCC greatly, but is unlikely
 to be accepted by the developers. The alternative is a pain in the butt.

 But I agree with Erik, the changes to port the Go toolchain to Plan 9
 are quite extensive and would require a great deal of care, I have done
 a similar job a year ago. Actually, I think it was two years agon and
 I failed to resurrect my efforts a year later.

 I'm not sure whether the compiler, assembler and linker that seemed
 to work after my first attempts could be used to bootstrap a fresh
 source tree. I put no effort in place on the Go package side, so that
 remains to be tried.

 In passing, Erik, you made some changes to Yacc to accept //-comments,
 do you still have those at hand? Do you have some idea why they were
 not applied to P9 Yacc?

 ++L



Re: [9fans] Go Plan 9

2011-04-03 Thread erik quanstrom
On Sun Apr  3 12:27:29 EDT 2011, skip.tavakkol...@gmail.com wrote:
 Why can't we use linuxemu to run the build?
 

sure we could,  but then you have to maintain linuxemu, and
go.  that seems silly.

 Does -fplan9-extensions not do that? It's in the latest gcc for gccgo...

what does gcc have to do with getting go compiled on plan 9?

- erik



Re: [9fans] Go Plan 9

2011-04-03 Thread Lucio De Re
On Sun, Apr 03, 2011 at 01:43:53PM -0400, Devon H. O'Dell wrote:
 
 Does -fplan9-extensions not do that? It's in the latest gcc for gccgo...

That would be great.  I don't even pretend to keep track of what the GCC
group does; I guess I owe you thanks for correcting me.  If that's how one
goes about finding these things out, well, it's not pretty, but it works.

And in passing that grants me the option to drop unwanted argument names
in the Go sources, but will the Go developers follow suit?  Have they
already done so?  I think I have enough evidence to track down most if
not all instances.

++L



Re: [9fans] Go Plan 9

2011-04-03 Thread Rob Pike
I'm not sure I follow.  The 6c and 6g compilers in the Go distribution
are compiled with the local compiler, such as gcc on Linux and OS X.
I don't believe it's possible they have Plan 9-specific features in
their source.  I can believe they would have problems compiling on
Plan 9, but that's the inverse problem.

Once 6c is built, it is used to build the Go runtime, so the source
and compiler should match perfectly.  Plan 9 compiler issues should
not be relevant.

-rob



Re: [9fans] Go Plan 9

2011-04-03 Thread Steve Simon
A month or so ago I got the go compiler chain to build on plan9;
port is too grand a term, it was just fixing a few nits.

I wrote mkfiles and fixed a few minor bugs. The biggest problem was my knowledge
of yacc was not sufficient to rework the error generation magic go uses from
the bison based code to plan9 yacc code. Perhaps there is a yacc expert out there
who would be interested in helping?

I am happy to push back my changes, but without either getting yacc to work or
abandoning yacc and porting bison, I didn't feel it was ready.

-Steve



Re: [9fans] Go Plan 9

2011-04-03 Thread Lucio De Re
On Sun, Apr 03, 2011 at 11:20:25AM -0700, Rob Pike wrote:
 
 I'm not sure I follow.  The 6c and 6g compilers in the Go distribution
 are compiled with the local compiler, such as gcc on Linux and OS X.
 I don't believe it's possible they have Plan 9-specific features in
 their source.  I can believe they would have problems compiling on
 Plan 9, but that's the inverse problem.
 
On the contrary (if that makes sense), they have that nasty #include
<stdio.h> that Erik referred to and Plan 9 is allergic to, as well as a
number of nested #includes that trigger rebellion in the Plan 9 toolchain.
And I don't have a 64-bit host to test them on.

But do not forget that I have a Plan 9 native toolchain that compiles
a runnable copy of hello.c, including printf() invocation. It's just
too old to be useful.

 Once 6c is built, it is used to build the Go runtime, so the source
 and compiler should match perfectly.  Plan 9 compiler issues should
 not be relevant.
 
I haven't even considered using the toolchain for Plan 9 native because
I can't track releases given the many changes required to silence the
Plan 9 toolchain.  And maybe some warnings can be overlooked, but I
don't want to be the judge of that.

Basically, I can't find an approach to submitting changes to the Go
toolchain so it can run under Plan 9 that does not involve major surgery,
nor can I separate the changes out in small parcels because testing
is impractical.  I'm hoping that things will eventually settle down and
there will be resources to review extensive, if documentable changes.
Not being able to back-port the Go toolchain to Plan 9 native seems
defeatist.  Now that a runtime is available and will hopefully receive
extensive testing, it makes the exercise even more worthwhile.

And if issues such as compatibility in function argument definitions can
be resolved amicably between the GCC compiler and the Plan 9 toolchain,
then things are really looking up.  But, as I stated earlier, it requires
the will on the Go side to retrofit changes for the sake of Plan 9, and
that may be a problem.  My failed efforts (possibly thoroughly misguided)
to get l.h accepted with changes acceptable to the Plan 9 built-in
pre-processor suggest that the Go developers have different objectives
in mind.

If this does not address Rob's concerns, then I'd like to ask for the
question(s) to be phrased more clearly.

And again, I think one ought to look at all the Plan 9 flavours out
there: 9vx deserves effort, Plan9port could support Go better than the
native environment, linuxemu would provide a useful testbench.  Only GCC
3.0 in Plan 9 is almost certainly a dead end.

++L



Re: [9fans] Go Plan 9

2011-04-03 Thread Lucio De Re
On Sun, Apr 03, 2011 at 07:50:20PM +0100, Steve Simon wrote:
 
 A month or so ago I got the go compiler chain to build on plan9,
 port is too grand a term, it was just fixing a few nits.
 
That makes a third version.  I seem to remember Erik's version compiled clean
and I have to ask Steve now whether his version actually generates Plan 9
executables.  And, for that matter, how far the Go portion reached.

The version I have restricts itself to C, but has libraries generated using the
Go toolchain and has produced one non-trivial binary that ran
successfully.  Regarding the Go compiler and runtime, I seem to remember that
gc.a was created, but nothing else.

 I wrote mkfiles and fixed a few minor bugs. The biggest problem was my knowledge
 of yacc was not sufficient to rework the error generation magic go uses from
 the bison based code to plan9 yacc code. Perhaps there is a yacc expert out there
 who would be interested in helping?
 
When I looked at the Go sources, no such magic stood out, but it's a
long time ago and I may have ignored the problem intentionally.

 I am happy to push back my changes, but without either getting yacc to work or
 abandoning yacc and porting bison, I didn't feel it was ready.
 
Maybe Erik, Steve and I should consolidate our changes into a single batch
and submit it as a unit, knowing that it will have received at least some
competent code review.  Anybody else who may want to contribute would,
in my view, be welcome.  Reviewing code intended for Plan 9 cannot be
a terribly high priority within the Google framework at this point in time.

++L



Re: [9fans] Go Plan 9

2011-04-03 Thread Anthony Martin
andrey mirtchovski mirtchov...@gmail.com once said:
 cross-compilation (GOOS=plan9, GOARCH=386, link with -s), but there
 are a few issues -- the build fails at crypto, so fmt is not compiled.
 for a hello world you need to manually make install pkg/strconv and
 pkg/reflect and pkg/fmt.

Everything works fine for me without the '-s' flag.

Pavel Zholkover paulz...@gmail.com once said:
 The produced binaries do not run properly on 9vx since the last gc
 changes, so it's native or kvm+qemu etc.

The reason it doesn't work on 9vx is that the 32-bit Go runtime
reserves a large chunk of address space (currently 768MB).  On all
other platforms, this is accomplished with an mmap equivalent, which
we all know won't work on Plan 9.

Right now, if you want to run Go binaries on Plan 9, you have to 
apply the patch at the bottom of this message.  In the future we
should probably have the runtime use the segment(3) device.

  Anthony


diff -r 11611373ac8a src/pkg/runtime/malloc.goc
--- a/src/pkg/runtime/malloc.goc	Sun Apr 03 09:11:41 2011 -0700
+++ b/src/pkg/runtime/malloc.goc	Sun Apr 03 14:00:13 2011 -0700
@@ -231,7 +231,7 @@
 
 int32 runtime·sizeof_C_MStats = sizeof(MStats);
 
-#define MaxArena32 (2U<<30)
+#define MaxArena32 (240U<<20)
 
 void
 runtime·mallocinit(void)
@@ -292,7 +292,7 @@
 	// kernel threw at us, but normally that's a waste of 512 MB
 	// of address space, which is probably too much in a 32-bit world.
 	bitmap_size = MaxArena32 / (sizeof(void*)*8/4);
-	arena_size = 512<<20;
+	arena_size = 64<<20;

// SysReserve treats the address we ask for, end, as a hint,
// not as an absolute requirement.  If we ask for the end
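
To make the segment(3) suggestion above concrete, here is a very rough
sketch of how such a reservation might look with segattach(2); the
function name, size, segment class, and error-return convention are all
assumptions, not actual runtime code:

#include <u.h>
#include <libc.h>

enum {
	Arenasize	= 768*1024*1024,	/* the 768MB mentioned above */
};

/*
 * reserve a chunk of address space for the arena by attaching a new
 * segment and letting the kernel pick the address (va == 0).
 */
void*
reservearena(void)
{
	void *v;

	v = segattach(0, "memory", 0, Arenasize);	/* class name assumed */
	if(v == (void*)-1)				/* error return assumed */
		sysfatal("segattach: %r");
	return v;
}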



Re: [9fans] Go Plan 9

2011-04-03 Thread Anthony Martin
Anthony Martin al...@pbrane.org once said:
 Right now, if you want to run Go binaries on Plan 9, you have to 
 apply the patch at the bottom of this message.  In the future we
 should probably have the runtime use the segment(3) device.

That should have been '9vx' instead of 'Plan 9'. Sorry.

  Anthony



Re: [9fans] Go Plan 9

2011-04-03 Thread Pavel Zholkover
I'm not sure I understand the reason 9vx will fail to reserve 768mb
with brk() while my Plan 9 install on kvm+qemu with 128mb of ram works
fine, as long as it is not written to.

The -s is no longer needed, 8l generates a.out symbols correctly.

Pavel

On Mon, Apr 4, 2011 at 12:16 AM, Anthony Martin al...@pbrane.org wrote:
 Anthony Martin al...@pbrane.org once said:
 Right now, if you want to run Go binaries on Plan 9, you have to
 apply the patch at the bottom of this message.  In the future we
 should probably have the runtime use the segment(3) device.

 That should have been '9vx' instead of 'Plan 9'. Sorry.

  Anthony





Re: [9fans] Go Plan 9

2011-04-03 Thread erik quanstrom
 The reason it doesn't work on 9vx is that the 32-bit Go runtime
 reserves a large chunk of address space (currently 768MB).  On all
 other platforms, this is accomplished with an mmap equivalent, which
 we all know won't work on Plan 9.
 

if i read the thread on this topic correctly, this reservation
isn't necessary on plan 9, since there are no shared libraries
and the heap will always be contiguous.

if there is a way to override this for plan 9, we probably should.

- erik



Re: [9fans] Go Plan 9

2011-04-03 Thread Anthony Martin
Pavel Zholkover paulz...@gmail.com once said:
 I'm not sure I understand the reason 9vx will fail to reserve 768mb
 with brk() while my Plan 9 install on kvm+qemu with 128mb of ram works
 fine, as long as it is not written to.

The reason is that 9vx gives user processes a virtual
address space of only 256mb.  The brk works but the
first time we fault one of those pages past USTKTOP the
program suicides.

The first fault happens at src/pkg/runtime/mcache.c:21
in the runtime·MCache_Alloc function.

To show you what I mean, here's a formatted stack trace:

term% cat y.go
package main

func main() {
println("Hello, world.")
}

term% ./8.out
8.out 174: suicide: sys: trap: page fault pc=0x21df

term% db 8.out 174
386 binary
page fault
/go/src/pkg/runtime/mcache.c:21 runtime.MCache_Alloc+39/MOVL 0(AX),AX
$c
runtime.MCache_Alloc(sizeclass=0x1, c=0x30424000, size=0x8, zeroed=0x1)
/go/src/pkg/runtime/mcache.c:13 called from runtime.mallocgc+db
/go/src/pkg/runtime/malloc.goc:62
runtime.mallocgc(size=0x8, zeroed=0x1, flag=0x0, dogc=0x0)
/go/src/pkg/runtime/malloc.goc:40 called from runtime.malloc+41
/go/src/pkg/runtime/malloc.goc:115
runtime.malloc(size=0x1)
/go/src/pkg/runtime/malloc.goc:113 called from runtime.mallocinit+e9
/go/src/pkg/runtime/malloc.goc:319
runtime.mallocinit()
/go/src/pkg/runtime/malloc.goc:237 called from runtime.schedinit+39
/go/src/pkg/runtime/proc.c:122
runtime.schedinit()
/go/src/pkg/runtime/proc.c:113 called from _rt0_386+b3
/go/src/pkg/runtime/386/asm.s:78
_rt0_386()
/go/src/pkg/runtime/386/asm.s:12 called from 1 


Cheers,
  Anthony



[9fans] Making read(1) an rc(1) builtin?

2011-04-03 Thread smiley
I'm in the process of writing some filters in rc(1).  One thing that has
come to concern me about rc(1) is that read(1) is not a builtin
command.  For example, with a loop like:

while(message=`{read})
  switch($message) {
  case foo
dofoo
  case bar
dobar
  case *
dodefault
  }

Each line that's read by the script causes it to fork a new process,
/bin/read, whose sole purpose is to read a single line and die.  That
means at least one fork for each line read and, if your input has many
lines, that means spawning many processes.  I wonder if it wouldn't make
sense to move read(1) into rc(1) and make it a builtin command.  A
wrapper script could then be created, at /bin/read, to call rc -c 'eval
read $*' with the appropriate arguments (or sed $n^q, etc.), for any
program that requires an actual /bin/read to exist.
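
(For concreteness, a stripped-down sketch of read(1)'s single-line
behaviour; this is not the real source, which also has options omitted
here.  It reads a byte at a time so that nothing past the newline is
taken away from the rest of the script:)

#include <u.h>
#include <libc.h>

/*
 * copy one line from standard input to standard output, one byte at a
 * time so no input beyond the newline is consumed; exit with a
 * non-empty status at end of file.
 */
void
main(void)
{
	char c;
	int got;

	got = 0;
	while(read(0, &c, 1) == 1){
		got = 1;
		write(1, &c, 1);
		if(c == '\n')
			exits(nil);
	}
	exits(got? nil : "eof");
}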

A similar line of thought holds for /bin/test.  The string and numeric
tests (-n, -z, =, !=, <, >, -lt, -eq, -ne, etc.) can be very frequently
used, and can lead to spawning unnecessarily many processes.  For the
file test parameters (-e, -f, -d, -r, -x, -A, -L, -T, etc.), however,
this argument isn't as strong.  Since the file tests have to stat(2) a
path, they already require a call to the underlying file system, and an
additional fork wouldn't be that much more expensive.  I could see the
string and numeric tests being moved into rc(1) as a "test" builtin,
with the file tests residing at /bin/ftest (note the "f").  The "test"
builtin could scan its arguments and call ftest if needed.  A wrapper
script at /bin/test could provide compatibility for existing programs
which expect an executable named /bin/test to exist.

I understand the Unix/Plan 9 philosophy of connecting tools that do one
job and do it well.  But I don't think /bin/read and /bin/test are
places where that philosophy is practical (i.e., efficient).  After all,
reading input lines really is the prerogative of any program that
processes line-oriented data (like rc(1) does).  In addition, /bin/read
represents a simple and fairly stable interface that's not likely to
change appreciably in the future.  Comparison of numeric and string
values is also a fairly stable operation that's not likely to change,
and is not likely to be needed outside of rc(1).  Most programming
languages (C, awk, etc.) have their own mechanisms for integer and
string comparison.  I suspect moving these operations into rc(1) (with
appropriate replacement scripts to ensure compatibility) could
appreciably increase the performance of shell scripts, with very little
cost in modularity or compatibility.

Any thoughts on this?

I'm also a bit stumped by the fact that rc(1) doesn't have anything
analogous to bash(1)'s string parsing operations: ${foo#bar},
${foo##bar}, ${foo%bar}, ${foo%%bar}, or ${foo/bar/baz}.  Is there any
way to extract substrings (or single characters) from a string in rc(1)
without having to fork a dd, awk, or sed?  I've tried setting ifs='' and
using foo=($bar), but rc(1) always splits bar on spaces.  Perhaps, if
rc(1) used the first character of $ifs to split $bar, $bar could be
split into individual characters when ifs=''.  Then, the characters of
$bar could be addressed without resort to dd and friends.

(As a side note, if anyone goes into rc(1)'s source to implement any of
this, please add a -- option (or similar) to the echo builtin while
you're there.  Having to wrap echo in:

# make 'myecho $foo' work even when $foo starts with '-n'
fn myecho {
  if(~ $1 --) {
shift
if(~ $1 -n) {
  shift
  echo -n -n $*
  echo
}
if not echo $*
  }
  if not echo $*
}

can be rather inconvenient.)

-- 
+---+
|E-Mail: smi...@zenzebra.mv.com PGP key ID: BC549F8B|
|Fingerprint: 9329 DB4A 30F5 6EDA D2BA  3489 DAB7 555A BC54 9F8B|
+---+



Re: [9fans] Making read(1) an rc(1) builtin?

2011-04-03 Thread pmarin
Write some real-world tests using bash/GNU tools and rc (with statically
linked versions of p9p) and tell us what happened.
Maybe you will be surprised.

[1] http://cm.bell-labs.com/cm/cs/tpop/

On Mon, Apr 4, 2011 at 12:30 AM,  smi...@zenzebra.mv.com wrote:
 I'm in the process of writing some filters in rc(1).  One thing that has
 come to concern me about rc(1) is that read(1) is not a builtin
 command.  For example, with a loop like:

    while(message=`{read})
      switch($message) {
      case foo
        dofoo
      case bar
        dobar
      case *
        dodefault
      }

 Each line that's read by the script causes it to fork a new process,
 /bin/read, whose sole purpose is to read a single line and die.  That
 means at least one fork for each line read and, if your input has many
 lines, that means spawning many processes.  I wonder if it wouldn't make
 sense to move read(1) into rc(1) and make it a builtin command.  A
 wrapper script could then be created, at /bin/read, to call rc -c 'eval
 read $*' with the appropriate arguments (or sed $n^q, etc.), for any
 program that requires an actual /bin/read to exist.

 A similar line of thought holds for /bin/test.  The string and numeric
 tests (-n, -z, =, !=, <, >, -lt, -eq, -ne, etc.) can be very frequently
 used, and can lead to spawning unnecessarily many processes.  For the
 file test parameters (-e, -f, -d, -r, -x, -A, -L, -T, etc.), however,
 this argument isn't as strong.  Since the file tests have to stat(2) a
 path, they already require a call to the underlying file system, and an
 additional fork wouldn't be that much more expensive.  I could see the
 string and numeric tests being moved into rc(1) as a "test" builtin,
 with the file tests residing at /bin/ftest (note the "f").  The "test"
 builtin could scan its arguments and call ftest if needed.  A wrapper
 script at /bin/test could provide compatibility for existing programs
 which expect an executable named /bin/test to exist.

 I understand the Unix/Plan 9 philosophy of connecting tools that do one
 job and do it well.  But I don't think /bin/read and /bin/test are
 places where that philosophy is practical (i.e., efficient).  After all,
 reading input lines really is the perogative of any program that
 processes line-oriented data (like rc(1) does).  In addition, /bin/read
 represents a simple and fairly stable interface that's not likely to
 change appreciably in the future.  Comparison of numeric and string
 values is also a fairly stable operation that's not likely to change,
 and is not likely to be needed outside of rc(1).  Most programming
 languages (C, awk, etc.) have their own mechanisms for integer and
 string comparison.  I suspect moving these operations into rc(1) (with
 appropriate replacement scripts to ensure compatibility) could
 appreciably increase the performance of shell scripts, with very little
 cost in modularity or compatibility.

 Any thoughts on this?

 I'm also a bit stumped by the fact that rc(1) doesn't have anything
 analogous to bash(1)'s string parsing operations: ${foo#bar},
 ${foo##bar}, ${foo%bar}, ${foo%%bar}, or ${foo/bar/baz}.  Is there any
 way to extract substrings (or single characters) from a string in rc(1)
 without having to fork a dd, awk, or sed?  I've tried setting ifs='' and
 using foo=($bar), but rc(1) always splits bar on spaces.  Perhaps, if
 rc(1) used the first character of $ifs to split $bar, $bar could be
 split into individual characters when ifs=''.  Then, the characters of
 $bar could be addressed without resort to dd and friends.

 (As a side note, if anyone goes into rc(1)'s source to implement any of
 this, please add a -- option (or similar) to the echo builtin while
 you're there.  Having to wrap echo in:

    # make 'myecho $foo' work even when $foo starts with '-n'
    fn myecho {
      if(~ $1 --) {
        shift
        if(~ $1 -n) {
          shift
          echo -n -n $*
          echo
        }
        if not echo $*
      }
      if not echo $*
    }

 can be rather inconvenient.)

 --
 +---+
 |E-Mail: smi...@zenzebra.mv.com             PGP key ID: BC549F8B|
 |Fingerprint: 9329 DB4A 30F5 6EDA D2BA  3489 DAB7 555A BC54 9F8B|
 +---+





Re: [9fans] Making read(1) an rc(1) builtin?

2011-04-03 Thread Tristan Plumb
 One thing that has come to concern me about rc(1) is that read(1) is
 not a builtin command.

The general idea here is that forking a new process is not usually
(ever?) the bottleneck; if you have a script that needs to run faster,
there's other overhead to trim first, and if you really need to, you can
do the following (giving up line-at-a-time response):

ifs=($nl)
lines=`{cat}
for($lines as $line){...}

There isn't any such trick (that I know) for test, but how much is it
slowing you down?

 I'm also a bit stumped by the fact that rc(1) doesn't have anything
 analogous to bash(1)'s string parsing operations: ${foo#bar},
 ${foo##bar}, ${foo%bar}, ${foo%%bar}, or ${foo/bar/baz}.
I could never remember what these did, except the last one.

 Is there any way to extract substrings (or single characters) from a
 string in rc(1) without having to fork a dd, awk, or sed?
Sure, for some things, except it uses cat! Without any forking, I don't
know (see below).

On the other hand, echo -n is a wart. I wonder, does echo '' -n work?
(My plan9 machine is off and far away.)

On a more friendly note. Hi, I think I know you slightly, telephones.

Tristan

-- 
All original matter is hereby placed immediately under the public domain.



Re: [9fans] Making read(1) an rc(1) builtin?

2011-04-03 Thread Lyndon Nerenberg (VE6BBM/VE7TFX)
 (As a side note, if anyone goes into rc(1)'s source to implement any of
 this, please add a -- option (or similar) to the echo builtin while
 you're there.

Echo is not a builtin, and for one possible solution see
/n/sources/contrib/lyndon/echon.c




Re: [9fans] Making read(1) an rc(1) builtin?

2011-04-03 Thread Ethan Grammatikidis


On 4 Apr 2011, at 12:41 am, Tristan Plumb wrote:



I'm also a bit stumped by the fact that rc(1) doesn't have anything
analogous to bash(1)'s string parsing operations: ${foo#bar},
${foo##bar}, ${foo%bar}, ${foo%%bar}, or ${foo/bar/baz}.

I could never remember what these did, except the last one.


I could never remember what any of these did, except that they are a  
major reason I'm thankful I hardly have anything to do with bash any  
more. Cluttering up your working memory with 600 different cryptic  
ways to do things is stupid when you're trying to solve a hard  
problem. Spending time and effort learning 600 cryptic ways to get  
tiny improvements in performance is stupid when you want the machine  
to reduce your workload.


I'd also like to reiterate what pmarin wrote about trying it out  
first, except I'd say you will be surprised. :) Without dynamic  
linking, fork() -- or, to put the problem where it actually occurs,  
exec() -- is not particularly slow at all.



On the other hand, echo -n is a wart. I wonder, does echo '' -n work?
(My plan9 machine is off and far away.)


It works very well, doing exactly what it's supposed to, although I  
vaguely remember having problems with such a feature in Linux many  
years ago. It does look slightly warty, being an odd argument out,  
but if you think about it options are always odd arguments.




Re: [9fans] Making read(1) an rc(1) builtin?

2011-04-03 Thread erik quanstrom
 The general idea here is that forking a new process is not usually
 (ever?) the bottleneck, if you have a script that needs to run faster,
 there's other overhead to trim first, and if you really need to, you can:
 (giving up line at a time response).
 
 ifs=($nl)
 lines=`{cat}
 for($lines as $line){...}

i hate to be pedantic, but i see 2 syntax errors, an
unintended side effect and an extra set of parens.
ifs is not a list; it is a set of characters like strpbrk(2).

i think this is what you want

for(line in `{ifs=$nl cat}){...}

but i have no idea why one would avoid the read
idiom.  for large input, forking off a read for each
line keeps the memory footprint O(1).

if not dealing with large input, then a few forks don't
matter.

 On the other hand, echo -n is a wart. I wonder, does echo '' -n work?
 (My plan9 machine is off and far away.)

as per plan 9 tradition, the first non-option terminates
option processing.  lyndon's echon is not required.  giving
echo -n as its first argument works fine.

- erik



Re: [9fans] Making read(1) an rc(1) builtin?

2011-04-03 Thread erik quanstrom
 Each line that's read by the script causes it to fork a new process,

we're not running out.  even with a mere four billion odd to choose from.

 I understand the Unix/Plan 9 philosophy of connecting tools that do one
 job and do it well.  But I don't think /bin/read and /bin/test are
 places where that philosophy is practical (i.e., efficient).  After all,
 reading input lines really is the perogative of any program that
 processes line-oriented data (like rc(1) does).  In addition, /bin/read
 represents a simple and fairly stable interface that's not likely to
 change appreciably in the future.

could you be concrete about your performance problem.
if you don't have a performance problem, then there's no
point to optimizing.

 I'm also a bit stumped by the fact that rc(1) doesn't have anything
 analogous to bash(1)'s string parsing operations: ${foo#bar},
 ${foo##bar}, ${foo%bar}, ${foo%%bar}, or ${foo/bar/baz}.  Is there any
 way to extract substrings (or single characters) from a string in rc(1)
 without having to fork a dd, awk, or sed?  I've tried setting ifs='' and
 using foo=($bar), but rc(1) always splits bar on spaces.  

false.

ifs=☺ {x=`{echo -n  'a b c ☺ d e f'}}
; whatis x
x=('a b c ' ' d e f')
; echo $#x
2

(you might not want to try splitting on non-ascii with the rc
on sources.  i'm unsure about p9p.)

 (As a side note, if anyone goes into rc(1)'s source to implement any of
 this, please add a -- option (or similar) to the echo builtin while
 you're there.  Having to wrap echo in:

when exactly does this come up?

- erik