Re: [9fans] Plan 9 on Routers?

2009-03-24 Thread Bakul Shah
On Tue, 24 Mar 2009 16:05:08 EDT Rahul Murmuria   
wrote:
> I am willing to explore this area. Maybe if /net reaches every router, such
> metrics can be retrieved and exchanged between the routers like other router
> OSes do (or maybe better than they already do) ?
> 
> I am planning to understand JUNOS using the documentation on their website,
> but I am not sure if I want to go through the CCNA books for Cisco IOS like
> you recommended. I have hardly any prior experience in the area, but initial
> design info finds me inclining towards JUNOS more.

OSPF and BGP are not exactly SoC projects but one place to
start may be openospfd and openbgpd from www.openbgpd.org.

For any serious work you will need more than what JUNOS
documentation can give you.



Re: [9fans] Plan 9 on Routers?

2009-03-25 Thread Bakul Shah
On Wed, 25 Mar 2009 09:00:58 EDT "Devon H. O'Dell"   
wrote:
>  While creating an
> entire routing suite (such as Zebra/Quagga) is probably outside of the
> scope of a 3 month project, I think a diligent student could probably
> do something useful with OSPF or BGP. It's entirely possible that a 3
> month project could consist of analyzing Plan 9's ability to function
> in this environment and making changes to facilitate the
> implementation of routing protocols. Or creating a packet filter.

Thinking a bit more about it, extending /net/iproute to allow
routing metrics may be what is needed for porting/building
something like openospfd or openbgpd.  Basically
/net/{iproute,ipifc} etc need to do more or less what a
routing socket does under *BSD (man 4 route).  Of course,
there may be other things missing in the p9 IP stack that may
get in the way but now I think porting something like
openospfd in a summer is doable.
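A sketch of what such an extension might look like. The trailing metric field here is purely hypothetical -- it is not an existing /net/iproute feature, just an illustration of the kind of ctl extension a routing daemon would need:

```
echo add 10.1.1.0 /24 10.0.0.1 >/net/iproute      # today's form
echo add 10.1.1.0 /24 10.0.0.1 20 >/net/iproute   # hypothetical: trailing metric
```

A daemon like openospfd would then read the metric back out of /net/iproute when comparing candidate routes, much as it reads rt_metrics off a *BSD routing socket.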



Re: [9fans] GSOC: Drawterm for the iPhone

2009-03-25 Thread Bakul Shah
On Wed, 25 Mar 2009 21:25:07 CDT Eric Van Hensbergen   wrote:
> Also, figuring out how multitouch works with plan 9 would be valuable
> in itself -- although admittedly could be done without an iPhone.

Exactly what I was thinking while reading this thread!  An
intuitive multitouch interface that goes beyond cut-n-paste
would go very well with a 3D graphics protocol. 9gl anyone?!



Re: [9fans] GSOC: Drawterm for the iPhone

2009-03-26 Thread Bakul Shah
I wasn't commenting on the GSoC; just reinforcing Eric's
point that a multitouch interface would be very interesting
in itself and pointing out that such a device in conjunction
with a 3d extension would be even more fun!  But yes, a
multitouch interface design would make a nice GSoC project.
Nothing directly useful may come of it but one never knows.
Look at bumptop.com -- that interface started out as a
student project. Look at the kind of things people do with
openframeworks.cc code. 

Plan9 can be a far simpler platform for things like that.
Imagine a multitouch device that dynamically creates a set
of pointer streams /dev/mt/{0,1,...} with each mt/n acting
like /dev/mouse. Or you can have a single multiplexed stream,
where each read returns, for example,

 keychar ptr-index x y msec [blob-size [blob-type]]

When you lift your finger the blob-size becomes 0.  If you
don't press it again within some time period and within a
small distance of its expected position, the ptr disappears.
Or something like that! A program to map camera input to
/dev/mt would give you a cheap multitouch device.

As for GSoC, if students pick projects that get their
creative juices flowing *and* if they can produce something
tangible (but not necessarily useful) in three months,
that'd be success in my eyes.  FWIW.

On Thu, 26 Mar 2009 00:32:31 -0300 "Federico G. Benavento" 
  wrote:
> my questions were more about the real usage of iphone's dt
> my short sighted vision of the gsoc is this, I didn't use any
> of the stuff that gsoc 2007 got us, though I recognize the
> inferno ds port.
> but for the rest, it might be interesting, but is someone
> using that stuff?
> 
> iphone's drawterm sounds like something that very few
> people will use (the ones that have a cpu server and an
> iphone) in not that much often, of course it could be
> interesting to have it, but...
> 
> I think that gsoc is a good chance to get going stuff that
> we need and we will really use.
> 
> think of the openssh port, I did that, not for a gsoc and
> people use it, some guy even wrote a filesystem which
> suits lots of people's needs.
> 
> 
> 
> On Thu, Mar 26, 2009 at 12:20 AM, Bakul Shah  wrote:
> > On Wed, 25 Mar 2009 21:25:07 CDT Eric Van Hensbergen  wrote:
> >> Also, figuring out how multitouch works with plan 9 would be valuable
> >> in itself -- although admittedly could be done without an iPhone.
> >
> > Exactly what I was thinking while reading this thread!  An
> > intuitive multitouch interface that goes beyond cut-n-paste
> > would go very well with a 3D graphics protocol. 9gl anyone?!
> >
> >
> 
> 
> 
> --
> Federico G. Benavento
> 



Re: [9fans] Stuck at partdisk

2009-04-05 Thread Bakul Shah
On Sun, 05 Apr 2009 15:54:19 EDT "Devon H. O'Dell"   
wrote:
> 2009/4/5 ron minnich :
> > On Sun, Apr 5, 2009 at 12:12 PM, Devon H. O'Dell  wrote:
> >> 2009/4/5 Devon H. O'Dell :
> >>> Ideas?
> >>
> >> Works fine if I turn off DMA.
> >
> > no need to have DMA on on qemu anyway, so you have a workaround.
> 
> Except that it's now looking to take about 5 hours to install on my 3
> core AMD64 box with 8GB RAM :)

It is likely a problem with the version of qemu you are
running.  On freebsd-current with qemu-0.10.1 there is no
problem with DMA (just tried with today's ISO).



Re: [9fans] Stuck at partdisk

2009-04-05 Thread Bakul Shah
On Sun, 05 Apr 2009 16:56:51 EDT "Devon H. O'Dell"   
wrote:
> 2009/4/5 Bakul Shah :
> > On Sun, 05 Apr 2009 15:54:19 EDT "Devon H. O'Dell"  
>  wrote:
> >> 2009/4/5 ron minnich :
> >> > On Sun, Apr 5, 2009 at 12:12 PM, Devon H. O'Dell  wrote:
> >> >> 2009/4/5 Devon H. O'Dell :
> >> >>> Ideas?
> >> >>
> >> >> Works fine if I turn off DMA.
> >> >
> >> > no need to have DMA on on qemu anyway, so you have a workaround.
> >>
> >> Except that it's now looking to take about 5 hours to install on my 3
> >> core AMD64 box with 8GB RAM :)
> >
> > It is likely a problem with the version of qemu you are
> > running.  On freebsd-current with qemu-0.10.1 there is no
> > problem with DMA (just tried with today's ISO).
> 
> Must be FreeBSD-CURRENT i386, because it sure isn't working on
> FreeBSD-CURRENT amd64 with a tree I built last night :)

It's on an amd64 (3 years old, 2GB, Athlon X2).  -current as
of Mar 21; qemu & kqemu-devel were built on Mar 29.  I am
using qemu, not qemu-system-x86_64, as the emulation of its 32
bit mode is not good enough.  Make sure you haven't loaded
aio.ko, as async I/O is no longer necessary for kqemu.  Not
sure if 8GB is a problem but to test that you can disable
kqemu.



Re: [9fans] typed sh (was: what features would you like in a shell?)

2009-04-05 Thread Bakul Shah
On Thu, 02 Apr 2009 20:28:57 BST roger peppe   wrote:
> 2009/4/2  :
> i wanted to go a little beyond sh while stopping
> short of the type profligacy of most other languages,
> hoping to create a situation where many commands
> used exactly the same types, and hence were
> viable to pipeline together.

Nitpick: the output type of one command and the input type of
the next command in the pipeline have to match, not every
command.

> a pipeline is an amazingly powerful thing considering
> that it's not a turing-complete abstraction.

"f | g" is basically function composition, where f and g are
stream functions. Of course, this simple analogy breaks down
the moment we add more input/output channels -- maybe that
is why anything beyond a simple pipeline seems to get people
in trouble (see the rc output redirection thread).

To go beyond simple char streams, one can for example build
an s-expr pipeline: a stream of self-identifying objects of a
few types (chars, numbers, symbols, lists, vectors). In Q
(from kx.com) over an IPC connection you can send strings,
vectors, dictionaries, tables, or arbitrary Q expressions. But
there the model is more of a client/server.



Re: [9fans] typed sh (was: what features would you like in a shell?)

2009-04-06 Thread Bakul Shah
On Mon, 06 Apr 2009 07:09:47 EDT erik quanstrom   wrote:
> > Nitpick: the output type of one command and the input type of
> > the next command in the pipeline has to match, not every
> > command.
> 
> i think this is wrong.  there's no requirement
> that the programs participating in a pipeline are compatible
> at all; that's the beauty of pipes. 

If program A outputs numbers in big-endian order and B
expects input in little-endian order, A|B won't do the "right
thing".  Even for programs like wc have a concept of a
'character' and if the prev prog. produces something else you
will be counting something meaningless.

Perhaps it is impossible to capture such type compatibility
in anything but runtime IO routines but the concept exists.

>  you can do things
> that were not envisioned at the time the programs were
> written.

That comes from composability.

> > To go beyond simple char streams, one can for example build a
> > s-expr pipeline: a stream of self identifying objects of a
> > few types (chars, numbers, symbols, lists, vectors). In Q
> > (from kx.com) over an IPC connection you can send strings,
> > vectors, dictionaries, tables, or arbitrary Q expressions. But
> > there the model is more of a client/server.
> 
> or ntfs where files are databases.  not sure if streams
> can look the same way.

Maybe not.  What I was getting at was that one can do a lot
with a small number of IO types -- no need for "type
profligacy"!  [Unless your definition of profligacy is
anything more than one.]  The nice thing about s-exprs is that
you have a syntax for structured IO, and the printer and parser
are already written for you.  Anyway, a typed sh would be an
interesting experiment.



Re: [9fans] typed sh (was: what features would you like in a shell?)

2009-04-06 Thread Bakul Shah
On Mon, 06 Apr 2009 12:02:21 EDT erik quanstrom   wrote:
> > If program A outputs numbers in big-endian order and B
> > expects input in little-endian order, A|B won't do the "right
> > thing".  
> 
> non-marshaled data considered harmful.  film at 11.  ☺

In effect you are imposing a constraint (a type discipline).
Even if the programs themselves check such constraints, the
compatibility idea exists.

> what i said was not that A|B "makes sense" for all A and B
> and for any data but rather that using text streams makes
> A|B possible for any A and any B and any input.  the output
> might not be useful, but that is a problem on a completely
> different semantic level, one that computers are usually no good at.
> also, i don't think that "type compatibility" is sufficient
> to ensure that the output "makes sense".  what if A produces
> big-endian times in ms while B expects big-endian times in µs.

In effect you are saying that text streams allow nonsensical
pipelines as well as sensible ones and anything other than
text streams would imply giving up freedom to create sensible
pipelines as yet unthought of.  No disagreement there but see
below.

> > Even for programs like wc have a concept of a
> > 'character' and if the prev prog. produces something else you
> > will be counting something meaningless.
> 
> that's why plan 9 uses a single character set.
> 
> but forcing compatibility seems worse.  where are these decisions
> centralized?  how do you change decisions?  can you override
> these decisions (cast)?  how does the output of, say, awk get
> typed?

I am not suggesting forcing anything; I am suggesting
experimenting with s-expr streams (in the context of "typed
sh" idea). I don't know if that buys you anything more or if
you give up any essential freedom.  My guess is you'd build
something more scalable, more composable but I wouldn't
really know until it is tried.  I imagine s-expr-{grep,awk}
would look quite different from {grep,awk}.  Maybe you'd end
up with something like a Lisp machine.



Re: [9fans] a bit OT, programming style question

2009-04-09 Thread Bakul Shah
On Thu, 09 Apr 2009 15:28:58 EDT "Devon H. O'Dell"   
wrote:
> $ set | wc -l
> 64
> 
> I don't quite get that locally.

This must be on FreeBSD!

% bash
$ echo $BASH_VERSION
4.0.10(2)-release
$ set|wc
      72     106    2107

I prefer the cadillac of shells (zsh) & the vw bug (rc).



Re: [9fans] extensions of "interest"

2009-04-09 Thread Bakul Shah
On Thu, 09 Apr 2009 15:31:35 MDT andrey mirtchovski   
wrote:
> ps, the quote is  "Simplify, then add lightness"

Makes perfect sense for Chapman's purposes.  Replace steel
with aluminium. Fiberglass instead of sheet metal and so on.
Unfortunately we don't have exact analogs in s/w.  We can
only simplicate; we can't add lightness!



Re: [9fans] extensions of "interest"

2009-04-09 Thread Bakul Shah
On Thu, 09 Apr 2009 17:10:47 MDT andrey mirtchovski   
wrote:
> > Unfortunately we don't have exact analogs in s/w.  We can
> > only simplicate; we can't add lightness!
> 
> but somehow we can add "weight". can't we? bash is perceivably
> "heavier" than rc, xml perceivably "heavier" than 9p... statlite()
> perceivably "heavier" than stat() :)

Yes of course.  But that is because they use a more
complicated design that results in use of more code.

What I meant is in a physical assembly you can carefully
hollow out a solid part or use a lighter material to get a
lighter part without changing its structural properties
(much) and no other parts or couplings have to be changed.

In a program one can use hand-coded assembly or inline code
instead of calling a function, or call a function instead of
an RPC to a separate process and so on, but in each case there is
a tighter coupling that reduces flexibility.

Designs done by wizards have simpler and fewer parts -- they
are simply much better at design. They "simplicate".

But granted, the analogy is rather weak :-)



Re: [9fans] typed sh (was: what features would you like in a shell?)

2009-04-16 Thread Bakul Shah
On Thu, 16 Apr 2009 18:24:36 BST roger peppe   wrote:
> 2009/4/6 Bakul Shah :
> > On Thu, 02 Apr 2009 20:28:57 BST roger peppe  wrote:
> >> a pipeline is an amazingly powerful thing considering
> >> that it's not a turing-complete abstraction.
> >
> > "f | g" is basically function composition, where f and g are
> > stream functions. Of course, this simple analogy breaks down
> > the moment we add more input/output channels -- may be that
> > is why anything beyond a simple pipeline seems to get people
> > in trouble (see the rc output redirection thread).
> 
> actually, the analogy works fine if we add more
> input channels - it's multiple output channels
> that make things hard, as they mean that you have
> an arbitrary directed graph rather than a tree, which doesn't
> have such a convenient textual representation
> and is harder to comprehend to boot.

True in general but certain graphs are relatively easy to
comprehend depending on what you are doing (trees, hub &
spokes, rings). Shells don't provide a convenient
mechanism for constructing these graphs (I'd use macros in
Scheme/Lisp, or a graphics editor).

For DAGs you can use something like the example below but it
doesn't have the nice aesthetics of a pipeline!

let s0,s1 = function-with-two-output-streams
  function-with-two-input-streams(f0(s0), f1(s1), ...)

> > To go beyond simple char streams, one can for example build a
> > s-expr pipeline: a stream of self identifying objects of a
> > few types (chars, numbers, symbols, lists, vectors).
> 
> the difficulty with s-exprs (and most nested structures, e.g. XML)
> from a pipeline point of view is
> that their nested nature means that any branch might contain unlimited
> quantities
> of stuff, so you can't always process in O(1) space, which is one of the
> things i really like about pipeline processing.

You can have arbitrarily long lines in a text file so if you
operate on lines, you need arbitrary buffer space. It is the
same problem.

Also note that I was talking about a stream of s-exprs, not
one s-expr as a stream (which makes no sense).  For example,

(attach ...) (walk ...) (open ...) (read ...) (clunk ...)

> i found a nice counter-example in the fs stuff - the fundamental type
> was based around a "conditional-push" protocol for sending trees
> of files - the sender sends some information on a file/directory
> and the receiver replies whether to descend into that file or
> not. the tree had a canonical order (alphabetical on name), so
> tree merging could be done straightforwardly in O(1) space.
>
> this kind of streaming "feels" like a regular pipeline, but you can't
> do this with a regular pipeline. for instance, a later element in the
> pipeline can prevent an earlier from descending into a part
> of the file system that might block indefinitely.
> 
> every language has a trade-off between typed and untyped representations;
> with alphabet i was trying to create something where it was *possible*
> to create new kinds of types where necessary (as in the fs example),
> but where it wouldn't be customary or necessary to do so in the
> vast majority of cases.
> 
> perhaps it was folly, but i still think it was an interesting experiment,
> and i don't know of anything similar.



Re: [9fans] security questions

2009-04-16 Thread Bakul Shah
On Thu, 16 Apr 2009 21:25:06 EDT "Devon H. O'Dell"   
wrote:
> That said, I don't disagree. Perhaps Plan 9's environment hasn't been
> assumed to contain malicious users. Which brings up the question: Can
> Plan 9 be safely run in a potentially malicious environment?  Based on
> this argument, no, it cannot. Since I want to run Plan 9 in this sort
> of environment (and thus move away from that assumption), I want to
> address these problems, and I kind of feel like it's weird to be
> essentially told, ``Don't do that.''

Why not give each user a virtual plan9? Not like vmware/qemu
but more like FreeBSD's jail(8), "done more elegantly"[TM]!
To deal with potentially malicious users you can virtualize
resources, backed by limited/configurable real resources.

The other thought that comes to mind is to consider something
like class based queuing (from the networking world).  That
is, allow choice of different allocation/scheduling/resource
use policies and allow further subdivision. Then you can give
preferential treatment to known good guys.  Other users can
still experiment to their heart's content within the
resources allowed them.

My point being think of a consistent high level model that
you like and then worry about implementation details.



Re: [9fans] security questions

2009-04-16 Thread Bakul Shah
On Thu, 16 Apr 2009 22:19:21 EDT "Devon H. O'Dell"   
wrote:
> 2009/4/16 Bakul Shah :
> > Why not give each user a virtual plan9? Not like vmware/qemu
> > but more like FreeBSD's jail(8), "done more elegantly"[TM]!
> > To deal with potentially malicious users you can virtualize
> > resources, backed by limited/configurable real resources.
> 
> I saw a talk about Mult at DCBSDCon. I think it's a much better idea
> than FreeBSD jail(8), and its security is provable.
>
> See also: http://mult.bsd.lv/

But is it elegant?
[Interviewer: What do you think the analog for software is?
 Arthur Whitney: Poetry.
 Interviewer: Poetry captures the aesthetics, but not the precision.
 Arthur Whitney: I don't know, may be it does.
 -- ACM Queue Feb/Mar 2009, page 18.
http://mags.acm.org/queue/20090203]

Perhaps Plan9's model would be easier (and more fun) to
extend to accomplish this. One can already have a private
namespace.  How about changing proc(3) to show only your
login process and its descendants? What if each user can have
a separate IP stack, separate (virtualized) interfaces and so
on?  But you'd have to implement some sort of limits on
oversubscribing (ratio of virtual to real resources). Unlike
securitization in the hedge fund world.



Re: [9fans] security questions

2009-04-17 Thread Bakul Shah
On Fri, 17 Apr 2009 08:14:12 EDT "Devon H. O'Dell"   
wrote:
> 2009/4/17 erik quanstrom :
> >> What if each user can have a separate IP stack, separate
> >> (virtualized) interfaces and so on?
> >
> > already possible, but you do need 1 physical ethernet
> > per ip stack if you want to talk to the outside world.
> 
> I'm sure it wouldn't be hard to add a virtual ``physical'' interface,
> even though that seems a little bit pervasive, given the already
> semi-virtual nature due to namespaces. Not sure how much of a hassle
> it would be to make multiple stacks bindable to a single interface...
> but perhaps that's the better way to go?

You'd have to add a packet classifier of some sort.  Packets
to host A get delivered to logical interface #1, those to host
B to #2, and so on.  Going out is not a problem.

Alternatively put each virtual host on a different VLAN (if
your ethernet controller does VLANs).

> >> But you'd have to implement some sort of limits on
> >> oversubcribing (ratio of virtual to real resources). Unlike
> >> securitization in the hedge fund world.
> >
> > this would add a lot of code and result in the same problem
> > as today -- you can be run out of a critical resource.
> 
> Oversubscribing is the root of the problem. In fact, even if it was
> already done, on a terminal server, imagmem is also set to kpages. So
> if someone found a way to blow up the kernel's draw buffer, boom. I
> don't know how far reaching that is, as I've never really seen the
> draw code.

If you are planning to open up a system to the public, then
provisioning for the peak use of your system will result in a
lot of waste (even if you had the resources to so provision).
Even your ISP uses oversubscription (probably by a factor of
100, if not more: if his upstream data pipes give him N bps,
he will give out 100N bps of total bandwidth to his
customers.  If you want guaranteed bandwidth, you have to
shell out a lot more for a "gold" service level agreement).

What I meant is
a) you need to ensure that a single user can't exceed his resource limits,
b) enforce a sensible oversubscription limit (if you oversubscribe
   by a factor of 30, don't let in the 31st concurrent user), and
c) very likely you also want to put these users in different
   login classes (ala *BSD) and disallow each class to
   cumulatively exceed configured resource limit (*BSD
   doesn't do this) -- this is where I was thinking of CBQ.



Re: [9fans] Plan9 - the next 20 years

2009-04-21 Thread Bakul Shah
On Mon, 20 Apr 2009 16:33:41 EDT erik quanstrom   wrote:
> let's take the path /sys/src/9/pc/sdata.c.  for http, getting
> this path takes one request (with the prefix http://$server)
> with 9p, this takes a number of walks, an open.  then you
> can start with the reads.  only the reads may be done in
> parallel.
> 
> given network latency worth worrying about, the total latency
> to read this file will be worse for 9p than for http.

Perhaps one can optimize for the common case by extending 9p
a bit: use special values for certain parameters to allow
sending consecutive Twalk, (Topen|Tcreate), (Tread|Twrite)
without waiting for intermediate R messages.  This makes
sense since the time to prepare/process a handful of
messages is much shorter than roundtrip latency.

A different performance problem arises when lots of data has
to be fetched.  You can pipeline data requests by having
multiple outstanding requests.  A further refinement would be to
use something like RDMA -- in essence the receiver tells the
sender where exactly it wants the data delivered (thus
minimizing copying & processing).  You can very easily extend
the model to have data chunks delivered to different machines
in a cluster.  This is like separating a very high speed
"data plane" (with little or no processing) from a low speed
"control plane" (with lots of processing) in a modern
switch/router.



Re: [9fans] Plan9 - the next 20 years

2009-04-21 Thread Bakul Shah
On Tue, 21 Apr 2009 10:50:18 EDT erik quanstrom   wrote:
> On Tue Apr 21 10:34:34 EDT 2009, n...@lsub.org wrote:
> > Well, if you don't have flush, your server is going to keep a request
> > for each process that dies/aborts. 

If a process crashes, who sends the Tflush?  The server must
clean up without Tflush if a connection closes unexpectedly.

I thought the whole point of Tflush was to cancel a
potentially expensive operation (ie when the user hits the
interrupt key). You still have to clean up.

> >If requests always complete quite
> > soon it's not a problem, AFAIK, but your server may be keeping the
> > request to reply when something happens. Also, there's the issue that
> > the flushed request may have allocated a fid or some other resource.
> > If you don't agree that the thing is flushed you get out of sync with the
> > client.
> > 
> > What I mean is that as soon as you get concurrent requests you really
> > need to implement flush. Again, AFAIK.
> 
> isn't the tag space per fid?  a variation on the tagged queuing flush
> cache would be to force the client to make sure that reordered
> flush tags aren't a problem.  it would not be very hard to ensure that
> tag "overlap" does not happen.

Why does it matter?

> if the problem with 9p is latency, then here's a decision that could be
> revisisted.  it would be a complication, but it seems to me better than
> a http-like protocol, bundling requets together or moving to a storage-
> oriented protocol.

Can you explain why it is better than bundling requests
together?  Bundling requests can cut out a few roundtrip
delays, which can make a big difference for small files.
What you are talking about seems useful for large files [if I
understand you correctly].  Second, 9p doesn't seem to
restrict any replies other than Rflushes to be sent in order.
That means the server can still send Rreads in any order but
if a Tflush is seen, it must clean up properly.  The
situation is analogous to what happens in an OoO processor
(where results must be discarded in case of exceptions and
mis-prediction on branches).



Re: [9fans] Plan9 - the next 20 years

2009-04-21 Thread Bakul Shah
On Tue, 21 Apr 2009 17:03:07 BST roger peppe   wrote:
> the idea with my proposal is to have an extension that
> changes as few of the semantics of 9p as possible:
> 
> C->S Tsequence tag=1 sid=1
> C->S Topen tag=2 sid=1 fid=20 mode=0
> C->S Tread tag=3 sid=1 fid=20 count=8192
> C->S Tclunk tag=4 sid=1
> S->C Rsequence tag=1
> S->C Ropen tag=2 qid=...
> S->C Rread tag=3 data=...
> S->C Rclunk tag=4
> 
> would be exactly equivalent to:
> 
> C->S Topen tag=2 fid=20 mode=0
> S->C Ropen tag=2 qid=...
> C->S Tread tag=3 fid=20 count=8192
> S->C Rread tag=3 data=...
> C->S Tclunk tag=4
> S->C Rclunk tag=4
> 
> and the client-side interface could be designed so
> that the client code is the same regardless of whether
> the server implements Tsequence or not (for instance,
> in-kernel drivers need not implement it).

Do you really need a Tsequence? Seems to me this should
already work.  Let me illustrate with a timing diagram:

Strict request/response:
 1 2 3 4 5
   012345678901234567890123456789012345678901234567890
C: Topen   Tread   Tclunk  |
S: Ropen   Rread   Rclunk

Pipelined case:
 1 2 3 4 5
   012345678901234567890123456789012345678901234567890
C: Topen Tread Tclunk  |
S: Ropen Rread Rclunk

Here latency is 8 time units (one column = 1 time unit). In
the first case it takes 48 time units from Topen to Rclunk
received by the client. In the second case it takes 28 time
units.

In the pipelined case, from a server's perspective, client's
requests just get to it faster (and may already be waiting!).
It doesn't have to do anything special.  What am I missing?



Re: [9fans] P9P on Lemote Yeeloong

2009-05-13 Thread Bakul Shah
On Wed, 13 May 2009 21:57:11 +0200 lu...@proxima.alt.za  wrote:
> I thought things were running too smoothly.  I got P9P to compile on
> the Lemote Yeeloong with only very frequent ocurrences of warnings
> (they seem like compile-time warnings) to the effect that each of
> getcontext, makecontext and swapcontext "is not implemented and will
> always fail".
> 
> Now, the Yeeloong is a notebook based on a MIPS cpu and endowed with
> Open Architecture, Open BIOS (called PMON) and Linux (Debian).  The
> man page for getcontext() seems to suggest that it exists, but
> executing, say, acme fails with:
> 
> "threadalloc getcontext: function not implemented"
> 
> So close to getting there, but I must be missing something.  Does
> anyone know what?
> 
> ++L
> 

See $PLAN9/include/u.h. You may need to add something to the
/* OS-specific crap */ section. Adding

#include <pthread.h>
#define PLAN9PORT_USING_PTHREAD 1

for your version of linux just might do the trick.



Re: [9fans] off-topic: "small is beautiful" article

2009-06-25 Thread Bakul Shah
Nils Holm's Scheme interpreter @ http://t3x.org/s9fes has
been available for a few months now.  It runs on plan9 though
not on inferno.  Like Chibi-Scheme, it too is fairly small
(about 5.5 Kloc of C, 1.4 Kloc of Scheme).

I am more interested in Gambit as it is one of the fastest
Scheme implementations and supposed to be easy to port. See
http://jlongster.com/blog/2009/06/17/write-apps-iphone-scheme/
Though I suspect a port to plan9 may be harder than iphone.

On Thu, 25 Jun 2009 08:53:11 PDT David Leimbach   wrote:
> 
> COOL!  So there's a Scheme in the works for Inferno and Plan 9?
> Dave
> 
> On Thu, Jun 25, 2009 at 8:50 AM, Devon H. O'Dell wrote:
> 
> > 2009/6/25 andrey mirtchovski :
> > > Mentions Plan 9 just at the end in the context of C compilers,
> > > although the argument of the article, that being able to "do more with
> > > less" is better, is applicable to Plan 9 in the OS field too.
> > >
> > > http://synthcode.com/blog/2009/06/Small_is_Beautiful
> > >
> > > I'm sorry if this got posted earlier and I missed it.
> >
> > This is our GSoC student who is working on making his scheme
> > interpreter work in Plan 9. Check there more frequently for his
> > progress and musings!
> >
> > --dho



Re: [9fans] Fonts

2009-07-08 Thread Bakul Shah
> But how do you make them? I played with some TTF font generators about
> 10 years ago that I'm sure I illegally obtained somehow, but I realize
> that I have zero idea of how fonts are designed and packaged. Does
> anybody know anything about how fonts are created and packaged (info
> on subfonts would be great, info on TTF would be interesting).

Start with fontforge & its wikipedia page.  What exactly are
you trying to do?  Tengwar has already been done :-)



Re: [9fans] Google finally announces their lightweight OS

2009-07-09 Thread Bakul Shah
On Thu, 09 Jul 2009 13:44:20 -0800 Jack Johnson   wrote:
> On Thu, Jul 9, 2009 at 1:34 PM, erik quanstrom wrote:
> > the problem i have with "literate programming" is that it
> > tends to treat code like a terse and difficult-to-understand
> > footnote.
> 
> And thus, we have literate programming meets APL. ;)
> 
> -Jack

http://www.youtube.com/watch?gl=GB&hl=en-GB&v=a9xAKttWgP4&fmt=18

*This* is what can happen when a literate programmer meets APL!



Re: [9fans] Google finally announces their lightweight OS

2009-07-10 Thread Bakul Shah
On Fri, 10 Jul 2009 08:14:30 EDT erik quanstrom   wrote:
> > there has also been a lot of discussion in the past 1-2 months about
> > K, a successor to APL, in #plan9. you might ask there; i may have
> > missed a more recent development.
> 
> could someone please explain to the ignorant, what
> is interesting about apl?  the last surge of interest i
> recall in the language was in 1983.  ibm offered an
> rom for the mga (monochrome graphics adapter)
> that substituted the odd apl characters for the
> equally-odd pc character set's bucky-bit characters.

Ken Iverson's 1979 Turing Award lecture, "Notation as a Tool
of Thought", is a good place to start.  Htmlized version at
http://www.jsoftware.com/papers/tot.htm
Google for p444-iverson.pdf for the original.

If you watched the Game of Life in APL video I pointed to,
you saw how the presenter develops the program. This is very
much like how one builds up a shell pipeline (both are
loopless as there are a lot of similarities between streams
and arrays).

APL and its successor languages such as j/k/q are not just
for number crunching.  I mostly use k or q for scripting.
Here is a quick example of piecewise development in q.  A
year ago I wanted a simple inverted index program so this is
what I implemented.

I first created a sample table "dt" where each row contains a
document id and a term.

q)dt
d1 t1
d1 t2
d2 t1
d2 t3
d3 t1
d3 t2
d3 t3

Then dt[;0] is the doc-id column, dt[;1] is the term column.
The following gives me row indices that have the same terms.

q)group dt[;1]
t1| 0 2 4   /t1 appears in rows 0 2 and 4
t2| 1 5 /etc
t3| 3 6

What I really want is doc-ids with the same term.

q) dt[;0] @ group dt[;1]
t1| `d1`d2`d3
t2| `d1`d3
t3| `d2`d3

Given this associative table I can find out which documents
contain t2. I first name the table idx.

q) idx: dt[;0] @ group dt[;1]
q) idx[`t2]
d1 d3

Now I have the data in the form I want and can implement
fancier things on top. But how do I get the data in?  If I
have a file foo where each line contains a space separated
doc-id and term, I can initialize dt from it.

q)dt:("SS";" ")0:`:foo

Code to read a bunch of files and create lines of doc-id/term
pairs is not shown.  This was fast enough for a few tens of MB
of data I was interested in.

See code.kx.com for syntax etc.  It has a wealth of
information on Q including tutorials.  You can download a
copy for your own use.

[Note that there is another Q language, an equational
language. Unlike this Q it is open source but not an array
language.]
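For readers who don't speak q, a rough Go equivalent of the same
grouping step (the names and types are mine, purely illustrative):

```go
package main

import "fmt"

// row mirrors one line of the dt table: a document id and a term.
type row struct{ doc, term string }

// index groups doc-ids by term, like dt[;0] @ group dt[;1] in q.
func index(dt []row) map[string][]string {
	idx := make(map[string][]string)
	for _, r := range dt {
		idx[r.term] = append(idx[r.term], r.doc)
	}
	return idx
}

func main() {
	dt := []row{
		{"d1", "t1"}, {"d1", "t2"}, {"d2", "t1"}, {"d2", "t3"},
		{"d3", "t1"}, {"d3", "t2"}, {"d3", "t3"},
	}
	idx := index(dt)
	fmt.Println(idx["t2"]) // [d1 d3], matching idx[`t2] above
}
```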



[9fans] channels across machines

2009-07-18 Thread Bakul Shah
Has anyone extended the idea of channels where the
sender/receiver are on different machines (or at least in
different processes)?  A netcat equivalent for channels!

Actual plumbing seems easy: one can add a `proxy' thread in
each process to send a message via whatever inter process
mechanism is available. One issue would be for the two sides
to identify a specific channel.  I imagine something like the
following would work.

// on the client:
chan = chanconnect("<host>:<port>", elemsize, nelem);

// on the server:
x = chanbind("<port>", elemsize, nelem);
chan = chanaccept(x); // to allow multiple connections

Or one can build this on top of a socket or file descriptor.

Another issue is endianness (unless all processors are the
same type). Yet another issue is sending variable size
things. In a single address space you can pass a pointer +
may be a size but that trick doesn't work across process (or
machine boundaries) -- so pretty much have to add some
marshalling/unmarshalling that knows about types.

Or is there a better idea?  This certainly seems preferable
to RPC or plain byte pipes for communicating structured
values.

Thanks!

--bakul



Re: [9fans] channels across machines

2009-07-18 Thread Bakul Shah
On Sat, 18 Jul 2009 06:25:19 EDT erik quanstrom   wrote:
> On Sat Jul 18 03:46:01 EDT 2009, bakul+pl...@bitblocks.com wrote:
> > Has anyone extended the idea of channels where the
> > sender/receiver are on different machines (or at least in
> > different processes)?  A netcat equivalent for channels!
> 
> i think the general idea is that if you want to do this between
> arbitrary machines, you provide a 9p interface.  you can think
> of 9p as a channel with a predefined set of messages.  acme
> does this.  kernel devices do this. 
> 
> however inferno provides file2chan
> http://www.vitanuova.com/inferno/man/2/sys-file2chan.html.
> of course, somebody has to provide the 9p interface, even
> if that's just posting a fd to /srv.
> 
> if you wanted to do something like file2chan in plan 9 and c, you're
> going to have to marshal your data.  this means that chanconnect
> as specified is impossible.  (acme avoids this by using text only.
> the kernel devices keep this under control by using a defined
> byte order and very few binary interfaces.)  i don't know how you would
> solve the naming problems e.g. acme would have using dial-string
> addressing  (i assume that's what you ment; host-port addressing is
> ip specific.).  what would be the port-numbering huristic for allowing
> multiple acmes on the same cpu server?
> 
> after whittling away problem cases, i think one is left with pipes,
> and it seems pretty clear how to connect things so that
> chan <-> pipe <-> chan.  one could generalize to multiple
> machines by using tools like cpu(1).

First of all, thanks for all the responses!

I should've mentioned this won't run on top of plan9 (or
Unix).  What I really want is alt!  I brought up RPC but
really, this is just Limbo's "chan of <type>" idea.  In the
concurrent application where I want to use this, it would
factor out a bunch of stuff and simplify communication.

The chanconnect("<host>:<port>") syntax was to get the idea
across.  A real building block might look something like
this:

int fd = <dial or accept a connection>;
Chan c = fd2chan(fd, <type spec>);

Ideally I'd generate relevant struct definitions and
marshalling functions from the same spec.

-- bakul



Re: [9fans] channels across machines

2009-07-18 Thread Bakul Shah
On Sat, 18 Jul 2009 10:20:11 PDT Skip Tavakkolian <9...@9netics.com>  wrote:
> > Or is there a better idea?  This certainly seems preferable
> > to RPC or plain byte pipes for communicating structured
> > values.
> 
> i have some incomplete ideas that are tangentially related to this --
> more for handling interfaces.
> 
> it seems one could write a compiler that translates an interface
> definition (e.g.  IDL) into a server and a client library; 9p(2) and
> ioproc(2) can be readily used in the generated code to do the tricky
> stuff.  the main part then becomes how to translate the calls across.

I did something like this (map a file of ascii struct defns
to C++ classes that included serialization/deserialization)
for the only company both of us have worked in!

These days I am likely to just use s-exprs if the choice is mine.
The joy of sexpr :-)

On Sat, 18 Jul 2009 14:32:30 CDT Eric Van Hensbergen   wrote:
> getting a pipe end to something somehow is why you really want to
> leverage the namespace as a 9p file system.  

Indeed but it is a separable concern (just like authentication).



Re: [9fans] Parallels Vesa driver question

2009-08-04 Thread Bakul Shah
On Mon, 03 Aug 2009 20:12:08 PDT David Leimbach   wrote:
> Wow  Where's parallels 4.  I doubt I qualify for a free one.  And VMWare
> Fusion really sucks with Plan 9 at the moment :-(

qemu works well enough for me on FreeBSD & Linux but not on a
Mac.  VirtualBox doesn't run plan9 but it runs FreeBSD, Linux
and Windows fairly well so maybe there is hope.  There is an
open source version of VirtualBox that might be worth
tinkering with.



Re: [9fans] Parallels Vesa driver question

2009-08-04 Thread Bakul Shah
On Tue, 04 Aug 2009 05:47:25 PDT David Leimbach   wrote:
> 
> On Tue, Aug 4, 2009 at 12:25 AM, Bakul Shah
> 
> > wrote:
> 
> > On Mon, 03 Aug 2009 20:12:08 PDT David Leimbach 
> >  wrote:
> > > Wow  Where's parallels 4.  I doubt I qualify for a free one.  And
> > VMWare
> > > Fusion really sucks with Plan 9 at the moment :-(
> >
> > qemu works well enough for me on FreeBSD & Linux but not on a
> > Mac.  VirtualBox doesn't run plan9 but it runs FreeBSD, Linux
> > and Windows fairly well so may be there is hope.  There is an
> > open source version of VirtualBox that might be worth
> > tinkering with.
> >
> > I was considering giving qemu a try on the mac. I believe there's a mac
> centric front-end for it even.

It's called Q. Don't bother.

> In fact, how much of virtualbox is using qemu?

I think vbox devices and recompiler are based on qemu but I
don't really know.  IIRC early qemu did seem to have similar
issues with plan9.

Since other OSes run pretty well, my guess is something plan9
depends on heavily has to be emulated (due to memory layout
assumptions or something).



Re: [9fans] Parallels Vesa driver question

2009-08-04 Thread Bakul Shah
On Tue, 04 Aug 2009 08:25:53 PDT David Leimbach   wrote:
> 
> On Tue, Aug 4, 2009 at 8:20 AM, Bakul Shah
> 
> > wrote:
...
> > I think vbox devices and recompiler are based on qemu but I
> > don't really know.  IIRC early qemu did seem to have similar
> > issues with plan9.
> >
> > Since other OSes run pretty well, my guess is something plan9
> > depends on heavily has to be emulated (due to memory layout
> > assumptions or something).
> >
> 
> I've also tried Minix and QNX and both have problems on Virtual Box.  Which
> other OSes did you try?  Linux and windows work but they're like necessary
> to even claim you can do anything virtualization wise.
> 
> Dave

Just FreeBSD, linux and Windows.

Anyway, a couple of areas to look into, if you want plan9 on
vbox: try changing the memory layout of plan9 or figure out
what qemu did to make plan9 run well and apply that change to
vbox.



Re: [9fans] Parallels Vesa driver question

2009-08-11 Thread Bakul Shah
On Tue, 04 Aug 2009 13:46:31 EDT erik quanstrom   wrote:
> > Anyway, a couple of areas to look into, if you want plan9 on
> > vbox: try changing the memory layout of plan9 or figure out
> > what qemu did to make plan9 run well and apply that change to
> > vbox.
> 
> what makes you think its a memory layout issue?

I can no longer remember but I think the following played some
part in thinking that.  Qemu internals document (on qemu.org):
For system emulation, QEMU uses the mmap() system call to
emulate the target CPU MMU. It works as long the emulated
OS does not use an area reserved by the host OS (such as
the area above 0xc0000000 on x86 Linux).
Elsewhere it says
Achieving self-virtualization is not easy because there
may be address space conflicts. QEMU solves this problem
by being an executable ELF shared object as the
ld-linux.so ELF interpreter. That way, it can be
relocated at load time.
It was a hypothesis and it could be all wet.



Re: [9fans] audio standards -- too many to choose from

2009-08-12 Thread Bakul Shah
On Wed, 12 Aug 2009 09:50:13 -1000 Tim Newsham   wrote:
> Still would love to hear if anyone knows the answer to these:
> 
> > - What software exists for each of these formats?

Are you asking about non p9 software? If so, have you looked
at SoX (Sound eXchange)? It is sort of like netpbm but for
audio formats. http://sox.sourceforge.net/



Re: [9fans] audio standards -- too many to choose from

2009-08-14 Thread Bakul Shah
On Sat, 15 Aug 2009 05:24:01 +0800 sqweek   wrote:
> On 12/08/2009, Tim Newsham  wrote:
> > Draw the line at what the hardware can be told to decode
> > with a flip of a register?  The driver interface can easily
> > accomodate arbitrary encoding names (see inferno's driver
> > for an example).
> 
>  One thing I haven't seen mentioned (perhaps because I misunderstand)
> is the added state that comes from allowing multiple formats.
> Actually, I think this problem exists with the current interface
> already - say I have a program playing music at 44kHz and then I get
> an email or something that I've set up to cause an aural alert, but
> the program that does that wants to work at 22kHz. This sort of
> interaction sounds bad enough to me, I don't even want to think about
> what it would sound like if the music was in mp3 and the alert changed
> the kernel format to wav.

The mixer would have to be smart.

>  I guess this is where mixerfs and the like enter the picture, but
> I've come to think of the kernel as a hardware multiplexer so I'm
> liking the idea of having the kernel support multiple formats.
>  Then again, I also respect the argument that the kernel is not a
> hardware multiplexer but a hardware abstractor - doesn't matter what's
> down there the kernel provides the same interface. From this point of
> view the simple approach is compelling, have /dev/audio exclusive
> write to avoid the above sort of interaction and mix in userspace (or
> is there still playback/recording sample rate interaction?). I think
> everyone agrees it doesn't make sense to have an mp3 decoder
> in-kernel...
> -sqweek

A decoder is just a resource (like a mouse, keyboard, disk,
network interface etc.).  If h/w provides an mp3 decoder it
would be nice to be able to use it.

One idea is to have a devicename for each format/samplerate.
Some would be backed by h/w, some provided solely by s/w.
Basically you need an ability to interpose some sort of
"stream" module to create a network.



Re: [9fans] Plan 9 via QEMU

2009-08-21 Thread Bakul Shah
On Fri, 21 Aug 2009 21:34:03 EDT Akshat Kumar   
wrote:
> If I start QEMU with the option to boot directly from
> the HD image, as opposed to booting from network,
> then it starts up fine - but then the kernel is different
> also. I don't know what part of this is really troublesome.
> Maybe the pcap device? I can't get into VNC if I start
> QEMU with -boot d and -net pcap,devicename=ath0 so
> -boot n remains my only options; but with this option
> we have the hung fossil... or something
> So many problems (due to so many limitations).

AFAIK, only one mac address is allowed with wifi (unlike
ethernet). Have you tried using NAT on the host?  When I used
to run qemu on a FreeBSD laptop with only a wifi link, this
is what worked best. I used something like this:

qemu -vnc :<n> -net tap -net nic,macaddr=<addr>,model=<model> -monitor stdio img

On the host I bridged all tap devices so that all the VMs
can talk to each other.

If you want any plan9 services accessible from outside the
qemu host, you will have to redirect any ports of interest.

[Note: FreeBSD runs on Acer Aspire One]



Re: [9fans] Interested in improving networking in Plan 9

2009-08-31 Thread Bakul Shah
On Mon, 31 Aug 2009 09:25:36 CDT Eric Van Hensbergen   wrote:
> 
> Why not have a synthetic file system interface to ndb that allows it
> to update its own files?  I think this is my primary problem.
> Granular modification to static files is a PITA to manage -- we should
> be using synthetic file system interfaces to to help manage and gate
> modifications.  Most of the services I have in mind may be transient
> and task specific, so there are elements of scope to consider and you
> may not want to write anything out to static storage.

ndb maps directly to a list of lisp's association lists but
how would you map this to a synthetic fs? Something like
ndb/<attr>/<value> to yield a tuple? For example:

% cat ndb/ip/198.41.0.4 # same as ndbquery ip 198.41.0.4
dom=A.ROOT-SERVERS.NET ip=198.41.0.4
% cat ndb/dom/A.ROOT-SERVERS.NET
dom=A.ROOT-SERVERS.NET ip=198.41.0.4

But this is nasty!
% cat ndb/dom/'' # same as ndbquery dom ''
dom= ns=A.ROOT-SERVERS.NET ns=B.ROOT-SERVERS.NET ns=C.ROOT-SERVERS.NET 
ns=D.ROOT-SERVERS.NET ns=E.ROOT-SERVERS.NET ns=F.ROOT-SERVERS.NET 
ns=G.ROOT-SERVERS.NET ns=H.ROOT-SERVERS.NET ns=I.ROOT-SERVERS.NET 
ns=J.ROOT-SERVERS.NET ns=K.ROOT-SERVERS.NET ns=L.ROOT-SERVERS.NET 
ns=M.ROOT-SERVERS.NET

And it is not clear how you would map
% ndbquery attr value rattr ...

Another alternative is to map each tuple to a directory:
% ls ndb/dom/A.ROOT-SERVERS.NET # just show the attributes!
dom ip

% grep '' ndb/dom/A.ROOT-SERVERS.NET/*
dom:A.ROOT-SERVERS.NET
ip:198.41.0.4

An intriguing idea that can point toward a synth fs interface
to a dbms or search results  But I don't think this would
be a lightweight interface.



Re: [9fans] scheme plan 9

2009-09-02 Thread Bakul Shah
On Wed, 02 Sep 2009 12:32:53 BST Eris Discordia   
wrote:
> Although, you may be better off reading SICP "as intended," and use MIT 
> Scheme on either Windows or a *NIX. The book (and the freaking language) is 
> already hard/unusual enough for one to not want to get confused by 
> implementation quirks. (Kill the paren!)

The second edition of SICP uses IEEE Scheme (basically R4RS
Scheme) and pretty much every Scheme implementation supports
R4RS -- s9fes from http://www.t3x.org/s9fes/ certainly
supports it.  It doesn't support rational or complex numbers
but as I recall no example in SICP relies on those.

Killing parens won't make you an adult :-)



Re: [9fans] Interested in improving networking in Plan 9

2009-09-02 Thread Bakul Shah
On Mon, 31 Aug 2009 11:33:13 CDT Eric Van Hensbergen   wrote:
> On Mon, Aug 31, 2009 at 11:16 AM, Bakul Shah wrote:
> >
> > An intriguing idea that can point toward a synth fs interface
> > to a dbms or search results But I don't think this would
> > be a lightweight interface.
> >
> 
> The fact that its not immediately clear is what makes it a good
> research topic.  It will likely take several iterations to identify a
> "best fit" interface with the likely result that multiple
> interfaces/views are required.  I think there are precious little
> information on synthetic file system design, given its continuing
> popularity in the non-Plan-9 world, we could use more published
> wisdom/experiences from the evolution of various file-system based
> interfaces.

Oh, I don't know.  Shoehorning a DB interface into an FS
interface doesn't feel right but stranger things have
happened.



Re: [9fans] "Blocks" in C

2009-09-02 Thread Bakul Shah
On Wed, 02 Sep 2009 08:20:52 PDT Roman V Shaposhnik   wrote:
> On Wed, 2009-09-02 at 10:04 +0200, Anant Narayanan wrote:
> > Mac OS 10.6 introduced a new C compiler frontend (clang), which added  
> > support for "blocks" in C [1]. Blocks basically add closures and  
> > anonymous functions to C (and it's derivatives).
> 
> They are NOT closures in my book. They lack lexical scoping. A true
> closure makes the following possible (using JavaScript to stay closer 
> to C syntax):
>function outer() {
>var outer_var = 1;
>return function () {
>   outer_var = { simply: "a different object" }
>}
>}

From reading the URL you cited, it seems you can't return a
block from a function but you can sort of achieve the same
effect by declaring a global `block variable' and assigning a
block to it -- now you can use this block var elsewhere.

void (^ugly)(void);

int outer() {
__block int outer_var = 1;
ugly = ^{ outer_var = 42; };
}

Presumably __block says outer_var is allocated on heap so now
it can live beyond the life of a particular invocation of
outer().  outer_var will have to be freed when ugly is
assigned to a different block. So it seems GC is a
requirement now.  The original C features were "just right"
to keep the compiler simple and still provide a lot of
expressive power. IMHO GC doesn't fit that model.

Because of the heap allocation, most likely you won't get a
proper closure.  For instance, in Scheme

(define (counter)
  (let ((x 0))
   (lambda () (set! x (+ x 1)) x)))

(define c1 (counter))
(c1) => 1
(c1) => 2
(define c2 (counter))
(c2) => 1
(c1) => 3
etc.

Thus on every invocation of the counter function you get a
fresh counter x and c1 and c2 increment their own copy of x
independently. With blocks you'd render the above as
something like:

typedef int (^ctr_t)(void);

void counter(ctr_t *c) {
__block int x = 0;
*c = ^{ return ++x; };
}

ctr_t c1, c2;

void foo()
{
counter(&c1);   // presumably taking address of a block var is allowed
printf("%d\n", c1());
printf("%d\n", c1());
counter(&c2);
printf("%d\n", c2());
printf("%d\n", c1());
}

I bet you'd get 1 2 3 4, and not 1 2 1 3. If they do get this
right, they'd have to allocate a fresh x on every invocation
of counter and this will add to the memory garbage. Now their
GC problem is even worse!  And I haven't even used
concurrency so far!

They very carefully delineate what you can and can not do
with lexically scoped variables etc. but the model is far
more complex.
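For comparison, a language with real closures does give the 1 2 1 3
sequence; a sketch in Go:

```go
package main

import "fmt"

// counter returns a closure over a fresh x, like the Scheme version:
// each call to counter gets its own heap-allocated x.
func counter() func() int {
	x := 0
	return func() int {
		x++
		return x
	}
}

func main() {
	c1 := counter()
	c2 := counter()
	fmt.Println(c1(), c1(), c2(), c1()) // 1 2 1 3: independent copies of x
}
```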

> > How much effort would it be to support a feature similar to blocks in  
> > 8c (and family)? What are your thoughts on the idea in general?
> 
> Personally I think you'd be better off exploring a connection that a 
> language called Lua has to C. In the immortal words of Casablanca it
> just could be "the begging of a beautiful friendship".

Amen! Or kill one's prejudice against parens (talk to your
parens!) and use Scheme which is about as simple as it can
get.



Re: [9fans] scheme plan 9

2009-09-03 Thread Bakul Shah
On Thu, 03 Sep 2009 07:29:53 BST Eris Discordia   
wrote:
> 
> I mean, I never got past SICP Chapter 1 because that first chapter got me 
> asking, "why this much hassle?"

Maybe you had an impedance mismatch with SICP?

> P.S. I'm leaving. You may now remove your 
> arts-and-letters-cootie-protection suits and go back to normal tech-savvy 
> attire ;-)

This may not be your cup of tea or be artsy enough for you
but check out what happens when tech meets arts:

http://impromptu.moso.com.au/gallery.html

Start the first video; maybe skip the first 3 minutes or
so but after that stay with it for a few minutes.  The author
is creating music by *coding* in real time (and doing a great
job!).  He uses Impromptu, a Scheme programming environment,
that supports realtime scheduling and low level sound
synthesis. Given Scheme one can then build arbitrarily
complex signal processing graphs.

For some subset of people this sort of thing just might be a
better introduction to programming than SICP. Basically
anything that allows them to do fun things with programming
and leaves them wanting more.

BTW, you too can download impromptu on OS X and synthesise
your own noize!



Re: [9fans] "Blocks" in C

2009-09-03 Thread Bakul Shah
On Fri, 04 Sep 2009 00:44:35 EDT erik quanstrom   wrote:
> > > that sucker is on the stack.  by-by no-execute stack.

I don't think so. See below.

> > > how does it get to the stack?  is it just copied from
> > > the text segment or is it compiled at run time?
> > >
> > 
> > I don't think I posted the whole code, so that's my bad.  The X was on the
> > stack to begin with as the first X was an automatic variable in a function.
> >  I'd be a little surprised to find an automatic variable in the text
> > segment, but perhaps that's just my not remembering things properly.
> >  (didn't mean that tongue in cheek, I don't think about that stuff much
> > these days, as I've spent the last year or so doing Erlang and Haskell.)
> 
> it is the block itself that apple claims is on the stacp
> (your grand centeral reference, p. 38).  and i wonder
> how it gets there.  is it just copied from the text segment?
> that seems kind of pointless.  why not just execute it
> from the text segment?  or is it modified (compiled?)
> at run time?

[Note: I am simply guessing and have no idea how they
 actually do this but this model seems workable]

Consider this example:

int foo(int a) {
__block int b = 0;
int (^g)(void) = ^{ return a + ++b; };
...
return g() + g();
}


My guess is the above will be translated to something like this:

struct anon1 {
int (*f)(struct anon1*);
const int   a;
int *const  bp;
};
int anon1_f(struct anon1* x) {
return x->a + ++(*x->bp);
}
int foo(int a) {
int *bp = malloc(sizeof *bp); // not quite. see the correction below
*bp = 0;
struct anon1 _anon = { &anon1_f, a, bp };
struct anon1* g = &_anon;
...
return g->f(&_anon) + g->f(&_anon);
}

As you can see, _anon will disappear when foo() is exited.
But if you were to Block_copy() _anon, it will be allocated
on the heap and a ptr to it returned. Now you do have
everything needed to call this anon function even after
returning from foo().  &_anon can also be passed to another
thread etc. with no problem.

Most likely __block variables are allocated on the heap and
ref counted.  Ref count is decremented on exit from the
lexical scope where a __block var is defined.  Block_copy()
increments refcount of every __block var referenced by the
block, Block_release() decrements it.

So this is basically a function closure. They seem to have
very carefully navigated around C's semantic sand bars.



Re: [9fans] "Blocks" in C

2009-09-04 Thread Bakul Shah
On Thu, 03 Sep 2009 22:35:35 PDT David Leimbach   wrote:
> 
> 
> Actually, reading on a bit more they deal with the "variable capture"
> talking about const copies.
> 
> Automatic storage variables not marked with __block are imported as
> const copies.
> 
> The simplest example is that of importing a variable of type int.
> 
>int x = 10;
>void (^vv)(void) = ^{ printf("x is %d\n", x); }
>x = 11;
>vv();
> 
> would be compiled
> 
> struct __block_literal_2 {
> void *isa;
> int flags;
> int reserved;
> void (*invoke)(struct __block_literal_2 *);
> struct __block_descriptor_2 *descriptor;
> const int x;
> };
> 
> void __block_invoke_2(struct __block_literal_2 *_block) {
> printf("x is %d\n", _block->x);
> }
> 
> static struct __block_descriptor_2 {
> unsigned long int reserved;
> unsigned long int Block_size;
> } __block_descriptor_2 = { 0, sizeof(struct __block_literal_2) };
> 
> and
> 
>   struct __block_literal_2 __block_literal_2 = {
>   &_NSConcreteStackBlock,
>   (1<<29), ,
>   __block_invoke_2,
>   &__block_descriptor_2,
> x
>};
> 
> In summary, scalars, structures, unions, and function pointers are
> generally imported as const copies with no need for helper functions.

Just read this after posting my last message.

But this has no more to do with parallelism than any other
feature of C. If you used __block vars in a block, you'd
still need to lock them when the block is called from
different threads.



Re: [9fans] "Blocks" in C

2009-09-04 Thread Bakul Shah
On Fri, 04 Sep 2009 00:47:18 PDT David Leimbach   wrote:
> On Fri, Sep 4, 2009 at 12:11 AM, Bakul Shah
> 
> > wrote:
> 
> > But this has no more to do with parallelism than any other
> > feature of C. If you used __block vars in a block, you'd
> > still need to lock them when the block is called from
> > different threads.
> >
> I just wrote a prime sieve with terrible shutdown synchronization you can
> look at here:
> 
> http://paste.lisp.org/display/86549

Not sure how your program invalidates what I said.  Blocks do
provide more syntactic sugar but that "benefit" is independent
of GCD (grand central dispatch) or what have you. Given that
__block vars are shared, I don't see how you can avoid locking
if blocks get used in parallel.



Re: [9fans] "Blocks" in C

2009-09-04 Thread Bakul Shah
On Fri, 04 Sep 2009 08:04:40 PDT David Leimbach   wrote:
> On Fri, Sep 4, 2009 at 7:41 AM, Bakul Shah
> 
> > wrote:
> 
> > On Fri, 04 Sep 2009 00:47:18 PDT David Leimbach 
> >  wrote:
> > > On Fri, Sep 4, 2009 at 12:11 AM, Bakul Shah
> > > <
> > bakul%2bpl...@bitblocks.com >
> > > > wrote:
> > >
> > > > But this has no more to do with parallelism than any other
> > > > feature of C. If you used __block vars in a block, you'd
> > > > still need to lock them when the block is called from
> > > > different threads.
> > > >
> > > I just wrote a prime sieve with terrible shutdown synchronization you can
> > > look at here:
> > >
> > > http://paste.lisp.org/display/86549
> >
> > Not sure how your program invalidates what I said.  Blocks do
> > provide more syntactic sugar but that "benefit" is independent
> > of GCD (grand central dispatch) or what have you. Given that
> > __block vars are shared, I don't see how you can avoid locking
> > if blocks get used in parallel.
> >

To be precise, I meant to write "avoid locking if blocks get
executed in parallel and access a __block variable".

> You've said it yourself.  "if blocks get used in parallel".  If the blocks
> are scheduled to the same non-concurrent queue, there shouldn't be a
> problem, unless you've got blocks scheduled and running on multiple serial
> queues.   There are 3 concurrent queues, each with different priorities in
> GCD, and you can't create any more concurrent queues to the best of my
> knowledge, the rest are serial queues, and they schedule blocks in FIFO
> order.
>
> Given that you can arrange your code such that no two blocks sharing the
> same state can execute at the same time now, why would you lock it?

Consider this example:

int
main(int c, char**v)
{
int n = c > 1? atoi(v[1]) : 1000;
__block int x;
x = 0;
parallel_execute(n, ^{ for (int i = 0; i < n; i++) ++x; });
while (x != n*n)
sleep(1);
}

Where parallel_execute() spawns off n copies of the block and
tries to execute as many of them in parallel as possible.
Presumably this is implementable?  Will this program ever
terminate (for any value of n up to 2^15-1)?  How would you
avoid sharing here except by turning parallel_execute() into
serial_execute()?
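To make the hazard concrete, here is the same shape in Go (a sketch,
not GCD): the shared counter reliably reaches n*n only because the
increment is atomic; a plain ++x would race.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// parallelExecute runs n copies of the loop concurrently; the shared
// counter must be updated atomically or the final total is undefined.
func parallelExecute(n int) int64 {
	var x int64
	var wg sync.WaitGroup
	for g := 0; g < n; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < n; i++ {
				atomic.AddInt64(&x, 1) // plain x++ here would lose updates
			}
		}()
	}
	wg.Wait()
	return x
}

func main() {
	n := 1000
	fmt.Println(parallelExecute(n) == int64(n*n)) // true, thanks to atomic.AddInt64
}
```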

> I should note that for some reason my code falls apart in terms of actually
> working as I expected it after MAX is set to something over 700, so I'm
> probably *still* not doing something correctly, or I did something Apple
> didn't expect.

:-)



Re: [9fans] "Blocks" in C

2009-09-08 Thread Bakul Shah
On Tue, 08 Sep 2009 08:31:28 PDT David Leimbach   wrote:
> 
> Having wrestled with this stuff a little bit, and written "something".  I
> can immediately see how one can get away from needing to "select" in code so
> much, and fire off blocks to handle client server interactions etc.  It's
> kind of neat.

alt(3) is a nicer way to avoid select().

I still say CSP is the way to go. In plan9/limbo channels
work across coroutines in one process. Seems to me extending
channels to work across preemptive threads (running on
multiple cores) or across processes or machines is might lead
to a more elegant and no less performant model.  It seems
to be a more natural model when you have zillions of
processors on a chip (like TileraPro64, with zillion = 64).
They can't all go to shared external memory without paying a
substantial cost but neighbor to neighbor communication is
far faster (tilera claims 37Tbps onchip interconnect b/w and
50Gbps of I/O bw).

It is nice that a Apple C block treats all non local
variables (except __block ones) as read only variables.  But
every time I look at blocks I see new problems. What if a
block calls a function that modifies a global like in the
example below? If this works, what is the point of treating
globals as readonly? If this doesn't work, how do you ensure
trash_x() causes a seg fault, particularly when it is defined
in another file?

int x;

void trash_x() { x = -42; }

... ^{ trash_x(); } ...

My view: if you can't solve a problem cleanly and in a
general way with a feature, it does not belong in a language
(but may belong in a library).



Re: [9fans] Petabytes on a budget: JBODs + Linux + JFS

2009-09-20 Thread Bakul Shah
On Mon, 14 Sep 2009 12:43:42 EDT erik quanstrom   wrote:
> > I am going to try my hands at beating a dead horse:)
> > So when you create a Venti volume, it basically writes '0's' to all the 
> > blocks of the underlying device right?  If I put a venti volume on a AoE 
> > device which is a linux raid5, using normal desktop sata drives, what 
> > are my chances of a successful completion of the venti formating (let's 
> > say 1TB raw size)?
> 
> drive mfgrs don't report write error rates.  i would consider any
> drive with write errors to be dead as fried chicken.  a more
> interesting question is what is the chance you can read the
> written data back correctly.  in that case with desktop drives,
> you have a
>   8 bits/byte * 1e12 bytes / 1e14 bits/ure = 8%

Isn't that the probability of getting a bad sector when you
read a terabyte? In other words, this is not related to the
disk size but how much you read from the given disk. Granted
that when you "resilver" you have no choice but to read the
entire disk and that is why just one redundant disk is not
good enough for TB size disks (if you lose a disk there is 8%
chance you copied a bad block in resilvering a mirror).

> i'm a little to lazy to calcuate what the probabilty is that
> another sector in the row is also bad.  (this depends on
> stripe size, the number of disks in the raid, etc.)  but it's
> safe to say that it's pretty small.  for a 3 disk raid 5 with
> 64k stripes it would be something like
>   8 bites/byte * 64k *3 / 1e14 = 1e-8

The read error prob. for a 64K byte stripe is 3*2^19/10^14 ~=
3*0.5E-8, since three 64k byte blocks have to be read.  The
unrecoverable case is two of them being bad at the same time.
The prob. of this is 3*0.25E-16 (not sure I did this right --
we have to consider the exact same sector # going bad in two
of the three disks and there are three such pairs).



Re: [9fans] Petabytes on a budget: JBODs + Linux + JFS

2009-09-21 Thread Bakul Shah
> > >   8 bits/byte * 1e12 bytes / 1e14 bits/ure = 8%
> > 
> > Isn't that the probability of getting a bad sector when you
> > read a terabyte? In other words, this is not related to the
> > disk size but how much you read from the given disk. Granted
> > that when you "resilver" you have no choice but to read the
> > entire disk and that is why just one redundant disk is not
> > good enough for TB size disks (if you lose a disk there is 8%
> > chance you copied a bad block in resilvering a mirror).
> 
> see below.  i think you're confusing a single disk 8% chance
> of failure with a 3 disk tb array, with a 1e-7% chance of failure.

I was talking about the case where you replace a disk in a
mirror. To rebuild the mirror the new disk has to be
initialized from the remaining "good" disk (and there is no
redundancy left) so you have to read the whole disk. This
implies 8% chance of a bad sector. The situation is worse in
an N+1 disk RAID-5 when you lose a disk. Now you have N*8%
chance of a bad sector. And of course in real life things are
worse because usually disks in a cheap array don't have
independent power supplies (and the shared one can be
underpowered or under-regulated).

> i would think this is acceptable.  at these low levels, something
> else is going to get you — like drives failing unindependently.
> say because of power problems.

8% rate for an array rebuild may or may not be acceptable
depending on your application.

> > > i'm a little to lazy to calcuate what the probabilty is that
> > > another sector in the row is also bad.  (this depends on
> > > stripe size, the number of disks in the raid, etc.)  but it's
> > > safe to say that it's pretty small.  for a 3 disk raid 5 with
> > > 64k stripes it would be something like
> > >   8 bites/byte * 64k *3 / 1e14 = 1e-8
> > 
> > The read error prob. for a 64K byte stripe is 3*2^19/10^14 ~=
> > 3*0.5E-8, since three 64k byte blocks have to be read.  The
> > unrecoverable case is two of them being bad at the same time.
> > The prob. of this is 3*0.25E-16 (not sure I did this right --
> 
> thanks for noticing that.  i think i didn't explain myself well
> i was calculating the rough probability of a ure in reading the
> *whole array*, not just one stripe.
> 
> to do this more methodicly using your method, we need
> to count up all the possible ways of getting a double fail
> with 3 disks and multiply by the probability of getting that
> sort of failure and then add 'em up.  if 0 is ok and 1 is fail,
> then i think there are these cases:
>
> 0 0 0
> 1 0 0
> 0 1 0
> 0 0 1
> 1 1 0
> 1 0 1
> 0 1 1
> 1 1 1
> 
> so there are 4 ways to fail.  3 double fail have a probability of
> 3*(2^9 bits * 1e-14 1/ bit)^2

Why 2^9 bits? A sector is 2^9 bytes or 2^12 bits. Note that
there is no recovery possible for fewer bits than a sector.

> and the triple fail has a probability of
> (2^9 bits * 1e-14 1/ bit)^3
> so we have
> 3*(2^9 bits * 1e-14 1/ bit)^2 + (2^9 bits * 1e-14 1/ bit)^3 ~=
>   3*(2^9 bits * 1e-14 1/ bit)^2
>   = 8.24633720832e-17

3*(2^12 bits * 1e-14 1/ bit)^2 + (2^12 bits * 1e-14 1/ bit)^3 ~=
3*(2^12 bits * 1e-14 1/ bit)^2
~= 3*(1e-11)^2 = 3E-22

If per sector recovery is done, you have
3E-22*(64K/512) = 3.84E-20

> that's per stripe.  if we multiply by 1e12/(64*1024) stripes/array,
>
> we have
>   = 1.2582912e-09

For the whole 2TB array you have

3E-22*(10^12/512) ~= 6E-13

> which is remarkably close to my lousy first guess.  so we went
> from 8e-2 to 1e-9 for an improvement of 7 orders of magnitude.
> 
> > we have to consider the exact same sector # going bad in two
> > of the three disks and there are three such pairs).
> 
> the exact sector doesn't matter.  i don't know any
> implementations that try to do partial stripe recovery.

If by partial stripe recovery you mean 2 of the stripes must
be entirely error free to recreate the third, your logic
seems wrong even after we replace 2^9 with 2^12 bits.

When you have only stripe level recovery you will throw away
whole stripes even where sector level recovery would've
worked.  If, for example, each stripe has two sectors on a 3
disk raid5, and sector 0 of disk 0's stripe and sector 1 of
disk 1's stripe are bad, sector level recovery would work
but stripe level recovery would fail.  For 2 sector stripes
and 3 disks you have 64 possible outcomes, out of which 48
result in bad data for sector level recovery and 54 for
stripe level recovery (see below).  And it will get worse
with larger stripes.  [by bad data I mean we throw away the
whole stripe even if one sector can't be recovered]

I did some googling but didn't discover anything that does a
proper statistical analysis.

-
disk0 stripe sectors (1 means read failed)
|  disk1 stripe sectors
|  |  disk2 stripe sectors
|  |  |  sector level recovery possible? (if so, the stripe can be recovered)
|  |  |  | stripe level recovery possible?
|  |  |  | |
00 00 0

Re: [9fans] Petabytes on a budget: JBODs + Linux + JFS

2009-09-21 Thread Bakul Shah
On Mon, 21 Sep 2009 14:02:40 EDT erik quanstrom   wrote:
> > > i would think this is acceptable.  at these low levels, something
> > > else is going to get you -- like drives failing unindependently.
> > > say because of power problems.
> > 
> > 8% rate for an array rebuild may or may not be acceptable
> > depending on your application.
> 
> i think the lesson here is don't by cheep drives; if you
> have enterprise drives at 1e-15 error rate, the fail rate
> will be 0.8%.  of course if you don't have a raid, the fail
> rate is 100%.
>
> if that's not acceptable, then use raid 6.

Hopefully Raid 6 or zfs's raidz2 works well enough with cheap
drives!

> > > so there are 4 ways to fail.  3 double fail have a probability of
> > > 3*(2^9 bits * 1e-14 1/ bit)^2
> > 
> > Why 2^9 bits? A sector is 2^9 bytes or 2^12 bits.
>
> 
> cut-and-paste error.  sorry that was 2^19 bits, e.g. 64k*8 bits/byte.
> the calculation is still correct, since it was done on that basis.

Ok.

> > If per sector recovery is done, you have
> > 3E-22*(64K/512) = 3.84E-20
> 
> i'd be interested to know if anyone does this.  it's not
> as easy as it would first appear.  do you know of any
> hardware or software that does sector-level recovery?

No idea -- I haven't really looked in this area in ages.  In
case of two stripes being bad it would make sense to me to
reread a stripe one sector at a time since chances of the
exact same sector being bad on two disks is much lower (about
2^14 times smaller for 64k stripes?).  I don't know if disk
drives return an error bit array along with the data of a
multisector read (nth bit is set if nth sector could not be
recovered).  If not, that would be a worthwhile addition.

> i don't have enough data to know how likely it is to
> have exactly 1 bad sector.  any references?

Not sure what you are asking.  Reed-Solomon codes are block codes,
applied to a whole sector, so the per sector error rate is
UER*512*8 where UER == uncorrectable error rate. [Early IDE
disks had 4 byte ECC per sector.  Now that bits are packed so
tight, S/N ratio is far worse and ECC is at least 40 bytes,
to keep UER to 1E-14 or whatever is the target].



Re: [9fans] Petabytes on a budget: JBODs + Linux + JFS

2009-09-21 Thread Bakul Shah
On Mon, 21 Sep 2009 16:30:25 EDT erik quanstrom   wrote:
> > > i think the lesson here is don't by cheep drives; if you
> > > have enterprise drives at 1e-15 error rate, the fail rate
> > > will be 0.8%.  of course if you don't have a raid, the fail
> > > rate is 100%.
> > >
> > > if that's not acceptable, then use raid 6.
> > 
> > Hopefully Raid 6 or zfs's raidz2 works well enough with cheap
> > drives!
> 
> don't hope.  do the calculations.  or simulate it.

The "hopefully" part was due to power supplies, fans, mobos.
I can't get hold of their reliability data (not that I have
tried very hard).  Ignoring that, raidz2 (+ venti) is good
enough for my use.

> this is a pain in the neck as it's a function of ber,
> mtbf, rebuild window and number of drives.
> 
> i found that not having a hot spare can increase
> your chances of a double failure by an order of
> magnitude.  the birthday paradox never ceases to
> amaze.

I plan to replace one disk every 6 to 9 months or so. In a
3+2 raidz2 array disks will be swapped out in 2.5 to 3.75
years in the worst case.  What I haven't found is a decent,
no frills, sata/e-sata enclosure for a home system.



Re: [9fans] zero length arrays in gcc

2009-09-22 Thread Bakul Shah
On Wed, 23 Sep 2009 02:49:44 +0800 Fernan Bolando   
wrote:
> Hi all
> 
> nhc98 uses a few of
> 
> static unsigned startLabel[]={};
> 
> which is a zero length array. It appears that it uses this as
> reference to calculate the correct pointer for a bytecode.
> 
> pcc does not allow this since zero lenth array is another gcc
> extension. I tried declaring it as
> 
> static unsigned startLabel[];
> 
> The resulting bytecode can then be compiled however it will only
> crash. I traced it a pointer that tries to read an unallocated section
> in memory.
> 
> Is it possible emulate zero pointer in pcc??

Would something like this work?

unsigned foo[1];
#define startLabel (&foo[1])



Re: [9fans] mishandling empty lists - let's fix it

2009-10-03 Thread Bakul Shah
On Sun, 04 Oct 2009 03:03:27 +1100 Sam Watkins   wrote:
> 
>   find -name '*.c' | xargs cat | cc - # this clever cc can handle it :)
> 
> This program works fine until there are no .c files to be found, in that case
> it hangs, waiting for one on stdin!  This is a hazard to shell scripters, and
> a potential source of security holes.

Your example doesn't hang (and if it does, your xargs is
broken).  You are thinking of something like this:

$ echo 'cat $*' > foo.sh
$ sh foo.sh

This is not elegant but a reasonable tradeoff.  A common use
of many tools is in a pipeline and having to type - every
time can get annoying. 

To "fix" this you may think of changing your shell to append
/dev/null if a command is given no arguments but that will
fail in cases like `cat -v'. In unix it is up to each command
to interpret its arguments and a shell cannot assume
anything about command arguments.



Re: [9fans] Barrelfish

2009-10-17 Thread Bakul Shah
On Sun, 18 Oct 2009 01:15:45 - Roman Shaposhnik   
wrote:
> On Thu, Oct 15, 2009 at 10:53 AM, Sam Watkins  wrote:
> > On Wed, Oct 14, 2009 at 06:50:28PM -0700, Roman Shaposhnik wrote:
> >> > The mention that "... the overhead of cache coherence restricts the ability
> >> > to scale up to even 80 cores" is also eye openeing. If we're at aprox 8
> >> > cores today, thats only 5 yrs away (if we double cores every
> >> > 1.5 yrs).
> >
> > Sharing the memory between processes is a stupid approach to multi-processing /
> > multi-threading.  Modern popular computer architecture and software design is
> > fairly much uniformly stupid.

> It is. But what's your proposal on code sharing? All those PC
> registers belonging to
> different cores have to point somewhere. Is that somewhere is not shared memory
> the code has to be put there for every single core, right?

Different technologies/techniques make sense at different
levels of scaling and at different points in time so sharing
memory is not necessarily stupid -- unless one thinks that
any compromise (to produce usable solutions in a realistic
time frame) is stupid.

At the hardware level we do have message passing between a
processor and the memory controller -- this is exactly the
same as talking to a shared server and has the same issues of
scaling etc. If you have very few clients, a single shared
server is indeed a cost effective solution.

When you absolutely have to share state, somebody has to
mediate access to the shared state and you can't get around
the fact that it's going to cost you.  But if you know
something about the patterns of sharing, you can get away
from a single shared memory & increase concurrency.  A simple
example is a h/w fifo (to connect producer/consumer but you
also gave up some flexibility).

As the number of processors increases on a device, sharing
state between neighbors will be increasingly cheaper compared
any global sharing. Even if you use message passing, messages
between near neighbors will be far cheaper than between
processors in different neighborhoods. So switching to
message passing is not going to fix things; you have to worry
about placement as well (just like in h/w design).



Re: [9fans] Barrelfish

2009-10-18 Thread Bakul Shah
On Sun, 18 Oct 2009 06:22:33 PDT Roman Shaposhnik   wrote:
> On Sun, Oct 18, 2009 at 6:06 AM, Roman Shaposhnik  wrote
> >> It is. But what's your proposal on code sharing? All those PC
> >> registers belonging to
> >> different cores have to point somewhere. Is that somewhere is not shared memory
> >> the code has to be put there for every single core, right?
> >
> > At the hardware level we do have message passing between a
> > processor and the memory controller -- this is exactly the
> > same as talking to a shared server and has the same issues of
> > scaling etc. If you have very few clients, a single shared
> > server is indeed a cost effective solution.
> 
> I guess I'm not following. My question to OP was strictly about
> code sharing. Basically were do the cores get instructions from
> if not from shared memory.

Sorry, I should've done a better job of editing.  I was
really responding to the OP's point that sharing memory
between processes is a stupid approach. My point was that
"sharing memory" is just a low level programming interface
(implemented by message passing in h/w) and it makes sense at
some scale. 



Re: [9fans] bison problem, not plan9 related

2009-10-21 Thread Bakul Shah
Is this what you are trying to do?

$ cat b.y <<'EOF'
%token ATOM REP
%%
blocks: block | block blocks;
block: ATOM | REP block | '[' blocks ']';
%%
EOF
$ bison b.y
$

The conflicts in your version come from the rule "block: block
block": a sequence like "ATOM ATOM ATOM" can then be grouped in
more than one way.  Factoring repetition out into a separate
right recursive "blocks" rule removes the ambiguity.

On Wed, 21 Oct 2009 19:52:41 +0200 Rudolf Sykora   
wrote:
> Hello,
> sorry for an off-topic thing. But I guess somebody here could help me...
> I have a problem with bison grammer
> 
> Having
> 
> %token ATOM
> %left '+'
> %left REP
> 
> and a grammar:
> 
> block:  ATOM
>   | REP block
>   | block '+' block
> ;
> 
> is ok. Having another grammer:
> 
> block:  ATOM
>   | REP block
>   | block block %prec '+'
> ;
> 
> has 2 shift/reduce conflicts, similar to
> state 7
> 
> 5 block: REP block .
> 6  | block . block
> 
> ATOM  shift, and go to state 3
> 
> ATOM  [reduce using rule 5 (block)]
> $default  reduce using rule 5 (block)
> 
> block  go to state 9
> 
> or
> state 9
> 
> 6 block: block . block
> 6  | block block .
> 
> ATOM  shift, and go to state 3
> REP   shift, and go to state 4
> 
> ATOM  [reduce using rule 6 (block)]
> $default  reduce using rule 6 (block)
> 
> block  go to state 9
> 
> What I want is to have a parser that can read e.g. (the spaces are
> left out by lex, they are not in what bison sees; I only write them
> here for better readability)
> 12 Au 13 Cu 2 Ag
> the former grammer (REP is for repetition) is able to read
> 12 Au + 13 Cu + 2 Ag
> but I don't like those pluses, which are redundant.
> 
> Also important: I have those 'block' non-terminals there, since I want
> to add another rule
> block: '[' block ']'
> so that I can use brackets and can parse things like
> 12 [ 2 Cu 3 Co]
> 
> Could anyone explain to me what goes wrong?
> I can't figure it out...
> 
> Thanks a lot
> Ruda
> 
> PS.: the grammer is actually identical to a grammer that can evaluate
> expressions with +, *, and brackets, with usual operator precedence.
> 



Re: [9fans] ideas for helpful system io functions

2009-12-05 Thread Bakul Shah
On Sat, 05 Dec 2009 08:24:45 -1000 Tim Newsham   wrote:
> >> I can see two possible solutions for this, both of which would be useful in
> >> my
> >> opinion:
> >>
> >>  - an "unread" function, like ungetc, which allows a program to put back 
> >> some
> >>data that was already read to the OS stdin buffer (not the stdio 
> >> buffer).
> >>This might be problematic if there is a limit to the size of the 
> >> buffers.
> >
> > Wouldn't it be a lot easier to change the convention of the
> > program you're forking and execing to take 1) a buffer of data
> > (passed via cmd line, or fd, or whatever) and 2) the fd with
> > the unconsumed part of the data?  The only data that would have
> > to be copied would be the preconsumed data that you would have
> > wanted to "unget".
> 
> ps. if you wanted to hide this ugliness of passing a buffer and
> fd to a child process instead of just passing an fd, you could
> still solve it in userland without a syscall.  Write a library
> that does buffered IO.  Include unget() if you like.  Write the
> library in a way that you can initialize it after a fork/exec
> to pick up state from the parent (ie. by taking two fds,
> reading the buffer from the first, and continuing on with the
> 2nd when it is exhausted).
> 
> Is there much benefit in doing this in the kernel instead?

Some OS support will help... but first let me provide some
motivation!

A useful abstraction for this sort of thing is "streams" as
in functional programming languages, where the tail of a
stream is computed as needed and the computed prefix of the
stream can be reread as many times as you wish (stuff no one
can reference any more will be garbage collected).  So for
example, if I define a "primes" stream, I can do

100 `take` primes

in Haskell any number of times and always get the first 100
primes. If I wanted to pass entire primes stream *minus* the
first 100 to a function, I'd use "100 `drop` primes" to get
a new stream.

In the example given you'd represent your http data as a
stream (its tail is "computed" as you read from the
socket/fd), do any preprocessing you want and then pass the
whole stream on.  Data already read is buffered and you can
reread it from the stream.

Now unix/plan9 sort of do this for files but not when an fd
refers to a fifo of some sort. For an open file, after a fork
both the parent and the child start off at the same place in
the file but then they can read at different rates. But io to
fifos/sockets don't share this behavior.

The OS support I am talking about:
a) the fork behavior on an open file should be available
   *without* forking.  dup() doesn't cut it (both fds share
   the same offset on the underlying file). I'd call the new
   syscall fdfork().  That is, if I do

   int newfd = fdfork(oldfd);

   reading N bytes each from newfd and oldfd will return
   identical data.

b) there should be a way to implement the same semantics for
   fifos or communication end points (or any synthetic file).
   In the above example same N bytes must be returned even if
   the underlying object is not a file.

c) there should be a way to pass the fd (really, a capability)
   to another process.

Given these, what the OP wants can be implemented cleanly.
You fdfork() first, do all your analysis using one fd, close
it and then pass on the other fd to a helper process.

Implementing b) ideally requires the OS to store potentially
arbitrary amount of data.  But an implementation must set
some practical limit (like that on fifo buffering).



Re: [9fans] ideas for helpful system io functions

2009-12-05 Thread Bakul Shah
On Sat, 05 Dec 2009 15:03:44 EST erik quanstrom   wrote:
> > The OS support I am talking about:
> > a) the fork behavior on an open file should be available
> >*without* forking.  dup() doesn't cut it (both fds share
> >the same offset on the underlying file). I'd call the new
> >syscall fdfork().  That is, if I do
> > 
> >int newfd = fdfork(oldfd);
> > 
> >reading N bytes each from newfd and oldfd will return
> >identical data.
> 
> i can't think of a way to do this correctly.  buffering in the
> kernel would only work if each process issued exactly the
> same set of reads.  there is no requirement that the data
> from 2 reads of 100 bytes each be the same as the data
> return with 1 200 byte read.

To be precise, both fds have their own pointer (or offset)
and reading N bytes from some offset O must return the same
bytes.  The semantics I'd choose: the first read gets buffered
and reads get satisfied first from buffered data and only
then from the underlying object. Same with writes.  They are
"write through".  If synthetic files do weird things at
different offsets or for different read/write counts, I'd
consider them uncacheable (and you shouldn't use fdfork with
them).  For disk based files and fifos there should be no
problem.

Note that Haskell streams are basically cacheable!

> before you bother with "but that's a wierd case", remember
> that the success of unix and plan 9 has been built on the
> fact that there aren't syscalls that fail in "wierd" cases.

I completely agree. But hey, I just came up with the idea and
haven't worked out all the design bugs (and may never)!  It
seemed worth sharing to elicit exactly the kind of feedback
you are giving.



Re: [9fans] ideas for helpful system io functions

2009-12-05 Thread Bakul Shah
On Sat, 05 Dec 2009 15:27:02 EST erik quanstrom   wrote:
> > To be precise, both fds have their own pointer (or offset)
> > and reading N bytes from some offset O must return the same
> > bytes.
> 
> wrong.  /dev/random is my example.

You cut out the bit about buffering where I explained what I
meant.  As I said, those are the semantics I would choose so
by definition it is not "wrong"! Though it may not do what
you expect.  As a matter of fact I do see a use case for
/dev/random for getting repeatable random numbers! If you
want an independent stream of random numbers, just open
/dev/random again (or dup()), and not use fdfork().



Re: [9fans] du and find

2010-01-02 Thread Bakul Shah
On Sat, 02 Jan 2010 14:47:26 EST erik quanstrom   wrote:
> 
> my beef with xargs is only that it is used as an excuse
> for not fixing exec in unix.  it's also used to bolster the
> "that's a rare case" argument.

I often do something like the following:

  find . -type f  | xargs grep -l  | xargs 

If by "fixing exec in unix" you mean allowing something like

   $(grep -l  $(find . -type f ))

then  would take far too long to even get started.
And can eat up a lot of memory or even run out of it.  On a
2+ year old MacBookPro "find -x /" takes 4.5 minutes for 1.6M
files and 155MB to hold paths.  My 11 year old machine has 64MB
and over a million files on a rather slow disk. Your solution
would run out of space on it.  Now granted I should update it
to a more balanced system but mechanisms should continue
working even if one doesn't have an optimal system.  At least
xargs gives me that choice.

Basically this is just streams programming for arguments
instead of data. Ideally all the args would be taken from a
stream (and specifying args on a command line would be just a
convenience) but it is too late for that.  Often unix
commands have a -r option to walk a file tree but it would've
been nicer to have the tree walk factored out. Then you can
do things like breadth first walk etc. and have everyone
benefit.



Re: [9fans] du and find

2010-01-02 Thread Bakul Shah
On Sat, 02 Jan 2010 20:49:39 EST erik quanstrom   wrote:
> > And can eat up a lot of memory or even run out of it.  On a
> > 2+ year old MacBookPro "find -x /" takes 4.5 minutes for 1.6M
> > files and 155MB to hold paths.  My 11 old machine has 64MB
> > and over a million files on a rather slow disk. Your solution
> > would run out of space on it.
> 
> modern cat wouldn't fit in core on the early pdps unix was
> developed on!

No point in gratuitously obsoleting old machines.  I am
running FreeBSD-7.2 on my 11yo machine and so far it has
stood up well enough.

> just to be fair, could you fit your 1.6m files on your 11yu machine?
> i'm guessing you couldn't.

Yes. It's on its third disk. A 6yo 80G IDE disk.

> > Basically this is just streams programming for arguments
> > instead of data. 
> 
> that's fine.  but it's no excuse to hobble exec.  not unless
> you're prepared to be replace argument lists with an argument
> fd.

Not sure how exec is hobbled.  Given the way Unix programs
behave you can't replace arg list with an arg fd (I used to
carry around a library to do just that but the problem is all
the standard programs). Anyway, I don't see how xargs can be
gotten rid of.



Re: [9fans] parallels

2010-01-08 Thread Bakul Shah
On Fri, 08 Jan 2010 14:12:39 EST ge...@plan9.bell-labs.com  wrote:
> I don't have enough experience with VirtualBox to make a sensible
> comparison.

Plan9 on virtualBox is unusably slow.

> The thing that none of the VM monitors seem to offer (though I'd love
> to be proven wrong) is debugging tools for the guest operating
> systems.  This is odd, as it was one of the major uses of VM/370.  So
> if a guest kernel goes off into space, the VM monitor shuts down the
> virtual machine or resets it, but provides no means to find out what
> happened, though it's in a perfect position to easily do so.

I have used qemu + host gdb to debug a guest FreeBSD kernel.
FreeBSD does have remote gdb support.



Re: [9fans] NaN, +Inf, and -Inf, constants?

2010-02-07 Thread Bakul Shah
On Sun, 07 Feb 2010 15:19:58 MST "Lyndon Nerenberg (VE6BBM/VE7TFX)" 
  wrote:
> > i suspect the rationale was that, finally, C provided a way
> > outside the preprocessor to give symbolic names to constants.
> > why restrict that to int?
> 
> Because enum's have been int's since their inception?
> 
> I'm sympathetic to the underlying need, but making a fundamental
> type of the language suddenly become variable does not seem to
> be the right way of going about this.
> 
> E.g., what is the type of:
> 
> enum {
>   a = 1,
>   b = 2.4400618549L,
>   c = 2.44F,
>   d = "this is weird",
>   e = 1LL<<62,
> } foo;

The cleanest solution would be to treat C's _static const_
"variables" as as compile time constants. I wish the standard
dictated this.



Re: [9fans] recreational programming of an evening

2010-03-21 Thread Bakul Shah
On Sat, 20 Mar 2010 22:48:53 PDT ron minnich   wrote:
...
> So here is the result: very minor extension to the kernel code, shell
> script a bit longer (25 lines!) but what happens is e.g. you trace an
> rc, and for each fork/exec that happens, a new truss display pops up
> in a new window and you can now watch the kid. And yes it does work
> fine if the thing you start is an rc.
...
>   What's interesting to me about this is that
> I can not imagine even attempting this on any other os or windowing
> system. It was just too easy on Plan 9 however.
> 
> ron

Very nice!

Users of other OSes won't even believe it is this easy!  Have
you considered writing this up for a publication or blogging
about it?  This sort of "joy of programming with plan9"
story needs to be known more widely.

What's really missing is a whole book on hands on OS hacking
along the lines of the Art of Electronics or SICP (Structure
and Interpretation of Computer Programs).  And with a kit of
h/w & i/o devices so that you can build some widgets and
give'em a real OS!



Re: [9fans] recreational programming of an evening

2010-03-21 Thread Bakul Shah
On Sun, 21 Mar 2010 14:03:14 EDT "Devon H. O'Dell"   
wrote:
> 2010/3/21 Bakul Shah :
> [snip]
> > What's really missing is a whole book on hands on OS hacking
> > along the lines of the Art of Electronics or SICP (Structure
> > and Interpretation of Computer Programs).  And with a kit of
> > h/w & i/o devices so that you can build some widgets and
> > give'em a real OS!
> 
> I've wanted to do something like this for a while, but it's hard to
> find a publisher for such a thing.

Go for it!

These days you can self publish or put it on the web.  Or
post your tentative Table of Contents here and may be it can
turn into a cooperative effort.

A web based "book" can be pretty interactive and flexible.
For example, it can start assuming the readers have a plan9
virtual machine with your teaching package so they can
experiment right away.  Then they can explore plan9 in any
direction they want (to learn OS concepts by experimenting,
or "move up" to build apps or interface with other apps, or
"move down" to add device drivers or replace some kernel
parts, or "further down" to bare metal, to move to real h/w
and add some h/w devices).

One project I have been interested in (but lack time for) is
to build a h/w building block that speaks P9 (instead of just
a low level USB interface).  Given that one can add h/w for a
specific purpose, add a bit of glue logic and can make the
new h/w functionality easily available from plan9.  Such a
thing can be a great boon to hobbyists and scientists alike,
as they can stop worrying about low level computer interface
details and spend time on things *they* want to spend time
on.  And make them want to learn plan9, which is where your
book can come in!



Re: [9fans] Raspberry Pi: won't recognize the USB mouse

2014-03-04 Thread Bakul Shah
On Tue, 04 Mar 2014 11:26:31 EST erik quanstrom  wrote:
> 
> first, apply the patch to the source, then build all of usb
> 
>   9fs sources
>   cd /sys/src/cmd/usb/lib
>   cp /n/sources/patch/usbshortdesc/dev.c dev.c
>   cd ..
>   mk install
> 
> then, build a new kernel
> 
>   cd /sys/src/cmd/bcm; mk
> 
> i believe you can give it a quick test by doing a hot reboot.
> 
>   fshalt -r ./9pi
> 
> if you'd like to install the new kernel on the flash, then
> 
>   dossrv
>   mount /srv/dos /n/9fat /dev/sdM0/dos
>   cd /n/9fat
>   cp 9pi 9pi-dist
>   cp /sys/src/9/bcm/9pi 9pi-usbfix
>   cp 9pi-usbfix 9pi
> 
> and reboot.  if this doesn't work out, you can rescue yourself
> by using anything that understands fat and replacing 9pi
> with the contents of 9pi-dist.

IIRC Ramakrishnan doesn't have a working 9pi system. 

I put 9pi with this patch in contrib/bakul on sources. It was
cross-built on a 386 VM with a pull done this morning + the
above patch. I had to rebuild the host 5c due to rune related
errors so I am not 100% certain this will work (I probably
should've done a full rebuild of the host binaries too).

He should be able to mount the dos partition on another
machine and replace 9pi on it.



Re: [9fans] first questions from a lurker

2014-03-10 Thread Bakul Shah
On Mon, 10 Mar 2014 08:41:12 MDT arn...@skeeve.com wrote:
> Hello All.
> 
> I've been a lurker on 9fans for many years. Today I finally did an
> install - +9atom.iso.bz2 into a virtual box VM.  The VM is NAT'ed
> to a corporate network and I can ping by IP address with no problem.
> 
> First questions.
> 
> 0. How to see my IP address? Cat a file in /net/... somewhere?

/net/ndb has your ip address, gateway, dns servers, domain.

> 2. How to use DHCP to get to the corporate DNS servers?  (This worked
>pretty much OOTB on the labs dist, but I think that 9atom will be
>the better match for the work we want to do.)  Right now all
>hostname lookups (via ping) fail.

Others have answered what to do within plan9. On the
virtualbox side you need to attach to 'bridged adapter'
instead of NAT.  Pick the right "name" (which is really the
interface name on the host) -- wi-fi (airport) for instance
if you are on a MBP.  I think the default adapter type should
work (Intel pro/1000 MT desktop). You may also have to tell
the dhcp server about the mac address etc. depending on how it is
set up.



Re: [9fans] two nics 8139

2014-03-13 Thread Bakul Shah
> bind -b '#l1' /net.alt
> bind -b '#I1' /net.alt
> ip/ipconfig -x /net.alt ether /net.alt/ether1
> ndb/cs -x /net.alt -f /lib/ndb/external
> ndb/dns -sx /net.alt -f /lib/ndb/external

So what do you do when you have N ethernet ports and you want
to forward packets between these ports either @ layer2
(bridge) or @ layer3 (route)?



Re: [9fans] GSoC proposal: Alternative window system

2014-03-19 Thread Bakul Shah
On Wed, 19 Mar 2014 04:36:34 EDT Caleb Malchik  wrote:
> For my project, I would build a tiling window manager similar to dwm 
> (what I use on Linux). I think a dwm-style interface that could be 
> controlled from the keyboard would provide a nice contrast to what we 
> already have with rio, and as we see from dwm the implementation of such 
> an interface needn't be complex. Development would involve modifying the 
> rio source code to implement the basic functions of a 
> tiling/keyboard-controlled window manager one by one.

I will throw out some rio/acme related ideas. Hope people find
them interesting enough to want to experiment.

- display size aware (when attached to an external display vs
  the builtin one of a laptop).
- dpi aware (pick the right size font)
- borderless windows (adjoining windows have different color)
  (but edges are sensitive to resizing etc)
- auto splitting of windows. The idea is to see if window size
  fiddling can be minimized (ideally it should /learn/ personal
  preferences)
- allow the window with "focus" to be made much much larger to
  help a user's focus (make other windows less distracting --
  IIRC there was a web based spreadsheet that did that. Don't
  recall its name).
- allow windows to be scrolled in lockstep (in x, y or both
  directions)
- support for multiple displays
- allow programmatic control of all this via a synthetic FS

Not sure if there exists a usable unification of acme + rio
but that would be nice.



Re: [9fans] usb/serial control open

2014-03-23 Thread Bakul Shah
On Sun, 23 Mar 2014 16:32:12 EDT erik quanstrom  wrote:
> On Sun Mar 23 15:56:52 EDT 2014, pau...@gmail.com wrote:
> > On Sun, Mar 23, 2014 at 8:47 PM, Gorka Guardiola  wrote:
> > 
> > >
> > > if(!setonce){
> > >   setonce = 1;
> > >   serialctl(p, "l8 i1");  /* default line parameters */
> > > }
> > 
> > And setonce needs to live in the interface, and it needs to be locked, etc.
> 
> another idea: since this is only needed by some hardware.  and then only in i
> nit.
> why not make it the responsibility of such hardware to do this in the init
> fn.  then the problem can be addressed without any special cases like
> !setonce.

On FreeBSD:

 The sio driver also supports an initial-state and a lock-state control
 device for each of the callin and the callout "data" devices.  The
 termios settings of a data device are copied from those of the corre-
 sponding initial-state device on first opens and are not inherited from
 previous opens.  Use stty(1) in the normal way on the initial-state
 devices to program initial termios states suitable for your setup.

A similar idea here would be to have a "default" command for
default settings. When a device is opened, it is
initialized with these settings. The reason I like this is
because then I don't need to teach every serial IO program
what setting to use (often the other end is a dumb device
requiring its own fixed and peculiar settings).
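A minimal sketch of the idea, in Python: a port object records defaults set once via a "default" control message and copies them into the live settings on every open. The class, the ctl syntax, and all names here are illustrative, not any real driver's interface.

```python
class SerialPort:
    def __init__(self):
        # factory defaults until a "default" ctl message changes them
        self.defaults = {"baud": 9600, "bits": 8}
        self.settings = {}

    def ctl(self, msg):
        # "default b115200 l8" records new defaults without touching
        # the live line settings
        words = msg.split()
        if words[0] != "default":
            return
        for w in words[1:]:
            if w.startswith("b"):
                self.defaults["baud"] = int(w[1:])
            elif w.startswith("l"):
                self.defaults["bits"] = int(w[1:])

    def open(self):
        # every open starts from the recorded defaults, not from
        # whatever the previous program left behind
        self.settings = dict(self.defaults)
        return self
```

With this, a dumb device's peculiar settings are configured once and every program that opens the port inherits them.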



Re: [9fans] usb/serial control open

2014-03-23 Thread Bakul Shah
On Sun, 23 Mar 2014 17:53:22 EDT erik quanstrom  wrote:
> > A similar idea here would be to have a "default" command to
> > for default settings. When a device is opened, it is
> > initialized with these settings. The reason I like this is
> > because then I don't need to teach every serial IO program
> > what setting to use (often the other end is a dumb device
> > requiring its own fixed and peculiar settings).
> 
> i think it is even easier to set the state up properly with cpurc or
> consolefs' configuration file, and have the various programs not even
> care that they're talking to a serial port.  

Not my experience. Occasionally programs do have to care about
some serial port parameters and if such a program crashes you
have a partially working interface. Think CTS/RTS state etc.



Re: [9fans] usb/serial control open

2014-03-23 Thread Bakul Shah
On Sun, 23 Mar 2014 20:32:07 EDT erik quanstrom  wrote:
> > > i think it is even easier to set the state up properly with cpurc or
> > > consolefs' configuration file, and have the various programs not even
> > > care that they're talking to a serial port.  
> > 
> > Not my experience. Occasionally programs do have to care about
> > some serial port parameters and if such a program crashes you
> > have a partially working interface. Think CTS/RTS state etc.
> 
> doesn't sound like the common case.  in any event, seems like a
> program fiddling with CTS/RTS should do the whole setup, especially
> if it plans on crashing.  :-)

The issue is program A can leave things in non-working order
and program B running after A has to deal with this. This is
no different from bringing up a system in a known good state.



[9fans] bcm2835 random number generator

2014-03-28 Thread Bakul Shah
I have put an initial version of raspberryPi hardware RNG
driver on sources.  Supposedly bcm2835 uses a reverse bias
transistor as a noise source (though I couldn't find
a definitive source for this). FWIW, I ran the output through
rngtest (does FIPS 140-2 tests) and the failure rate is about
the same as FreeBSD's (about 1 out of 1000).  rngtest tests at
a minimum 2*10^6 bits and plan9's stock /dev/random seems far
too slow so I didn't bother testing it.

Just copy /n/sources/contrib/bakul/random.c to sys/src/9/bcm
and rebuild 9pi.

Arguably this should be a new device, /dev/hrng or something,
but plan9's /dev/random code is very simple compared to the
elaborate versions on *BSD's. I have kept this one simple too.
If you are concerned about the quality of RNG or paranoid
about bcm2835's unpublished algorithm, feel free to complexify!



Re: [9fans] bcm2835 random number generator

2014-03-28 Thread Bakul Shah
On Fri, 28 Mar 2014 17:09:43 EDT erik quanstrom  wrote:
> On Fri Mar 28 13:21:35 EDT 2014, ba...@bitblocks.com wrote:
> > I have put an initial version of raspberryPi hardware RNG
> > driver on sources.  Supposedly bcm2835 uses a reverse bias
> > transistor as a noise source (though I couldn't find anything
> > a definitive source for this). FWIW, I ran the output through
> > rngtest (does FIPS 140-2 tests) and the failure rate is about
> > the same as FreeBSD's (about 1 out of 1000).  rngtest tests at
> > a minimum 2*10^6 bits and plan9's stock /dev/random seems far
> > too slow so I didn't bother testing it.
> > 
> > Just copy /n/sources/contrib/bakul/random.c to sys/src/9/bcm
> > and rebuild 9pi.
> > 
> > Arguably this should be a new device, /dev/hrng or something,
> > but plan9's /dev/random code is very simple compared to the
> > elaborate versions on *BSD's. I have kept this one simple too.
> > If you are concerned about the quality of RNG or paranoid
> > about bcm2835's unpublic algorithm, feel free to complexify!
> 
> i have not had luck with this.  i get data aborts when booting
> from flash, and hangs when rebooting via /dev/reboot.

Sorry about that! I forgot to mention that this facility was
added around Jan 30, 2013.  See Dom's message on page 12 on
http://www.raspberrypi.org/phpBB3/viewtopic.php?f=29&t=19334&p=273944#p273944
Make sure you have firmware at least as recent as that.  Not
sure if Richard's 9pi image on sources has something more
recent. I will check this evening.



Re: [9fans] bcm2835 random number generator

2014-03-28 Thread Bakul Shah
On Fri, 28 Mar 2014 19:51:50 EDT erik quanstrom  wrote:
> > Sorry about that! I forgot to mention that this facility was
> > added around Jan 30, 2013.  See Dom's message on page 12 on
> > http://www.raspberrypi.org/phpBB3/viewtopic.php?f=29&t=19334&p=273944#p2739
> 44
> > Make sure you have firmware as least as recent as that.  Not
> > sure if Richard's 9pi image on sources has something more
> > recent. I will check this evening.
> 
> no problems.  i am using very old firmware because of the problems
> experienced in upgrading at one point.  can you post the exact version
> required?  i may do as i did for the amd64 RDRAND.

/n/sources/extra/pi.uboot.sd.img.gz has start.elf from Feb 7 2013.
/n/sources/contrib/miller/9pi.img.gz has start.elf from Nov 15 2013.

Either should work. Let me know if you are still seeing a
crash.



Re: [9fans] a research unix reader

2014-03-30 Thread Bakul Shah


> On Mar 30, 2014, at 12:47 PM, Nick Owens  wrote:
> 
> 9fans,
> 
> a few months ago, at a friends request, i acquired a copy of a research
> unix reader and scanned it, and i put it on archive.org.
> 
> the pdf and some other formats are available, but the conversion is not
> very good, so the pdf is the best bet.
> https://archive.org/details/a_research_unix_reader

http://doc.cat-v.org/unix/unix-reader/




Re: [9fans] [GSOC] plan9 which arch code to use?

2014-05-07 Thread Bakul Shah
> (come to mention it, i did Dan Brown a favour last year, unwittingly.)

There you go again.  More secrets  :-)



Re: [9fans] OT: hard realtime, timing diagram GUI.

2014-05-08 Thread Bakul Shah


> On May 8, 2014, at 2:15 AM, "Steve Simon"  wrote:
> 
> Anyone done any hard realtime programming? I am looking for a simple
> GUI tool which will read a text file I can generate from my code
> and display a timing diagram. This should allow either events
> triggered by the clock, by an interrupt, or by another event.
> 
> Anyone know of such a tool? I see masses of tools for drawing
> digital logic timing diagrams but nothing that seems to give
> me what I need for realtime code.

I don't understand why realtime matters.  How do you want these events 
represented on the timing diagram?


Re: [9fans] OT: hard realtime, timing diagram GUI.

2014-05-08 Thread Bakul Shah
Would https://github.com/drom/wavedrom do? See the tutorial. Step 8 shows 
bezier arrows linking waveforms. And it seems to be actively developed. There 
is a command line version as well.

On May 8, 2014, at 5:52 AM, "Steve Simon"  wrote:

>> I don't understand why realtime matters.
> 
> Only that such diagrams are more important in realtime systems.
> 
>> How do you want these events represented on the timing diagram?
> 
> I suspose a clock line, left to right, at the top.
> 
> events appear as signals, one below the other running paralle to the clock 
> line.
> These  change state on a rising edge of a clock, and a different coloured 
> bezier curve
> (with optional label) links an event to any events it triggers.
> 
> allow me to colour signals so interrupts and clock are clearly different
> and add labels to signals and I would be happy.
> 
> The idea is this diagram would be built by a cron job from regression tests 
> every night
> and if the timings drifted in the system it should be quite easy to see where 
> the time
> has been wasted.
> 
> Alternatively A GUI interface could be used - this might have advantages 
> (cursors?)
> but really a PDF and page(1) would probably do.
> 
> Somthing like graphviz for timing diagrammes.
> 
> -Steve
> 



Re: [9fans] OT: hard realtime, timing diagram GUI.

2014-05-08 Thread Bakul Shah
On May 8, 2014, at 7:53 AM, "Steve Simon"  wrote:

>> Would https://github.com/drom/wavedrom do?
> 
> Yep, pretty darn good.
> 
> maybe a little teeth gritting as its JS but
> what the heck, its a tool and that is all
> that really matters.

Wavedrom's input language seems simple enough that you can probably reimplement 
wavedrom in your favorite language easily enough.

There is also http://drawtiming.sourceforge.net/samples.html
Someone remarked it is a bit like graphviz for timing diagrams...

Visualization can be very useful but wouldn't it be easier to just process the 
text file to check for timing violations? That would be my first instinct. 
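That first instinct can be sketched in a few lines of Python: scan the event log and flag any start/end pair whose latency exceeds its budget. The log format ("timestamp_us event_name" per line) and the deadline table are hypothetical, standing in for whatever the regression tests emit.

```python
DEADLINES_US = {("irq", "handler_done"): 500}   # illustrative budget

def check(log_lines, deadlines=DEADLINES_US):
    """Return (start, end, latency) triples that exceeded their budget."""
    last = {}          # event name -> most recent timestamp
    violations = []
    for line in log_lines:
        ts_s, name = line.split()
        ts = int(ts_s)
        # if this event closes a monitored interval, check the budget
        for (start, end), budget in deadlines.items():
            if name == end and start in last and ts - last[start] > budget:
                violations.append((start, end, ts - last[start]))
        last[name] = ts
    return violations
```

A cron job could run this nightly and only bother drawing the diagram when the list is non-empty.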





Re: [9fans] radio

2014-05-08 Thread Bakul Shah
On Thu, 08 May 2014 12:06:11 BST "Steve Simon"  wrote:
> 
> A little radio app for plan9. This has few features and may not
> seem worth the effort to some but it is planned to be the basis for
> an embedded radio device so it needs a little GUI and user interface.
> 
> Currently I use this at work every day, the radio appliance has
> stalled for now but should restart soon.

Nice!

Looks like you forgot to include json.h

What DAC are you planning to use? And display?
The PiTFT touchscreen display looks interesting.

I need to find time to play with a couple of toys:

1. A software defined radio receiver. This may not be very
   straightforward as it goes through the dreaded USB.
   http://www.adafruit.com/product/1497

2. A GPS device, interfaces via a 9600 baud link.
   My goal is to have a stratum one clock.
   http://www.adafruit.com/products/746



Re: [9fans] radio

2014-05-08 Thread Bakul Shah
On Thu, 08 May 2014 18:58:31 BST "Steve Simon"  wrote:
> I have a hifiberry (http://www.hifiberry.com/) nicely minimalist,
> though no driver at present - I will await the GSOC project :-)

> I have some itron VFDs from work, 256 x 64 pixel. I like these as
> the visibility is excellent. The only annoyance is they have
> a parallel interface and I use up all the PI's GPIOs.

> I also need to interface a rotary encoder for the tuning knob which
> is also a pain - not complex enough to justify an FPGA, but a bit too
> much to poll when the PI is doing audio decode as well.

No need for polling.  The BCM2835 can handle edge triggered
interrupts.  Or you can use a PAL or CPLD.

> Thinking of adding a PIC or an AVR just for the encoder / VFD interface
> and talking i2c to it.



Re: [9fans] what arch

2014-05-09 Thread Bakul Shah
On Fri, 09 May 2014 13:37:04 PDT ron minnich  wrote:
> somebody referred me to the discussion.
> 
> Sometimes we found people wanted to build on their existing OS (Linux,
> OSX, whatever) in a cross-build way, and, further, didn't want to do
> that in a VM, because they had tools they liked.
> 
> github.com/rminnich/NxM is the last snapshot of the Sandia/BL fork,
> and it has scripts and instructions to crossbuild it all on Linux.
> It's not elegant but it works. At the time, we used Gerrit and Jenkins
> for our control and validation. For each commit, gerrit would kick off
> a jenkins run, which would do the full build from scratch, boot in
> qemu, and run a set of regression tests. Gerrit would -1 the patch if
> the jenkins pass did not work.
> 
> Full build, starting from nothing, of tools, libs, bin, kernels, was
> about two minutes on Linux. If you added gs into the mix, it was more
> like 4 minutes IIRC. Ran fine on amd64.

Seems very slow : )

Full plan9 *native* build of the kernel, libs and bin on a
/RaspberryPi/ is about 4 minutes. Crossbuilding the i386 kernel on
it takes about 3 minutes (I haven't tried a full crossbuild).
Building the 9pi kernel under 9vx takes about 11 seconds on my
MBP @ home. Don't recall the full build time.

For comparison, a native Linux kernel build on RPi takes over
10 hours.

For another comparison, the RPi seems about 16-20 times slower
than a MBP.

> One suggestion I'd like to float here: the LPL is a problem for both
> BSD and GPL worlds (see Theo's notes from 2003 on that issue). It
> might be useful for new from-scratch software to be released under
> 3-clause BSD; or under the Inferno license for that matter. In other
> words, if you don't have to use the LPL, use 3-clause BSD instead. One
> person has already very kindly allowed us to use their drivers in
> Akaros with the LPL replaced with 3-clause BSD.

Agree.



Re: [9fans] radio

2014-05-09 Thread Bakul Shah
On Fri, 09 May 2014 16:11:00 +0200 Krystian Lewandowski 
 wrote:
> 
> I was working on GPIO under Plan9 - very simple thing but also supports
> edge-raising/falling events. I had simple C code to print what pin
> triggered an event. I'll try to push this simple test to github during
> weekend. Though i'm not sure how it can be integrated - is events counting
> enough?

I think for a rotary encoder you'd want to know about every
transition. One idea is to return 1/0 on an up/down transition
to a blocking read. Let the user re-enable the corresponding
interrupt by writing 1 to the same fd.

The user code reads from two pins and figures out which way
the knob is turning. Similarly you can have user code count
events if that is what you want.
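The "figure out which way the knob is turning" part is standard quadrature decoding, sketched here in Python over an already-captured sequence of (A, B) pin samples. The transition table and sample format are illustrative; real user code would build the samples from the two blocking reads described above.

```python
# Valid Gray-code transitions for a quadrature encoder's (A, B)
# outputs: four steps clockwise, four counter-clockwise.
TRANSITIONS = {
    ((0, 0), (0, 1)): +1, ((0, 1), (1, 1)): +1,
    ((1, 1), (1, 0)): +1, ((1, 0), (0, 0)): +1,
    ((0, 0), (1, 0)): -1, ((1, 0), (1, 1)): -1,
    ((1, 1), (0, 1)): -1, ((0, 1), (0, 0)): -1,
}

def decode(samples):
    """Fold a sequence of (A, B) samples into a signed position count.
    Invalid transitions (contact bounce, missed edges) are ignored."""
    pos = 0
    for old, new in zip(samples, samples[1:]):
        pos += TRANSITIONS.get((old, new), 0)
    return pos
```

Counting events per pin, as asked, is then just a degenerate case of the same loop.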



Re: [9fans] RaspberryPi, monitor energy saving

2014-05-15 Thread Bakul Shah
On Thu, 15 May 2014 22:04:34 BST "Steve Simon"  wrote:
> Its just wonderful to have a raspberry pi as a plan9 terminal,
> but the energy saving of the pi is outweighed by the monitor I use.
> 
> The Pi's display code blanks the screen after a while but this does
> not shutdown the monitor.
> 
> I dug a little and it seems I need to send CEC (Consumer Electronics Control)
> messages over HDMI - Via a Pi GPU entrypoint.
> 
> 
> Porting libcec looks a little painful especially as I only need to be able
> to send two messages (turn on and turn off).
> 
> Anyone know anything about this stuff? is CEC what I need or is there some
> other (simpler) way?

Maybe use a GPIO pin to control a switch?!



Re: [9fans] RaspberryPi, monitor energy saving

2014-05-16 Thread Bakul Shah

> On May 16, 2014, at 5:43 AM, erik quanstrom  wrote:
> 
>> On Fri May 16 08:33:41 EDT 2014, st...@quintile.net wrote:
>> Mmm, that feels like good and bad news.
>> 
>> I know richard did what he could to shut down the screen when
>> its idle for a while so that seems to do the right thing with vga
>> monitors, but I guess I do need CEC.
>> 
>> Oh well, time for more digging.
> 
> is a hdmi->vga connector too gruesome, or unworkable?

Adafruit sells one for $19.



Re: [9fans] syscall 53

2014-05-19 Thread Bakul Shah
On Mon, 19 May 2014 13:25:42 EDT erik quanstrom  wrote:
> 
> i would be very surprised if there were any gain in accuracy.  the
> accuracy is going to be dominated by the most inaccurate term, and
> that's likely going to be timesync, and on the order of milliseconds.

Speaking of time and accuracy

I am adding some logic to synchronize with the PPS signal from
the GPS device that I hooked up to a RaspberryPi.  With this
change the TOD clock should be accurate to within 10 to 20 µs.
So I for one welcome the new syscall! [Though its introduction
could've been better managed]

But using a TOD clock for measuring performance seems wrong
since it will also have to account for leap seconds (at the
moment timesync happily ignores leap seconds).



Re: [9fans] syscall 53

2014-05-20 Thread Bakul Shah
On Mon, 19 May 2014 17:34:24 EDT Anthony Sorace  wrote:
> 
> Ron wrote:
> 
> > That said, the problems were due (IMHO) to a limitation in the
> > update mechanism, not to the inclusion of a new system call.
> 
> This is true depending on how you define "update mechanism".
> A simple note from whoever made the decision to push the
> change out to the effect of "hey, we're going to add a new
> syscall, update your kernels before pulling new binaries" a
> while before the push would have been sufficient.

I never understood why binaries are pulled. Even on a lowly
RPi it takes 4 minutes to build everything (half if you cut
out gs). And the 386 binaries are useless on non-386
platforms!

Why not just separate binary and source distributions?  Then
include a file in the source distribution to warn people about
changes such as this one (or the one about 21bit unicode) and
how to avoid painting yourself in a corner. The binary distr.
should have a provision for *only* updating the kernel and
insisting the user boots off of it before further updates can
proceed.

This is a solved problem; not exactly rocket science.  The
harder problem is the social one.



Re: [9fans] syscall 53

2014-05-21 Thread Bakul Shah
On Wed, 21 May 2014 09:56:26 PDT Skip Tavakkolian  
wrote:
> 
> i like git.  as it is a kind of archival file system, one should be able to
> build a plan9 file system interface for it.

Have you looked at porting git to plan9? 178K lines of *.[ch].
20K lines of shell scripts (+ 100K+ lines of test scripts).
Also python and perl scripts.

All SCM are in some sense archival systems but the actual
storage scheme used is just one tiny part of the system.
There are a few more requirements which a simple filesystem
model won't satisfy by itself.  Consider:

The most fundamental operation of an SCM is to manage
source code updates sanely.  Suppose you had to change files
f1..fn to implement a feature/bugfix.  Now you want to check
them in and share these changes with others. But the checkin
should succeed *only if* the latest versions of f1..fn in the
underlying filesystems are the *same* as the ones you started
from. If this is not true, the entire checkin must be aborted.
It is up to the developer to merge his changes with the latest
versions and try again. [Essentially you are doing a
compare-and-swap operation on a set of files]
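That compare-and-swap over a file set can be made concrete with a small Python sketch. The repo representation (name -> (version, content)) and function names are purely illustrative, not any real SCM's API.

```python
def commit(repo, base_versions, changes):
    """Atomically apply `changes` (name -> new content) only if every
    changed file is still at the version the developer started from."""
    # Phase 1: validate the whole set before touching anything.
    for name in changes:
        if repo[name][0] != base_versions[name]:
            raise ValueError(name + " changed upstream; merge and retry")
    # Phase 2: all clear -- apply every change as one unit.
    for name, content in changes.items():
        version, _ = repo[name]
        repo[name] = (version + 1, content)
```

The two-phase shape is the point: either every file in the set advances, or none does and the developer is told to merge and retry.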

You can single thread this through a human but the problem
remains the same and a human (without additional tools) will
not be able to check all this reliably.  For a collaborative,
distributed project where different people may work on
different parts, a human in the middle is not a scalable
solution.  The only reason for a manual SCM is to *slow down*
the rate of change.

Then there are other operations like maintaining separate
branches, merging/cherry picking changes from another branch,
reverting from local changes, rolling back a bad checkin,
pushing changes to a shared repo, pulling from it, access
control, etc. etc.

You could certainly build a nice SCM on top of venti, but
that would be a researchy project.

Given all this I think hg makes a good compromise if you want
to move forward from the status-quo.



Re: [9fans] CMS/MMS (VCS/SCM/DSCM) [was: syscall 53]

2014-05-21 Thread Bakul Shah
On Wed, 21 May 2014 22:25:55 CDT Jeff Sickel  wrote:
> At the base level I find that sources and sourcesdump are much
> more accessible than many of the DSCMs (e.g., darcs, git, hg)
> out there.  Yes it's great to use hg to snapshot state and
> allow easy migration across various systems, but it still
> clutters the model.

Wouldn't something like cinap's hgfs give you the same thing?

> One of the advantages of having a real archival store, like
> Venti, is that it changes the conceptual level of how you deal
> with metadata about a project.  When the default is everything
> is saved and you can manipulate the namespace to represent
> various portions of the history then you don't get caught
> up in all the branching, rebasing, queues, merges, and other
> general contortions that would make us happy to warp back in
> time to an early copy of Dr. Dobb's Journal of Computer
> Calisthenics & Orthodontia when the future looked bright and
> we really could do anything with 8 bits.  Sure working with
> an automatic snapshot system can be a headache at times, but
> it's one of those that easily passes, not like sitting down for
> a [git] root canal because your tooth has been rotting to the
> core while you worry about the configuration for the hottest
> continuous integration system with a butler name that shows we
> really didn't learn anything about the 18th or 19th century
> transitions to the modern age...

Branch/merge features evolved in response to people's needs.
Merging is necessary if you (as an organization) have
substantial local changes for your product (or internal
use) and you also want to use the latest release from your
vendor. No amount of namespace manipulation is going to
help you. Parallel development is inherently more headachy!

> Back on topic: be careful of the dependencies required to
> get a system bootstrapped.  The FreeBSD community took BIND
> out of the default system and re-wrote a new basic resolver
> because the BIND 10+ versions would require packaging Python
> into the core distribution.  There's no reason for
> bringing in more than is necessary to build, and putting a
> dependency on Python would significantly increase the build
> time and total lines of code to maintain just to have hg.
> Darcs is in the same boat in that you'd have to keep a version
> of Haskell in the system.  Git is the closest as it's just C,
> sort of: it's a whole lot of code.  But why would you want to
> bring in "178K lines of *.[ch], 20K lines of shell scripts, 100K+
> lines of test scripts" and have to lug in the massive payload
> of Python and Perl just to make it functional?

I was certainly not suggesting porting git to plan9!  Just
pointing out how painful and big the task would be -- I was in
fact saying use hg!

I looked at a few alternatives and felt hg is the only one
that is workable right now.

A dependency on python doesn't seem much worse than one on
ghostscript. Neither should be built every time you do `mk all'
in /sys/src but it should be possible to build them. And at
least python would be much more useful than gs!

Though the idea of a scmfs (for checkins as well) and using
vac/venti as a backend is starting to appeal to me : )

> At the end of the day, it's the communication with people that's
> the largest benefit.  Let's continue building systems based on the
> ideas that drew us all to Plan 9 in the first place.

Well, nothing prevents us from continuing to use the existing
system.



Re: [9fans] syscall 53

2014-05-21 Thread Bakul Shah
On Thu, 22 May 2014 03:43:15 - Kurt H Maier  wrote:
> 
> But all the DVCS in the world doesn't let us see code that is never uploaded
> in the first place.  I can't even count the number of programs that are only
> even known by oral tradition, mentioned only in passing, then never to be
> heard of again. "Oh, I'll upload it when it's ready."  Years go past: nobody
> has even defined 'ready.'  Nowadays when someone says "it's not ready for
> the world to see" I just read "I am a liar and I have nothing."

I submit not having a proper DVCS is part of the problem for
this.  The reason github is so successful is because it is so
easy to upload code and then to collaborate, get bug fixes
etc.  While some incomplete code in one's own src tree may not
get looked at for a long time and ultimately may never see the
light of the day. Github should use the slogan "it doesn't
/have/ to be ready for the world to see!".



Re: [9fans] CMS/MMS (VCS/SCM/DSCM) [was: syscall 53]

2014-05-21 Thread Bakul Shah
On Thu, 22 May 2014 07:36:54 +0200 lu...@proxima.alt.za wrote:
> > Though the idea of a scmfs (for checkins as well) and using
> > vac/venti as a backend is starting to appeal to me : )
> 
> Let's open the discussion, Plan 9 has some baseline tools other OSes
> are still thinking about and will probably implement stupidly.  Since
> RCS I've been thinking that there has to be a better way and CVS went
> a long way to satisfy my personal requirements.  Now may well be the
> time to look at fresher options. 

I am attaching an excerpt from an old email (May 26, 2011)
that I never sent.  These ideas are not even half baked.  But
may be they will trigger some creative thoughts.

On Thu, 26 May 2011 20:16:11 EDT erik quanstrom   wrote:
> 
> file servers are great, but hg concepts aren't really file-system oriented.
> it might be nice to be able to
>   diff -c (default mybranch)^/sys/src/9/pc/mp.c
> but hg diff figures out all the alloying stuff, like the top-level of the
> repo anyway.  also, ideas like push, pull, update, etc., don't map very well.
> so a hg file server seems to me a bit like, "have hammer, see nail".

I think the filesystem model is more like an electric motor
that powers all sorts of things given the right attachments!

> if i'm missing why this is an interesting idea, i'd love to know what
> i don't see.

I partially agree with you; hence the suggestion about editor
integration.

It is just that I have been wondering about just how far the FS
model can be pushed or extended seamlessly in various
directions.

In the context of SCMs, we can map a specific file version to
one specific path -- this is what hgfs above does.  But what
about push/pull/commit etc.? One idea is to map them to
operations on "magic" files.

For example,
- local file copies appear as normal files.
- cat .hg/status  == hg status
- cat > .hg/commit == hg commit
- cat .hg/log == hg log
- echo -a > .hg/revert == hg revert -a
- echo $url > .hg/push == hg push $url
- echo $url > .hg/pull == hg pull -u $url
- ls .hg/help
- cat .hg/help/push 

In fact the backend SCM need not be mercurial; it can git,
venti etc. Can we come up with a minimal set of features?

Do we gain anything by mapping $SCM commands to special files?
The same question can be asked about many of existing plan9
filesystems. At least for me visualizing new uses is easier
when more things can be fitted in a simpler model.

Features such as atomic commits, changesets, branches, push,
pull, merge etc. can be useful in multiple contexts so it
would be nice if they can be integrated smoothly in an FS.

Example uses:
- A backup is nothing but a previous commit. A nightly
  backup cron job to do "echo `{date} > .commit"
- an offsite backup is a `push'.
- Initial system install is like a clone.
- An OS upgrade is like a pull.
- Installing a package is like a pull (or if you built it
  locally, a commit)
- Uninstall is reverting the change.
- Each machine's config can be in its own branch.
- You can use clone to create sandboxes.
- A commit makes your private temp view permanent and
  potentially visible to others.
- Conversely old commits can be spilled to a backup
  media (current SCMs want the entire history online).
- Without needing a permanent connection you can `pull' data.
  [never have to do `9fs sources; ls /n/sources/contrib'.]

A better integration can inspire new uses:
- a Time Machine-like GUI can help quickly scroll through
  changes in some file (or even a bird's eye view of changes
  in all the files).
- combining versioning + auto push/pull with other filesystems
  can open up new uses. You can keep your own daily archive of
  http://nyt.com/ or see how a story develops by scrolling through
  changes.

Just some things I was mulling about. As you can see there are
lots of open questions & half-baked ideas to sort through.



Re: [9fans] hgfs

2014-05-22 Thread Bakul Shah
On Thu, 22 May 2014 09:17:18 PDT ron minnich  wrote:
> has anyone looked at camlistore as a starting point? Written in Go,
> which means it works on Plan 9.

I will take a look at it but Ron, if you are still on this
channel, may be you can describe how it will help here?
[And, please don't overload /dev/null. Just use a 'killfile']



Re: [9fans] hgfs

2014-05-22 Thread Bakul Shah
On Thu, 22 May 2014 12:41:05 +0200 Aram Hăvărneanu  wrote:
> 
> What would be the point of this? Once you have a version (revision)
> you can just bind the subtree where you want it. I don't see the
> point in having this special switching code inside hgfs. Plan 9
> provides the necessary functionality.

I agree on this specific point.

If I am working on branch A and I want to switch to B, git
forces me to commit or "stash" my changes. So even after
gadzillion lines of code it still doesn't do the right thing.
A tool should not force singlethreading on me.  And hg is no
better in this regard.  This limitation need not exist on
plan9.



Re: [9fans] CMS/MMS (VCS/SCM/DSCM) [was: syscall 53]

2014-05-22 Thread Bakul Shah
On Thu, 22 May 2014 08:45:48 EDT erik quanstrom  wrote:
> > Features such as atomic commits, changesets, branches, push,
> > pull, merge etc. can be useful in multiple contexts so it
> > would be nice if they can integrated smoothly in an FS.
> > 
> > - Installing a package is like a pull (or if you built it
> >   locally, a commit)
> > - Uinstall is reverting the change.
> > - Each machine's config can be in its own branch.
> 
> what is the advantage over seperate files?  imo, this complicates the issue.

I don't quite recall what I was thinking 3 years ago in a 30
minute window but I think the idea was that you have a set of
configuration files which all need to be consistent and
if you wanted to "roll back" changes, you'd have to undo
all those changes.

> > - You can use clone to create sandboxes.
> > - A commit makes your private temp view permanent and
> >   potentially visible to others.
> > - Conversely old commits can be spilled to a backup
> >   media (current SCMs want the entire history online).
> > - Without needing a permanent connection you can `pull' data.
> >   [never have to do `9fs sources; ls /n/sources/contrib'.]
> 
> this is a nice list, but i think a key point is not being teased out.
> the dump file systems are linear.  there is a full order of system
> archives.  in hg, there is a full order of the tip, but not so of
> branches in general.  in git, multiple orders (not sure if they're
> full or not) can be imposed on a set of hashes.

So if we do SCM right, backups are just a subset, right?!  No,
don't believe that.  No time to explore this right now but I
think dumps are at a different (lower) level from SCM data.

> another key point is that all distributed scms that i've used clone
> entire systems.  but what would be more interesting is to clone, say,
> /sys/src or some proto-based subset of the system, while using the
> main file system for everything else.  imagine you want to work on
> the kernel, and imagine that you keep console log files.  clearly
> you want to see both the new log entries, and the modified kernel.

Actually with something like venti as the store, `clone' is
trivial! Just find the hash for a particular changeset you want
to clone and you can build the rest on demand. `rebase' or
`pull' will be more painful.
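The "clone is trivial" point can be sketched with a toy content-addressed store. This is illustrative only — the class names and the flat dict stand in for venti's actual arenas and protocol; the real system uses SHA-1 "scores" over typed blocks:

```python
import hashlib

class BlockStore:
    """Toy content-addressed store in the spirit of venti:
    blocks are keyed by the hash (score) of their contents."""
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        score = hashlib.sha1(data).hexdigest()
        self.blocks[score] = data
        return score

    def get(self, score: str) -> bytes:
        return self.blocks[score]

# A "tree" is itself a block listing the scores of its children,
# so a single root score names an entire snapshot/changeset.
store = BlockStore()
a = store.put(b"file A contents")
b = store.put(b"file B contents")
root = store.put((a + "\n" + b).encode())

# "Cloning" the snapshot is just copying the root score -- no data moves.
clone = root

# Blocks are then fetched lazily, on demand:
children = store.get(clone).decode().split("\n")
assert store.get(children[0]) == b"file A contents"
```

Since every block is immutable and named by its contents, the clone and the original can share all their storage; only new writes diverge.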

> i would be concerned that this really challenges the plan 9 model
> of namespaces.  one would have to figure out how to keep oneself
> out of namespace hell if one were to build this into a file system and
> use it heavily.

Your concern is a bit premature.  We are just handwaving right
now!  I am interested in finding out just how far we can push
the plan9 model -- and if the current model doesn't naturally
fall out of any extended model, we'd know.



Re: [9fans] [GSOC] Dial between two computers

2014-05-26 Thread Bakul Shah
Does

9fs localhost
ls /n/localhost

work on your VM? If that works, and if you can ping in both directions, the 
other possibilities are
a. firewall rules on the linux box or
b. how you have set up your VM. If you are using it in "bridge" mode, it 
should work (except for a.). If you are using the virtualizer's (QEMU 
or VirtualBox or Parallels etc.) stack, you have to set up some port forwarding 
rules.

On May 26, 2014, at 7:37 PM, yan cui  wrote:

> sure. 
> 
> cat ndb
> ip=192.168.122.71 ipmask=255.255.255.0 ipgw=192.168.122.1
> sys=super
> dns=192.168.122.1
> 
> cat netstat
> tcp  0    bootes   Listen   564    0      ::
> tcp  1    bootes   Listen   567    0      ::
> tcp  2    none     Listen   110    0      ::
> tcp  3    none     Listen   113    0      ::
> tcp  4    none     Listen   143    0      ::
> tcp  5    none     Listen   17005  0      ::
> tcp  6    none     Listen   17006  0      ::
> tcp  7    none     Listen   17007  0      ::
> tcp  8    none     Listen   17009  0      ::
> tcp  9    none     Listen   17010  0      ::
> tcp  10   none     Listen   19     0      ::
> tcp  11   none     Listen   21     0      ::
> tcp  12   none     Listen   22     0      ::
> tcp  13   none     Listen   23     0      ::
> tcp  14   none     Listen   25     0      ::
> tcp  15   none     Listen   513    0      ::
> tcp  16   none     Listen   53     0      ::
> tcp  17   none     Listen   565    0      ::
> tcp  18   none     Listen   7      0      ::
> tcp  19   none     Listen   9      0      ::
> tcp  20   none     Listen   993    0      ::
> tcp  21   none     Listen   995    0      ::
> tcp  22   network  Closed   0      0      ::
> tcp  23   network  Closed   0      0      ::
> tcp  24   network  Closed   564    57021  192.168.122.1
> tcp  25   network  Closed   39452  567    192.168.122.71
> tcp  26   network  Closed   40392  567    192.168.122.71
> tcp  27   network  Closed   567    57328  192.168.122.71
> tcp  28   network  Closed   567    40392  192.168.122.71
> udp  0    network  Closed   0      0      ::
> 
> 
> 
> 2014-05-26 22:26 GMT-04:00 Skip Tavakkolian :
> can you supply the output from your cpu?
> % cat /net/ndb
> % netstat -n
> 
> 
> 
> On Mon, May 26, 2014 at 7:18 PM, yan cui  wrote:
> plan9 auth+cpu+file server runs on vm, 
> 
> $ telnet 192.168.122.71 564
> Trying 192.168.122.71...
> Connected to 192.168.122.71.
> Escape character is '^]'.
> Then, no response. 
> 
> 
> 
> 
> 
> 2014-05-26 21:51 GMT-04:00 Skip Tavakkolian :
> 
> the firewall here wont answer pings.
> 
> you could check with netstat on your plan 9 and/or traceroute from your linux 
> system.  btw, does your plan 9 cpu run in a vm? also does telnet on the linux 
> system behave the same way as your dial? e.g.
> $ telnet  564
> 
> 
> 
> On Mon, May 26, 2014 at 6:30 PM, yan cui  wrote:
> interesting. 
> I also dial tcp!www.9netics.com!http, but failed. Actually, 
> I cannot even ping it successfully. (other sites such as www.google.com can 
> be pinged on my system.) By the way, if fossil uses another ip, how to find 
> that?
> 
> 
> 2014-05-26 20:52 GMT-04:00 Skip Tavakkolian :
> 
> works here (see below). i wonder if fossil is announcing on a different ip 
> than you're expecting?
> 
> % 9c dial.c
> % 9l -o dial dial.o
> % ./dial tcp!www.9netics.com!http
> GET / HTTP/1.0
> 
> HTTP/1.1 200 OK
> Server: Plan9
> Date: Tue, 27 May 2014 00:50:46 GMT
> ETag: "364d3v1b"
> Content-Length: 2682
> Last-Modified: Thu, 29 Aug 2013 22:51:43 GMT
> Content-Type: text/html
> Connection: close
> 
>  
> 
> 
> ...
> 
> 
> On Mon, May 26, 2014 at 5:13 PM, Nick Owens  wrote:
> yan,
> 
> did you try to use packet capture software like wireshark, or snoopy(8)
> on plan 9, to see the packets?
> 
> running wireshark on linux, and snoopy on plan 9, will give you insight
> into if the packets reach the other side successfully.
> 
> On Mon, May 26, 2014 at 08:06:21PM -0400, yan cui wrote:
> > Hi all,
> >
> > I used a program to dial from one system to another system, but
> > it gives a connection time out error. I have searched on Internet for a
> > long time and cannot get a solution. Could you please provide some
> > suggestions or hints? Basically, one system is Linux based system with rc
> > shell installed (we call it A). The other one is a auth+cpu+file server
> > (we call it B). On B, I have used fossil/conf command to listen tcp!*!564.
> > On A, I executed dial tcp!!564, but it reports a time out
> > error after waiting some time. Results are the same when A is a plan9
> > terminal. By

Re: [9fans] suicide message on vmware

2014-06-05 Thread Bakul Shah
On Fri, 06 Jun 2014 10:48:21 +0530 Ramakrishnan Muthukrishnan 
 wrote:
> Well, looks like I cannot run any binaries anymore and getting the
> suicide message! I don't have anything critical on this vm image and
> can re-install it. But I want to see if I can recover it and how. I
> will re-read the "syscall 53" thread to look for any solutions.

Aren't the old binaries saved under the same name but
prefixed with _?  If you haven't rebooted yet, you can use
those to copy the new kernel to the FAT partition.

> 
> Ramakrishnan
> 
> 
> On Fri, Jun 6, 2014 at 9:35 AM, Ramakrishnan Muthukrishnan
>  wrote:
> > On Fri, Jun 6, 2014 at 8:51 AM, erik quanstrom  wrot
> e:
> >> On Thu Jun  5 23:17:37 EDT 2014, vu3...@gmail.com wrote:
> >>> Hi,
> >>>
> >>> I just saw a suicide message on 9atom running on plan9 while updating
> >>> the system:
> >>>
> >>> % replica/pull -v /dist/replica/network
> >>>
> >>> After a while, I saw this printed, but the replica/pull is proceeding
> >>> without any problem.
> >>>
> >>> (not completely readable because stats window overwrote the screen)
> >>> ... bad sys call number 53 pc 101c6
> >>> timesync 57: suicide: sys: bad syscall pc=0x101c6
> >>
> >> nsec!  argh!
> >
> > Ah, I should have remembered.
> >
> > So, I guess, I got the updating sequence wrong? First upgrade kernel,
> > then upgrade the rest?
> >
> > --
> >   Ramakrishnan
> 
> 
> 
> -- 
>   Ramakrishnan
> 



Re: [9fans] suicide message on vmware

2014-06-06 Thread Bakul Shah
On Fri, 06 Jun 2014 13:02:14 +0530 Ramakrishnan Muthukrishnan 
 wrote:
> On Fri, Jun 6, 2014 at 11:30 AM, Bakul Shah  wrote:
> > On Fri, 06 Jun 2014 10:48:21 +0530 Ramakrishnan Muthukrishnan wrote:
> >> Well, looks like I cannot run any binaries anymore and getting the
> >> suicide message! I don't have anything critical on this vm image and
> >> can re-install it. But I want to see if I can recover it and how. I
> >> will re-read the "syscall 53" thread to look for any solutions.
> >
> > Aren't the old binaries saved under the same name  but
> > prefixed with _?  If you haven't rebooted yet, you can use
> > those to copy the new kernel to the FAT partition.
> 
> Thanks, I didn't know that old binaries are kept prefixed with _. Very nice!
> 
> I copied the kernels from David (9legacy.org/download/kernel.tar.bz2),
> untar'ed it. This copied into /386/9pcf. Then I do:

I don't know what's on 9legacy.org. Copy the labs kernel from
/386/9pcf since after reboot it will support the updated labs
binaries that use nsec() syscall.

My assumption is you are running an old kernel with new
binaries.

> 9fat:
> _cp /386/9pcf /n/9fat/9pcf
> 
> But I get an error message: '/n/9fat/9pcf clone failed'.

9fat: will use the new binaries! Look at /rc/bin/9fat: and
follow the steps using the old binaries. The following may
be enough.


_dossrv 
_mount -c /srv/dos /n/9fat /dev/sdC0/9fat

Unless dossrv is already running (use _ps) and /n/9fat is
already mounted, in which case you will have to _unmount it
and kill dossrv.



Re: [9fans] suicide message on vmware

2014-06-06 Thread Bakul Shah

> On Jun 5, 2014, at 8:15 PM, Ramakrishnan Muthukrishnan  
> wrote:
> 
> Hi,
> 
> I just saw a suicide message on 9atom running on plan9 while updating
> the system:
> 
> % replica/pull -v /dist/replica/network

I missed that you were running 9atom. Using old binaries to copy the new kernel 
to /n/9fat and rebooting means now you're running the Bell Labs kernel. I don't 
know how different the two kernels are but if you want to continue running 
9atom, you may have to undo the nsec-related changes in the userland.

More generally, as this nsec change demonstrates, if you rely on sources over 
which you have no control & you also have local changes, you pretty much have 
to treat the external sources as a "vendor branch" and do a careful merge to 
avoid such surprises.


Re: [9fans] suicide message on vmware

2014-06-06 Thread Bakul Shah
On Fri, 06 Jun 2014 11:35:08 EDT erik quanstrom  wrote:
> On Fri Jun  6 11:26:13 EDT 2014, ba...@bitblocks.com wrote:
> > 
> > > On Jun 5, 2014, at 8:15 PM, Ramakrishnan Muthukrishnan 
>  wrote:
> > > 
> > > Hi,
> > > 
> > > I just saw a suicide message on 9atom running on plan9 while updating
> > > the system:
> > > 
> > > % replica/pull -v /dist/replica/network
> > 
> > I missed that you were running 9atom. Using old binaries to copy the new
> > kernel to /n/9fat and rebooting means now you're running the bell labs
> > kernel. I don't know how different the two kernels are but if you want to
> > continue running 9atom, you may have to undo nsec related changes in the
> > userland.
> > 
> > More generally, as this nsec change demonstrates, if you rely on sources
> > over which you have no control & you also have local changes, you pretty
> > much have to treat the external sources as a "vendor branch" and do a
> > careful merge to avoid such surprises.
> 
> that's not how replica works.  replica respects local changes.  however,
> since in this case two different databases were mixed up, there is little
> chance that the user has a sane system.

What two databases?

Replica respects local changes at the file level.  You still
have to do a manual merge if the server version changed as
well.

The bigger issue is that the unit of update needs to be a
*set* of files that take a system from one consistent state to
another. If you update only a subset of files, you may be left
with an inconsistent system.

Another issue: your local changes may depend on the contents
of a file that you never changed so the update can overwrite
it with new and possibly incompatible contents. For all these
reasons external sources should go in a vendor branch.
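The "unit of update is a set of files" point can be sketched as a staged, all-or-nothing install. This is a simplification, not how replica works: a real updater would also need fsync, crash recovery, and handling for files open while being replaced.

```python
import os
import shutil
import tempfile

def apply_update(target_dir, new_files):
    """Stage a *set* of file updates and install them together,
    so the tree is never left with only a subset applied."""
    # Stage inside target_dir so os.replace stays on one filesystem.
    staged = tempfile.mkdtemp(dir=target_dir)
    try:
        # Write every file into the staging area first; any failure
        # here aborts before the live tree is touched.
        for name, data in new_files.items():
            with open(os.path.join(staged, name), "wb") as f:
                f.write(data)
        # Commit: move the staged files into place only after all
        # of them were written successfully.
        for name in new_files:
            os.replace(os.path.join(staged, name),
                       os.path.join(target_dir, name))
    finally:
        shutil.rmtree(staged, ignore_errors=True)
```

The commit loop itself is still not atomic across files, which is exactly why a consistent system also wants the reboot-between-kernel-and-userland sequence described below the quoted text.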

And I never understood why binaries are pulled.  It is not
like on Linux/*BSD where building binaries takes a long time.

And replica (& related scripts) can't deal with changes like
syscall updates.

For a foolproof update in case of incompatible kernel changes
(and if you're running the same distribution as you pulled
from), you should

1. build all binaries and new kernels (but not install)
2. install the new kernel (& loader) on the boot partition
3. reboot
4. install new binaries
5. reboot

If you have local changes, you have to add

0. pull in a "vendor branch" and merge changes



Re: [9fans] Question about fossil

2014-06-08 Thread Bakul Shah
On Sat, 07 Jun 2014 21:39:29 BST Riddler  wrote:
> 
> Onto my question: What if I shrunk that fossil partition to, say, 1GB
> and then wrote either more than 1GB in small files or a single 2GB
> file.

Why would you want to make the fossil partition that small?

I would keep it at least twice as large as the largest file
I'd ever want to create.

> Will fossil error on the write that pushes it over the edge?
> Perhaps 'spill' the extra data over to venti?
> Something else entirely?

I haven't studied fossil but AFAIK it won't spill data to
venti when it runs low on disk. Rather, you set it up to take
daily snapshots so the partition should be large enough to
hold all the new data you may generate in a day.

Other things to note:

- make sure you mirror or backup the venti disk or else you
  may lose the only copy of a block!
- try this out on a small scale before you commit to it, as I
  suspect you'll run into various limits and maybe bugs. Do
  report what you discover.
- performance will likely be poor. For better performance you
  may want to keep venti index on a separate (flash) disk.
- it would be nice if you can come up with a workable setup
  for a venti server!



Re: [9fans] Question about fossil

2014-06-08 Thread Bakul Shah
On Sun, 08 Jun 2014 03:56:24 EDT erik quanstrom  wrote:
> > - try this out on a small scale before you commit to it, as I
> >   suspect you'll run into various limits and may be bugs. Do
> >   report what you discover.
> > - performance will likely be poor. For better performance you
> >   may want to keep venti index on a separate (flash) disk.
> > - it would be nice if you can come up with a workable setup
> >   for a venti server!
> 
> usb performance is ~4-7MB/s.  this is the best you can hope for
> from the disk.  venti will only slow this down by multiplying
> disk accesses and being a bit seeky.  keep in mind if you're
> using this for a venti server, that usb b/w needs to be shared
> with tcp.

The last time I measured this (Aug 2012) raw disk write was
10MB/s, file writes were 2MB/s. On the same h/w & disk linux
got 25MB/s (don't recall file throughput). And Linux gets
11.3MB/s ethernet throughput compared to 3.7MB/s on 9pi (both
with ttcp). Linux tcp throughput is close to linespeed.

Might almost be worth using p9p venti under Linux!  And keep
isects on the sdcard and arenas on an external disk.



Re: [9fans] Question about fossil

2014-06-09 Thread Bakul Shah
On Mon, 09 Jun 2014 17:25:51 EDT erik quanstrom  wrote:
> On Mon Jun  9 17:13:09 EDT 2014, lyn...@orthanc.ca wrote:
> 
> > 
> > On Jun 9, 2014, at 1:21 PM, Riddler  wrote:
> > 
> > > It was brought about mainly because the wiki states that sources only
> > > uses ~512MB for fossil.
> > 
> > I suspect that's wildly out of date.
> 
> a basic install requires about 512mb, which all must be in fossil at
> the same time.  so that's correct as far as it goes.
> 
> nontheless, i think that 2% or a minimum of 1G and a maximum
> of about 20G makes sense for fossil.  i have run various file servers
> out of space, and it's not pretty.

Over the weekend I was playing with fossil and "copied" my
fossil partition using its last score, swapped the two disks
(under virtualbox) and rebooted.  df now shows 1MB in use! So
if you init fossil from the score of an existing installation,
you can make do with a lot less space -- only depends on how
much new stuff you create every day! Even there you can
probably write a script that watches df and when it reaches
some limit creates an archival snapshot or just snapshot
every hour or so!
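The df-watching script might look something like this. It is a sketch only: `take_snapshot` is a hypothetical stand-in for whatever the real trigger would be (e.g. writing the archival-snapshot command to the fossil console), and the threshold and interval are arbitrary:

```python
import shutil
import time

def take_snapshot():
    # Hypothetical hook: on a real system this would issue the
    # fossil console command for an archival snapshot.
    print("taking archival snapshot")

def watch(path, limit=0.9, interval=3600, once=False):
    """Poll disk usage of `path`; trigger a snapshot whenever the
    fill fraction reaches `limit`."""
    while True:
        u = shutil.disk_usage(path)
        fraction = u.used / u.total
        if fraction >= limit:
            take_snapshot()
        if once:
            return fraction   # single poll, for testing
        time.sleep(interval)
```

An hourly cron-style snapshot is even simpler; the threshold version just avoids archiving when nothing much has been written.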

This idea can drastically reduce new installation time.
Someone (vsrinivas?) has created vtrc.c, a venti proxy, a
variation of Russ's venti/ro.c. This can probably be enhanced
so that if a block is not found in your local venti, it asks a
public venti (and writes back that block to the local venti).
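The write-back idea amounts to a read-through cache keyed by score. A minimal sketch, with plain dicts standing in for the local and public venti servers (the real vtrc/venti/ro speak the venti protocol, of course):

```python
class VentiProxy:
    """Sketch of a read-through proxy: reads hit the local store
    first, fall back to the public store, and write the fetched
    block back locally so later reads are served locally."""
    def __init__(self, local, remote):
        self.local = local
        self.remote = remote

    def get(self, score):
        if score in self.local:
            return self.local[score]
        data = self.remote[score]   # fetch from the public venti
        self.local[score] = data    # write back for next time
        return data

local, remote = {}, {"abc": b"block"}
proxy = VentiProxy(local, remote)
assert proxy.get("abc") == b"block"
assert "abc" in local   # cached locally after the first read
```

Because blocks are immutable and content-addressed, the write-back can never go stale, which is what makes this kind of proxy safe to bolt on.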


