On Fri, Apr 24, 2009 at 08:33:59AM -0700, ron minnich wrote:
>
> [snipped precisions about some of my notes]
>
> Not sure what you're getting at here, but you've barely scratched the surface.
The fact that I'm not a native English speaker does not help, and my
wording may come across as rude.
This was not in
On Thu, Apr 23, 2009 at 9:56 AM, wrote:
> clustermatic: not much left from lanl
This is a long story and the situation is less a comment on the
software than on the organization. I only say this because, almost 5
years after our last release,
there are still people out there using it.
> beowu
On Sat, Apr 18, 2009 at 08:05:50AM -0700, ron minnich wrote:
>
> For cluster work that was done in the OS, see any clustermatic
> publication from minnich, hendriks, or watson, ca. 2000-2005.
FWIW, I haven't found much left, and finally purchased your (et al.)
article about HARE: The Right-Wei
> Not to beat a (potentially) dead horse (even further) to death, but if we
> had some way of knowing that files were actually data (i.e. not ctl files;
> cf. QTDECENT) we could do more prefetching in a proxy -- e.g. cfs could be
> modified to read entire files into its cache (presumably it woul
On Thu, Apr 23, 2009 at 01:07:58PM +0800, sqweek wrote:
> Ken Thompson wrote:
> | Now for the question. 9P could probably be speeded up, for large
> | reads and writes, by issuing many smaller reads in parallel rather
> | than serially. Another try would be to allow the client of the
> | filesystem
2009/4/21 erik quanstrom :
>> http://moderator.appspot.com/#15/e=c9&t=2d
>
> "You must have JavaScript enabled in order to use this feature.
>
> cruel irony.
No silver bullet, unfortunately :)
Ken Thompson wrote:
| HTTP is really TCP/IP - a reliable stream transport. 9P is a
| filesystem protocol
2009/4/21 erik quanstrom :
>> i was trying to point out that if you try to
>> ignore the issue by removing flush from the
>> protocol, you'll get a system that doesn't work so smoothly.
>
> your failure cases seem to rely on poorly chosen tags.
> i wasn't suggesting that flush be eliminated. i was
> i was trying to point out that if you try to
> ignore the issue by removing flush from the
> protocol, you'll get a system that doesn't work so smoothly.
your failure cases seem to rely on poorly chosen tags.
i wasn't suggesting that flush be eliminated. i was
thinking of ways of keeping flush
2009/4/21 erik quanstrom :
> bundling is equivalent to running the original sequence on
> the remote machine and shipping only the result back. some
> rtt latency is eliminated but i think things will still be largely
> in-order because walks will act like fences. i think the lots-
> of-small-fil
2009/4/21 erik quanstrom :
>> plan 9 and inferno rely quite heavily on having flush,
>> and it's sometimes notable when servers don't implement it.
>> for instance, inferno's file2chan provides no facility
>> for flush notification, and wm/sh uses file2chan; thus if you
>> kill a process that's rea
On Tue, Apr 21, 2009 at 10:06 AM, roger peppe wrote:
> 2009/4/21 David Leimbach :
> > Roger... this sounds pretty promising.
>
> i dunno, there are always hidden dragons in this area,
> and forsyth, rsc and others are better at seeing them than i.
>
Perhaps... but this discussion, and "trying" i
2009/4/21 David Leimbach :
> Roger... this sounds pretty promising.
i dunno, there are always hidden dragons in this area,
and forsyth, rsc and others are better at seeing them than i.
> 10p? I'd hate to call it 9p++.
9p2010, based on how soon it would be likely to be implemented...
On Tue, Apr 21, 2009 at 9:25 AM, erik quanstrom wrote:
> > > if the problem with 9p is latency, then here's a decision that could be
> > > revisited. it would be a complication, but it seems to me better than
> > > a http-like protocol, bundling requests together or moving to a storage-
> > > ori
2009/4/21 Bakul Shah :
> In the pipelined case, from a server's perspective, client's
> requests just get to it faster (and may already be waiting!).
> It doesn't have to do anything special. What am I missing?
you're missing the fact that without the sequence operator, the
second request can arr
On Tue, Apr 21, 2009 at 1:19 AM, roger peppe wrote:
> 2009/4/20 andrey mirtchovski :
> >> with 9p, this takes a number of walks...
> >
> > shouldn't that be just one walk?
> >
> > % ramfs -D
> > ...
> > % mkdir -p /tmp/one/two/three/four/five/six
> > ...
> > % cd /tmp/one/two/three/four/five/six
On Tue, Apr 21, 2009 at 2:52 AM, maht wrote:
> one issue with multiple 9p requests is that tags are not order enforced
>
> consider the contrived directory tree
>
> 1/a/a
> 1/a/b
> 1/b/a
> 1/b/b
>
> Twalk 1 fid 1
> Twalk 2 fid a
> Twalk 3 fid b
>
> Tag 3 could conceivably arrive at the server bef
On Tue, 21 Apr 2009 17:03:07 BST roger peppe wrote:
> the idea with my proposal is to have an extension that
> changes as few of the semantics of 9p as possible:
>
> C->S Tsequence tag=1 sid=1
> C->S Topen tag=2 sid=1 fid=20 mode=0
> C->S Tread tag=3 sid=1 fid=20 count=8192
>
> > if the problem with 9p is latency, then here's a decision that could be
> > revisited. it would be a complication, but it seems to me better than
> > a http-like protocol, bundling requests together or moving to a storage-
> > oriented protocol.
>
> Can you explain why it is better than bundl
> plan 9 and inferno rely quite heavily on having flush,
> and it's sometimes notable when servers don't implement it.
> for instance, inferno's file2chan provides no facility
> for flush notification, and wm/sh uses file2chan; thus if you
> kill a process that's reading from wm/sh's /dev/cons,
> t
On Tue, 21 Apr 2009 10:50:18 EDT erik quanstrom wrote:
> On Tue Apr 21 10:34:34 EDT 2009, n...@lsub.org wrote:
> > Well, if you don't have flush, your server is going to keep a request
> > for each process that dies/aborts.
If a process crashes, who sends the Tflush? The server must
clean up w
2009/4/21 erik quanstrom :
> isn't the tag space per fid?
no, otherwise every reply message (and Tflush) would include a fid too;
moreover Tversion doesn't use a fid (although it probably doesn't
actually need a tag)
> a variation on the tagged queuing flush
> cache would be to force the client t
On Tue Apr 21 10:34:34 EDT 2009, n...@lsub.org wrote:
> Well, if you don't have flush, your server is going to keep a request
> for each process that dies/aborts. If requests always complete quite
> soon it's not a problem, AFAIK, but your server may be keeping the
> request to reply when something
> To: 9fans@9fans.net
> Reply-To: 9fans@9fans.net
> Date: Tue Apr 21 16:11:28 CET 2009
> Subject: Re: [9fans] Plan9 - the next 20 years
>
> On Tue Apr 21 10:05:43 EDT 2009, rogpe...@gmail.com wrote:
> > 2009/4/21 erik quanstrom :
> > > what is the important use c
2009/4/21 erik quanstrom :
> what is the important use case of flush and why is this
> so important that it drives the design?
actually the in-order delivery is most important
for Rmessages, but it's important for Tmessages too.
consider this exchange (C=client, S=server), where
the Tflush is sent
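The exchange itself is cut off above. As a toy sketch of the ordering property under discussion (the message tuples, tag numbers, and server behaviour below are illustrative assumptions, not taken from the original mail), in-order delivery means the server sees the Tflush strictly after the Tread it cancels:

```python
# Toy model of 9P flush over an in-order transport (C=client, S=server).
# A real server may also answer the old tag just before the Rflush;
# this sketch only shows the simple "abort and Rflush" case.

def server(inbox):
    """Handle T-messages in arrival order; yield R-messages."""
    pending = set()                  # tags not yet answered
    for msg in inbox:
        if msg[0] == "Tread":
            _, tag = msg
            pending.add(tag)         # pretend the read blocks (e.g. /dev/cons)
        elif msg[0] == "Tflush":
            _, tag, oldtag = msg
            pending.discard(oldtag)  # abort the blocked request
            yield ("Rflush", tag)    # once the client sees this, oldtag is free

# client: Tread tag=1, gives up waiting, then Tflush tag=2 oldtag=1
replies = list(server([("Tread", 1), ("Tflush", 2, 1)]))
print(replies)  # -> [('Rflush', 2)]
```

Because the transport preserves order, the client can only safely reuse tag 1 after the Rflush arrives; that is exactly why out-of-order delivery would break flush.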
On Tue Apr 21 10:05:43 EDT 2009, rogpe...@gmail.com wrote:
> 2009/4/21 erik quanstrom :
> > what is the important use case of flush and why is this
> > so important that it drives the design?
>
[...]
> "The 9P protocol must run above a reliable transport protocol with
> delimited messages. [...]
>
On Tue Apr 21 06:25:49 EDT 2009, rogpe...@gmail.com wrote:
> 2009/4/21 maht :
> > Tag 3 could conceivably arrive at the server before Tag 2
>
> that's not true, otherwise the flush semantics wouldn't
> work correctly. 9p *does* require in-order delivery.
i have never needed to do anything importa
i wrote:
> the currently rather complex definition of Twalk could
> be replaced by clone and walk1 instead, as
> in the original 9p: {Tclone, Twalk, Twalk, ...}
i've just realised that the replacement would be
somewhat less efficient than the current Twalk, as the
cloned fid would still have to be c
2009/4/21 maht :
> Tag 3 could conceivably arrive at the server before Tag 2
that's not true, otherwise the flush semantics wouldn't
work correctly. 9p *does* require in-order delivery.
one issue with multiple 9p requests is that tags are not order enforced
consider the contrived directory tree
1/a/a
1/a/b
1/b/a
1/b/b
Twalk 1 fid 1
Twalk 2 fid a
Twalk 3 fid b
Tag 3 could conceivably arrive at the server before Tag 2
2009/4/20 andrey mirtchovski :
>> with 9p, this takes a number of walks...
>
> shouldn't that be just one walk?
>
> % ramfs -D
> ...
> % mkdir -p /tmp/one/two/three/four/five/six
> ...
> % cd /tmp/one/two/three/four/five/six
> ramfs 640160:<-Twalk tag 18 fid 1110 newfid 548 nwname 6 0:one 1:two
> 2
On Mon, 20 Apr 2009 16:33:41 EDT erik quanstrom wrote:
> let's take the path /sys/src/9/pc/sdata.c. for http, getting
> this path takes one request (with the prefix http://$server)
> with 9p, this takes a number of walks, an open. then you
> can start with the reads. only the reads may be done
> with 9p, this takes a number of walks...
shouldn't that be just one walk?
% ramfs -D
...
% mkdir -p /tmp/one/two/three/four/five/six
...
% cd /tmp/one/two/three/four/five/six
ramfs 640160:<-Twalk tag 18 fid 1110 newfid 548 nwname 6 0:one 1:two
2:three 3:four 4:five 5:six
ramfs 640160:->Rwalk ta
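The trace shows all six path elements carried in a single Twalk. A minimal sketch of how that message is laid out on the wire, following the 9P2000 format (little-endian size[4] type[1] tag[2] fid[4] newfid[4] nwname[2] nwname*wname[s]); the tag and fid values are taken from the trace above:

```python
import struct

def pack_twalk(tag, fid, newfid, names):
    """Pack a 9P2000 Twalk: size[4] type[1]=110 tag[2] fid[4] newfid[4]
    nwname[2] nwname*(wname[s]); all fields little-endian."""
    body = struct.pack("<BHIIH", 110, tag, fid, newfid, len(names))
    for n in names:
        b = n.encode("utf-8")
        body += struct.pack("<H", len(b)) + b   # s = length[2] + bytes
    return struct.pack("<I", 4 + len(body)) + body

# the single six-element walk from the ramfs trace above
msg = pack_twalk(18, 1110, 548, ["one", "two", "three", "four", "five", "six"])
print(len(msg))  # -> 51: the whole six-element walk in one small message
```

One RPC, one round trip, regardless of path depth (up to the protocol's limit of sixteen names per walk).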
On Mon, Apr 20, 2009 at 1:33 PM, erik quanstrom wrote:
> > not that i can think of. but that addresses throughput, but not latency.
>
> Right, but with better throughput overall, you can "hide" latency in some
> applications. That's what HTTP does with this AJAX fun right?
>
> Show some of the page, load the rest over time, and people "feel bett
On Mon, Apr 20, 2009 at 12:03 PM, erik quanstrom wrote:
> > > > Tread tag fid offset count
> > > >
> > > > Rread tag count data
> > >
> > > without having the benefit of reading ken's thoughts ...
> > >
> > > you can have 1 fd being read by 2 procs at the same time.
> > > the only way to do this
> Thus running multiple reads (on the same file) only really
> works for files which operate as read disks - e.g. real disks,
> ram disks etc.
at which point, you have reinvented aoe. :-)
- erik
> I thought 9p had tagged requests so you could put many requests in flight
> at
> once, then synchronize on them when the server replied.
This is exactly what fcp(1) does, which is used by replica.
If you want to read a virtual file however, these often
don't support seeks or implement them in u
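fcp's trick of keeping many tagged reads in flight can be sketched with ordinary file I/O; the chunk size, worker count, and helper names below are arbitrary choices for illustration, not fcp's actual parameters:

```python
# Sketch of fcp-style parallel reads: issue many fixed-offset reads
# concurrently and reassemble by offset (the role 9P tags play).
import os, tempfile
from concurrent.futures import ThreadPoolExecutor

CHUNK = 8192

def read_at(path, off):
    with open(path, "rb") as f:   # independent fd per outstanding request
        f.seek(off)
        return off, f.read(CHUNK)

def parallel_read(path):
    size = os.path.getsize(path)
    offs = range(0, size, CHUNK)
    with ThreadPoolExecutor(max_workers=8) as ex:
        parts = dict(ex.map(lambda o: read_at(path, o), offs))
    return b"".join(parts[o] for o in offs)   # reassemble in order

# demo on a throwaway file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 20000)
data = parallel_read(f.name)
os.unlink(f.name)
```

The offset plays the role of the 9P tag here: replies may complete in any order and are stitched back together at the end, which is also why the scheme fails for streaming virtual files that ignore the offset.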
> > > Tread tag fid offset count
> > >
> > > Rread tag count data
> >
> > without having the benefit of reading ken's thoughts ...
> >
> > you can have 1 fd being read by 2 procs at the same time.
> > the only way to do this is by having multiple outstanding tags.
>
>
> I thought the tag was ass
On Mon, Apr 20, 2009 at 11:35 AM, erik quanstrom wrote:
> > I thought 9p had tagged requests so you could put many requests in flight
> at
> > once, then synchronize on them when the server replied.
> >
> > Maybe i misunderstand the application of the tag field in the protocol
> then?
> >
> > Trea
> For speed of light in fibre optic 30ms is about 8000km (New York to San
> Francisco and back)
> in that 30ms a 3.2Ghz P4 could do 292 million instructions
i think that's just enough to get to dbus and back.
I did the experiment, for the o/live, of issuing multiple (9p) RPCs
in parallel, without waiting for answers.
In general it was not enough, because in the end the client had to block
and wait for the file to come before looking at it to issue further rpcs.
On Mon, Apr 20, 2009 at 8:03 PM, Skip
> I thought 9p had tagged requests so you could put many requests in flight at
> once, then synchronize on them when the server replied.
>
> Maybe i misunderstand the application of the tag field in the protocol then?
>
> Tread tag fid offset count
>
> Rread tag count data
without having the b
J.R. Mauro wrote:
What kind of latency?
For speed of light in fibre optic 30ms is about 8000km (New York to San
Francisco and back)
Assuming you have a direct fiber connection with no routers in
between. I would say that is somewhat rare.
The author found that from klondike.cis.upen
On Mon, Apr 20, 2009 at 11:03 AM, Skip Tavakkolian <9...@9netics.com> wrote:
> > 9p is efficient as long as your latency is <30ms
>
> check out ken's answer to a question by sqweek. the question
> starts: "With cross-continental round trip times, 9p has a hard time
> competing (in terms of throug
>
> What kind of latency?
>
> For speed of light in fibre optic 30ms is about 8000km (New York to San
> Francisco and back)
Assuming you have a direct fiber connection with no routers in
between. I would say that is somewhat rare.
> http://moderator.appspot.com/#15/e=c9&t=2d
"You must have JavaScript enabled in order to use this feature.
cruel irony.
- erik
> 9p is efficient as long as your latency is <30ms
check out ken's answer to a question by sqweek. the question
starts: "With cross-continental round trip times, 9p has a hard time
competing (in terms of throughput) against less general protocols like
HTTP. ..."
http://moderator.appspot.com/#15
9p is efficient as long as your latency is <30ms
What kind of latency?
For speed of light in fibre optic 30ms is about 8000km (New York to San
Francisco and back)
in that 30ms a 3.2Ghz P4 could do 292 million instructions
There's an interesting article about it in acmq queue20090203-dl
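A quick back-of-envelope check of those figures (the fibre refractive index of roughly 1.5 and the implied instructions-per-cycle are my assumptions):

```python
# Rough check of the numbers quoted above.
c = 3.0e8                  # m/s, speed of light in vacuum
rtt = 0.030                # the 30 ms round trip under discussion
fibre = c / 1.5            # ~2e8 m/s in glass
print(fibre * rtt / 1e3)   # -> 6000.0 km of fibre covered in one RTT
cycles = 3.2e9 * rtt       # 96 million clock cycles in 30 ms
print(292e6 / cycles)      # the quoted 292M instructions implies ~3 per cycle
```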
On Mon, Apr 20, 2009 at 4:14 AM, Skip Tavakkolian <9...@9netics.com> wrote:
> ericvh stated it better in the "FAWN" thread. choosing the abstraction
> that makes the resulting environments have required attributes
> (reliable, consistent, easy, etc.) will be the trick. i believe with
> the curren
ericvh stated it better in the "FAWN" thread. choosing the abstraction
that makes the resulting environments have required attributes
(reliable, consistent, easy, etc.) will be the trick. i believe with
the current state of the Internet -- e.g. lack of speed and security
-- service abstraction i
On Sun, Apr 19, 2009 at 12:34 PM, Enrico Weigelt wrote:
> I'm currently in the process of designing a clustered storage,
> inspired by venti and git, which also supports removing files,
> on-demand sychronization, etc. I'll let you know when I've
> got something to present.
The only presentatio
* Latchesar Ionkov wrote:
Hi,
> I talked with a guy who is doing parallel filesystem work, and
> according to him 80% of all filesystem operations when running an HPC
> job are for checkpointing (not that much restart). I just don't see
> how checkpointing can scale knowing how bad the parall
On Sun, Apr 19, 2009 at 12:12 AM, Skip Tavakkolian <9...@9netics.com> wrote:
> > Well, in the octopus you have a fixed part, the pc, but all other
> > machines come and go. The feeling is very much that your stuff is in
> > the cloud.
>
> i was going to mention this. to me the current view of clo
> Well, in the octopus you have a fixed part, the pc, but all other
> machines come and go. The feeling is very much that your stuff is in
> the cloud.
i was going to mention this. to me the current view of cloud
computing as evidenced by papers like this[1] is basically hardware
infrastructu
On Sat, Apr 18, 2009 at 7:31 PM, Charles Forsyth wrote:
> this discussion of checkpoint/restart reminds me of
> a hint i was given years ago: if you wanted to break into a system,
> attack through the checkpoint/restart system. i won a jug of
> beer for my subsequent successful attack which involv
this discussion of checkpoint/restart reminds me of
a hint i was given years ago: if you wanted to break into a system,
attack through the checkpoint/restart system. i won a jug of
beer for my subsequent successful attack which involved patching
the disc offset for an open file in a copy of the Sla
A checkpoint restart package.
https://ftg.lbl.gov/CheckpointRestart/CheckpointRestart.shtml
>
> I _do_ think yours should come first! Having to say: "yes" to a user...
If you don't say 'yes' at some point, you won't have a system anyone
will want to use. Remember all those quotes about why Unix doesn't
prevent you from doing stupid things?
> I _do_ think yours should come first! Having to say: "yes" to a user...
sometimes, when the user is the military-industrial complex, one has
no choice but to say "yes" ;)
On Sat, Apr 18, 2009 at 12:20 PM, ron minnich wrote:
>
> I'll say it again. It does not matter what we think. It matters what
> apps do. And some apps have multiple processes accessing one file.
>
> As to the wisdom of such access, there are many opinions :-)
>
> You really can not just rule thing
On Sat, Apr 18, 2009 at 12:20 PM, ron minnich wrote:
> On Sat, Apr 18, 2009 at 9:10 AM, J.R. Mauro wrote:
>
>> I agree that generally only one process will be accessing a "normal"
>> file at once. I think an editor is not a good example, as you say.
>>
>
> I'll say it again. It does not matter wh
On Sat Apr 18 12:21:49 EDT 2009, rminn...@gmail.com wrote:
> On Sat, Apr 18, 2009 at 9:10 AM, J.R. Mauro wrote:
>
> > I agree that generally only one process will be accessing a "normal"
> > file at once. I think an editor is not a good example, as you say.
> >
>
> I'll say it again. It does not
On Sat, Apr 18, 2009 at 9:10 AM, J.R. Mauro wrote:
> I agree that generally only one process will be accessing a "normal"
> file at once. I think an editor is not a good example, as you say.
>
I'll say it again. It does not matter what we think. It matters what
apps do. And some apps have multip
On Sat, Apr 18, 2009 at 11:11 AM, erik quanstrom wrote:
> On Sat Apr 18 11:08:21 EDT 2009, rminn...@gmail.com wrote:
>> On Sat, Apr 18, 2009 at 6:50 AM, erik quanstrom
>> wrote:
>>
>> > in a plan 9 system, the only files that i can think of which many processes
>> > have open at the same time ar
On Sat, Apr 18, 2009 at 9:50 AM, erik quanstrom wrote:
>> > * you can get the same effect by increasing the scale of your system.
>> >
>> > * the reason conventional systems work is not, in my opinion, because
>> > the collision window is small, but because one typically doesn't do
>> > conflictin
On Sat, Apr 18, 2009 at 08:05:50AM -0700, ron minnich wrote:
>
> For cluster work that was done in the OS, see any clustermatic
> publication from minnich, hendriks, or watson, ca. 2000-2005.
Will do.
--
Thierry Laronde (Alceste)
http://www.kergis.com/
On Sat Apr 18 11:08:21 EDT 2009, rminn...@gmail.com wrote:
> On Sat, Apr 18, 2009 at 6:50 AM, erik quanstrom wrote:
>
> > in a plan 9 system, the only files that i can think of which many processes
> > have open at the same time are log files, append-only files. just reopening
> > log file would
I talked with a guy who is doing parallel filesystem work, and
according to him 80% of all filesystem operations when running an HPC
job are for checkpointing (not that much restart). I just don't see
how checkpointing can scale knowing how bad the parallel fs are.
Lucho
On Fri, Apr 17, 20
On Sat, Apr 18, 2009 at 6:50 AM, erik quanstrom wrote:
> in a plan 9 system, the only files that i can think of which many processes
> have open at the same time are log files, append-only files. just reopening
> log file would solve the problem.
you're not thinking in terms of parallel applica
On Sat, Apr 18, 2009 at 4:59 AM, wrote:
> But my gut feeling, after reading about Mach or reading A. Tanenbaum
> (that I find poor---but he is A. Tanenbaum, I'm only T. Laronde),
> is that a cluster is above the OS (a collection of CPUs), but a
> NUMA is for the OS an atom, i.e. is below the OS,
> i'm not sure why editor is the case that's being bandied about.
I'm not sure why anyone should be listening to my ramblings...
I assume that C/R or migration is not an atomic operation. If it were
atomic, that's the entire problem dealt with. If it's not atomic,
there are potential race condi
[I reply to myself because I was replying half on two distinct threads]
On Sat, Apr 18, 2009 at 01:59:03PM +0200, tlaro...@polynum.com wrote:
>
> But my gut feeling, after reading about Mach or reading A. Tanenbaum
> (that I find poor---but he is A. Tanenbaum, I'm only T. Laronde),
> is that a cl
> > * you can get the same effect by increasing the scale of your system.
> >
> > * the reason conventional systems work is not, in my opinion, because
> > the collision window is small, but because one typically doesn't do
> > conflicting edits to the same file.
> >
> > * saying that something "is
I assumed cloud computing means you can log into any node
that you are authorised to and your data and code will migrate
to you as needed.
The idea being the sam -r split is not only dynamic but on demand,
you may connect to the cloud from your phone just to read your email
so the editor session s
On Fri, Apr 17, 2009 at 03:15:25PM -0700, ron minnich wrote:
> if you want to look at checkpointing, it's worth going back to look at
> Condor, because they made it really work. There are a few interesting
> issues that you need to get right. You can't make it 50% of the way
> there; that's not use
> there's no guarantee to a process running in a conventional
> environment that files won't change underfoot. why would
> condor extend a new guarantee?
Because you have to migrate standard applications, not only
applications that allow for migration. Consider a word processing
session, for exa
On Sat, Apr 18, 2009 at 12:16 AM, erik quanstrom wrote:
>> On Fri, Apr 17, 2009 at 11:37 PM, erik quanstrom
>> wrote:
>> >> I can imagine a lot of problems stemming from open files could be
>> >> resolved by first attempting to import the process's namespace at the
>> >> time of checkpoint and,
> the original condor just forwarded system calls back to the node it
> was started from. Thus all system calls were done in the context of
> the originating node and user.
Not much good if you're migrating because the node's gone down. What
happens then? Sorry to ask, RTFM seems a bit beyond my
On Sat, Apr 18, 2009 at 12:16 AM, erik quanstrom wrote:
>> But I'll say that if anyone tries to solve these problems today, they
>> should not fall into the same trap, [...]
>
> yes. forward thinking was just the thing that made multics
> what it is today.
>
> it is equally a trap to try to prog
> Speaking of NUMA and such though, is there even any support for it in the
> kernel?
> I know we have a 10gb Ethernet driver, but what about cluster interconnects
> such as InfiniBand, Quadrics, or Myrinet? Are such things even desired in
> Plan 9?
there is no explicit numa support in the pc
> On Fri, Apr 17, 2009 at 11:37 PM, erik quanstrom
> wrote:
> >> I can imagine a lot of problems stemming from open files could be
> >> resolved by first attempting to import the process's namespace at the
> >> time of checkpoint and, upon that failing, using cached copies of the
> >> file made a
> But I'll say that if anyone tries to solve these problems today, they
> should not fall into the same trap, [...]
yes. forward thinking was just the thing that made multics
what it is today.
it is equally a trap to try to prognosticate too far in advance.
one increases the likelihood of failu
On Fri, Apr 17, 2009 at 11:56 PM, erik quanstrom wrote:
>> Vidi also seems to be an attempt to make Venti work in such a dynamic
>> environment. IMHO, the assumption that computers are always connected
>> to the network was a fundamental mistake in Plan 9
>
> on the other hand, without this assump
On Fri, Apr 17, 2009 at 11:37 PM, erik quanstrom wrote:
>> I can imagine a lot of problems stemming from open files could be
>> resolved by first attempting to import the process's namespace at the
>> time of checkpoint and, upon that failing, using cached copies of the
>> file made at the time of
> Vidi also seems to be an attempt to make Venti work in such a dynamic
> environment. IMHO, the assumption that computers are always connected
> to the network was a fundamental mistake in Plan 9
on the other hand, without this assumption, we would not have 9p.
it was a real innovation to dispens
> I can imagine a lot of problems stemming from open files could be
> resolved by first attempting to import the process's namespace at the
> time of checkpoint and, upon that failing, using cached copies of the
> file made at the time of checkpoint, which could be merged later.
there's no guarant
On Fri, Apr 17, 2009 at 10:39 PM, ron minnich wrote:
> On Fri, Apr 17, 2009 at 7:06 PM, J.R. Mauro wrote:
>
>> Yeah, the problem's bigger than I thought (not surprising since I
>> didn't think much about it). I'm having a hard time figuring out how
>> Condor handles these issues. All I can see fr
On Fri, Apr 17, 2009 at 7:06 PM, J.R. Mauro wrote:
> Yeah, the problem's bigger than I thought (not surprising since I
> didn't think much about it). I'm having a hard time figuring out how
> Condor handles these issues. All I can see from the documentation is
> that it gives you warnings.
the o
On Fri, Apr 17, 2009 at 7:01 PM, ron minnich wrote:
> On Fri, Apr 17, 2009 at 3:35 PM, J.R. Mauro wrote:
>
>> Amen. Linux is currently having a seriously hard time getting C/R
>> working properly, just because of the issues you mention. The second
>> you mix in non-local resources, things get pea
On Fri, Apr 17, 2009 at 3:35 PM, J.R. Mauro wrote:
> Amen. Linux is currently having a seriously hard time getting C/R
> working properly, just because of the issues you mention. The second
> you mix in non-local resources, things get pear-shaped.
it's not just non-local. It's local too.
you ar
On Fri, Apr 17, 2009 at 6:15 PM, ron minnich wrote:
> if you want to look at checkpointing, it's worth going back to look at
> Condor, because they made it really work. There are a few interesting
> issues that you need to get right. You can't make it 50% of the way
> there; that's not useful. You
if you want to look at checkpointing, it's worth going back to look at
Condor, because they made it really work. There are a few interesting
issues that you need to get right. You can't make it 50% of the way
there; that's not useful. You have to hit all the bits -- open /tmp
files, sockets, all of
Well, in the octopus you have a fixed part, the pc, but all other
machines come and go. The feeling is very much that your stuff is in
the cloud.
I mean, not everything has to be dynamic.
On 17/04/2009, at 22:17, eri...@gmail.com wrote:
On Fri, Apr 17, 2009 at 2:43 PM, wrote:
On Fr
On Fri, Apr 17, 2009 at 4:14 PM, Eric Van Hensbergen wrote:
> On Fri, Apr 17, 2009 at 2:43 PM, wrote:
>> On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
>>> I cannot find the reference (sorry), but I read an interview with Ken
>>> (Thompson) a while ago.
>>>
>>
>> My interpretation
Speaking of NUMA and such though, is there even any support for it in the
kernel?
I know we have a 10gb Ethernet driver, but what about cluster interconnects
such as InfiniBand, Quadrics, or Myrinet? Are such things even desired in Plan
9?
I'm glad see process migration has been mentioned
Steve Simon wrote:
> I cannot find the reference (sorry), but I read an interview with Ken
> (Thompson) a while ago.
>
> He was asked what he would change if he were working on plan9 now,
> and his reply was something like "I would add support for cloud computing".
Perhaps you were thinking of his
On Fri, Apr 17, 2009 at 2:43 PM, wrote:
> On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
>> I cannot find the reference (sorry), but I read an interview with Ken
>> (Thompson) a while ago.
>>
>
> My interpretation of cloud computing is precisely the split done by
> plan9 with termin
On Fri, Apr 17, 2009 at 3:43 PM, wrote:
> On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
>> I cannot find the reference (sorry), but I read an interview with Ken
>> (Thompson) a while ago.
>>
> He was asked what he would change if he were working on plan9 now,
>> and his reply was
On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
> I cannot find the reference (sorry), but I read an interview with Ken
> (Thompson) a while ago.
>
> He was asked what he would change if he were working on plan9 now,
> and his reply was something like "I would add support for cloud co
On Fri, Apr 17, 2009 at 3:16 PM, Steve Simon wrote:
> I cannot find the reference (sorry), but I read an interview with Ken
> (Thompson) a while ago.
>
> He was asked what he would change if he were working on plan9 now,
> and his reply was something like "I would add support for cloud computing".