Beware of the latency.

Plan B usage shows that latency is the biggest problem.
Admittedly, only in unions; but if you join N different file servers
in a directory, and each one duplicates the latency, it may be a problem.
A workaround is just not to join too many servers, but that's a workaround,
not a fix.

Anyway, when I complete volfs we'll see if it's bearable or not.
If it is, I'll be happy to discard the kernel changes.  If it's not,
we'll have to see why.

On 9/10/05, Gorka guardiola <[EMAIL PROTECTED]> wrote:
> On 9/9/05, Russ Cox <[EMAIL PROTECTED]> wrote:
> > > Funny. The 9p reliability project looks to me a lot like the redirfs
> > > that we played with before introducing the Plan B volumes into the kernel.
> > > It provided failover (on replicated FSs, by some other means) and could
> > > recover the fids just by keeping track of their paths.
> 
> It is almost the same thing. The difference is that one is 9P-9P
> and the other is 9P-syscall (read, write, ...). We didn't have a Plan 9
> kernel on the other side, so we couldn't use redirfs; that is why we
> ported recover.
> It is better for Linux (p9p) too, as it doesn't require mounting the
> filesystem before using it, so you don't depend on having someone to
> serve the 9P files by mounting them. I am not sure about Plan 9. On
> one hand, recover knows more about the stuff under it, so it has more
> granularity, and can fail in a nicer way. On the other hand, redirfs
> is much, much simpler.
> 
> > >
> > > The user-level process I'm working on now is quite similar to that (apart
> > > from including the language to select particular volumes): it maintains a
> > > fid table knowing which server, and which path within the server,
> > > correspond to each fid. It's what the Plan B kernel ns does, but within a server.
> > >
> > > Probably, the increase in latency you are seeing is the one I'm going to
> > > see in volfs. The 2x penalty in performance is what one could expect,
> > > because you have twice the latency. However, the in-kernel implementation
> > > has no penalty at all, because the kernel can rewrite the mount tables.
> > >
> > > Maybe we should talk about this.
> > > Eric? Russ? What do you say? Is it worth paying the extra
> > > latency just to avoid a change (serious, I admit) in the kernel?
> >
> > I keep seeing that 2x number but I still don't believe it's actually
> > reasonable to measure the hit on an empty loopback file server.
> > Do something over a 100Mbps file server connection
> > talking to fossil and see what performance hit you get then.
> 
> Yes, this increase in latency is on the loopback. If you are
> using a network it is probably completely lost in the noise, the
> network being probably 100 times slower than the loopback.
> 
> >
> > Stuff in the kernel is much harder to change and debug.
> > Unless there's a compelling reason, I'd like to see it stay
> > in user space.  And I'm not yet convinced that performance
> > is a compelling reason.
> 
> I agree, though it depends on the application. For us (normal users) I
> agree completely that it is not compelling. Some people out there are
> doing stuff in which performance is important (let them write the
> code? :-)).
> --
> - curiosity sKilled the cat
>
