In Plan B (the one with the changed kernel) I had a kernel
process doing cclose for channels that were on gone volumes.
Perhaps that was the same thing. I also noticed that the list
of "closing chans" could grow a lot.

But isn't the problem the failure to declare a connection
as broken even when it is [e.g., when its server process is
dead or broken]?

For example, while we were using IL, the connections were
nicely collected and declared broken; after we switched to
TCP, however, some endpoints stayed around for a very long
time.

In short, wouldn't it be better to officially "break" an
already-broken connection than to use a pool of closing
processes? [NB: I used just one process, not a pool, but
I think the problem and the workaround remain the same.]

Note that if a connection is broken, there is no need to
send clunks for chans going through it.
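
If I remember devmnt.c right, mntclose is just a call to mntclunk;
the change I have in mind is roughly the sketch below, where the
broken flag and connbroken() are hypothetical, i.e. the thing
being proposed, not something that exists today:

	/* sketch of "declare it broken": the broken flag and
	 * connbroken() are hypothetical; mntchk/mntclunk are meant
	 * to be the existing devmnt.c helpers.  same includes as
	 * the sketch above. */
	void
	connbroken(Mnt *m)
	{
		/* call this when a read/write on the transport chan
		 * fails, e.g. because the server process is dead */
		m->broken = 1;		/* hypothetical flag in Mnt */
	}

	static void
	mntclose(Chan *c)
	{
		Mnt *m;

		m = mntchk(c);		/* the mount this chan rides on */
		if(m->broken)
			return;		/* server gone: skip the Tclunk entirely */
		mntclunk(c, Tclunk);	/* normal path: clunk the fid */
	}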

What do you think?
Thanks

On 1/5/06, Bruce Ellis <[EMAIL PROTECTED]> wrote:
> thanks.  very precise.
>
> On 1/5/06, Russ Cox <[EMAIL PROTECTED]> wrote:
> > > Could we all see that code as well?
> >
> > The code looks at c->dev in cclose to see if the
> > chan is from devmnt.  If so, cclose places the chan on
> > a queue rather than calling devtab[c->dev]->close()
> > and chanfree() directly.  A pool of worker processes
> > tend the queue, like in exportfs, calling close() and
> > chanfree() themselves.
> >
> > Russ
>
>
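
For reference, I read the hook described above as something like
the sketch below. I haven't seen the actual patch; queueclose
stands for whatever feeds the worker pool, and the 'M' test is
just one way to recognise a devmnt chan:

	/* sketch only: a cclose that defers devmnt closes to worker
	 * procs instead of calling close()+chanfree() inline. */
	void
	cclose(Chan *c)
	{
		if(decref(c))
			return;				/* still referenced elsewhere */
		if(devtab[c->type]->dc == 'M'){		/* chan comes from devmnt */
			queueclose(c);			/* a worker will close() and chanfree() it */
			return;
		}
		if(!waserror()){
			devtab[c->type]->close(c);
			poperror();
		}
		chanfree(c);
	}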
