First of all, the patch was just to get to the point of compilability;
other people said they would "take it from there".  I don't
have a NetWare server to test against in my apartment.  I'd be just as
happy to _let_ the other people who wanted to "take it from there" do
so, now that I've made it compile.

That said...

Julian Elischer wrote:
> some comments:
> firstly:
> 
>   ncp_conn_locklist(int flags, struct proc *p)
>   {
> !       struct thread *td = TAILQ_FIRST(&p->p_threads);  /* XXX */
> !
> !       return lockmgr(&listlock, flags | LK_CANRECURSE, 0, td);
>   }
> 
> can't you use unidifs? :-)

Only if I want the code to be unreadable.  8-) 8-).  Can't you
apply the patch to a -current tree checked out for that purpose,
and get whatever flavor of diffs you want, e.g. "cvs diff -u"?
Unidiffs don't support "fuzz", and unless you are committing the
thing, I'd rather not have to recreate it iteratively until it
passes someone's filter.  A context diff gives me that.


> ok....
> there is a Macro to find the first thread in a process.
> FIRST_THREAD_IN_PROC(p)

I didn't see this.  It's definitely the way to go.

Any chance of that being documented any time soon, so that no
one else does the obvious thing instead, like I did?


> I use it because it makes it easy to find the
> places that are DEFINITELY BROKEN. The marker for a KSE breakage is:
> XXXKSE, but places that use FIRST_THREAD_IN_PROC(p) are marked
> implicitly since nearly any time it is used, the code is trying to
> do something that doesn't make sense. (except in fork and
> maybe exit and exec, and in things like linux emulation
> where you KNOW there is only one thread).
> 
> if you see TAILQ_FIRST(&p->p_threads) (or FIRST_THREAD_IN_PROC(p)) you
> can pretty much guarantee that what is happening there needs to be
> completely rewritten, possibly to the extent of rewriting its callers
> and the callers' callers. That is the problem I hit when trying to
> convert it to start with...

Actually, I really wanted a "LAST_THREAD_IN_PROC(p)".  The
reality is that I want the most recently inserted thread to use
as a blocking context, on the theory that it would not be used
until much later, and that all the pages it cares about are much
more likely to be in core, particularly on a process with tons
of threads.

The only reason it's being used at all is that the lockmgr()
calls need a blocking context, and it's an explicit parameter
(arguably, it should just be curthread).

I can edit the patch (since it's not a unidiff 8-) 8-) 8-)),
or I can post a new one, if you want (it's only 9K, compressed).

But realize that all it means is "give me a thread that I can
use as a blocking context while I'm waiting on lockmgr()".


> Since I don't know the way that process IDs come into the session
> control (you have to understand the protocol for that) I basically hit
> a wall on trying to work out what to rewrite, and how.

The wire protocol is always request/response.  Always.  As I
stated before, the only exception is a lock, with or without a
timeout.  In that case, you get a synchronous response to
your synchronous request, which basically means "request has
been queued for servicing", and you later get a separate
notification.

The notification is by connection.  The connection is per process,
because we are talking about a connection where the credentials
are associated with the connection in question.  The connection
thus provides both a credential context and a "state" context
(waiting for request vs. request in progress).  Making additional
requests over the same VC will result in a "request in progress,
go away you dork" response to the client.

When an async response to a lock request comes in, it comes in on
a separate VC (each connection has 3 VCs, only 2 of which are
normally used: one for request/response, and one for
async notifications, which is overloaded to handle lock responses).
The connection is mapped back to a process to map it back to the
blocker.

What this basically means is that NCPs can't be handled as
multithreaded without establishing a VC per thread.  It just
does not work.  Therefore NCP requests are going to serialize in
the kernel, no matter what you do in happy-thread-town.

If you are asking for the code to be thread safe, then you are
basically talking about multithreading the whole stack:

NWFS <-> NCP Client <-> IPX <-> IP

Probably it would be a better idea if TCP/IP were multithreaded
first?


> BTW the obnoxious FIRST_THREAD_IN_PROC will go away when
> we have got rid of most of the broken code and be replaced in places
> for which it is ok with p2td() which will be:
> 
> __inline struct thread *
> p2td(struct proc *p)
> {
>         KASSERT(((p->p_flag & P_KSES) == 0),
>                 ("Threaded program uses p2td()"));
>         return (TAILQ_FIRST(&p->p_threads));
> }

Uh, how exactly is that less obnoxious, given it's the same code
with a different name and an obnoxious inline instead of a macro?
8-).


> You can always get from a thread to a single process but the reverse
> always presents the question "which thread?". This question can only be
> answered with external information or if you know there is only one
> thread at this moment.

The answer is that "the code doesn't care what thread"; it would
prefer to not have to think in terms of threads at all, but if
you want to force it to, then it's going to think in terms of
"blocking contexts for the benefit of FreeBSD code it calls",
and nothing else.

I'll let the author of the NCP code and/or someone with a
NetWare server to test against deal with setting up additional
virtual circuits on a given connection: the NetWare credentials
are pushed on each one, and there's a count limit, so this is
not as easy as you'd naively think.

I'd rather not deal with that, given that I have a detailed
knowledge of NetWare internals, along with a non-disclosure
agreement with Novell.

Did you want me to update the patch to use your FIRST_THREAD_IN_PROC
macro and resend it?

-- Terry
