The following [RFC] is a mostly complete set of patches that moves
gpn_id and gnn_id into fc_rport.c. They become part of the RP state
machine as its first two states.

These patches also start the RP state machine from a work thread.
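
To make that concrete, here's a minimal sketch of what kicking off the RP
state machine from a work thread could look like. The struct, field and
function names (fc_rport_priv_example, rp_work, and so on) are placeholders
made up for this sketch, not the names used in the patches; the point is
just that rport_login() queues a work item and the work function enters the
first state.

#include <linux/kernel.h>
#include <linux/workqueue.h>

struct fc_rport_priv_example {
	struct work_struct rp_work;	/* runs the RP state machine */
	/* identifiers, state, lock, etc. would live here */
};

/* Placeholder for the first real state; would send the GPN_ID request. */
static void fc_rport_enter_gpn_id_example(struct fc_rport_priv_example *rdata)
{
}

/* Work function: enter the first RP state outside the caller's context. */
static void fc_rport_work_example(struct work_struct *work)
{
	struct fc_rport_priv_example *rdata =
		container_of(work, struct fc_rport_priv_example, rp_work);

	fc_rport_enter_gpn_id_example(rdata);
}

/* rport_login() queues the work instead of sending a frame inline. */
static void fc_rport_login_example(struct fc_rport_priv_example *rdata)
{
	INIT_WORK(&rdata->rp_work, fc_rport_work_example);
	schedule_work(&rdata->rp_work);
}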

...

These patches are related to cleanup work I'm doing for the LP. I think
we need to decide what to do about fc_ns.c. Currently there are two parts
of that file that should be moved out. I think that gpn_id and gnn_id
should be moved into the RP state machine (that's what this patch-set
does) and I think that the LP registration with the name server should
be moved into LP. This would leave fc_ns.c with only discovery and RSCN,
which I would argue could be moved into LP. I don't care about that step
at this point though.

Assuming that everyone agrees with this direction, the state machines would
end up looking like this:

LP = FLOGI, (start RP for name server), REG_PN, REG_FT, SCR, READY (start DISC)

DISC = GPN_FT (starts RP for each discovered port)

RP = GPN_ID, GNN_ID, PLOGI, PRLI, RTV, READY
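
Spelled out as a rough enum, purely for illustration (these are not the
names from the patch-set, just the RP sequence above restated):

enum rp_state_example {
	RP_ST_GPN_ID,	/* get port name by ID from the name server */
	RP_ST_GNN_ID,	/* get node name by ID from the name server */
	RP_ST_PLOGI,	/* port login */
	RP_ST_PRLI,	/* process login */
	RP_ST_RTV,	/* read timeout value exchange */
	RP_ST_READY,	/* ready for I/O */
};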

The RP would skip certain states depending on what information it already
had when rport_login() was called. For example, we already have the WWPN
from GPN_FT, so when discovery calls rport_login(), that function will
determine that it can skip GPN_ID.
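
A rough sketch of that decision, reusing the example enum above; the struct
and field names here are made up for illustration, not taken from the
patches:

#include <linux/types.h>

/* Hypothetical, trimmed-down remote port just for this sketch. */
struct rport_example {
	u64 wwpn;			/* (u64)-1 means "not known yet" */
	u64 wwnn;			/* (u64)-1 means "not known yet" */
	enum rp_state_example state;
};

/* Pick the first RP state based on what the caller already knows. */
static void rport_login_example(struct rport_example *rp)
{
	if (rp->wwpn == (u64)-1)
		rp->state = RP_ST_GPN_ID;	/* nothing known, start at GPN_ID */
	else if (rp->wwnn == (u64)-1)
		rp->state = RP_ST_GNN_ID;	/* WWPN known, e.g. from GPN_FT */
	else
		rp->state = RP_ST_PLOGI;	/* both names known, go to PLOGI */
}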

Assuming these patches are committed after being polished a bit more, the
next steps would be:

1) Ensure the RP locking is sound (I'm sure it isn't sound for at least
   GPN_ID and GNN_ID after this patch-set)

2) Move REG_PN, REG_FT and SCR into LP

3) Ensure the LP locking is sound

I think the locking will be much simpler after these changes. The rport lock
will be used for all RP states and the lport lock will be used for all LP
states. We'll need to ensure that both are held when resetting, because the
fear is that we'll be reset just before sending a frame out.
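
To make the reset rule concrete, here's a sketch with made-up names,
assuming mutexes (the real code may well use spinlocks or a different
granularity):

#include <linux/mutex.h>

/*
 * Reset takes both locks so that it cannot race with an LP or RP state
 * that is just about to send a frame.
 */
static void reset_both_example(struct mutex *lp_mutex, struct mutex *rp_mutex)
{
	mutex_lock(lp_mutex);		/* guards all LP states */
	mutex_lock(rp_mutex);		/* guards all RP states */
	/* tear down the RP state machine, then the LP state machine */
	mutex_unlock(rp_mutex);
	mutex_unlock(lp_mutex);
}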

All feedback is welcome...

-- 
//Rob