> 
> I have not tried a dump across nfs for a very long time.  I believe it
> only exists for diskless clients - you would not want to use it on
> any regular systems.
> 

Just FYI on this point: when I rewrote the panic code long ago, I did make
dumps across NFS work, but in a very limited fashion.  What you do is
make an NFS-mounted swap file, and then dumpadm -d /path/to/file.  (This
is how swap is set up by default on diskless systems).  There are a number
of nasty issues here, though:

- At the time I did this (Solaris 8), we didn't have Nemo/GLDv3 and thus
  there was no way to put the NICs into polled mode.  The dump code wants
  to execute with interrupts disabled, but for NFS we need them, so there's
  this hack in nd_poll() that I added (see nfs_dump.c) to temporarily
  enable them while we're waiting for replies.  If we decide to care more
  about remote dumps in the future, this should all be changed to use
  the newer networking interfaces to put the stack in polled mode.

- There are other parts of the higher-level networking stack that can't work
  from panic context.  For example, if you don't already have, say, the ARP
  entries you need to reach the destination, you're not going to be able
  to perform those network transactions and obtain a connection.
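For reference, the swap-file setup described above looks roughly like this.
This is a sketch only: the server name, mount point, and file size are
placeholders, not values from any real configuration.

```shell
# Illustrative sketch -- dumpserver, the mount point, and the 2g size are
# hypothetical.  Assumes the NFS server exports a writable directory.
mount -F nfs dumpserver:/export/dump /var/crash/nfs

# Preallocate the swap file on the NFS mount.
mkfile 2g /var/crash/nfs/swapfile

# Point the dump subsystem at the file.
dumpadm -d /var/crash/nfs/swapfile
```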

So, fundamentally, if we ever decide to really care about this, there needs
to be a better design (perhaps not based on NFS at all) where the minimal
networking state needed is locked into place in advance by the dump subsystem.
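As one small illustration of what locking networking state in place in
advance might mean: a static ARP entry for the dump server removes one of
the transactions that can't happen from panic context.  The hostname and
MAC address below are placeholders.

```shell
# Hypothetical mitigation: pin the dump server's ARP entry permanently so
# panic-context code never depends on a live ARP exchange.
arp -s dumpserver 0:14:4f:aa:bb:cc
```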

Personally I dislike the entire notion of network dumps due to the number of
additional variables that introduce unreliability into the process, which is
why we never recommend this to customers.  For future systems, the more useful
next-generation option we're exploring is in-memory dumps, where the state is
left in memory and the newly-rebooted OS saves the former OS's state.

-Mike

-- 
Mike Shapiro, Solaris Kernel Development. blogs.sun.com/mws/
_______________________________________________
opensolaris-code mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/opensolaris-code
