Thank you, setting SHARE_NOINUSE_CHECK indeed speeds up things substantially.
However, there seems to be a bug in the NFS part of Solaris 10u2 when so many
filesystems are shared. When I run showmount -e after the pool has been
(successfully) imported, I get an error:
$ showmount -e
showmount:
You'll also note that there's a line saying "Stopping because process dumped
core", which we shouldn't ignore, IMO.
In case this is a Sun-supported config (s10u2 indicates as much), please file
a case :-)
regards
Michael Schuster
H.-J. Schnitzer wrote:
> Thank you, setting SHARE_NOINUSE_CHECK

> You'll also note that there's a line saying "Stopping because process dumped
> core", which we shouldn't ignore, IMO.
> In case this is a Sun-supported config (s10u2 indicates as much), please
> file a case :-)
This looks like the rpcgen issue where the list is encoded using a recursive
rather than an iterative scheme.
How did you measure it? (I'm not saying it doesn't take those 45kB - it's
just that I haven't checked it myself, and I wonder how you checked it.)
ran 'top', looked at 'mem free'
created 1000 filesystems
ran 'top' again.
rebooted to be sure
ran 'top' again
I'm sure I should use something better than 'top'.
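For what it's worth, a somewhat more precise approach than eyeballing 'top'
might be to sample the kernel's free-page count before and after creating the
filesystems and compute the delta. A minimal sketch - the kstat statistic name
and the pagesize/zfs commands in the comments are assumptions about the
Solaris tooling, not verified here; the arithmetic itself is plain POSIX
shell:

```shell
# Convert two freemem samples (in pages) into a delta in kB.
delta_kb() {
    # $1 = pages free before, $2 = pages free after, $3 = page size in bytes
    echo $(( ($1 - $2) * $3 / 1024 ))
}

# On Solaris one might gather the samples roughly like this (assumption):
#   before=$(kstat -p unix:0:system_pages:freemem | awk '{print $2}')
#   for i in $(seq 1 1000); do zfs create pool/fs$i; done
#   after=$(kstat -p unix:0:system_pages:freemem | awk '{print $2}')
#   pagesz=$(pagesize)

delta_kb 100000 88480 8192   # 11520 pages of 8 kB each -> 92160 kB
```

Sampling a kernel counter avoids the rounding and refresh-interval noise that
makes 'top' deltas hard to trust.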
Eric said:
Each filesystem holding onto memory (unnecessarily if
no one is using that filesystem) is something we're thinking
about changing.
OK - glad to hear that it's already been acknowledged as an issue!
Right - NFSv4 allows clients to cross filesystem boundaries.
Trond just recently
yeah, thought of that, but we put some structure in ages ago
to get around the possible problems with thousands of entries
in one directory - so we have /export/home/NN/username
where NN is a 2 digit number.
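A bucketing scheme like that can be generated mechanically. Here's a minimal
sketch; the cksum-mod-100 hash is my illustrative assumption - the original
layout may well assign NN some other way, e.g. sequentially:

```shell
# Map a username to a 2-digit bucket NN for /export/home/NN/username.
# cksum-mod-100 is an example choice, not necessarily the original scheme.
bucket_for() {
    printf '%02d\n' $(( $(printf '%s' "$1" | cksum | awk '{print $1}') % 100 ))
}

bucket_for alice    # always yields the same 2-digit bucket for a given name
```

Hashing (rather than assigning buckets by hand) keeps the per-directory entry
count bounded without any bookkeeping as users come and go.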
I don't think there's any way to specify an automount map
with multiple levels in it.
Casper said:
You can have composite mounts (multiple nested mounts)
but that is essentially a single automount entry so it
can't be overly long, I believe.
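For reference, a hierarchical (composite) automount entry looks roughly like
this in a map - the server name, offsets, and paths here are made up purely
for illustration:

```
user01    /       server:/export/home/01/user01 \
          /src    server:/export/scratch/user01
```

As Casper notes, the offsets all live in the one map entry, so this doesn't
really scale to deep or numerous levels.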
I've seen that in the man page, but I've never managed to
find a use for it!
What I'd *like* to be able to do is have a map that amounts