I've always seen this curve in my tests (local disk or iSCSI) and just
think it's ZFS as designed. I haven't seen much parallelism when I have
multiple I/O jobs going; the filesystem seems to go mostly into one
mode or the other. Perhaps per vdev (in iSCSI I'm only exposing one or
two), there is on
Hey all -
Was playing a little with ZFS today. While untarring a 2.5GB archive
both from and onto the same spindle in my laptop, I noticed that the
bytes read and written over time were seesawing between approximately
23MB/s and 0MB/s.
It seemed like we read and read and read
Bennett, Steve wrote:
> A slightly different tack now...
> what filesystems is it a good (or bad) idea to put on ZFS?
> root - NO (not yet anyway)
> home - YES (although the huge number of mounts still scares me a bit)
> /usr - possible?
not yet - the system wouldn't be patchable or upgradeable.
> /var -
A slightly different tack now...
what filesystems is it a good (or bad) idea to put on ZFS?
root - NO (not yet anyway)
home - YES (although the huge number of mounts still scares me a bit)
/usr - possible?
/var - possible?
swap - no?
Is there any advantage in having multiple zpools over just havi
>What I'd *like* to be able to do is have a map that amounts to:
>
>00 -ro \
> / keck:/export/home/00
> /* -rw /export/home/00/&
What is our interest in mounting the 00 and 01 directories? Is there any
data there not in the subdirectories?
Currently, I'm using executable maps to create zfs hom
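An executable map can cope with the two-level /export/home/NN/username layout, since the map is just a program that gets the lookup key and prints an entry. A minimal sketch, assuming the server name keck from earlier in the thread; the checksum-based NN bucketing rule is made up for illustration and would need to match however users are actually assigned to buckets:

```shell
#!/bin/sh
# Hypothetical executable automount map for /home. automountd runs the
# map with the lookup key (the username) as its argument and expects a
# map entry on stdout.

home_entry() {
    key="$1"
    # Assumed bucketing rule: derive the two-digit NN bucket from a
    # checksum of the username; substitute the site's real rule.
    nn=$(printf '%s' "$key" | cksum | awk '{printf "%02d", $1 % 100}')
    printf '%s keck:/export/home/%s/%s\n' -rw "$nn" "$key"
}

home_entry "$1"
```

A key of "steve" would then resolve to something like -rw keck:/export/home/NN/steve, so the map itself handles the extra directory level that a plain indirect map can't express.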
Casper said:
> You can have composite mounts (multiple nested mounts)
> but that is essentially a single automount entry so it
> can't be overly long, I believe.
I've seen that in the man page, but I've never managed to
find a use for it!
What I'd *like* to be able to do is have a map that amount
>yeah, thought of that, but we put some structure in ages ago
>to get around the possible problems with thousands of entries
>in one directory - so we have /export/home/NN/username
>where NN is a 2 digit number.
>
>I don't think there's any way to specify an automount map
>with multiple levels in
Eric said:
> Each filesystem holding onto memory (unnecessarily if
> no one is using that filesystem) is something we're thinking
> about changing.
OK - glad to hear that it's already been acknowledged as an issue!
> Right - NFSv4 allows clients to cross filesystem boundaries.
> Trond just recen
> How did you measure it? (I'm not saying it doesn't
> take those 45kB - just I haven't checked it myself
> and I wonder how you checked it).
ran 'top', looked at 'mem free'
created 1000 filesystems
ran 'top' again.
rebooted to be sure
ran 'top' again
I'm sure I should use something better than t
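(On Solaris, `echo ::memstat | mdb -k` gives a more trustworthy kernel memory breakdown than top's "mem free".) For what it's worth, the per-filesystem figure is just the free-memory delta divided by the filesystem count; a sketch of that arithmetic, where the two free-memory samples are made-up numbers for illustration, not measurements:

```shell
# Illustrative only: 'before' and 'after' stand in for free-memory
# samples (in KB) taken before and after the 'zfs create' loop.
before=524288   # assumed free KB before creating the filesystems
after=479288    # assumed free KB after creating 1000 filesystems
n=1000
echo "per-filesystem cost: $(( (before - after) / n )) KB"
```

With these example numbers the script prints "per-filesystem cost: 45 KB", i.e. the kind of per-filesystem figure being discussed above.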