Dunno about eSATA jbods, but eSATA host ports have
appeared on at least two HDTV-capable DVRs for storage
expansion (looks like one model of the Scientific Atlanta
cable box DVRs as well as on the shipping-any-day-now
TiVo Series 3).
It's strange that they didn't go with FireWire since
Lori said:
The limitation is mainly about the *number* of disks
that can be accessed at one time.
...
But with straight mirroring, there's no such problem
because any disk in the mirror can supply all of the
disk blocks needed to boot.
Does that mean that these restrictions will go away
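The quoted point about mirrors can be sketched in a few lines (a toy model, not actual boot code): every side of a straight mirror holds a complete copy, so any healthy disk can supply any block, including the ones needed to boot.

```python
# Toy model: with straight mirroring, every replica holds a full copy,
# so any healthy side can satisfy any block read (including boot blocks).
def read_block(mirror_sides, block_no):
    """Return the block from the first mirror side that can supply it."""
    for side in mirror_sides:
        data = side.get(block_no)  # None models an unreadable/failed side
        if data is not None:
            return data
    raise IOError("no side of the mirror could supply block %d" % block_no)

# Two-way mirror where side A has lost block 1 but side B still has it:
side_a = {0: b"boot0", 1: None, 2: b"data2"}
side_b = {0: b"boot0", 1: b"boot1", 2: b"data2"}
```

Because any one side is self-sufficient, the boot loader needs no striping metadata to reassemble the blocks, which is why mirrors sidestep the restriction quoted above.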
Eric said:
For U3, these are the performance fixes:
6424554 full block re-writes need not read data in
6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
        parallel IOs when fsyncing
6447377 ZFS prefetch is inconsistent
6373978 want to take lots of snapshots quickly ('zfs
I guess that could be made to work, but then the data on
the disk becomes much (much much) more difficult to
interpret because you have some rows which are effectively
one width and others which are another (ad infinitum).
How do rows come into it? I was just assuming that each
Jeff Bonwick said:
RAID-Z takes a different approach. We were designing a filesystem
as well, so we could make the block pointers as semantically rich
as we wanted. To that end, the block pointers in ZFS contain data
layout information. One nice side effect of this is that we don't need
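Bonwick's point about layout-aware block pointers also answers the earlier worry about rows of differing widths. A toy illustration (this is not ZFS's actual on-disk format): if each block pointer records its own block size, a reader can derive each block's stripe width directly rather than assuming one fixed row format.

```python
# Toy model (not ZFS's real format): stripe width derived per block.
SECTOR = 512   # assumed sector size for this sketch
NDISKS = 5     # toy array: up to 4 data columns + 1 parity column per row

def stripe_width(block_bytes):
    """Sectors one block occupies in a row: its data sectors plus one
    parity sector, capped at the disk count (a larger block would
    simply continue onto further rows)."""
    data_sectors = -(-block_bytes // SECTOR)  # ceiling division
    return min(data_sectors + 1, NDISKS)
```

In this sketch a 512-byte block occupies two sectors (one data, one parity) while a 2 KB block spans the whole five-disk row; since the width falls out of the size recorded in the block pointer, mixed-width rows stay interpretable.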
If you are going to use Veritas NetBackup, why not use the
native Solaris client?
I don't suppose anyone knows if NetWorker will become ZFS-aware at any
point?
e.g.
backing up properties
backing up an entire pool as a single save set
efficient incrementals (something similar to zfs
Mike said:
3) ZFS ability to recognize duplicate blocks and store only one copy.
I'm not sure of the best way to do this, but my thought was to have ZFS
remember what the checksums of every block are. As new blocks are
written, the checksum of the new block is compared to known checksums.
If
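Mike's checksum-table idea can be sketched like this (a hypothetical model, not how ZFS implements dedup): keep a map from block checksum to storage, and when a new block's checksum is already known, bump a reference count instead of storing a second copy. A real implementation would also have to guard against hash collisions, e.g. by verifying the data or using a sufficiently strong hash.

```python
import hashlib

# Hypothetical sketch of checksum-based block dedup.
class DedupStore:
    def __init__(self):
        self.blocks = {}    # checksum -> stored block data
        self.refcount = {}  # checksum -> number of references

    def write(self, data):
        """Store a block, reusing an existing copy if the checksum matches."""
        key = hashlib.sha256(data).hexdigest()
        if key not in self.blocks:
            self.blocks[key] = data          # first copy: actually store it
        self.refcount[key] = self.refcount.get(key, 0) + 1
        return key

    def read(self, key):
        return self.blocks[key]
```

Writing the same block twice costs one stored copy plus a refcount increment, which is the space saving the suggestion is after.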
Casper said:
You can have composite mounts (multiple nested mounts)
but that is essentially a single automount entry so it
can't be overly long, I believe.
I've seen that in the man page, but I've never managed to
find a use for it!
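For reference, a hierarchical (multi-mount) autofs entry looks roughly like this; the key, offsets, and server paths here are illustrative, not from the original thread:

```
# one indirect-map key mounting several nested points under it
tools   /        srv1:/export/tools \
        /bin     srv2:/export/tools/bin \
        /man     srv3:/export/tools/man
```

As Casper notes, the whole hierarchy is a single map entry, so it can't grow arbitrarily long.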
What I'd *like* to be able to do is have a map that amounts
A slightly different tack now...
what filesystems is it a good (or bad) idea to put on ZFS?
root - NO (not yet anyway)
home - YES (although the huge number of mounts still scares me a bit)
/usr - possible?
/var - possible?
swap - no?
Is there any advantage in having multiple zpools over just