On Fri, Oct 13, 2006 at 09:22:53PM -0700, Erblichs wrote:
For extremely large files (25 to 100 GB) that are accessed
sequentially for both read and write, I would expect 64k or 128k.
Large files accessed sequentially don't need any special heuristic for
record size determination:
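A minimal sketch of what that tuning looks like, assuming a hypothetical
dataset named tank/bigfiles holding such files:

  # cap the record size at 128K for a dataset of large, sequentially accessed files
  zfs set recordsize=128K tank/bigfiles
  # verify the value actually in effect
  zfs get recordsize tank/bigfiles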
On 12/10/06, Michael Schuster [EMAIL PROTECTED] wrote:
Ceri Davies wrote:
On Thu, Oct 12, 2006 at 02:06:15PM +0100, Dick Davies wrote:
I'd expect:
zpool import -f
(see the manpage)
to probe /dev/dsk/ and rebuild the zpool.cache file,
but my understanding is that this a) doesn't work
Nico,
Yes, I agree.
But single large random reads and writes would also
benefit from a large record size, so I didn't try to make that
distinction. However, I guess that the best large random
reads and writes would fall within a single filesystem
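A sketch of the tuning this implies, with hypothetical dataset names:
match the record size to the application's I/O size for small random I/O,
and leave the 128K default in place for large I/O:

  # small random I/O (e.g. a database doing 8K reads and writes): match the record size
  zfs create tank/db
  zfs set recordsize=8K tank/db
  # large sequential or large random I/O: the 128K default already fits
  zfs create tank/bigio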
So how do I import a pool created on a different host for the first
time?
zpool import [ -f ]
(provided it's not in use *at the same time* by another host)
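A minimal sketch of the whole move, assuming a hypothetical pool named tank:

  # on the old host: cleanly release the pool (so -f is not needed later)
  zpool export tank
  # on the new host: list the pools visible on its attached devices
  zpool import
  # import it; -f is only required if the pool was never cleanly exported
  zpool import tank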
So the warnings I've heard no longer apply?
If so, that's great. Thanks for all replies.
Umm, which warnings? They don't
Recently, I was helping someone configure five disks in
RAID-Z1, and we were discussing whether or not it would be
possible to add (not replace) disks to the pool without
destroying and recreating the filesystem.
As far as I know, this is not currently possible (as
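For what it's worth, a sketch of the closest thing that does work today:
growing the pool by adding a second raidz vdev alongside the first, rather
than widening the existing one (hypothetical pool name tank and disks
c2t1d0 through c2t5d0):

  # add a second 5-disk raidz vdev; ZFS then stripes new writes across both vdevs
  zpool add tank raidz c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0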