Up to the moderator whether this will add anything:
I dedicated the 2nd NICs on two V440s to transporting the 9.5 TB ZFS pool between
SANs: configured a private subnet and allowed rsh on the receiving V440.
command: zfs send | (rsh receiving-host zfs receive ...)
Took a whole week (7 days) and brought
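The pipeline described above can be sketched roughly as follows; the pool, snapshot, and host names are illustrative assumptions, not taken from the thread:

```shell
# Hedged sketch of a zfs send over rsh on a dedicated private subnet.
# "sanpool/data", "@migrate", "recvhost", and "newpool" are hypothetical names.
zfs snapshot -r sanpool/data@migrate
# Stream the recursive snapshot to the receiving host over rsh:
zfs send -R sanpool/data@migrate | rsh recvhost "zfs receive -Fd newpool"
```

The dedicated second NIC keeps the week-long transfer off the production network, and rsh avoids ssh's encryption overhead on a trusted private subnet.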
On 02/16/11 09:50 AM, David Strom wrote:
On Thu, Jan 13, 2011 at 8:09 AM, Stephan Budach stephan.bud...@jvm.de wrote:
Actually mbuffer does a great job for that, too. Whenever I am using mbuffer
I achieve much higher throughput than using ssh.
Agreed, mbuffer seems to be required to get decent throughput. Using
it on both ends of
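A hedged sketch of the mbuffer arrangement on both ends; the host, pool, port, and tuning values are assumptions, not from the thread:

```shell
# Receiving host: listen on a TCP port, buffer in RAM, feed zfs receive.
#   mbuffer -I 9090 -s 128k -m 1G | zfs receive -Fd newpool
# Sending host: stream the snapshot into mbuffer, which ships it over TCP:
zfs send -R sanpool/data@migrate | mbuffer -s 128k -m 1G -O recvhost:9090
```

The large in-memory buffer (-m) smooths out the bursty producer/consumer behavior of zfs send and zfs receive, which is why mbuffer on both ends tends to beat a plain ssh pipe.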
From: Richard Elling [mailto:richard.ell...@gmail.com]
This means the current probability of any SHA-256 collision in all of the
data in the whole world, using a ridiculously small block size, assuming
all
... it doesn't matter. Other posters have found collisions and a collision
without
Moving to a new SAN, both LUNs will not be accessible at the same time.
Thanks for the several replies I've received; it sounds like the dd-to-tape
mechanism is broken for zfs send, unless someone knows otherwise or has
some trick?
I'm just going to try a tar to tape then (maybe using dd),
On 13.01.11 15:00, David Strom wrote:
On Thu, January 13, 2011 09:00, David Strom wrote:
On Jan 11, 2011, at 8:51 PM, Edward Ned Harvey wrote:
heheheh, ok, I'll stop after this. ;-) Sorry for going on so long, but it
was fun.
In 2007, IDC estimated the size of the digital universe in 2010 would be 1
zettabyte (10^21 bytes). This would be 2.5*10^17 blocks of 4000 bytes.
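As a sanity check on those numbers, the birthday bound for a 256-bit digest can be worked out directly. This is a back-of-envelope sketch, not a rigorous model:

```python
# Back-of-envelope birthday bound for SHA-256 collisions over the
# figures quoted above: ~10^21 bytes stored as 4000-byte blocks.
total_bytes = 10**21
block_size = 4000
n_blocks = total_bytes // block_size  # 2.5 * 10^17 blocks

# Birthday approximation: P(collision) ~= n^2 / 2^(b+1) for b-bit digests.
p_collision = n_blocks**2 / 2**257    # roughly 2.7e-43

print(f"{n_blocks:.3e} blocks, collision probability ~ {p_collision:.1e}")
```

Even with every block in the estimated digital universe hashed, the chance of a single accidental SHA-256 collision is on the order of 10^-43, which is the basis of the "it doesn't matter" argument above.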
I've used several tape autoloaders during my professional life. I
recall that we can use ufsdump, tar, or dd with at least some
autoloaders, where the autoloader can be set to automatically eject a
tape when it's full and load the next one. It has always worked OK
whenever I tried it.
I'm
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Strom
So, has anyone had any experience with piping a zfs send through dd (so
as to set the output blocksize for the tape drive) to a tape autoloader
in autoload mode?
Yes. I've had
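The dd step being asked about can be sketched as follows; the pool, snapshot, and tape device names are hypothetical, and the 1 MiB record size is just an example:

```shell
# Hedged sketch of piping zfs send through dd to a tape drive, using
# obs= to set the record size written to tape (here 1 MiB records).
zfs snapshot tank/fs@totape
zfs send tank/fs@totape | dd of=/dev/rmt/0n obs=1048576
# Restoring reverses the pipe; match the block size with ibs=:
#   dd if=/dev/rmt/0n ibs=1048576 | zfs receive -F tank/fs
```

Note that dd reblocks the stream without altering its contents, so the bytes zfs receive sees are exactly what zfs send produced; the caveat raised in the thread is about spanning tapes in autoload mode, not about dd itself.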
On 01/12/11 04:15 AM, David Strom wrote: