On Jan 13, 2011, at 7:47 AM, Brian wrote:
> I have a situation coming up soon in which I will have to migrate some iSCSI
> backing stores setup with comstar. Are there steps published anywhere on how
> to move these between pools? Does one still use send/receive or do I somehow
> just move the backing store? I have moved filesystems before us
"hard errors" are a generic classification. fmdump -eV shows the
sense/asc/ascq, which
is generally more useful for diagnosis. More below...
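For example (illustrative invocations; output formats vary by release):

  fmdump -e      # one line per error report, with timestamps
  fmdump -eV     # full detail: device path, sense key, asc/ascq
  iostat -En     # per-device error counters as a quick cross-check

The asc/ascq pair narrows a generic "hard error" down to the actual SCSI condition the drive reported.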
On Jan 1, 2011, at 7:50 AM, Benji wrote:
> Hi,
>
> I recently noticed that there are a lot of Hard Errors on multiple drives
> that are being reported b
On Wed, Jan 12, 2011 at 5:45 PM, Wim van den Berge wrote:
> I have a pile of aging Dell MD-1000's laying around that have been replaced
> by new primary storage. I've been thinking of using them to create some
> archive/backup storage for my primary ZFS systems.
>
> Unfortunately they do not all contain identical drives. Some of the older
> MD-1000's have 15x500
On Thu, Jan 13, 2011 at 4:36 AM, fred wrote:
> Thanks for this explanation
>
> So there is no real way to estimate the size of the increment?
Unfortunately, not for now.
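(If a ballpark figure is enough, one workaround is an rsync dry run against the previous copy; the paths here are only illustrative:

  rsync -an --stats /tank/fs/ /backup/fs/    # -n = dry run; --stats prints the total size to transfer

It measures file-level deltas rather than the send stream itself, so treat it as an order-of-magnitude estimate.)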
> Anyway, for this particular filesystem, i'll stick with rsync and yes, the
> difference was 50G!
Why? I would expect rsync
On Thu, January 13, 2011 09:00, David Strom wrote:
> Moving to a new SAN, both LUNs will not be accessible at the same time.
>
> Thanks for the several replies I've received, sounds like the dd to tape
> mechanism is broken for zfs send, unless someone knows otherwise or has
> some trick?
>
> I'm just going to try a tar to tape then (maybe using dd), then,
Basically, yes, I think you need to add all the vdevs you require in the
circumstances you describe.
You just have to consider what ZFS is able to do with the disks that
you give it. If you have 4x mirrors to start with, then all writes will
be spread across all disks and you will get nice performance.
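For example, growing a four-mirror pool to five (device names hypothetical):

  zpool add tank mirror c2t0d0 c2t1d0    # stripes a fifth mirror vdev into the pool

New writes then spread over five vdevs; note that existing data is not rebalanced onto the new mirror.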
On 13.01.11 15:00, David Strom wrote:
> Moving to a new SAN, both LUNs will not be accessible at the same time.
>
> Thanks for the several replies I've received, sounds like the dd to
> tape mechanism is broken for zfs send, unless someone knows otherwise
> or has some trick?
>
> I'm just going to try a tar to tape then (maybe using dd), then,
I have a situation coming up soon in which I will have to migrate some iSCSI
backing stores setup with comstar. Are there steps published anywhere on how
to move these between pools? Does one still use send/receive or do I somehow
just move the backing store? I have moved filesystems before us
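In case it helps: a COMSTAR backing store held in a zvol can be replicated like any other dataset. A minimal sketch, assuming a zvol tank/luns/lu0 and a destination pool newpool (both names hypothetical):

  zfs snapshot tank/luns/lu0@migrate
  zfs send tank/luns/lu0@migrate | zfs receive newpool/luns/lu0

Whether the COMSTAR logical unit follows automatically or must be re-registered against the new zvol path (stmfadm/sbdadm) likely depends on how the LU was created, so test on a scratch LUN first.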
Moving to a new SAN, both LUNs will not be accessible at the same time.
Thanks for the several replies I've received, sounds like the dd to tape
mechanism is broken for zfs send, unless someone knows otherwise or has
some trick?
I'm just going to try a tar to tape then (maybe using dd), then,
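If scratch space is available, one hedged middle ground is to capture the stream to a file and dd the file to tape, rather than piping zfs send at the drive directly (paths and tape device illustrative):

  zfs snapshot tank/fs@move
  zfs send tank/fs@move > /scratch/fs.zstream
  dd if=/scratch/fs.zstream of=/dev/rmt/0n bs=1024k
  # restore: dd if=/dev/rmt/0n bs=1024k | zfs receive newpool/fs

The caveat stands, though: a stored send stream is all-or-nothing, so one bad tape block can render the entire stream unreceivable, which is exactly why tar is the safer archive format here.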
I have a pile of aging Dell MD-1000's laying around that have been replaced by
new primary storage. I've been thinking of using them to create some
archive/backup storage for my primary ZFS systems.
Unfortunately they do not all contain identical drives. Some of the older
MD-1000's have 15x500
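One approach consistent with the keep-vdevs-similar advice later in this thread: group the drives by size, one raidz vdev (or even one pool) per size class, so each vdev stays homogeneous (device names hypothetical):

  zpool create archive raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0

Mixing sizes inside one raidz vdev wastes capacity, since every member contributes only as much as its smallest disk.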
Hi all,
I ran into a serious problem when I upgraded my zpool!! (big mistake)
I booted from OpenSolaris MilaX 05 to import my rpool and got errors like:
--
zpool import -fR /mnt rpool
milax zfs : WARNING can't open objset for rpool/zpnes/z-email/ROOT
milax zfs : WARNI
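A hedged guess, given the upgrade mentioned above: zpool upgrade raises the pool's on-disk version, and a rescue environment whose ZFS code is older than that version cannot import the pool. Worth checking from the live CD before trying anything destructive:

  zpool upgrade -v    # lists the highest pool version this environment supports

If MilaX supports a lower version than the upgraded pool, booting newer media is the fix.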
Whenever I do a root pool, i.e., configure a pool using the c?t?d?s0 notation, it
will always complain about overlapping slices, since *s2 is the entire disk.
This warning seems excessive, but "-f" will ignore it.
As for the ZIL, the first time I created a slice for it. This worked well; the
second t
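For reference, a sketch of the two operations being described (device and slice names hypothetical):

  zpool create -f rpool c0t0d0s0     # -f overrides the overlapping-slice warning
  zpool add tank log c0t1d0s3        # attach a slice as a separate log (ZIL) device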
The way I understand it is that you should add new mirrors (vdevs) of the same
size as the vdevs already attached to the pool in question. That is, if your
vdevs are mirrors of 2TB drives, don't add a new mirror of, say, 1TB drives.
I might be wrong, but this is my understanding.
Maybe this can be of help (ZFS Administration Guide):
http://docs.sun.com/app/docs/doc/819-5461/gavwg?a=view
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > This means the current probability of any sha256 collision in all of the
> > data in the whole world, using a ridiculously small block size, assuming all
>
> ... it doesn't matter. Other posters have found collisions and a collision
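For anyone doing the arithmetic, the standard birthday bound for a 256-bit hash is:

  p ≈ n(n-1) / 2^257

so even with n = 2^64 unique blocks, far more than all the world's data at small block sizes, p ≈ 2^-129. Collisions that posters have produced presumably concern weaker hashes or deliberately constructed inputs; no random SHA-256 collision is publicly known.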
Thanks for this explanation
So there is no real way to estimate the size of the increment?
Anyway, for this particular filesystem, I'll stick with rsync and yes, the
difference was 50G!
Thanks
Hi,
the ZFS_Best_Practises_Guide states this:
"Keep vdevs belonging to one zpool of similar sizes; Otherwise, as the
pool fills up, new allocations will be forced to favor larger vdevs over
smaller ones and this will cause subsequent reads to come from a subset
of underlying devices leading
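The imbalance is easy to see per vdev (pool name hypothetical):

  zpool iostat -v tank    # shows capacity and I/O per top-level vdev

which is exactly the check performed in the next message below.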
Hi all,
thanks a lot for your suggestions. I have checked all of them, and
neither the network itself nor any other check indicated any problem.
Alas, I think I know what is going on… ehh… my current zpool has two
vdevs that are actually not evenly sized, as shown by zpool iostat -v:
zpool ios