Using ZFS to mirror two hardware RAID-5 LUNs is actually quite nice.
Because the data is mirrored at the ZFS level, you get all the benefits
of self-healing. Moreover, you can survive a great variety of hardware
failures: three or more disks can die (one in the first array, two or
more in the
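The layout described above (ZFS mirroring two hardware RAID-5 LUNs) can be sketched as follows; the pool name `tank` and the device names are placeholders for the LUNs each array exports:

```shell
# Mirror two hardware RAID-5 LUNs at the ZFS level.
# c2t0d0 and c3t0d0 stand in for the LUN from each array.
zpool create tank mirror c2t0d0 c3t0d0

# Verify the layout; ZFS now checksums end to end and can
# repair a bad block on one side from the other.
zpool status tank
```

With this arrangement ZFS holds two complete copies of the data, so a checksum failure on one LUN is self-healed from its mirror, on top of whatever disk failures each RAID-5 array absorbs internally.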
On Mon, Jun 16, 2008 at 5:33 PM, Robert Milkowski [EMAIL PROTECTED] wrote:
Have you got more details, or at least bug IDs?
Is it only (I doubt it) FC-related?
I ran into something that looks like
6594621 dangling dbufs (dn=ff056a5ad0a8, dbuf=ff0520303300) during stress
with LDoms 1.0.
Hello Erik,
Monday, June 16, 2008, 9:45:13 AM, you wrote:
One thing I should mention on this is that I've had _very_ bad
experience with using single-LUN ZFS filesystems over FC.
that is, using an external SAN box to create a single LUN, export that
LUN to a FC-connected host, then creating a pool as follows:
zpool create tank LUN_ID
It works fine,
I'm not sure why people obsess over this issue so much. Disk is cheap.
We have a fair number of 3510 and 2540 on our SAN. They make RAID-5 LUNs
available to various servers.
On the servers we take RAID-5 LUNs from different arrays and ZFS mirror them.
So if any array goes away we are still
Mentioned on
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide is the
following:
ZFS works well with storage based protected LUNs (RAID-5 or mirrored LUNs from
intelligent storage arrays). However, ZFS cannot heal corrupted blocks that are
detected by ZFS checksums.
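One mitigation sometimes suggested for this caveat (not from this thread) is the ZFS `copies` property, which keeps extra copies of each data block even on a pool backed by a single protected LUN, giving ZFS something to heal from. The dataset name is a placeholder:

```shell
# Store two copies of every data block in the pool 'tank'.
# Roughly doubles space used for data, and only applies to
# blocks written after the property is set.
zfs set copies=2 tank
zfs get copies tank
```

Note that copies=2 protects against isolated corrupted blocks, not against loss of the whole LUN; only a ZFS-level mirror or raidz does that.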
On Sat, 14 Jun 2008, zfsmonk wrote:
Mentioned on
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
is the following: ZFS works well with storage based protected LUNs
(RAID-5 or mirrored LUNs from intelligent storage arrays). However,
ZFS cannot heal corrupted blocks
- Original Message -
From: Brian Wilson [EMAIL PROTECTED]
Date: Saturday, June 14, 2008 12:12 pm
Subject: Re: [zfs-discuss] zpool with RAID-5 from intelligent storage arrays
To: Bob Friesenhahn [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
On Sat, 14 Jun 2008, zfsmonk wrote:
On Sat, 14 Jun 2008, Brian Wilson wrote:
What are the odds, in that configuration of zpool (no mirroring,
just using the intelligent disk as concatenated luns in the zpool)
that if we have this silent corruption, the whole zpool dies? If
anyone knows, what's the comparative odds of the
On Sat, 14 Jun 2008, dick hoogendijk wrote:
With zfs you can scrub the pool at the system level. This allows you
to discover many issues early before they become nightmares.
# zpool status
scrub: none requested
My question really is: do I wait until a scrub is requested, or am I
supposed to
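To answer the question above: "scrub: none requested" simply means no scrub has ever been started; scrubs are initiated by the administrator (manually or from cron), never requested by the system. A minimal sketch, assuming a pool named `tank`:

```shell
# Start a scrub; it runs in the background against all data.
zpool scrub tank

# Check progress; the scrub: line now reports percent complete,
# and, once finished, the completion time and any errors found.
zpool status tank
```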
On Sat, Jun 14, 2008 at 02:51:31PM -0500, Bob Friesenhahn wrote:
I think that "none requested" likely means that the administrator has
never issued a request to scrub the pool.
Or the system. That status line will show the last scrub/resilver to
have taken place. "None requested" means that no
On Sun, 15 Jun 2008, Brian Hechinger wrote:
how long the scrub takes. My pool is set to be scrubbed every night
via a cron job:
And like all other things of this nature, the more often you do it, the
less invasive it will be as there is less to do. That being said, I still
wouldn't
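A nightly scrub cron job of the sort described above might look like the following; the pool name and the 02:00 schedule are assumptions:

```shell
# root's crontab entry: scrub the pool 'tank' at 02:00 every night.
0 2 * * * /usr/sbin/zpool scrub tank
```

Scrubbing frequently, as noted, keeps each run cheaper, though on a busy pool the scrub competes with application I/O while it runs.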