Marion Hakanson wrote:
However, given that the default behavior of ZFS (as of Solaris 10 U3) is to
panic/halt when it encounters a corrupted block that it can't repair,
I'm re-thinking our options, weighing them against the possibility of
significant downtime caused by a single-block corruption.
Guess
[EMAIL PROTECTED] said:
That is the part of your setup that puzzled me. You took the same 7-disk
RAID5 set and split it into 9 LUNs. The Hitachi likely splits the virtual
disk into 9 contiguous partitions, so each LUN maps back to different parts
of the 7 disks. I speculate that ZFS thinks
I wrote:
Just thinking out loud here. Now I'm off to see what kind of performance
cost there is, comparing (with 400GB disks):
Simple ZFS stripe on one 2198GB LUN from a 6+1 HW RAID5 volume
8+1 RAID-Z on 9 244.2GB LUNs from a 6+1 HW RAID5 volume
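For concreteness, a minimal sketch of the two layouts under comparison
(pool name and device names are hypothetical):

  # (1) simple ZFS stripe on the single 2198GB LUN
  zpool create tank c4t0d0

  # (2) 8+1 RAID-Z across the nine 244.2GB LUNs
  zpool create tank raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 \
      c4t5d0 c4t6d0 c4t7d0 c4t8d0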
[EMAIL PROTECTED] said:
On 2/1/07, Marion Hakanson [EMAIL PROTECTED] wrote:
There's also the potential for too much seeking in the raidz pool,
since there are 9 LUNs on top of 7 physical disk drives (though how the Hitachi
divides/stripes those LUNs is not clear to me).
Marion,
That is the part of your setup
fishy smell way below...
Marion Hakanson wrote:
I wrote:
Just thinking out loud here. Now I'm off to see what kind of performance
cost there is, comparing (with 400GB disks):
Simple ZFS stripe on one 2198GB LUN from a 6+1 HW RAID5 volume
8+1 RAID-Z on 9 244.2GB LUNs from a
Our Netapp does double-parity RAID. In fact, the filesystem design is
remarkably similar to that of ZFS. Wouldn't that also detect the
error? I suppose it depends on whether the `wrong sector without notice'
error is repeated each time. Or is it random?
On most (all?) other systems the
Hi Guys,
SO...
From what I can tell from this thread, ZFS is VERY fussy about managing
writes, reads, and failures. It wants to be bit perfect. So if you use the
hardware that comes with a given solution (in my case an Engenio 6994) to
manage failures you risk a) bad writes that don't get
Hi Jeff,
Maybe I mis-read this thread, but I don't think anyone was saying that
using ZFS on top of an intelligent array risks more corruption. Given
my experience, I wouldn't run ZFS without some level of redundancy,
since it will panic your kernel in a RAID-0 scenario where it detects
a LUN is
On Jan 29, 2007, at 14:17, Jeffery Malloch wrote:
Hi Guys,
SO...
From what I can tell from this thread, ZFS is VERY fussy about
managing writes, reads, and failures. It wants to be bit perfect.
So if you use the hardware that comes with a given solution (in my
case an Engenio 6994) to
On Mon, Jan 29, 2007 at 11:17:05AM -0800, Jeffery Malloch wrote:
From what I can tell from this thread, ZFS is VERY fussy about
managing writes, reads, and failures. It wants to be bit perfect. So
if you use the hardware that comes with a given solution (in my case
an Engenio 6994) to manage
On January 29, 2007 11:17:05 AM -0800 Jeffery Malloch
[EMAIL PROTECTED] wrote:
Hi Guys,
SO...
From what I can tell from this thread, ZFS is VERY fussy about managing
writes, reads, and failures. It wants to be bit perfect.
It's funny to call that fussy. All filesystems WANT to be bit
Albert Chin said:
Well, ZFS with HW RAID makes sense in some cases. However, it seems that if
you are unwilling to lose 50% of your disk space to RAID 10 or two mirrored
HW RAID arrays, you either use RAID 0 on the array with ZFS RAIDZ/RAIDZ2 on
top of that, or a JBOD with ZFS RAIDZ/RAIDZ2 on top of
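A minimal sketch of the JBOD option (pool and device names are hypothetical);
raidz2 survives two failed disks while keeping redundancy under ZFS control:

  # 4+2 RAID-Z2 across six JBOD disks
  zpool create tank raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0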
Hello Anantha,
Friday, January 26, 2007, 5:06:46 PM, you wrote:
ANS All my feedback is based on Solaris 10 Update 2 (aka 06/06) and
ANS I've no comments on NFS. I strongly recommend that you use ZFS
ANS data redundancy (z1, z2, or mirror) and simply delegate the
ANS Engenio to stripe the data
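A hedged sketch of that split of duties (device names are hypothetical):
each LUN below is assumed to be an Engenio-side stripe, and ZFS supplies
the redundancy by mirroring across the two LUNs:

  # ZFS-managed mirror over two array-side RAID-0 LUNs
  zpool create tank mirror c3t0d0 c3t1d0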
Selim Daoud wrote:
it would be good to have real data and not only guesses or anecdotes.
This story about wrong blocks being written by RAID controllers
sounds like the anti-terrorism propaganda we are living in: exaggerate
the facts to catch everyone's attention.
It's going to take more than that
On Jan 26, 2007, at 14:43, Gary Mills wrote:
Our Netapp does double-parity RAID. In fact, the filesystem design is
remarkably similar to that of ZFS. Wouldn't that also detect the
error? I suppose it depends on whether the `wrong sector without notice'
error is repeated each time. Or is it random?
I've used ZFS since July/August 2006, when Sol 10 Update 2 came out (the first
release to integrate ZFS). I've used it on three servers (an E25K domain and two
E2900s) extensively; two of them are production. I've had over 3TB of storage from
an EMC SAN under ZFS management for no less than 6 months. Like
On Fri, Jan 26, 2007 at 08:06:46AM -0800, Anantha N. Srirama wrote:
b. Your server will hang when one of the underlying disks disappears. In our
case we had a T2000 running 11/06 and had a mirrored zpool against two
internal drives. When we pulled one of the drives abruptly the server
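When a device does disappear, inspecting pool state is the usual first step;
a minimal sketch (pool and device names are hypothetical):

  # report unhealthy pools and per-device read/write/checksum error counts
  zpool status -x tank
  # administratively offline the faulted device before replacing it
  zpool offline tank c0t1d0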
Oh yep, I know that churning feeling in the stomach that there's got to be a
GOTCHA somewhere... it can't be *that* simple!
On Fri, Jan 26, 2007 at 09:33:40AM -0800, Akhilesh Mritunjai wrote:
ZFS Rule #0: You gotta have redundancy
ZFS Rule #1: Redundancy shall be managed by zfs, and zfs alone.
Whatever you have, junk it. Let ZFS manage mirroring and redundancy. ZFS
doesn't forgive even single-bit errors!
How
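A minimal sketch of rules #0 and #1 in practice (pool and device names are
hypothetical): hand ZFS whole devices, let it manage the mirroring, and scrub
to exercise the checksums:

  # ZFS-managed two-way mirror
  zpool create tank mirror c0t0d0 c0t1d0
  # traverse every block, verifying checksums and repairing from the mirror
  zpool scrub tank
  zpool status -v tank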
On Jan 26, 2007, at 9:42, Gary Mills wrote:
How does this work in an environment with storage that's centrally-
managed and shared between many servers? I'm putting together a new
IMAP server that will eventually use 3TB of space from our Netapp via
an iSCSI SAN. The Netapp provides all of the
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
On Jan 26, 2007, at 9:42, Gary Mills wrote:
How does this work in an environment with storage that's centrally-
managed and shared between many servers?
It will work, but if the storage system corrupts the data, ZFS will be
unable
Gary Mills wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
On Jan 26, 2007, at 9:42, Gary Mills wrote:
How does this work in an environment with storage that's centrally-
managed and shared between many servers?
It will work, but if the storage system corrupts the data, ZFS
Gary Mills wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
On Jan 26, 2007, at 9:42, Gary Mills wrote:
How does this work in an environment with storage that's centrally-
managed and shared between many servers?
It will work, but if the storage system corrupts
[EMAIL PROTECTED] wrote on 01/26/2007 01:43:35 PM:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
On Jan 26, 2007, at 9:42, Gary Mills wrote:
How does this work in an environment with storage that's centrally-
managed and shared between many servers?
It will work, but
On Jan 26, 2007, at 12:13, Richard Elling wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
A number that I've been quoting, albeit without a good reference,
comes from Jim Gray, who has been around the data-management industry
for longer than I have (and I've been in this
Ed Gould wrote:
On Jan 26, 2007, at 12:13, Richard Elling wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
A number that I've been quoting, albeit without a good reference,
comes from Jim Gray, who has been around the data-management industry
for longer than I have (and I've
Dana H. Myers wrote:
Ed Gould wrote:
On Jan 26, 2007, at 12:13, Richard Elling wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
A number that I've been quoting, albeit without a good reference,
comes from Jim Gray, who has been around the data-management
On Jan 26, 2007, at 12:52, Dana H. Myers wrote:
So this leaves me wondering how often the controller/drive subsystem
reads data from the wrong sector of the drive without notice; is it
symmetrical with respect to writing, and thus about once per drive per year,
or are there factors which change this?
Torrey McMahon wrote:
Dana H. Myers wrote:
Ed Gould wrote:
On Jan 26, 2007, at 12:13, Richard Elling wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
A number that I've been quoting, albeit without a good reference,
comes from Jim Gray, who has been around the
On Jan 26, 2007, at 13:16, Dana H. Myers wrote:
I would tend to expect these spurious events to impact read and write
equally; more specifically, the chance of any one read or write being
mis-addressed is about the same. Since, AFAIK, there are many more
reads from a disk typically than
it would be good to have real data and not only guesses or anecdotes.
This story about wrong blocks being written by RAID controllers
sounds like the anti-terrorism propaganda we are living in: exaggerate
the facts to catch everyone's attention.
It's going to take more than that to prove RAID controllers
On Jan 26, 2007, at 13:29, Selim Daoud wrote:
it would be good to have real data and not only guesses or anecdotes
Yes, I agree. I'm sorry I don't have the data that Jim presented at
FAST, but he did present actual data. Richard Elling (I believe it was
Richard) has also posted some related
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ed Gould
Sent: Friday, January 26, 2007 3:38 PM
Yes, I agree. I'm sorry I don't have the data that Jim presented at
FAST, but he did present actual data. Richard Elling (I believe it
was
Richard) has also posted some
Dana H. Myers wrote:
Torrey McMahon wrote:
Dana H. Myers wrote:
Ed Gould wrote:
On Jan 26, 2007, at 12:13, Richard Elling wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
A number that I've been quoting, albeit without a good
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
A number that I've been quoting, albeit without a good reference, comes
from Jim Gray, who has been around the data-management industry for
longer than I have (and I've been in this business since 1970); he's
currently at
On 26-Jan-07, at 7:29 PM, Selim Daoud wrote:
it would be good to have real data and not only guesses or anecdotes.
This story about wrong blocks being written by RAID controllers
sounds like the anti-terrorism propaganda we are living in: exaggerate
the facts to catch everyone's attention.
It's
My only qualification to enter this discussion is that I once wrote a
floppy disk format program for Minix. I recollect, however, that each
sector on the disk is accompanied by a block that contains the sector
address and a CRC.
You'd have to define the layer you're talking about. I presume
Toby Thain wrote:
On 26-Jan-07, at 7:29 PM, Selim Daoud wrote:
it would be good to have real data and not only guesses or anecdotes.
This story about wrong blocks being written by RAID controllers
sounds like the anti-terrorism propaganda we are living in: exaggerate
the facts to catch
1. How stable is ZFS?
It's a new file system; there will be bugs. It appears to be well-tested,
though. There are a few known issues; for instance, a write failure can panic
the system under some circumstances. UFS has known issues too
2. Recommended config. Above, I have a fairly