Folks,
what can I post to the list to keep the discussion going?
Is this what you folks want to see? I shared it with King and High but
not with you folks:
http://www.excelsioritsolutions.com/jz/jzbrush/jzbrush.htm
It is not even IT stuff, so I never thought I should post it to the list.
Got some more information about HW raid vs ZFS:
http://www.opensolaris.org/jive/thread.jspa?messageID=326654#326654
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
no one is working tonight?
where are the discussions?
ok, I will not be picking on Orvar all the time, if that's why...
the Windows statements were heavy, but hey, I am at home, not at work; it was
just because Orvar was suffering.
folks, are we not going to do IT just because I played
Still not happy?
I guess I will have to do more spamming myself --
So, I have to explain why I didn't like Linux but I like MS and OpenSolaris?
I don't have any religious love for MS or Sun.
It's just that I believe talents are best utilized in an organized and
systematic fashion, to benefit the whole.
Ok, so someone is doing IT and has questions.
Thank you!
[I did not post this using another name, because I am too honorable to do
that.]
This is a list discussion; it should not be paused for one voice.
best,
z
[If Orvar has other questions that I have not addressed, please ask me
off-list. It's
Thank you. How does raidz2 compare to raid-2? Safer? Less safe?
RAID 2 is something weird that no one uses, and really only exists on
paper as part of Berkeley's original RAID paper, IIRC. raidz2 is more
or less RAID 6, just like raidz is more or less RAID 5. With raidz2,
you have to lose 3 drives per vdev before data loss occurs.
Scott
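To make Scott's point concrete, here is a toy sketch (hypothetical Python, not from the thread) of how a single XOR parity block, the raidz / RAID-5 idea, lets one lost disk per stripe be rebuilt. raidz2 adds a second, independently computed parity so any two disks per vdev can fail before data is lost:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A stripe across 4 data disks plus one XOR parity block (raidz / RAID-5 style).
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_blocks(data)

# Lose disk 2; rebuild its block from the survivors plus parity.
survivors = [data[0], data[1], data[3], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[2]
```

With a second parity computed differently (Reed-Solomon in real raidz2), the same rebuild works for any two simultaneous losses.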
On Thu, Jan 8, 2009 at 10:01, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
Thank you. How does raidz2 compare to raid-2? Safer? Less safe?
Raid-2 is much less used, for one; uses many more disks for parity, for
two; and is much slower in any application I can think of.
Suppose you have 11
Folks, I have had much fun and caused much trouble.
I hope we now have learned the open spirit of storage.
I will be less involved with the list discussion going forward, since I too
have much work to do in my super domain.
[but I still have lunch hours, so be good!]
As I always say, thank you
For SCSI disks (including FC), you would use the FUA bit on the read command.
For SATA disks ... does anyone care? ;-)
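As a sketch of what "the FUA bit on the read command" means at the wire level: in a SCSI READ(10) CDB (opcode 0x28), FUA is bit 3 of byte 1, and setting it forces the drive to fetch from the medium instead of its cache. The Python below just builds the 10-byte CDB (illustrative only; actually issuing it would need something like sg3_utils):

```python
def read10_cdb(lba, blocks, fua=False):
    """Build a 10-byte SCSI READ(10) CDB; FUA is bit 3 of byte 1."""
    cdb = bytearray(10)
    cdb[0] = 0x28                         # READ(10) opcode
    if fua:
        cdb[1] |= 0x08                    # FUA: bypass the drive's cache
    cdb[2:6] = lba.to_bytes(4, "big")     # logical block address
    cdb[7:9] = blocks.to_bytes(2, "big")  # transfer length in blocks
    return bytes(cdb)

cdb = read10_cdb(lba=0, blocks=1, fua=True)
assert cdb[0] == 0x28 and cdb[1] & 0x08
```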
best,
z
- Original Message -
From: Anton B. Rang r...@acm.org
To: zfs-discuss@opensolaris.org
Sent: Tuesday, January 06, 2009 9:07 AM
Subject: Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?
For SCSI disks (including FC), you would use the FUA bit on the read
command
Ok, folks, new news - [feel free to comment in any fashion, since I don't
know how yet.]
EMC ACQUIRES OPEN-SOURCE ASSETS FROM SOURCELABS
http://go.techtarget.com/r/5490612/6109175
On Sat, Jan 03, 2009 at 09:58:37PM -0500, JZ wrote:
Under what situations would you expect any differences between the ZFS
checksums and the Netapp checksums to appear?
I have no evidence, but I suspect the only difference (modulo any bugs)
is how the software handles checksum failures.
ECC theory tells us that you need a minimum distance of 3
to correct one error in a codeword; ergo neither RAID-5 nor RAID-6
is enough: you need RAID-2 (which nobody uses today).
What is RAID-2? Is it raidz2?
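Ulrich's minimum-distance claim can be seen with the simplest distance-3 code, the triple-repetition code: any single flipped bit is repaired by majority vote (a toy Python sketch, not how RAID-2's Hamming code is actually laid out):

```python
def encode(bits):
    # Triple each bit: the resulting code has Hamming distance 3.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(code):
    # Majority vote per group of 3 corrects any single flipped bit.
    return [1 if sum(code[i:i + 3]) >= 2 else 0 for i in range(0, len(code), 3)]

msg = [1, 0, 1, 1]
code = encode(msg)
code[4] ^= 1                 # flip one bit in transit
assert decode(code) == msg   # distance 3 -> one error corrected
```

A distance-2 scheme (a single parity bit, as in RAID-5) can only detect that one error, not say which bit to fix.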
On Wed, Dec 31, 2008 at 01:53:03PM -0500, Miles Nordin wrote:
The thing I don't like about the checksums is that they trigger for
things other than bad disks, like if your machine loses power during a
resilver, or other corner cases and bugs. I think the Netapp
block-level RAID-layer
http://www.nber.org/sys-admin/linux-nas-raid.html
best,
z
- Original Message -
From: Marc Bevand m.bev...@gmail.com
To: zfs-discuss@opensolaris.org
Sent: Thursday, January 01, 2009 6:40 PM
Subject: Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?
Hi Carsten,
Carsten Aulbert wrote:
Hi Marc,
Marc Bevand wrote:
Carsten Aulbert carsten.aulbert at aei.mpg.de writes:
In RAID6 you have redundant parity, thus the controller can find out
if the parity was correct or not. At least I think that to be true
for Areca controllers :)
Are you
Ulrich Graef wrote:
You don't need to wade through the paper...
ECC theory tells us that you need a minimum distance of 3
to correct one error in a codeword; ergo neither RAID-5 nor RAID-6
is enough: you need RAID-2 (which nobody uses today).
Raid-Controllers today take advantage of the fact
Tim wrote:
The Netapp paper mentioned by JZ
(http://pages.cs.wisc.edu/~krioukov/ParityLostAndParityRegained-FAST08.ppt)
talks about write verify.
Would this feature make sense in a ZFS
On Fri, 2 Jan 2009, JZ wrote:
I have not done a cost study on ZFS toward the 9s, but I guess we can
do better with more system- and I/O-based assurance than with just RAID
checksums, so customers can get more 9s with less redundant hardware and
fewer software feature enablement fees.
On Fri, 2 Jan 2009, JZ wrote:
We are talking about 0.001% of defined downtime headroom for a 4-9 SLA (that
may be defined as accessing the correct data).
It seems that some people spend a lot of time analyzing their own
hairy navel and think that it must surely be the center of the
On second thought, let me further explain why I had the Linux link in the
same post.
That was written a while ago, but I think the situation for the cheap RAID
cards has not changed much, though the RAID ASICs in RAID enclosures are
getting more and more robust, just not open.
If you take
Mattias Pantzare pantzare at gmail.com writes:
On Tue, Dec 30, 2008 at 11:30, Carsten Aulbert wrote:
[...]
where we wrote data to the RAID, powered the system down, pulled out one
disk, inserted it into another computer and changed the sector checksum
of a few sectors (using hdparm's
Hi Marc (and all the others),
Marc Bevand wrote:
So Carsten: Mattias is right, you did not simulate a silent data corruption
error. hdparm --make-bad-sector just introduces a regular media error that
*any* RAID level can detect and fix.
OK, I'll need to go back to our tests performed
Mattias Pantzare pantzare at gmail.com writes:
He was talking about errors that the disk can't detect (errors
introduced by other parts of the system, writes to the wrong sector or
very bad luck). You can simulate that by writing different data to the
sector,
Well yes you can. Carsten and I
I've studied all the links here. But I want information about the HW raid
controllers, not about ZFS, because I have plenty of ZFS information now. The
closest thing I got was
www.baarf.org
where in one article he states that raid5 never does parity checks on reads.
I've written that to the Linux guys. And
Orvar Korvar wrote:
Ive studied all links here. But I want information of the HW raid
controllers. Not about ZFS, because I have plenty of ZFS information now. The
closest thing I got was
www.baarf.org
[one of my favorite sites ;-)]
The problem is that there is no such thing as hardware
There is a company (DataCore Software) that has been making / shipping
products for many years that I believe would help in this area. I've
used them before, they're very solid and have been leveraging the use of
commodity server and disk hardware to build massive storage arrays (FC
iSCSI),
ca == Carsten Aulbert carsten.aulb...@aei.mpg.de writes:
ok == Orvar Korvar knatte_fnatte_tja...@yahoo.com writes:
ca (using hdparm's --make-bad-sector utility)
I haven't used that before, but it sounds like what you did may give
the RAID layer some extra information. If one of the disks
db == Dave Brown dbr...@csolutions.net writes:
db CRC/Checksum Error Detection In SANmelody and SANsymphony,
db enhanced error detection can be provided by enabling Cyclic
db Redundancy Check (CRC) [...] The CRC bits may
db be added to either Data Digest, Header Digest, or both.
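The Data Digest / Header Digest CRCs mentioned above are iSCSI's CRC32C; as a rough illustration of the detection idea (using the Python stdlib's plain CRC-32 as a stand-in, since zlib has no CRC32C):

```python
import zlib

payload = b"stripe of user data"
digest = zlib.crc32(payload)      # computed by the writer

# A bit flips somewhere along the path.
corrupted = bytearray(payload)
corrupted[3] ^= 0x10

assert zlib.crc32(payload) == digest            # clean data verifies
assert zlib.crc32(bytes(corrupted)) != digest   # corruption is caught
```

A CRC detects in-flight corruption but cannot repair it; the initiator has to retry or fail the I/O.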
The problem is that there is no such thing as hardware RAID; there is
only software RAID. The HW RAID controllers are processors
running software, and the features of the product are therefore limited by
the software developer and processor capabilities. It goes without saying
that the processors
Carsten Aulbert carsten.aulbert at aei.mpg.de writes:
In RAID6 you have redundant parity, thus the controller can find out
if the parity was correct or not. At least I think that to be true
for Areca controllers :)
Are you sure about that ? The latest research I know of [1] says that
Carsten Aulbert carsten.aulbert at aei.mpg.de writes:
Well, I probably need to wade through the paper (and recall Galois field
theory) before answering this. We did a few tests in a 16 disk RAID6
where we wrote data to the RAID, powered the system down, pulled out one
disk, inserted it into
Que? So what can we deduce about HW raid? There are some controller cards that
do background consistency checks? And error detection of various kinds?
To answer the original post, simple answer:
Almost all old RAID designs have holes in their logic where they are
insufficiently paranoid on writes or reads, and sometimes both. One example
is the infamous RAID-5 write hole.
Look at simple example of mirrored SVM versus ZFS in page 1516 of
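A toy model of that write hole (hypothetical Python): if power fails after the new data is written but before the parity is updated, a later disk loss is "reconstructed" from stale parity into silently wrong data:

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Two data disks plus XOR parity: a toy RAID-5 stripe.
d0, d1 = b"OLD0", b"DAT1"
parity = xor(d0, d1)

# Power fails after the data write but before the parity write:
d0 = b"NEW0"          # new data landed on disk 0
# parity was never updated, so the stripe is now inconsistent.

# Later, disk 1 dies; the array "reconstructs" it from d0 and stale parity.
reconstructed = xor(d0, parity)
assert reconstructed != b"DAT1"   # silently wrong data: the write hole
```

ZFS sidesteps this by never overwriting a live stripe in place (copy-on-write plus full-stripe, checksummed writes).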
On Sun, 28 Dec 2008, Orvar Korvar wrote:
On a Linux forum, I've spoken about ZFS end-to-end data integrity. I
wrote things like: upon writing data to disk, ZFS reads it back and
compares it to the data in RAM and corrects it otherwise. I also wrote
that ordinary HW raid doesn't do this check.
Hi all,
Bob Friesenhahn wrote:
My understanding is that ordinary HW raid does not check data
correctness. If the hardware reports failure to successfully read a
block, then a simple algorithm is used to (hopefully) re-create the
lost data based on data from other disks. The difference
On Sun, 28 Dec 2008, Carsten Aulbert wrote:
ZFS does check the data correctness (at the CPU) for each read while
HW raid depends on the hardware detecting a problem, and even if the
data is ok when read from disk, it may be corrupted by the time it
makes it to the CPU.
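Bob's point, checksum verification at the CPU on every read, can be sketched like this (hypothetical names; SHA-256 is only a stand-in for ZFS's fletcher/sha256 options, and in real ZFS the checksum lives with the block pointer, not next to the data):

```python
import hashlib

def write_block(store, addr, data):
    # The checksum is kept with the pointer, separate from the data itself.
    store[addr] = data
    return hashlib.sha256(data).digest()

def read_block(store, addr, expected):
    data = store[addr]
    if hashlib.sha256(data).digest() != expected:
        # Real ZFS would retry from a redundant copy (mirror/raidz) here.
        raise IOError("checksum mismatch")
    return data

disk = {}
csum = write_block(disk, 7, b"payload")
assert read_block(disk, 7, csum) == b"payload"

disk[7] = b"pAyload"      # silent corruption anywhere along the path
try:
    read_block(disk, 7, csum)
    caught = False
except IOError:
    caught = True
assert caught
```

Because the check happens in host memory, corruption introduced by the controller, cable, or firmware is detected, not just media errors the disk itself reports.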
AFAIK this is not done
Hi Bob,
Bob Friesenhahn wrote:
AFAIK this is not done during normal operation (unless the disk, when asked
for a sector, cannot return that sector).
ZFS checksum validates all returned data. Are you saying that this fact
is incorrect?
No sorry, too long in front of a computer today I guess: I
This is good information, guys. Do we have some more facts and links about HW
raid and its data integrity, or lack thereof?
Sent: Sunday, December 28, 2008 7:50 PM
Subject: Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?
Nice discussion. Let me chip in my old-timer view --
Until a few years ago, the understanding was that HW RAID doesn't proactively
check for consistency of data vs
- Original Message -
From: JZ j...@excelsioritsolutions.com
To: Orvar Korvar knatte_fnatte_tja...@yahoo.com;
zfs-discuss@opensolaris.org
Sent: Sunday, December 28, 2008 7:55 PM
Subject: Re: [zfs-discuss] ZFS vs HardWare raid - data integrity