On 11/24/2009 05:36 PM, Alan Johnson wrote:
> Nice! Ok, so you get the multi-level, compression, encryption, error
> checking, and RAIDZs? Anything else significant I am forgetting?
coming soon to a ZFS near you: block-level de-duplication. If a block
with the same SHA-256 already exists on disk, it gets referenced
instead of written again.
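If a toy version helps picture it, here's a sketch of the idea (hypothetical code, not ZFS internals; `DedupStore` and its names are made up): store each unique block once, keyed by its SHA-256, and let a duplicate write just bump a reference count.

```python
# Hypothetical sketch of block-level dedup, NOT ZFS source code:
# each unique block is stored once, keyed by its SHA-256 digest;
# writing a duplicate block only increments a reference count.
import hashlib

class DedupStore:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}    # sha256 digest -> block data
        self.refcount = {}  # sha256 digest -> number of references

    def write(self, data):
        """Store one block; return its digest (the 'block pointer')."""
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.blocks:
            self.refcount[digest] += 1   # duplicate: no new data written
        else:
            self.blocks[digest] = data
            self.refcount[digest] = 1
        return digest

    def read(self, digest):
        return self.blocks[digest]

store = DedupStore()
a = store.write(b"x" * 4096)
b = store.write(b"x" * 4096)   # identical block: deduplicated
c = store.write(b"y" * 4096)
print(len(store.blocks))       # 2 unique blocks stored for 3 writes
```

Real implementations also have to handle hash storage, verification, and freeing blocks when the refcount hits zero, but the space saving comes from exactly this lookup.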
On Tue, Nov 24, 2009 at 2:32 PM, Ken D'Ambrosio wrote:
> UDNRC [sic]. With all due respect to the encyclopedic knowledge of Ben, I
> took this one with a grain of salt. And again, Wikipedia to the rescue:
I trust my own experience a lot more than I trust a Wikipedia
article tagged as needing
On Mon, Nov 23, 2009 at 1:09 PM, Tom Buskey wrote:
>> http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance:
>> "... known as the write hole ..."
>
> Thanks for clarifying that for me.
For the record, there doesn't seem to be any universal agreement as
to what "RAID 5 write hole" means.
>> I think the only other
>> filesystems that checksum are NetApp's WAFL(?) and Linux's btrfs.
No; check out Wikipedia's filesystem comparison page, below. That being
said, ain't many, and most of them are new.
http://en.wikipedia.org/wiki/Comparison_of_file_systems#Metadata
> In an interesting
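The point of filesystem checksumming is easy to demonstrate with a toy sketch (illustrative code; `ChecksummedStore` is a made-up name, not any real filesystem's API): keep a checksum apart from each block and verify it on every read, so silent bit-rot becomes a detectable error instead of bad data returned as good.

```python
# Minimal sketch of what a checksumming filesystem buys you:
# the checksum lives apart from the data block, is verified on read,
# and a flipped bit on "disk" surfaces as an error, not as garbage.
import hashlib

class ChecksummedStore:
    def __init__(self):
        self.data = {}   # key -> block bytes (mutable, to simulate rot)
        self.sums = {}   # key -> sha256 digest recorded at write time

    def write(self, key, block):
        self.data[key] = bytearray(block)
        self.sums[key] = hashlib.sha256(block).digest()

    def read(self, key):
        block = bytes(self.data[key])
        if hashlib.sha256(block).digest() != self.sums[key]:
            raise IOError("checksum mismatch: block %r is corrupt" % key)
        return block

s = ChecksummedStore()
s.write("blk0", b"important data")
assert s.read("blk0") == b"important data"
s.data["blk0"][0] ^= 0xFF    # simulate on-disk bit rot
# s.read("blk0") now raises IOError instead of returning bad data
```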
On Mon, Nov 23, 2009 at 9:25 AM, Tom Buskey wrote:
> RAID5 will have faster read performance than RAID 1 or a single disk.
This is possible if there are multiple I/O paths. Many controllers
don't do that; there is a single I/O path to the RAID engine. So
adding disks actually slows things down.
> On 23-Nov-2009, Alan Johnson sent:
> Nope. As I understand it, when you do an iSCSI export of a ZFS
> pool, you're getting a block device with the advantages of the
> ZFS storage mechanism without any particular filesystem on it.
>
> I could be wrong, of course. I haven't played with that part
On 23-Nov-2009, Alan Johnson sent:
> Nice! But then what does it look like to the client? Doesn't
> iSCSI appear like a block device that still needs a file system
> on top of it?
Correct. You get a block device that you can put any filesystem
you like on.
> Does the client need ZFS support?
No
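For the curious, the server-side setup looked roughly like this in the OpenSolaris era (a sketch from memory, not a tested recipe; `tank/vol0`, `server.example.com`, and `/dev/sdb` are made-up example names -- check your platform's docs):

```shell
# Sketch only -- OpenSolaris-era commands; verify against current docs.
# Server: carve a 100 GB block device (zvol) out of the pool, export it.
zfs create -V 100G tank/vol0
zfs set shareiscsi=on tank/vol0      # legacy shareiscsi property

# Linux client: log in to the target, then treat it as an ordinary disk.
iscsiadm -m discovery -t sendtargets -p server.example.com
iscsiadm -m node --login
mkfs.ext3 /dev/sdb                   # any filesystem you like on top
```

The client just sees a SCSI disk; the snapshots, checksums, and RAIDZ redundancy all happen underneath it on the server.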
On Mon, Nov 23, 2009 at 1:41 PM, Bill McGonigle wrote:
> yeah, NFS and databases aren't really a great mapping - not enough
> semantics are supported even if they were fast enough.
But there is not a lot of metadata manipulation for DB files, mostly mtime,
as I turn off atime.
On Mon, Nov 23, 2009 at 1:09 PM, Tom Buskey wrote:
>
>
> On Mon, Nov 23, 2009 at 10:31 AM, Alan Johnson wrote:
> I have a Ubuntu 9.10 box that boots a RAID6 with GRUB2. I expect that is
> very new, eh?
>
> So your Ubuntu does software RAID6 on the boot disks with / and /boot?
>
Um, certainly /
> I don't want to go commercial, so I won't guess the name, but are the
> initials BFCC, by chance? ;-)
you'll have to check when the new website gets pushed to live. :)
> A fellow on this list at the Birthday party said that iSCSI had a lot less
> network overhead and much better real throughput.
On Mon, Nov 23, 2009 at 11:18 AM, Bill McGonigle wrote:
> I know of a computer company over in Lebanon that's selling 16 and
> 24-bay Nexenta-based ZFS storage servers that'll do iscsi, nfs, smb with
> impressive ease. ;) OpenSolaris kernel, Ubuntu userland, block-level
> dedup coming early next year.
On Mon, Nov 23, 2009 at 10:31 AM, Alan Johnson wrote:
> On Mon, Nov 23, 2009 at 9:25 AM, Tom Buskey wrote:
>
>> I think the RAID 5 write hole refers to the slowdown on writes with RAID
>> 5. In order to lose data, a 2nd drive needs to fail (as opposed to only 1
>> drive on a RAID 0 or JBOD).
>>
On 11/23/2009 10:31 AM, Alan Johnson wrote:
> We! From all the theory I've read and watched, ZFS is the end
> game. I'm still trying to figure out how to work it into cloud
> storage. Does FreeNAS somehow enable ZFS over iSCSI? I can't wrap my
> mind around that, but the benefits of ZFS on
On Mon, Nov 23, 2009 at 9:25 AM, Tom Buskey wrote:
> I think the RAID 5 write hole refers to the slowdown on writes with RAID
> 5. In order to lose data, a 2nd drive needs to fail (as opposed to only 1
> drive on a RAID 0 or JBOD).
>
According to
http://en.wikipedia.org/wiki/Standard_RAID_levels
On Sat, Nov 21, 2009 at 11:40 AM, Bruce Labitt wrote:
> Bill, why not RAID-5? Isn't RAID-5 supposed to be ultra-reliable? As
> in hot swap disks? Or does this just apply to software RAID-5...
>
Wow, a lot of good stuff has been said on this thread. Most of which is
easily referenced in these
On Sat, Nov 21, 2009 at 6:02 PM, Bill McGonigle wrote:
> RAID-5 itself has a problem known as the "RAID-5 write hole" where data
> loss can be guaranteed in certain situations.
I've seen multiple different claims of something called "RAID-5 write hole".
One claim I see is that if there's an
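One common version of the claim can be shown with a toy single-parity stripe (illustrative code, not any real RAID implementation): data and parity are two separate writes, so a crash between them leaves the stripe inconsistent, and a later rebuild silently corrupts an *unrelated* disk.

```python
# Toy RAID-5-style stripe: parity is the XOR of the data disks.
# A crash between the data write and the parity write (the "write
# hole") makes a later rebuild reconstruct the WRONG data.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def parity(disks):
    p = disks[0]
    for d in disks[1:]:
        p = xor(p, d)
    return p

def rebuild(disks, parity_block, lost):
    """Reconstruct disk `lost` as the XOR of parity and the survivors."""
    p = parity_block
    for i, d in enumerate(disks):
        if i != lost:
            p = xor(p, d)
    return p

disks = [b"AAAA", b"BBBB", b"CCCC"]
par = parity(disks)
assert rebuild(disks, par, 1) == b"BBBB"   # consistent stripe: fine

disks[0] = b"ZZZZ"   # new data lands on disk 0...
# ...and the machine dies before parity is rewritten (the write hole).
rebuilt = rebuild(disks, par, 1)  # later, disk 1 fails and is rebuilt
print(rebuilt == b"BBBB")  # False: disk 1 is silently reconstructed wrong
```

Battery-backed controller caches and ZFS's copy-on-write full-stripe writes are two different ways of closing that window.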
On 11/21/2009 11:40 AM, Bruce Labitt wrote:
> Bill, why not RAID-5? Isn't RAID-5 supposed to be ultra-reliable? As
> in hot swap disks? Or does this just apply to software RAID-5...
>
> -Bruce
> who knows very little about this RAID stuff...
RAID-5 itself has a problem known as the "RAID-5 write hole" where data
loss can be guaranteed in certain situations.
On Sat, Nov 21, 2009 at 11:40 AM, Bruce Labitt
wrote:
> Bill, why not RAID-5? Isn't RAID-5 supposed to be ultra-reliable?
RAID 5 is not more reliable than RAID 1. Both can survive the
failure of a single physical disk; both will fail if two disks fail.
Mirror (RAID 1) is simple enough: Each disk holds a complete copy of the
data.
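A back-of-envelope sketch of that comparison (illustrative only; the function names are made up): both layouts survive exactly one failed disk, and what differs is usable capacity.

```python
# Toy comparison of a RAID 1 mirror vs. an n-disk RAID 5 array:
# both tolerate one failed disk; they differ in usable capacity.

def survives(level, n_disks, n_failed):
    if level == "raid1":            # n-way mirror
        return n_failed < n_disks   # survives until the last copy dies
    if level == "raid5":            # single parity disk's worth
        return n_failed <= 1
    raise ValueError(level)

def usable(level, n_disks, disk_size):
    if level == "raid1":
        return disk_size                   # one copy's worth of space
    if level == "raid5":
        return (n_disks - 1) * disk_size   # one disk's worth is parity
    raise ValueError(level)

print(survives("raid1", 2, 1), survives("raid5", 4, 1))    # True True
print(survives("raid1", 2, 2), survives("raid5", 4, 2))    # False False
print(usable("raid1", 2, 1000), usable("raid5", 4, 1000))  # 1000 3000
```

So the reason to pick RAID 5 over a mirror is capacity per dollar, not reliability.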
On Fri, Nov 20, 2009 at 2:42 PM, Bill McGonigle wrote:
> On 11/18/2009 03:33 PM, Ben Scott wrote:
> > I've found hardware RAID to be more reliable when booting with a
> > degraded disk set. A smart controller will just fail the bad member
> > disk and ignore it. Software-based solutions -- which don't kick in
> > until the OS is running -- sometimes get caught up trying to boot from
> > a failed disk.
Bill McGonigle wrote:
> On 11/18/2009 03:33 PM, Ben Scott wrote:
>
>> I've found hardware RAID to be more reliable when booting with a
>> degraded disk set. A smart controller will just fail the bad member
>> disk and ignore it. Software-based solutions -- which don't kick in
>> until the OS is running -- sometimes get caught up trying to boot from
>> a failed disk.
On 11/20/2009 04:43 PM, Ben Scott wrote:
> You could argue that the alternate scenario above is the fault of
> the BIOS or disk controller, that it should be able to recover from an
> I/O error on disk 0 and move on and try disk 1. You're prolly right.
> But this is the pee sea platform we're talking about.
On Fri, Nov 20, 2009 at 2:42 PM, Bill McGonigle wrote:
>> Software-based solutions -- which don't kick in
>> until the OS is running -- sometimes get caught up trying to boot from
>> a failed disk.
>
> "Please don't use RAID-5".
> A healthy, properly configured (and tested) RAID-1 will boot nicely.
On 11/18/2009 03:11 PM, Ben Scott wrote:
> [2] Myself, I've never had a problem with the "megaraid" driver that's been
> part of the standard Linux kernel since circa 2001. Obviously,
> experiences vary.
To split hairs, the megaraid driver was replaced c. 2005 with a new one,
which abandons some of th
On 11/18/2009 03:33 PM, Ben Scott wrote:
> I've found hardware RAID to be more reliable when booting with a
> degraded disk set. A smart controller will just fail the bad member
> disk and ignore it. Software-based solutions -- which don't kick in
> until the OS is running -- sometimes get caught up trying to boot from
> a failed disk.
On Wed, Nov 18, 2009 at 10:44 PM, Ben Scott wrote:
> On Wed, Nov 18, 2009 at 8:32 PM, Alan Johnson wrote:
> > Would you agree that there are plenty of SATA and SCSI drivers
> > that work with most or all correctly implemented devices?
>
> I'm talking about software (the OS, device drivers, etc.).
On Wed, Nov 18, 2009 at 8:32 PM, Alan Johnson wrote:
>> There is no standard software/hardware interface for SATA or SCSI
>> controllers.[1]
>
> If we replace the word "standard" with the word "generic", does that work?
No.
> Would you agree that there are plenty of SATA and SCSI drivers
> that work with most or all correctly implemented devices?
On Wed, 2009-11-18 at 15:11 -0500, Ben Scott wrote:
> [2] Myself, I've never had a problem with the "megaraid" driver that's been
> part of the standard Linux kernel since circa 2001. Obviously,
> experiences vary.
I've just had a rather foul run-in with a MegaRAID chipset embedded on a
SuperMicro server board.
On Wed, Nov 18, 2009 at 3:11 PM, Ben Scott wrote:
> On Wed, Nov 18, 2009 at 1:07 PM, Alan Johnson wrote:
> > The only reason I've ever had to install a driver
> for a RAID controller is for online management. As far as drive access,
> all the controllers I've come across just look like any other SATA or
> SCSI controller from a device exposure.
On Wed, Nov 18, 2009 at 3:08 PM, Bill McGonigle wrote:
> Yeah, if you've got local disk connectivity, software RAID is usually
> faster and more stable than hardware RAID, and is certainly more portable.
Software RAID is portable across hardware, but not operating systems.
Hardware RAID is portable across operating systems, but not hardware.
On Wed, Nov 18, 2009 at 1:07 PM, Alan Johnson wrote:
> The only reason I've ever had to install a driver
> for a RAID controller is for online management. As far as drive access, all
> the controllers I've come across just look like any other SATA or SCSI
> controller from a device exposure.
On 11/16/2009 10:39 AM, Chip Marshall wrote:
> No, I meant the hardware RAID. While yes, the drive array
> appears as a block device, that block device is being handled by
> a driver for the RAID controller, which can have flaws leading
> to problems.
Yeah, if you've got local disk connectivity, software RAID is usually
faster and more stable than hardware RAID, and is certainly more portable.
On Mon, Nov 16, 2009 at 10:39 AM, Chip Marshall wrote:
> No, I meant the hardware RAID. While yes, the drive array
> appears as a block device, that block device is being handled by
> a driver for the RAID controller, which can have flaws leading
> to problems.
>
> I don't recall the specifics.
On Sun, Nov 15, 2009 at 12:29 PM, Chip Marshall wrote:
> I've heard that the RAID is unstable under older Linux kernels, but that
> was 3~4 years ago, so I suspect it's been fine for a while now.
I've been using Linux software RAID for a number of years without
any issue that I could blame on it.
On Sun, Nov 15, 2009 at 12:29 PM, Chip Marshall wrote:
> I've heard that the RAID is unstable under older Linux kernels, but that
> was 3~4 years ago, so I suspect it's been fine for a while now.
>
Software RAID, of course, as hardware RAID should be transparent to the OS
at the block device point.