On Sun, Jun 7, 2009 at 12:01 AM, Wojciech Puchar
woj...@wojtek.tensor.gdynia.pl wrote:
(very roughly, in the non-sequential access case) expected to deliver
performance of four drives in a RAID0 array?
According to all the Sun documentation, the I/O throughput of a raidz
configuration is equal to that of a single drive.
Exactly what I say. It's like RAID3, not RAID5, which can serve independent
reads from different drives.
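For illustration, the two layouts being compared would be created roughly
like this (pool and device names here are invented, not from the thread):

  # Four-disk stripe: independent random reads can be serviced by all
  # four disks, so random-read IOPS scales with the number of disks.
  zpool create fast da0 da1 da2 da3

  # Five-disk raidz (4 data + 1 parity): each logical block is spread
  # across the whole vdev, so random-read IOPS is roughly that of one disk.
  zpool create safe raidz da0 da1 da2 da3 da4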
On Sat, Jun 6, 2009 at 12:54 PM, Ivan Voras ivo...@freebsd.org wrote:
Sorry to come into the discussion late, but I just want to confirm
something. The configuration below is a stripe of four components, each of
which is RAIDZ2, right? If, as was discussed later in the thread, RAIDZ(2)
is more similar to RAID3 than RAID5 for random performance, is the given
configuration (very roughly, in the non-sequential access case) expected to
deliver the performance of four drives in a RAID0 array?
We remade the pool using 3x 8-drive raidz2 vdevs, and performance has
been great (400 MBytes/s write, almost 3 GBytes/s sequential read, 800
MBytes/s random read).
Yep, that corresponds with what we saw, although we were getting slightly
higher write rates with our 46-drive configuration.
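That layout would be created along these lines (device names invented; ZFS
stripes writes across the three raidz2 vdevs automatically):

  zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
    raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21 da22 da23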
No, you don't; you just make sure you scrub the pools regularly, once a week
for instance.
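For example, a weekly scrub can be scheduled from root's crontab (the pool
name "tank" is just an example):

  # root crontab entry: run a scrub every Sunday at 03:00
  0 3 * * 0 /sbin/zpool scrub tank

Progress and results then show up in "zpool status tank".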
AGAIN, an example: I had one drive fail, took it out, but recovered all the
data because copies was set to more than one for everything.
Or most data; the datasets with copies=1 weren't critical for me.
In the case of ZFS, yes, but not always. E.g. you could have a concatenated
volume, where you only start writing to the second disk when the first is
full.
I don't know exactly how ZFS allocates space, but I use gconcat with UFS and
it isn't true. UFS does jump between zones (called cylinder groups), so both
disks get written to.
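For comparison, a minimal gconcat + UFS setup looks roughly like this
(device names are examples):

  kldload geom_concat              # if the module isn't loaded already
  gconcat label -v data /dev/da1 /dev/da2
  newfs -U /dev/concat/data        # UFS2 with soft updates
  mount /dev/concat/data /mnt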
You shouldn't need to alter the copies attribute to recover from disk
failures, as the normal RAID should take care of that. What the copies
attribute is for ...
I don't think we understand each other. I say that when I want 2 copies,
ZFS should rebuild the second copy if it's gone and I run a resilver.
It does not.
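To make the point concrete (dataset name is hypothetical): copies only
affects blocks written after the property is set.

  zfs set copies=2 tank/important
  # blocks written before this keep their single copy; a scrub or resilver
  # will not create the missing second copies by itself - only rewriting
  # the files does.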
... This is common practice and we see no issues with it.
-Original Message-
From: owner-freebsd-hack...@freebsd.org
[mailto:owner-freebsd-hack...@freebsd.org] On Behalf Of
xorquew...@googlemail.com
Sent: 01 June 2009 01:00
To: freebsd-hackers@freebsd.org
Subject: Re: Request for opinions - gvinum or ccd?
freebsd-hackers@freebsd.org; 'Mike Meyer'; xorquew...@googlemail.com
Subject: RE: Request for opinions - gvinum or ccd?
Yep, it probably isn't clear enough; it does mention stuff about spreading
it across vdevs, but doesn't say striped.
Isn't spreading and striping actually the same thing?
On Behalf Of Wojciech Puchar
Sent: 01 June 2009 01:26
To: Mike Meyer
Cc: freebsd-hackers@freebsd.org; xorquew...@googlemail.com
Subject: Re: Request for opinions - gvinum or ccd?
Disks, unlike software, sometimes fail. Using redundancy can help
Modern SATA drives fail VERY often; about 30% of the drives I bought
recently failed in less than a year.
On Mon, 2009-06-01 at 09:32 +0100, krad wrote:
ZFS has been designed for highly scalable, redundant disk pools, so using it
on a single drive kind of goes against its ethos. Remember, a lot of the
blurb in the man page was written by Sun and is therefore written with
corporate deployments in mind.
-Original Message-
From: Tom Evans [mailto:tevans...@googlemail.com]
Sent: 01 June 2009 13:50
To: krad
Cc: xorquew...@googlemail.com; freebsd-hackers@freebsd.org
Subject: RE: Request for opinions - gvinum or ccd?
On Mon, 2009-06-01 at 14:19 +0100, krad wrote:
No, you would only lose the data for that block. ZFS also checksums
metadata, but by default keeps multiple copies of it, so that's fairly
resilient. If you had copies set to 1 then you wouldn't lose the block
either, unless you were really unlucky.
It's all done on write, so if you update the file it will have multiple
copies again.
Which is exactly what I said in the beginning.
On Sat, 30 May 2009, xorquew...@googlemail.com wrote:
...
I'll definitely be looking at ZFS. Thanks for the info.
I've never been dead set on any option in particular, it's just that I
wasn't aware of anything that would do what I wanted that wasn't just
simple RAID0 and manual backups.
On 2009-05-31 13:13:24, krad wrote:
Please don't whack gstripe and ZFS together. It should work, but it's ugly
and you might run into issues; getting out of that will be harder than with
a pure ZFS solution.
Yeah, I will be using pure ZFS, having read everything I can find on it so
far. I was skeptical of ...
On Sun, 31 May 2009 13:13:24 +0100
krad kra...@googlemail.com wrote:
Please don't whack gstripe and ZFS together. It should work, but it's ugly
and you might run into issues; getting out of that will be harder than with
a pure ZFS solution.
Yeah, I sorta suspected that might be the case. ZFS does ...
Would create a striped data set across da1 and da2
What kind of performance gain can I expect from this? I'm purely thinking
about performance now - the integrity checking stuff of ZFS is a pleasant
extra.
With striping - as much as with gstripe; ZFS does roughly the same.
With RAID-Z ...
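The command being referred to is presumably something along the lines of
(pool name is a guess):

  # stripe a pool across two whole disks - no redundancy
  zpool create data da1 da2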
-Original Message-
From: Mike Meyer [mailto:m...@mired.org]
Sent: 31 May 2009 21:33
To: krad
Cc: 'Mike Meyer'; xorquew...@googlemail.com; freebsd-hackers@freebsd.org
Subject: Re: Request for opinions - gvinum or ccd?
On Sun, 31 May 2009 13:13:24 +0100
krad kra...@googlemail.com wrote:
Please ...
You should really use raidz2 in ZFS (or some double-parity RAID on other
systems) if you are worried about data integrity, the reason being that the
odds of the CRC checking not detecting an error are much higher these days.
The extra layer of parity pushes those odds out much further.
There is one last thing I'd like clarified. From the zpool
manpage:
In order to take advantage of these features, a pool must make use of
some form of redundancy, using either mirrored or raidz groups. While
ZFS supports running in a non-redundant configuration, where each root
vdev ...
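Concretely (names are illustrative), redundancy at the pool level is what
lets ZFS repair a bad block rather than just detect it:

  # a mirrored pair gives ZFS a second copy to read from and repair with
  zpool create tank mirror da1 da2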
-Original Message-
From: Wojciech Puchar [mailto:woj...@wojtek.tensor.gdynia.pl]
Sent: 31 May 2009 22:57
To: xorquew...@googlemail.com
Cc: krad; freebsd-hackers@freebsd.org
Subject: Re: Request for opinions - gvinum or ccd?
Disks, unlike software, sometimes fail. Using redundancy can help
Modern SATA drives fail VERY often; about 30% of the drives I bought
recently failed in less than a year.
... both checksum on and copies > 1 on, and the latter isn't the default.
It's probably better to let zpool provide the redundancy ...
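For example, you can check what a dataset is actually using (dataset name
is hypothetical):

  # checksum is on by default; copies defaults to 1, so extra copies have
  # to be requested explicitly
  zfs get checksum,copies tank/home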
On Sat, 30 May 2009 18:52:39 +0100
xorquew...@googlemail.com wrote:
Simple question then as the handbook describes both ccd and gvinum -
which should I pick?
My first reaction was neither, then I realized - you didn't say what
version of FreeBSD you're running. But if you're running a supported ...
On Sat, 30 May 2009 20:18:40 +0100
xorquew...@googlemail.com wrote:
If you're running a 7.X 64-bit system with a couple of gigs of RAM, expect
it to be in service for years without having to reformat the disks, and can
afford another drive, I'd recommend going to raidz on a three-drive setup.
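That is, something along the lines of (device names invented):

  # single-parity raidz across three drives: one drive can fail without
  # data loss, and checksum errors can be self-healed from parity
  zpool create tank raidz da0 da1 da2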
On 2009-05-30 16:27:44, Mike Meyer wrote:
The last bit is wrong. Moving a zfs pool between two systems is pretty
straightforward. The configuration information is on the drives; you
just do "zpool import <poolname>" after plugging them in, and if the mount
point exists, it'll mount it. If the system ...
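That is, roughly (pool name is an example):

  zpool export tank     # on the old system, before pulling the drives
  # on the new system, after attaching them:
  zpool import          # lists pools found on the attached disks
  zpool import tank     # imports the pool and mounts its filesystems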