On 25 mars 2010, at 22:00, Bruno Sousa bso...@epinfante.com wrote:
Hi,
Indeed the 3 disks per vdev (raidz2) seems a bad idea...but it's the
system I have now.
Regarding the performance...let's assume that a bonnie++ benchmark
could go to 200 MB/s in. The possibility of getting the same
Hi,
I think that in this case the CPU is not the bottleneck, since I'm not
using ssh.
However, my 1 Gb/s network link probably is the bottleneck.
Bruno
On 26-3-2010 9:25, Erik Ableson wrote:
On 25 mars 2010, at 22:00, Bruno Sousa bso...@epinfante.com wrote:
Hi,
Indeed the 3 disks per vdev
Hi,
The jumbo frames in my case give me a boost of around 2 MB/s, so it's
not that much.
Now I will play with link aggregation and see how it goes, and of course
I'm counting on incremental replication being slower...but since the
amount of data would be much less, probably it will still
Using fewer than 4 disks in a raidz2 defeats the purpose of raidz2, as
you will always be in a degraded mode.
Freddie, are you nuts? This is false.
Sure you can use raidz2 with 3 disks in it. But it does seem pointless to do
that instead of a 3-way mirror.
Coolio. Learn something new every day. One more way that raidz is
different from RAID5/6/etc.
Freddie, again, you're wrong. Yes, it's perfectly acceptable to create either
raid-5 or raidz using 2 disks. It's not degraded, but it does seem pointless
to do this instead of a mirror.
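For anyone who wants to try the two layouts being compared, the commands differ only in the keyword. These are alternatives, not meant to be run together, and the device names are placeholders:

```shell
# Alternative 3-disk layouts; both survive any two disk failures.
# c0t0d0..c0t2d0 are placeholder device names.
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0   # raidz2 across 3 disks
zpool create tank mirror c0t0d0 c0t1d0 c0t2d0   # 3-way mirror
```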
Just because most people are probably too lazy to click the link, I’ll paste a
phrase from that sun.com webpage below:
“Creating a single-parity RAID-Z pool is identical to creating a mirrored pool,
except that the ‘raidz’ or ‘raidz1’ keyword is used instead of ‘mirror’.”
And
“zpool create
OK, I have 3Ware looking into a driver for my cards (3ware 9500S-8) as
I dont see an OpenSolaris driver for them.
But this leads me that they do have a FreeBSD Driver, so I could still
use ZFS.
What does everyone think about that? I bet it is not as mature as on
OpenSolaris.
mature is
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On 26.03.2010 12:46, Edward Ned Harvey wrote:
OK, I have 3Ware looking into a driver for my cards (3ware 9500S-8) as
I dont see an OpenSolaris driver for them.
But this leads me that they do have a FreeBSD Driver, so I could still
use ZFS.
What
It seems like zpool export will quiesce the drives and mark the pool
as exported. This would be good if we wanted to move the pool at that
time, but we are thinking of a disaster recovery scenario. It would be
nice to export just the config so that if our controller dies, we can
use the
In the Thoughts on ZFS Pool Backup Strategies thread it was stated
that zfs send sends uncompressed data and uses the ARC.
If zfs send sends uncompressed data which has already been compressed,
this is not very efficient, and it would be *nice* to see it send the
original compressed data. (or an
On Fri, Mar 26, 2010 at 07:46:01AM -0400, Edward Ned Harvey wrote:
And FreeBSD in general will be built using older versions of packages than
what's in OpenSolaris.
Both are good OSes. If you can use FreeBSD but OpenSolaris doesn't have the
driver for your hardware, go for it.
While I use
While I use zfs with FreeBSD (FreeNAS appliance with 4x SATA 1 TByte
drives), it is trailing OpenSolaris by at least a year if not longer and
hence lacks many key features for which people pick zfs over other file
systems. The performance, especially CIFS, is quite lacking. Purportedly
(I have never
On Fri, March 26, 2010 07:06, Edward Ned Harvey wrote:
In the Thoughts on ZFS Pool Backup Strategies thread it was stated
that zfs send sends uncompressed data and uses the ARC.
If zfs send sends uncompressed data which has already been compressed,
this is not very efficient, and it would be
On Fri, Mar 26 at 7:29, Edward Ned Harvey wrote:
Using fewer than 4 disks in a raidz2 defeats the purpose of raidz2, as
you will always be in a degraded mode.
Freddie, are you nuts? This is false.
Sure you can use raidz2 with 3 disks in it. But it does seem pointless to
do
On Fri, Mar 26 at 11:10, Sanjeev wrote:
On Thu, Mar 25, 2010 at 02:45:12PM -0700, John Bonomi wrote:
I'm sorry if this is not the appropriate place to ask, but I'm a
student and for an assignment I need to be able to show at the hex
level how files and their attributes are stored and referenced
Hi,
You might take a look at
http://www.osdevcon.org/2008/files/osdevcon2008-max.pdf
and
http://www.osdevcon.org/2008/files/osdevcon2008-proceedings.pdf, starting
at page 36.
Or you might just use od -x file for the file part of your assignment.
Have fun.
max
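A quick illustration of the od -x suggestion above (the file path is just an example):

```shell
# Write a tiny file, then dump its raw bytes as hex.
printf 'ZFS!' > /tmp/zfs_sample.txt
od -x /tmp/zfs_sample.txt
```

Note that od -x groups bytes into 2-byte words in host byte order, so on little-endian machines adjacent bytes appear swapped; od -t x1 shows the bytes in file order.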
Eric D. Mudama wrote:
On Fri,
On Fri, March 26, 2010 07:38, Edward Ned Harvey wrote:
Coolio. Learn something new every day. One more way that raidz is
different from RAID5/6/etc.
Freddie, again, you're wrong. Yes, it's perfectly acceptable to create
either raid-5 or raidz using 2 disks. It's not degraded, but it does
On Fri, March 26, 2010 09:46, David Dyer-Bennet wrote:
I don't know that it makes sense to. There are lots of existing filter
packages that do compression; so if you want compression, just put them in
your pipeline. That way you're not limited by what zfs send has
implemented, either. When
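The pipeline approach described above might look like this (dataset, snapshot, and host names are placeholders):

```shell
# zfs send emits an uncompressed stream, so compress it with an
# external filter for transport and decompress on the receiving end.
zfs send tank/data@backup | gzip -c | \
    ssh backuphost 'gunzip -c | zfs receive -F pool2/data'
```

Any filter that reads stdin and writes stdout (gzip, bzip2, etc.) can be swapped in without waiting on zfs send itself to grow the feature.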
On Fri, 26 Mar 2010, Edward Ned Harvey wrote:
mature is not the right term in this case. FreeBSD has been around much
longer than OpenSolaris, and it's equally if not more mature. FreeBSD is
probably somewhat less featureful. Because their focus is heavily on the
reliability and stability
Does zfs handle 4kb sectors properly or does it always assume 512b sectors?
If it does, we could manually create a slice properly aligned and set zfs to
use it...
--
The sender of this email subscribes to Perimeter E-Security's email
anti-virus service. This email has been scanned for
Yes, it does.
Bottone, Frank wrote:
Does zfs handle 4kb sectors properly or does it always assume 512b
sectors?
If it does, we could manually create a slice properly aligned and set
zfs to use it…
On 26.03.2010 16:55, Bottone, Frank wrote:
Does zfs handle 4kb sectors properly or does it always assume 512b sectors?
If it does, we could manually create a slice properly aligned and set
zfs to use it?
A real simple patch would be to
Awesome!
Just when I thought zfs couldn’t get any better...
-Original Message-
From: larry@sun.com [mailto:larry@sun.com]
Sent: Friday, March 26, 2010 11:58 AM
To: Bottone, Frank
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS and 4kb sector Drives (All new
Hi All,
I am looking at ZFS and I get that they call it RAIDZ, which is similar to RAID
5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5 TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
And I should mention that I have a boot drive (500 GB SATA) so I don't have to
consider booting from the RAID, I just want to use it for storage.
- Original Message -
From: Slack-Moehrle mailingli...@mailnewsrss.com
To: zfs-discuss zfs-discuss@opensolaris.org
Sent: Friday, March 26, 2010
Slack-Moehrle wrote:
And I should mention that I have a boot drive (500 GB SATA) so I don't have to
consider booting from the RAID, I just want to use it for storage.
- Original Message -
From: Slack-Moehrle mailingli...@mailnewsrss.com
To: zfs-discuss zfs-discuss@opensolaris.org
Sent:
On Fri, Mar 26, 2010 at 1:39 PM, Slack-Moehrle mailingli...@mailnewsrss.com
wrote:
Hi All,
I am looking at ZFS and I get that they call it RAIDZ, which is similar to
RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data
protection?
So if I have 8 x 1.5 TB drives, wouldn't I:
On Fri, 26 Mar 2010, Slack-Moehrle wrote:
Hi All,
I am looking at ZFS and I get that they call it RAIDZ which is
similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better
for data protection?
I think so--at the expense of extra disks for a given amount of available
storage.
On Fri, Mar 26, 2010 at 11:39 AM, Slack-Moehrle
mailingli...@mailnewsrss.com wrote:
I am looking at ZFS and I get that they call it RAIDZ, which is similar to
RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data
protection?
So if I have 8 x 1.5 TB drives, wouldn't I:
-
So if I have 8 x 1.5 TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How does one do this with ZFS?
So you would do:
zpool create tank mirror drive1 drive2 mirror drive3 drive4
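Scaled up to the eight-drive layout in the question, that might look like the following (drive names are placeholders). Note there is no separate "stripe" step: ZFS stripes writes across all the mirror vdevs in the pool automatically.

```shell
# Four 2-way mirrors in one pool (RAID 10 equivalent); ZFS stripes
# across the four mirror vdevs on its own.
zpool create tank \
    mirror drive1 drive5 \
    mirror drive2 drive6 \
    mirror drive3 drive7 \
    mirror drive4 drive8
```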
On 26.03.2010 20:04, Slack-Moehrle wrote:
So if I have 8 x 1.5 TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How
RAIDZ = RAID5, so lose 1 drive (1.5 TB)
RAIDZ2 = RAID6, so lose 2 drives (3 TB)
RAIDZ3 = RAID7(?), so lose 3 drives (4.5 TB).
What you lose in usable space, you gain in redundancy.
-m
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
Can someone explain in terms of usable space RAIDZ vs RAIDZ2 vs RAIDZ3? With
8 x 1.5 TB?
I apologize for seeming dense, I just am confused about non-standard RAID
setups; they seem tricky.
raidz eats one disk. Like RAID5
raidz2 digests another one. Like RAID6
raidz3 yet another one.
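With the 8 x 1.5 TB drives from the question, the rough usable capacities work out as follows (ignoring metadata overhead and reservations):

```shell
# Usable space of an 8-disk vdev of 1.5 TB drives, per raidz level:
# p parity disks means (8 - p) data disks' worth of capacity.
for p in 1 2 3; do
    awk -v p="$p" 'BEGIN { printf "raidz%d: %.1f TB usable\n", p, (8 - p) * 1.5 }'
done
```

That gives 10.5 TB for raidz1, 9.0 TB for raidz2, and 7.5 TB for raidz3.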
On Fri, 26 Mar 2010, Freddie Cash wrote:
Overly-simplified, a ZFS pool is a RAID0 stripeset across all the member vdevs,
which can be
Except that ZFS does not support RAID0. I don't know why you guys
persist with these absurd claims and continue to use wrong and
misleading terminology.
Bob Friesenhahn wrote:
Except that ZFS does not support RAID0. I don't know why you guys
persist with these absurd claims and continue to use wrong and
misleading terminology.
What is the main difference between RAID0 and striping (what zfs really
does, I guess?)
On Fri, Mar 26, 2010 at 12:25:54PM -0700, Malte Schirmacher wrote:
Bob Friesenhahn wrote:
Except that ZFS does not support RAID0. I don't know why you guys
persist with these absurd claims and continue to use wrong and
misleading terminology.
What is the main difference between RAID0
On Fri, Mar 26, 2010 at 12:21 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 26 Mar 2010, Freddie Cash wrote:
Overly-simplified, a ZFS pool is a RAID0 stripeset across all the member
vdevs, which can be
Except that ZFS does not support RAID0.
Wow, what part of overly
On Mar 26, 2010, at 8:58 AM, Svein Skogen wrote:
On 26.03.2010 16:55, Bottone, Frank wrote:
Does zfs handle 4kb sectors properly or does it always assume 512b sectors?
If it does, we could manually create a slice properly aligned and set
On Mar 25, 2010, at 7:25 PM, antst wrote:
I have two storages, both on snv133. Both filled with 1 TB drives.
1) stripe over two raidz vdevs, 7 disks in each. In total the available size
is (7-1)*2 = 12 TB
2) zfs pool over HW raid, also 12 TB.
Both storages keep the same data with minor
On Fri, March 26, 2010 14:21, Bob Friesenhahn wrote:
On Fri, 26 Mar 2010, Freddie Cash wrote:
Overly-simplified, a ZFS pool is a RAID0 stripeset across all the member
vdevs, which can be
Except that ZFS does not support RAID0. I don't know why you guys
persist with these absurd claims and
On Fri, March 26, 2010 14:25, Malte Schirmacher wrote:
Bob Friesenhahn wrote:
Except that ZFS does not support RAID0. I don't know why you guys
persist with these absurd claims and continue to use wrong and
misleading terminology.
What is the main difference between RAID0 and striping
On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
Hi,
The jumbo frames in my case give me a boost of around 2 MB/s, so it's not
that much.
That is about right. IIRC, the theoretical max is about 4% improvement, for
MTU of 8KB.
Now i will play with link aggregation and see how it goes,
On Mar 25, 2010, at 2:45 PM, John Bonomi wrote:
I'm sorry if this is not the appropriate place to ask, but I'm a student and
for an assignment I need to be able to show at the hex level how files and
their attributes are stored and referenced in ZFS. Are there any resources
available that
It depends a bit on how you set up the drives really. You could make one raidz
vdev of 8 drives, losing one of them for parity, or you could make two raidz
vdevs of 4 drives each and lose two drives for parity (one for each vdev). You
could also do one raidz2 vdev of 8 drives and lose two
On Mar 26, 2010, at 4:46 AM, Edward Ned Harvey wrote:
What does everyone think about that? I bet it is not as mature as on
OpenSolaris.
mature is not the right term in this case. FreeBSD has been around much
longer than opensolaris, and it's equally if not more mature.
Bill Joy might take
Hi Richard,
Richard Elling wrote:
On Mar 25, 2010, at 2:45 PM, John Bonomi wrote:
I'm sorry if this is not the appropriate place to ask, but I'm a student and
for an assignment I need to be able to show at the hex level how files and
their attributes are stored and referenced in ZFS. Are
On Fri, 26 Mar 2010, Malte Schirmacher wrote:
Bob Friesenhahn wrote:
Except that ZFS does not support RAID0. I don't know why you guys
persist with these absurd claims and continue to use wrong and
misleading terminology.
What is the main difference between RAID0 and striping (what zfs
On Fri, 26 Mar 2010, David Dyer-Bennet wrote:
The question was essentially "Wait, I don't see RAID 10 here, and that's
what I like. How do I do that?" I think the answer was responsive and
not misleading enough to be dangerous; the differences can be explicated
later.
Most of us choose a pool
On Fri, 26 Mar 2010, Freddie Cash wrote:
On Fri, Mar 26, 2010 at 12:21 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 26 Mar 2010, Freddie Cash wrote:
Overly-simplified, a ZFS pool is a RAID0 stripeset across all the
member
vdevs, which can be
Hi
I have a couple of questions
I currently have a 4-disk RaidZ1 setup and want to move to a RaidZ2
4x2TB = RaidZ1 (tank)
My current plan is to setup
8x1.5TB in a RAIDZ2 and migrate the data from the tank vdev over.
What's the best way to accomplish this with minimal disruption?
I have seen the
Richard,
My challenge to you is that at least three vendors that I know of built
their storage platforms on FreeBSD. One of them sells $4bn/year of
product - pretty sure that eclipses all (Open)Solaris-based storage ;)
-marc
On 3/26/10, Richard Elling richard.ell...@gmail.com wrote:
On Mar 26,
Hi
I'm planning on setting up two RaidZ2 volumes in different pools for added
flexibility in removing / resizing (from what I understand if they were in the
same pool I can't remove them at all). I also have got an SSD drive that I was
going to use as Cache (L2ARC). How do I set this up to have
On 26.03.2010 23:25, Marc Nicholas wrote:
Richard,
My challenge to you is that at least three vendors that I know of built
their storage platforms on FreeBSD. One of them sells $4bn/year of
product - pretty sure that eclipses all
On 03/27/10 11:22 AM, Muhammed Syyid wrote:
Hi
I have a couple of questions
I currently have a 4-disk RaidZ1 setup and want to move to a RaidZ2
4x2TB = RaidZ1 (tank)
My current plan is to setup
8x1.5TB in a RAIDZ2 and migrate the data from the tank vdev over.
What's the best way to accomplish
zfs send s...@oldpool | zfs receive newpool
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 03/27/10 11:32 AM, Svein Skogen wrote:
On 26.03.2010 23:25, Marc Nicholas wrote:
Richard,
My challenge to you is that at least three vendors that I know of built
their storage platforms on FreeBSD. One of them sells $4bn/year of
product - pretty sure that eclipses all (Open)Solaris-based
On Fri, March 26, 2010 17:26, Muhammed Syyid wrote:
Hi
I'm planning on setting up two RaidZ2 volumes in different pools for added
flexibility in removing / resizing (from what I understand if they were in
the same pool I can't remove them at all).
What do you mean remove?
You cannot remove
On 03/27/10 11:33 AM, Richard Jahnel wrote:
zfs send s...@oldpool | zfs receive newpool
In the OP's case, a recursive send is in order.
--
Ian.
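Spelled out, the recursive variant might look like this (pool and snapshot names are placeholders):

```shell
# Snapshot every dataset in the pool at once, then replicate the whole
# hierarchy, including snapshots and properties, into the new pool.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F -d newpool
```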
On Mar 26, 2010, at 3:25 PM, Marc Nicholas wrote:
Richard,
My challenge to you is that at least three vendors that I know of built
their storage platforms on FreeBSD. One of them sells $4bn/year of
product - pretty sure that eclipses all (Open)Solaris-based storage ;)
FreeBSD 8 or FreeBSD
On 03/27/10 09:39 AM, Richard Elling wrote:
On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
Hi,
The jumbo frames in my case give me a boost of around 2 MB/s, so it's not that
much.
That is about right. IIRC, the theoretical max is about 4% improvement, for
MTU of 8KB.
Now i
OK, so I made progress today. FreeBSD sees all of my drives, ZFS is acting
correctly.
Now for my confusion.
RAIDz3
# zpool create datastore raidz3 da0 da1 da2 da3 da4 da5 da6 da7
Gives: 'raidz3' no such GEOM provider
# I am looking at the best practices guide and I am confused about adding a
On Fri, Mar 26, 2010 at 6:29 PM, Slack-Moehrle mailingli...@mailnewsrss.com
wrote:
OK, so I made progress today. FreeBSD sees all of my drives, ZFS is acting
correctly.
Now for my confusion.
RAIDz3
# zpool create datastore raidz3 da0 da1 da2 da3 da4 da5 da6 da7
Gives: 'raidz3' no such
On Fri, Mar 26, 2010 at 5:42 PM, Richard Elling richard.ell...@gmail.comwrote:
On Mar 26, 2010, at 3:25 PM, Marc Nicholas wrote:
Richard,
My challenge to you is that at least three vendors that I know of built
their storage platforms on FreeBSD. One of them sells $4bn/year of
product -
On Mar 26, 2010, at 23:37, David Dyer-Bennet d...@dd-b.net wrote:
On Fri, March 26, 2010 14:25, Malte Schirmacher wrote:
Bob Friesenhahn wrote:
Except that ZFS does not support RAID0. I don't know why you guys
persist with these absurd claims and continue to use wrong and
misleading
For the time being, the EARS series of drives actually present 512 byte sectors
to the o/s through emulation in firmware.
The drive I tested was WD20EARS (2TB WD Caviar Green Advanced Format drives):
MDL: WD20EARS-00S81
DATE: 29 DEC 2009
DCM: HBRNHT2BB
DCX: 6019S1W87
LBA: 3907029168
The LBA
I have a question about using mixed vdev in the same zpool and what the
community opinion is on the matter. Here is my setup:
I have four 1 TB drives and two 500 GB drives. When I first set up ZFS I was
under the assumption that it does not really care much about how you add
devices to the pool