On Sat, Jan 08, 2011 at 12:33:50PM -0500, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Garrett D'Amore
When you purchase NexentaStor from a top-tier Nexenta Hardware Partner,
you get a product that has been
From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
Other OS's have had problems with the Broadcom NICs as well.
Yes. The difference is, when I go to support.dell.com and punch in my
service tag, I can download updated firmware and drivers for RHEL that (at
least supposedly) solve the problem. I
On Jan 9, 2011, at 4:19 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
Other OS's have had problems with the Broadcom NICs as well.
Yes. The difference is, when I go to support.dell.com and punch in my
service
Just to add a bit to this, I just love sweeping generalizations...
On 9 Jan 2011, at 19:33 , Richard Elling wrote:
On Jan 9, 2011, at 4:19 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
Other OS's have had
As for certified systems, It's my understanding that Nexenta themselves don't
certify anything. They have systems which are recommended and supported by
their network of VAR's.
The certified solutions listed on Nexenta's website were certified by Nexenta.
On 01/ 6/11 05:28 AM, Edward Ned Harvey wrote:
From: Khushil Dep [mailto:khushil@gmail.com]
I've deployed large SAN's on both SuperMicro 825/826/846 and Dell
R610/R710's and I've not found any issues so far. I always make a point of
installing Intel chipset NIC's on the DELL's and disabling
On Thu, Jan 6, 2011 at 11:36 PM, Garrett D'Amore garr...@nexenta.com wrote:
On 01/ 6/11 05:28 AM, Edward Ned Harvey wrote:
See my point? Next time I buy a server, I do not have confidence to
simply expect solaris on dell to work reliably. The same goes for solaris
derivatives, and all
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Garrett D'Amore
When you purchase NexentaStor from a top-tier Nexenta Hardware Partner,
you get a product that has been through a rigorous qualification process
How do I do this, exactly? I
On 08.01.11 18:33, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Garrett D'Amore
When you purchase NexentaStor from a top-tier Nexenta Hardware Partner,
you get a product that has been through a rigorous
On 01/ 8/11 10:43 AM, Stephan Budach wrote:
On 08.01.11 18:33, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Garrett D'Amore
When you purchase NexentaStor from a top-tier Nexenta Hardware Partner,
you get a
On 06/01/2011 00:14, Edward Ned Harvey wrote:
Solaris engineers don't use? Non-Sun hardware. Pretty safe bet you won't
find any Dell servers in the server room where Solaris developers do their
thing.
You would lose that bet. Not only would you find Dell, you would find many
other big names as
I've deployed large SAN's on both SuperMicro 825/826/846 and Dell
R610/R710's and I've not found any issues so far. I always make a point of
installing Intel chipset NIC's on the DELL's and disabling the Broadcom ones
but other than that it's always been plain sailing - hardware-wise anyway.
I've
From: Richard Elling [mailto:richard.ell...@nexenta.com]
If I understand correctly, you want Dell, HP, and IBM to run OSes other
I agree, but neither Dell, HP, nor IBM develop Windows...
I'm not sure of the current state, but many of the Solaris engineers develop
on laptops and Sun did
This is a silly argument, but...
Haven't seen any underdog proven solid enough for me to deploy in
enterprise yet.
I haven't seen any overdog proven solid enough for me to be able to rely
on either. Certainly not Solaris. Don't get me wrong, I like(d) Solaris.
But every so often you'd
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
On Wed, 5 Jan 2011, Edward Ned Harvey wrote:
with regards to ZFS and all the other projects relevant to solaris.)
I know in the case of SGE/OGE, it's officially closed source now. As of Dec
31st, sunsource is being
From: Khushil Dep [mailto:khushil@gmail.com]
I've deployed large SAN's on both SuperMicro 825/826/846 and Dell
R610/R710's and I've not found any issues so far. I always make a point of
installing Intel chipset NIC's on the DELL's and disabling the Broadcom ones
but other than that it's
Twofold really - firstly I remember the headaches I used to have
configuring Broadcom cards properly under Debian/Ubuntu but the sweetness
that was using an Intel NIC. Bottom line for me was that I know Intel
drivers have been around longer than Broadcom drivers and thus it would make
sense to
On Jan 5, 2011, at 7:44 AM, Edward Ned Harvey wrote:
From: Khushil Dep [mailto:khushil@gmail.com]
We do have a major commercial interest - Nexenta. It's been quiet but I do
look forward to seeing something come out of that stable this year? :-)
I'll agree to call Nexenta a major
On Jan 5, 2011, at 4:14 PM, Edward Ned Harvey wrote:
From: Richard Elling [mailto:richard.ell...@nexenta.com]
I'll agree to call Nexenta a major commercial interest, in regards to
contribution to the open source ZFS tree, if they become an officially
supported OS on Dell, HP, and/or IBM
From: Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com
To: 'Khushil Dep' khushil@gmail.com
Cc: Richard Elling richard.ell...@nexenta.com,
zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] A few questions
Message-ID: 000201cbada5$a3678270$ea3687
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tim Cook
The claim was that there are more people contributing code from outside of
Oracle than inside to zfs. Your contributions to Illumos do absolutely
nothing
Guys, please let's just
Edward Ned Harvey wrote
I don't know if anyone has real numbers, dollars contributed or number of
developer hours etc, but I think it's fair to say that oracle is probably
contributing more to the closed source ZFS right now, than the rest of the
world is contributing to the open source ZFS
From: Deano [mailto:de...@rattie.demon.co.uk]
Sent: Wednesday, January 05, 2011 9:16 AM
So honestly do we want to innovate ZFS (I do) or do we just want to follow
Oracle?
Well, you can't follow Oracle. Unless you wait till they release something,
reverse engineer it, and attempt to
We do have a major commercial interest - Nexenta. It's been quiet but I do
look forward to seeing something come out of that stable this year? :-)
---
W. A. Khushil Dep - khushil@gmail.com - 07905374843
Visit my blog at http://www.khushil.com/
On 5 January 2011 14:34, Edward Ned
On Wed, Jan 5, 2011 at 15:34, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: Deano [mailto:de...@rattie.demon.co.uk]
Sent: Wednesday, January 05, 2011 9:16 AM
So honestly do we want to innovate ZFS (I do) or do we just want to follow
Oracle?
Well, you
-Original Message-
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Michael Schuster
Sent: Wednesday, January 05, 2011 9:42 AM
To: Edward Ned Harvey
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] A few questions
From: Michael Schuster [mailto:michaelspriv...@gmail.com]
Well, you can't follow Oracle. Unless you wait till they release something,
reverse engineer it, and attempt to reimplement it.
that's not my understanding - while we will have to wait, Oracle is
supposed to release *some*
From: Khushil Dep [mailto:khushil@gmail.com]
We do have a major commercial interest - Nexenta. It's been quiet but I do
look forward to seeing something come out of that stable this year? :-)
I'll agree to call Nexenta a major commercial interest, in regards to
contribution to the open
On 01/ 4/11 11:48 PM, Tim Cook wrote:
On Tue, Jan 4, 2011 at 8:21 PM, Garrett D'Amore garr...@nexenta.com wrote:
On 01/ 4/11 09:15 PM, Tim Cook wrote:
On Mon, Jan 3, 2011 at 5:56 AM, Garrett D'Amore garr...@nexenta.com
Edward Ned Harvey wrote
From: Deano [mailto:de...@rattie.demon.co.uk]
Sent: Wednesday, January 05, 2011 9:16 AM
So honestly do we want to innovate ZFS (I do) or do we just want to follow
Oracle?
Well, you can't follow Oracle. Unless you wait till they release something,
reverse engineer
From: Richard Elling [mailto:richard.ell...@nexenta.com]
I'll agree to call Nexenta a major commercial interest, in regards to
contribution to the open source ZFS tree, if they become an officially
supported OS on Dell, HP, and/or IBM hardware.
NexentaStor is officially supported on
On Wed, 5 Jan 2011, Edward Ned Harvey wrote:
with regards to ZFS and all the other projects relevant to solaris.)
I know in the case of SGE/OGE, it's officially closed source now. As of Dec
31st, sunsource is being decommissioned, and the announcement of officially
closing the SGE source and
It is sad that such a lovely file system is now in Oracle's unresponsive hands.
I hope someone builds another open file system just like it. I could never
find anything like it to protect my data like it does.
___
zfs-discuss mailing list
On 01/ 4/11 01:19 PM, webd...@gmail.com wrote:
It is sad that such a lovely file system is now in Oracle's unresponsive hands.
I hope someone builds another open file system just like it. I could never
find anything like it to protect my data like it does.
On 01/ 3/11 04:28 PM, Richard Elling wrote:
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com wrote:
There are more people outside of
On 01/ 4/11 11:35 PM, Robert Milkowski wrote:
On 01/ 3/11 04:28 PM, Richard Elling wrote:
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com
On 01/ 3/11 05:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com wrote:
There are more people outside of Oracle developing for ZFS than
inside Oracle.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Gress
On 01/ 4/11 01:19 PM, webd...@gmail.com wrote:
It is sad that such a lovely file system is now in Oracle's unresponsive
hands. I
hope someone builds another open file system just
On Mon, Jan 3, 2011 at 5:56 AM, Garrett D'Amore garr...@nexenta.com wrote:
On 01/ 3/11 05:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com
wrote:
There are more people outside of Oracle
On Tue, Jan 4, 2011 at 8:21 PM, Garrett D'Amore garr...@nexenta.com wrote:
On 01/ 4/11 09:15 PM, Tim Cook wrote:
On Mon, Jan 3, 2011 at 5:56 AM, Garrett D'Amore garr...@nexenta.com wrote:
On 01/ 3/11 05:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com wrote:
There are more people outside of Oracle developing for ZFS than
inside Oracle.
This has been true for some time now.
On Mon, 3 Jan 2011, Robert Milkowski wrote:
Exactly my observation as well. I haven't seen any ZFS related
development happening at Illumos or Nexenta, at least not yet.
There seems to be plenty of zfs work on the FreeBSD project, but
primarily with porting the latest available sources to
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com
wrote:
There are more people outside of Oracle developing for ZFS than inside
Oracle.
This has been true for some
On 1/3/2011 8:28 AM, Richard Elling wrote:
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com wrote:
There are more people outside of Oracle
On Jan 3, 2011, at 2:10 PM, Erik Trimble wrote
On 1/3/2011 8:28 AM, Richard Elling wrote:
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling
richard.ell...@gmail.com wrote:
There are more people
On Dec 21, 2010, at 5:05 AM, Deano wrote:
The question therefore is, is there room in the software implementation to
achieve performance and reliability numbers similar to expensive drives
whilst using relative cheap drives?
For some definition of similar, yes. But using relatively cheap
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com wrote:
On Dec 21, 2010, at 5:05 AM, Deano wrote:
The question therefore is, is there room in the software implementation to
achieve performance and reliability numbers similar to expensive drives
whilst using relative
It's worse on raidzN than on mirrors, because the number of items which must
be read is higher in raidzN, assuming you're using larger vdev's and
therefore more items exist scattered about inside that vdev. You therefore
have a higher number of things which must be randomly read before
On 21/12/2010 05:44, Richard Elling wrote:
On Dec 20, 2010, at 7:31 AM, Phil Harman phil.har...@gmail.com wrote:
On 20/12/2010 13:59, Richard Elling wrote:
On Dec 20, 2010, at 2:42 AM, Phil Harman phil.har...@gmail.com wrote:
Why does
On Dec 20, 2010, at 7:31 AM, Phil Harman phil.har...@gmail.com wrote:
If you only have a few slow drives, you don't have performance.
Like trying to win the Indianapolis 500 with a tricycle...
Well you can put a jet engine on a tricycle and perhaps win it… Or you can
change the race
On 21/12/2010 13:05, Deano wrote:
On Dec 20, 2010, at 7:31 AM, Phil Harman phil.har...@gmail.com wrote:
If you only have a few slow drives, you don't have performance.
Like trying to win the Indianapolis 500 with a tricycle...
Actually, I didn't say that,
Doh sorry about that, the threading got very confused on my mail reader!
Bye,
Deano
From: Phil Harman [mailto:phil.har...@gmail.com]
Sent: 21 December 2010 13:12
To: Deano
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] A few questions
On 21/12/2010 13:05, Deano wrote
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
On Mon, Dec 20 at 19:19, Edward Ned Harvey wrote:
If there is no correlation between on-disk order of blocks for different
disks within the same vdev, then all hope is lost; it's
From: Richard Elling [mailto:richard.ell...@gmail.com]
Now suppose you have a raidz with 3 disks (disk1, disk2, disk3, where disk3
is resilvering). You find some way of ordering all the used blocks of
disk1... Which means disk1 will be able to read in optimal order and speed.
Sounds
On Tue, Dec 21 at 8:24, Edward Ned Harvey wrote:
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
On Mon, Dec 20 at 19:19, Edward Ned Harvey wrote:
If there is no correlation between on-disk order of blocks for different
disks within the
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
Unless your drive is able to queue up a request to read every single used
part of the drive... Which is larger than the command queue for any
reasonable drive in the world... The point
Thanks Edward.
I do agree about mirrored rpool (equivalent to Windows OS volume); not doing it
goes against one of my principles when building enterprise servers.
Is there any argument against using the rpool for all data storage as well as
being the install volume?
Say for example I chucked
Oh, does anyone know if resilvering efficiency is improved or fixed in Solaris
11 Express, as that is what I'm using.
--
This message posted from opensolaris.org
Why does resilvering take so long in raidz anyway?
Because it's broken. There were some changes a while back that made it more
broken.
There has been a lot of discussion, anecdotes and some data on this list.
The resilver doesn't do a single pass of the drives, but uses a smarter
temporal
2010 10:43
To: Lanky Doodle
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] A few questions
Why does resilvering take so long in raidz anyway?
Because it's broken. There were some changes a while back that made it more
broken.
There has been a lot of discussion, anecdotes and some
I believe Oracle is aware of the problem, but most of
the core ZFS team has left. And of course, a fix for
Oracle Solaris no longer means a fix for the rest of
us.
OK, that is a bit concerning then. As good as ZFS may be, I'm not sure I want
to commit to a file system that is 'broken' and
On 20/12/2010 11:03, Deano wrote:
Hi,
Which brings up an interesting question...
IF it were fixed in for example illumos or freebsd is there a plan for how
to handle possible incompatible zfs implementations?
Currently the basic version numbering only works as it implies only one
stream of
On 20/12/2010 11:29, Lanky Doodle wrote:
I believe Oracle is aware of the problem, but most of
the core ZFS team has left. And of course, a fix for
Oracle Solaris no longer means a fix for the rest of
us.
OK, that is a bit concerning then. As good as ZFS may be, I'm not sure I want
to commit
Phil Harman phil.har...@gmail.com wrote:
Changes to the resilvering implementation don't necessarily require
changes to the on disk format (although they could). Of course, there
might be an issue moving a pool mid-resilver from one implementation to
another.
We seem to come to a similar
On Dec 20, 2010, at 2:42 AM, Phil Harman phil.har...@gmail.com wrote:
Why does resilvering take so long in raidz anyway?
Because it's broken. There were some changes a while back that made it more
broken.
"broken" is the wrong term here. It functions as designed and correctly
resilvers
Thanks relling.
I suppose at the end of the day any file system/volume manager has its flaws
so perhaps it's better to look at the positives of each and decide based on
them.
So, back to my question above, is there a deciding argument [i]against[/i]
putting data on the install volume
On 20/12/2010 13:59, Richard Elling wrote:
On Dec 20, 2010, at 2:42 AM, Phil Harman phil.har...@gmail.com wrote:
Why does resilvering take so long in raidz anyway?
Because it's broken. There were some changes a while back that made
it more broken.
broken is the
On Dec 18, 2010, at 12:23 PM, Lanky Doodle wrote:
Now this is getting really complex, but can you have server failover in ZFS,
much like DFS-R in Windows - you point clients to a clustered ZFS namespace
so if a complete server failed nothing is interrupted.
This is the purpose of an Amber
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
I believe Oracle is aware of the problem, but most of
the core ZFS team has left. And of course, a fix for
Oracle Solaris no longer means a fix for the rest of
us.
OK,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
Is there any argument against using the rpool for all data storage as well as
being the install volume?
Generally speaking, you can't do it.
The rpool is only supported on
-Original Message-
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Sent: Monday, December 20, 2010 11:46 AM
To: 'Lanky Doodle'; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] A few questions
From: zfs
-discuss] A few questions
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
I believe Oracle is aware of the problem, but most of
the core ZFS team has left. And of course, a fix for
Oracle Solaris no longer means a fix for the rest
-discuss] A few questions
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
I believe Oracle is aware of the problem, but most of
the core ZFS team has left. And of course, a fix for
Oracle Solaris no longer means a fix for the rest
] On Behalf Of Edward Ned Harvey
Sent: Monday, December 20, 2010 11:46 AM
To: 'Lanky Doodle'; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] A few questions
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
I believe Oracle
On 12/20/2010 11:56 AM, Mark Sandrock wrote:
Erik,
just a hypothetical what-if ...
In the case of resilvering on a mirrored disk, why not take a snapshot, and then
resilver by doing a pure block copy from the snapshot? It would be sequential,
so long as the original data was
On Mon, 20 Dec 2010 11:27:41 PST Erik Trimble erik.trim...@oracle.com wrote:
The problem boils down to this:
When ZFS does a resilver, it walks the METADATA tree to determine what
order to rebuild things from. That means, it resilvers the very first
slab ever written, then the next
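Erik's point (resilvering in the temporal order the metadata walk yields, rather than in on-disk order) can be illustrated with a toy seek model. This is illustrative Python, not ZFS code; the assumption is simply that blocks written over time land at effectively random LBAs, so visiting them in birth order forces far more head movement than visiting them in sorted LBA order:

```python
# Toy model (NOT ZFS code): compare total head movement when reading blocks
# in temporal (birth) order vs. sorted LBA order. All sizes are made up.
import random

random.seed(42)
NUM_BLOCKS = 10_000
DISK_SIZE = 1_000_000  # abstract LBA range

# Birth order: the order a metadata walk visits blocks (oldest first).
# Under the random-placement assumption, consecutive blocks are far apart.
lbas_in_birth_order = [random.randrange(DISK_SIZE) for _ in range(NUM_BLOCKS)]

def total_seek(lbas):
    """Sum of head movement between consecutive reads."""
    return sum(abs(b - a) for a, b in zip(lbas, lbas[1:]))

temporal = total_seek(lbas_in_birth_order)
sequential = total_seek(sorted(lbas_in_birth_order))

print(f"temporal-order seek distance: {temporal}")
print(f"LBA-sorted seek distance:     {sequential}")
print(f"ratio: {temporal / sequential:.0f}x")
```

With these made-up numbers the temporal walk costs a few thousand times more total seek distance than a single LBA-ordered pass, which is the gist of why raidz resilvers feel so slow on mechanical disks.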
On Dec 20, 2010, at 2:05 PM, Erik Trimble wrote:
On 12/20/2010 11:56 AM, Mark Sandrock wrote:
Erik,
just a hypothetical what-if ...
In the case of resilvering on a mirrored disk, why not take a snapshot, and
then resilver by doing a pure block copy from the snapshot? It would be
From: Erik Trimble [mailto:erik.trim...@oracle.com]
We can either (a) change how ZFS does resilvering or (b) repack the
zpool layouts to avoid the problem in the first place.
In case (a), my vote would be to seriously increase the number of
in-flight resilver slabs, AND allow for
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
In the case of resilvering on a mirrored disk, why not take a snapshot, and
then resilver by doing a pure block copy from the snapshot? It would be
sequential,
So, a
ZFS
On Mon, Dec 20 at 19:19, Edward Ned Harvey wrote:
If there is no correlation between on-disk order of blocks for different
disks within the same vdev, then all hope is lost; it's essentially
impossible to optimize the resilver/scrub order unless the on-disk order of
multiple disks is highly
It well may be that different methods are optimal for different use cases.
Mechanical disk vs. SSD; mirrored vs. raidz[123]; sparse vs. populated; etc.
It would be interesting to read more in this area, if papers are available.
I'll have to take a look. ... Or does someone have pointers?
Mark
On Dec 20, 2010, at 7:31 AM, Phil Harman phil.har...@gmail.com wrote:
On 20/12/2010 13:59, Richard Elling wrote:
On Dec 20, 2010, at 2:42 AM, Phil Harman phil.har...@gmail.com wrote:
Why does resilvering take so long in raidz anyway?
Because it's broken. There were some changes a while
On Dec 20, 2010, at 4:19 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: Erik Trimble [mailto:erik.trim...@oracle.com]
We can either (a) change how ZFS does resilvering or (b) repack the
zpool layouts to avoid the problem in the first place.
In case
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Alexander Lesle
at Dezember, 17 2010, 17:48 Lanky Doodle wrote in [1]:
By single drive mirrors, I assume, in a 14 disk setup, you mean 7
sets of 2 disk mirrors - I am thinking of
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Friday, December 17, 2010 9:16 PM
While I agree that smaller vdevs are more reliable, your statement
about the failure being more likely be in the same vdev if you have
only 2 vdev's to be a rather useless statement. The
On the subject of where to install ZFS, I was planning to use either Compact
Flash or USB drive (both of which would be mounted internally); using up 2 of
the drive bays for a mirrored install is possibly a waste of physical space,
considering it's a) a home media server and b) the config can
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
On the subject of where to install ZFS, I was planning to use either Compact
Flash or USB drive (both of which would be mounted internally); using up 2 of
the drive bays for a
Thanks for all the replies.
The bit about combining zpools came from this command on the southbrain
tutorial;
zpool create mail \
mirror c6t600D0230006C1C4C0C50BE5BC9D49100d0
c6t600D0230006B66680C50AB7821F0E900d0 \
mirror c6t600D0230006B66680C50AB0187D75000d0
On 12/17/2010 2:12 AM, Lanky Doodle wrote:
Thanks for all the replies.
The bit about combining zpools came from this command on the southbrain
tutorial;
zpool create mail \
mirror c6t600D0230006C1C4C0C50BE5BC9D49100d0
c6t600D0230006B66680C50AB7821F0E900d0 \
mirror
OK cool.
One last question. Reading the Admin Guide for ZFS, it says:
[i]A more complex conceptual RAID-Z configuration would look similar to the
following:
raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 raidz c8t0d0 c9t0d0
c10t0d0 c11t0d0 c12t0d0 c13t0d0 c14t0d0
If you are creating a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
This is relevant as my final setup was planned to be 15 disks, so only one
more than the example.
So, do I drop one disk and go with two 7-drive vdevs, or stick with three 5-drive
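The layouts being weighed here (two 7-disk vdevs vs. three 5-disk vdevs) can be compared with a quick sketch. Illustrative Python, not a zpool tool; "worst-case failures" means the number of failures any one vdev is guaranteed to absorb, since raidzN loses the pool on the (N+1)th failure within a single vdev:

```python
# Sketch of the capacity/redundancy trade-off between raidz layouts.
# raidzN spends N disks per vdev on parity.

def layout(num_vdevs, vdev_size, parity):
    """Summarize a pool of identical raidz vdevs."""
    return {
        "total_disks": num_vdevs * vdev_size,
        "data_disks": num_vdevs * (vdev_size - parity),
        # Guaranteed-survivable failures: `parity` per vdev in the worst
        # case where all failures hit the same vdev.
        "worst_case_failures": parity,
    }

print("2 x 7-disk raidz1:", layout(2, 7, 1))
print("3 x 5-disk raidz1:", layout(3, 5, 1))
```

Interestingly, both raidz1 layouts net 12 data disks; the 3x5 layout just uses one more drive and spreads the parity across more, smaller vdevs.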
Thanks!
By single drive mirrors, I assume, in a 14 disk setup, you mean 7 sets of 2
disk mirrors - I am thinking of traditional RAID1 here.
Or do you mean 1 massive mirror with all 14 disks?
This is always a tough one for me. I too prefer RAID1 where redundancy is king,
but the trade off for
You should take a look at the ZFS best practices guide for RAIDZ and
mirrored configuration recommendations:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Its easy for me to say because I don't have to buy storage but
mirrored storage pools are currently more flexible,
at Dezember, 17 2010, 17:48 Lanky Doodle wrote in [1]:
By single drive mirrors, I assume, in a 14 disk setup, you mean 7
sets of 2 disk mirrors - I am thinking of traditional RAID1 here.
Or do you mean 1 massive mirror with all 14 disks?
Edward means a set of two-way mirrors.
Do you
On Fri, 17 Dec 2010, Edward Ned Harvey wrote:
Also if a 2nd disk fails during resilver, it's more likely to be in the same
vdev, if you have only 2 vdev's. Your odds are better with smaller vdev's,
both because the resilver completes faster, and the probability of a 2nd
failure in the same
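The "2nd failure in the same vdev" odds are easy to put numbers on. A back-of-envelope sketch, assuming every surviving disk is equally likely to fail next (ignoring the separate effect of resilver duration); the 14-disk layouts are the ones discussed in this thread:

```python
# With one disk already failed, the second failure lands in the same vdev
# with probability (vdev_size - 1) / (total_disks - 1), assuming each
# remaining disk is equally likely to fail next.

def same_vdev_probability(total_disks, vdev_size):
    """P(second failure hits the vdev that already lost a disk)."""
    return (vdev_size - 1) / (total_disks - 1)

for vdevs, size in [(2, 7), (7, 2)]:
    p = same_vdev_probability(vdevs * size, size)
    print(f"{vdevs} vdevs of {size} disks: {p:.1%}")
```

So for 14 disks, two 7-wide vdevs put the second failure in the stricken vdev about 46% of the time, versus under 8% for seven 2-way mirrors, which is the direction of Edward's argument even before counting the faster mirror resilver.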
Hiya,
I have been playing with ZFS for a few days now on a test PC, and I plan to use
if for my home media server after being very impressed!
I've got the basics of creating zpools and zfs filesystems with compression and
dedup etc, but I'm wondering if there's a better way to handle security.
Also, at present I have 5x 1TB drives to use in my home server so I
plan to create a RAID-Z1 pool which will have my shares on it (Movies,
Music, Pictures etc). I then plan to increase this in sets of 5 (so
another 5x 1TB drives in Jan and another 5 in Feb/March so that I can
avoid all disks
On Thu, Dec 16, 2010 at 12:59 AM, Lanky Doodle lanky_doo...@hotmail.com wrote:
I have been playing with ZFS for a few days now on a test PC, and I plan to
use if for my home media server after being very impressed!
Works great for that. Have a similar setup at home, using FreeBSD.
Also, at
Hi Lanky,
Other follow-up posters have given you good advice.
I don't see where you are getting the idea that you can combine
pools with pools. You can't do this and I don't see that the
southbrain tutorial illustrates this either. All of his examples
for creating redundant pools are
Thanks for the reply.
In that case, wouldn't it be better to, as you say, start with a 6 drive Z2,
then just keep adding drives until the case is full, for a single Z2 zpool?
Or even Z3, if that's available now?
I have an 11x 5 1/4" bay case, with 3x 5-in-3 hot-swap caddies giving me 15
drive