> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Marty Scholes
>
> On a busy array it is hard even to use the leds as indicators.
Offline the disk. Light stays off.
Use dd to read the disk. Light stays on.
That should make it easy enough.
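For anyone wanting the exact incantation, a minimal sketch (pool and device
names here are hypothetical; take the real ones from zpool status):

  # zpool offline tank c5t3d0                        (activity LED goes dark)
  # dd if=/dev/rdsk/c5t3d0s0 of=/dev/null bs=1024k   (LED stays lit while dd runs)
  # zpool online tank c5t3d0                         (bring the disk back afterwards)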
2011-06-18 0:24, Marty Scholes wrote:
>> It makes me wonder how large shops with thousands of spindles handle this.
> We pay for the brand-name disk enclosures or servers where the
> fault-management stuff is supported by Solaris.
> Including the blinky lights.
>
Funny you say that.
My Sun v40z, connected to a pair of Sun A5200 arrays and running OSol 128a,
can't see the enclosures. The luxadm command comes up blank.
Except for that annoyance (and other similar issues) the Sun gear works well
with a Sun operating system.
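For comparison, on a setup where the enclosures are visible, the usual checks
look like this (a sketch; subcommands are standard luxadm, output varies):

  # luxadm probe                       (lists attached SENA/A5x00 enclosures)
  # luxadm display <enclosure>         (per-slot drive status)
  # luxadm led_blink <device-path>     (blink a drive's fault LED to find it)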
On 6/17/2011 6:52 AM, Marty Scholes wrote:
> Lights. Good.
Agreed. In a fit of desperation and stupidity I once enumerated disks by
pulling them one by one from the array to see which zfs device faulted.
On a busy array it is hard even to use the leds as indicators.
It makes me wonder how large shops with thousands of spindles handle this.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lanky Doodle
>
> or is it completely random leaving me with some trial and error to work out
> what disk is on what port?
It's highly desirable to have drives with lights on them. So you can
On 6/17/2011 12:55 AM, Lanky Doodle wrote:
Thanks Richard.
How does ZFS enumerate the disks? In terms of listing them, does it do them
logically, i.e.:
controller #1 (motherboard)
|
|--- disk1
|--- disk2
controller #3
|--- disk3
|--- disk4
|--- disk5
|--- disk6
> 4 - the 16th port
>
> Can you find somewhere inside the case for an SSD as L2ARC on your last port?
Although saying that, if we are saying hot spares may be bad in my scenario, I
could ditch it and use a 3.5" SSD in the 15th drive's place?
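If you went that route, adding the SSD as cache is a one-liner (a sketch; pool
and device names hypothetical):

  # zpool add tank cache c3t15d0

L2ARC is read cache only, so losing it costs nothing but warm cache - which is
why a single unmirrored SSD is fine in that slot.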
> I was planning on using one of these
> http://www.scan.co.uk/products/icy-dock-mb994sp-4s-4in1-sas-sata-hot-swap-backplane-525-raid-cage
Imagine if 2.5" 2TB disks were price neutral compared to 3.5" equivalents.
I could have 40 of the buggers in my system giving 80TB raw storage! I'd h
Thanks Richard.
How does ZFS enumerate the disks? In terms of listing them, does it do them
logically, i.e.:
controller #1 (motherboard)
|
|--- disk1
|--- disk2
controller #3
|--- disk3
|--- disk4
|--- disk5
|--- disk6
|--- disk7
|--- disk8
|--- disk9
|-
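For reference, one way to see how the OS has numbered the controllers and
disks (a sketch; exact output varies by system):

  # format < /dev/null    (lists every disk as c#t#d#, grouped by controller)
  # cfgadm -al            (shows attachment points per controller)

ZFS itself just records whatever c#t#d# names you hand it, so the listing in
zpool status follows the order the devices were given at zpool create time,
not the physical slot order.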
> 1 - are the 2 vdevs in the same pool, or two separate pools?
>
I was planning on having the 2 z2 vdevs in one pool. Although having 2 pools
and having them sync'd sounds really good, I fear it may be overkill for the
intended purpose.
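For what it's worth, keeping a second pool in sync is just periodic
send/receive (a sketch; pool names tank and backup are hypothetical):

  # zfs snapshot -r tank@20110617
  # zfs send -R tank@20110617 | zfs receive -Fd backup

Later runs would use zfs send -R -i <previous-snap> tank@<new-snap> so that
only the changes cross the wire.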
>
>
> 3 - spare temperature
>
> for levels raidz2 and
On Thu, Jun 16, 2011 at 07:06:48PM +0200, Roy Sigurd Karlsbakk wrote:
> > I have decided to bite the bullet and change to 2TB disks now rather
> > than go through all the effort using 1TB disks and then maybe changing
> > in 6-12 months time or whatever. The price difference between 1TB and
> > 2TB
On Jun 16, 2011, at 2:07 AM, Lanky Doodle wrote:
> Thanks guys.
>
> I have decided to bite the bullet and change to 2TB disks now rather
> than go through all the effort using 1TB disks and then maybe changing
> in 6-12 months time or whatever. The price difference between 1TB and
> 2TB disks is marginal and I can always re-sell my 6x 1TB disks.
>
> I think I ha
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lanky Doodle
>
> can you have one vdev that is a duplicate of another
> vdev? By that I mean say you had 2x 7 disk raid-z2 vdevs, instead of them
> both being used in one large pool could you have one that is a backup of the
> other, allowing you to destroy one of them and re-build wit
Thanks guys.
I have decided to bite the bullet and change to 2TB disks now rather than go
through all the effort using 1TB disks and then maybe changing in 6-12 months
time or whatever. The price difference between 1TB and 2TB disks is marginal
and I can always re-sell my 6x 1TB disks.
I think
> 3x 5 disk raid-z. 3 disk failures in the right scenario, 12TB storage
> 2x 7 disk raid-z + hot spare. 2 disk failures in the right scenario,
> 12TB storage
> 1x 15 disk raid-z2. 2 disk failures, 13TB storage
> 2x 7 disk raid-z2 + hot spare. 4 disk failures in the right scenario,
> 10TB storage
I
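For anyone checking those numbers: usable space is (disks per vdev - parity
disks) x vdevs x disk size, so with 1TB drives the 2x 7 disk raid-z2 option
works out to (7 - 2) x 2 x 1TB = 10TB, matching the list above.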
Hi Lanky,
If you created a mirrored pool instead of a RAIDZ pool, you could use
the zpool split feature to split your mirrored pool into two identical
pools.
For example, if you had a 3-way mirrored pool, your primary pool would
remain redundant with 2-way mirrors after the split. Then, you would
h
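A sketch of what that looks like in practice (device names hypothetical):

  # zpool create tank mirror c1t0d0 c1t1d0 c1t2d0   (3-way mirrored pool)
  # zpool split tank tank2    (detaches one disk from each mirror into a new pool)
  # zpool import tank2        (the split-off pool starts out exported)

Afterwards tank is a 2-way mirror and tank2 is an identical single-disk copy.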
It sounds like you are getting a good plan together.
> The only thing though: I seem to remember reading that when vdevs are added
> to a pool well after its creation and data has been written to it, things
> aren't spread evenly - is that right? So it might actually make sense to buy all
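If it helps, the imbalance is easy to confirm after the fact (a sketch; pool
name hypothetical):

  # zpool iostat -v tank

The per-vdev capacity columns will show the old vdev nearly full and the new
one nearly empty; new writes favour the emptier vdev, but existing data is
never rebalanced.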
On Wed, Jun 15, 2011 at 8:20 AM, Lanky Doodle wrote:
> That's how I understood autoexpand, about not doing so until all disks have
> been done.
>
> I do indeed rip from disc rather than grab torrents - to VIDEO_TS folders and
> not ISO - on my laptop then copy the whole folder up to WHS in one go.
> This is a bit OT, but can you have one vdev that is a duplicate
> of another vdev? By that I mean say you had 2x 7 disk raid-z2
> vdevs, instead of them both being used in one large pool could
> you have one that is a backup of the other, allowing you to
> destroy one of them and re-build wit
That's how I understood autoexpand, about not doing so until all disks have
been done.
I do indeed rip from disc rather than grab torrents - to VIDEO_TS folders and
not ISO - on my laptop then copy the whole folder up to WHS in one go. So while
they're not one large single file, they are lots o
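As an aside, the autoexpand flow mentioned above is roughly (a sketch; names
hypothetical):

  # zpool set autoexpand=on tank
  # zpool replace tank c1t0d0    (after swapping the bigger disk into the same slot)

The extra space only appears once every disk in the raidz vdev has been
replaced and resilvered.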
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lanky Doodle
>
> In that case what 'option' would you choose - smaller raid-z vdevs or larger
> raid-z2 vdevs?
The more redundant disks you have, the more protection you get, and the
smaller a
Thanks Edward.
In that case what 'option' would you choose - smaller raid-z vdevs or larger
raid-z2 vdevs?
I do like the idea of having a hot spare, so 2x 7 disk raid-z2 may be the better
option rather than 3x 5 disk raid-z with no hot spare. 2TB loss in the former
could be acceptable I suppose.
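To make that concrete, creating the 2x 7 disk raid-z2 plus spare would look
something like this (a sketch; device names hypothetical):

  # zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
      spare c3t0d0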
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lanky Doodle
>
> But as it is for a home media server, which is mainly WORM access and will be
> storing (legal!) DVD/Bluray rips, I'm not so sure I can sacrifice the space.
For your purposes, raidz
Thanks martysch.
That is what I meant about adding disks to vdevs - not adding disks to vdevs
but adding vdevs to pools.
If the geometry of the vdevs should ideally be the same, it would make sense to
buy one more disk now and have a 7 disk raid-z2 to start with, then buy disks
as and when and
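Growing the pool later is the same vdev specification with add (a sketch;
hypothetical names):

  # zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0

The new vdev has to arrive whole - you can't grow an existing raidz vdev one
disk at a time - which is why the geometry question matters up front.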
I am assuming you will put all of the vdevs into a single pool, which is a
good idea unless you have a specific reason for keeping them separate, e.g. you
want to be able to destroy / rebuild a particular vdev while leaving the others
intact.
Fewer disks per vdev implies more vdevs, providing
Thanks Edward.
I'm in two minds with mirrors. I know they provide the best performance and
protection, and if this was a business critical machine I wouldn't hesitate.
But as it is for a home media server, which is mainly WORM access and will be
storing (legal!) DVD/Bluray rips, I'm not so sure I can sacrifice the space.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lanky Doodle
>
> The ZFS install will be mirrored, but I am not sure how to configure the 15
> data disks from a performance (inc. resilvering) vs protection vs usable
> space perspective;
Hiya,
I am just in the planning stages for my ZFS Home Media Server build at the
moment (to replace WHS v1).
I plan to use 2x motherboard ports and 2x Supermicro AOC-SASLP-MV8 8 port SATA
cards to give 17* drive connections; 2 disks (120GB SATA 2.5") will be used for
the ZFS install using the
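For the mirrored install, if the installer only puts the root pool on one of
the 120GB disks, the second can be attached afterwards (a sketch; slice names
hypothetical, and the second disk needs matching partitioning plus installgrub
to be bootable):

  # zpool attach rpool c0t0d0s0 c0t1d0s0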