Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tim Cook
> 
> The response was that Sun makes sure all drives
> are exactly the same size (although I do recall someone on this forum having
> this issue with Sun OEM disks as well).

That was me: a Sun-branded Intel SSD being reported 0.01 GB smaller.  But
after bashing my brains out for a few days, we discovered there was an
operation I could perform on the HBA which solved the problem.  I forget
exactly what it was - something like a factory-installed disk label, which I
overwrote in order to gain that 0.01 GB on the new drive.

For this reason, I have made a habit of slicing drives and leaving the last
1 GB unused.  It's kind of a hassle, but as Cindy mentions, the problem should
be solved in current releases.
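
Roughly, the habit looks like this (a sketch only - the device names are the
ones used elsewhere in this thread, and the sector count is illustrative, not
from a real drive):

# Check the drive's real geometry first.
prtvtoc /dev/rdsk/c10t0d0s2

# Hypothetical fmthard input: slice 0 stops about 1 GB (2,097,152
# 512-byte sectors) short of the usable capacity.
# Field order: slice tag flag first_sector sector_count
echo "0 4 00 34 310463165" > /tmp/vtoc.txt
fmthard -s /tmp/vtoc.txt /dev/rdsk/c10t0d0s2

# Build the pool on the slices rather than the whole disks, so a slightly
# smaller replacement drive can still hold an identical slice later.
zpool create tank raidz c10t0d0s0 c10t1d0s0 c10t2d0s0 c10t3d0s0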

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Roy Sigurd Karlsbakk
> > One comment: The IDEMA LBA01 spec size of a 160GB device is
> > 312,581,808 sectors.
> >
> > Instead of those WD models, where neither the old nor new drives
> > follow the IDEMA recommendation, consider buying a drive that
> > reports
> > that many sectors. Almost all models these days should be following
> > the IDEMA recommendations due to all the troubles people have had.
> >
> > --eric
> >
> > --
> > Eric D. Mudama
> > edmud...@bounceswoosh.org
> >
> 
> 
> That's encouraging; if I have to, I would rather buy one new disk than
> 4.

Get one that's a bit larger. It won't cost you a fortune. If the reseller is
nice, you may even be able to return the old one.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly.
It is an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases, adequate and relevant synonyms exist
in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell

On Mar 4, 2011, at 10:46 AM, Cindy Swearingen wrote:

> Hi Robert,
> 
> We integrated some fixes that allowed you to replace disks of equivalent
> sizes, but 40 MB is probably beyond that window.
> 
> Yes, you can do #2 below and the pool size will be adjusted down to the
> smaller size. Before you do this, I would check the sizes of both
> spares.
> 
I already checked; they are equivalent.

> If both spares are "equivalent" smaller sizes, you could use those to
> build the replacement pool with the larger disks and then put the extra
> larger disks on the shelf.
> 
> Thanks,
> 
> Cindy


I think that's what I will do; I don't wanna spend money if I don't have to...
I'm kinda funny that way :-)

Thanks for the info, Cindy.

--   
   Robert Hartzell
b...@rwhartzell.net
 RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell

On Mar 4, 2011, at 11:19 AM, Eric D. Mudama wrote:

> On Fri, Mar  4 at  9:22, Robert Hartzell wrote:
>> In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz 
>> storage pool and then shelved the other two for spares. One of the disks 
>> failed last night so I shut down the server and replaced it with a spare. 
>> When I tried to zpool replace the disk I get:
>> 
>> zpool replace tank c10t0d0
>> cannot replace c10t0d0 with c10t0d0: device is too small
>> 
>> The 4 original disk partition tables look like this:
>> 
>> Current partition table (original):
>> Total disk sectors available: 312560317 + 16384 (reserved sectors)
>> 
>> Part      Tag    Flag     First Sector        Size        Last Sector
>>   0        usr    wm               34      149.04GB          312560350
>>   1 unassigned    wm                0             0                  0
>>   2 unassigned    wm                0             0                  0
>>   3 unassigned    wm                0             0                  0
>>   4 unassigned    wm                0             0                  0
>>   5 unassigned    wm                0             0                  0
>>   6 unassigned    wm                0             0                  0
>>   8   reserved    wm        312560351        8.00MB          312576734
>> 
>> Spare disk partition table looks like this:
>> 
>> Current partition table (original):
>> Total disk sectors available: 312483549 + 16384 (reserved sectors)
>> 
>> Part      Tag    Flag     First Sector        Size        Last Sector
>>   0        usr    wm               34      149.00GB          312483582
>>   1 unassigned    wm                0             0                  0
>>   2 unassigned    wm                0             0                  0
>>   3 unassigned    wm                0             0                  0
>>   4 unassigned    wm                0             0                  0
>>   5 unassigned    wm                0             0                  0
>>   6 unassigned    wm                0             0                  0
>>   8   reserved    wm        312483583        8.00MB          312499966
>> 
>> So it seems that two of the disks are slightly different models and are
>> about 40 MB smaller than the original disks.
> 
> 
> One comment: The IDEMA LBA01 spec size of a 160GB device is
> 312,581,808 sectors.
> 
> Instead of those WD models, where neither the old nor new drives
> follow the IDEMA recommendation, consider buying a drive that reports
> that many sectors.  Almost all models these days should be following
> the IDEMA recommendations due to all the troubles people have had.
> 
> --eric
> 
> -- 
> Eric D. Mudama
> edmud...@bounceswoosh.org
> 


That's encouraging; if I have to, I would rather buy one new disk than 4.
Thanks, Robert 

--   
   Robert Hartzell
b...@rwhartzell.net
 RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell

On Mar 4, 2011, at 11:46 AM, Cindy Swearingen wrote:

> Robert,
> 
> Which Solaris release is this?
> 
> Thanks,
> 
> Cindy
> 


Solaris 11 Express 2010.11

--   
   Robert Hartzell
b...@rwhartzell.net
 RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Cindy Swearingen

Robert,

Which Solaris release is this?

Thanks,

Cindy


On 03/04/11 11:10, Mark J Musante wrote:


The fix for 6991788 would probably let the 40 MB-smaller drive work, but it
would depend on the asize of the pool.


On Fri, 4 Mar 2011, Cindy Swearingen wrote:


Hi Robert,

We integrated some fixes that allowed you to replace disks of equivalent
sizes, but 40 MB is probably beyond that window.

Yes, you can do #2 below and the pool size will be adjusted down to the
smaller size. Before you do this, I would check the sizes of both
spares.

If both spares are "equivalent" smaller sizes, you could use those to
build the replacement pool with the larger disks and then put the extra
larger disks on the shelf.

Thanks,

Cindy



On 03/04/11 09:22, Robert Hartzell wrote:
In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a 
raidz storage pool and then shelved the other two for spares. One of 
the disks failed last night so I shut down the server and replaced it 
with a spare. When I tried to zpool replace the disk I get:


zpool replace tank c10t0d0
cannot replace c10t0d0 with c10t0d0: device is too small


The 4 original disk partition tables look like this:

Current partition table (original):
Total disk sectors available: 312560317 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34      149.04GB          312560350
  1 unassigned    wm                0             0                  0
  2 unassigned    wm                0             0                  0
  3 unassigned    wm                0             0                  0
  4 unassigned    wm                0             0                  0
  5 unassigned    wm                0             0                  0
  6 unassigned    wm                0             0                  0
  8   reserved    wm        312560351        8.00MB          312576734


Spare disk partition table looks like this:

Current partition table (original):
Total disk sectors available: 312483549 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34      149.00GB          312483582
  1 unassigned    wm                0             0                  0
  2 unassigned    wm                0             0                  0
  3 unassigned    wm                0             0                  0
  4 unassigned    wm                0             0                  0
  5 unassigned    wm                0             0                  0
  6 unassigned    wm                0             0                  0
  8   reserved    wm        312483583        8.00MB          312499966
So it seems that two of the disks are slightly different models and
are about 40 MB smaller than the original disks. I know I can just add
a larger disk but I would rather use the hardware I have if possible.

1) Is there any way to replace the failed disk with one of the spares?
2) Can I recreate the zpool using 3 of the original disks and one of 
the slightly smaller spares? Will zpool/zfs adjust its size to the 
smaller disk?
3) If #2 is possible would I still be able to use the last still 
shelved disk as a spare?


If #2 is possible I would probably recreate the zpool as raidz2 
instead of the current raidz1.


Any info/comments would be greatly appreciated.

Robert
--
Robert Hartzell
b...@rwhartzell.net
RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Regards,
markm

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Eric D. Mudama

On Fri, Mar  4 at  9:22, Robert Hartzell wrote:

In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz 
storage pool and then shelved the other two for spares. One of the disks failed 
last night so I shut down the server and replaced it with a spare. When I tried 
to zpool replace the disk I get:

zpool replace tank c10t0d0
cannot replace c10t0d0 with c10t0d0: device is too small

The 4 original disk partition tables look like this:

Current partition table (original):
Total disk sectors available: 312560317 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34      149.04GB          312560350
  1 unassigned    wm                0             0                  0
  2 unassigned    wm                0             0                  0
  3 unassigned    wm                0             0                  0
  4 unassigned    wm                0             0                  0
  5 unassigned    wm                0             0                  0
  6 unassigned    wm                0             0                  0
  8   reserved    wm        312560351        8.00MB          312576734

Spare disk partition table looks like this:

Current partition table (original):
Total disk sectors available: 312483549 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34      149.00GB          312483582
  1 unassigned    wm                0             0                  0
  2 unassigned    wm                0             0                  0
  3 unassigned    wm                0             0                  0
  4 unassigned    wm                0             0                  0
  5 unassigned    wm                0             0                  0
  6 unassigned    wm                0             0                  0
  8   reserved    wm        312483583        8.00MB          312499966

So it seems that two of the disks are slightly different models and are about
40 MB smaller than the original disks.



One comment: The IDEMA LBA01 spec size of a 160GB device is
312,581,808 sectors.

Instead of those WD models, where neither the old nor new drives
follow the IDEMA recommendation, consider buying a drive that reports
that many sectors.  Almost all models these days should be following
the IDEMA recommendations due to all the troubles people have had.
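
That figure follows from the commonly cited IDEMA capacity formula for drives
of 50GB and up: LBA count = 97,696,368 + 1,953,504 x (capacity in GB - 50).
A one-line sanity check in plain shell arithmetic:

echo $(( 97696368 + 1953504 * (160 - 50) ))    # prints 312581808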

--eric

--
Eric D. Mudama
edmud...@bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Mark J Musante


The fix for 6991788 would probably let the 40 MB-smaller drive work, but it
would depend on the asize of the pool.


On Fri, 4 Mar 2011, Cindy Swearingen wrote:


Hi Robert,

We integrated some fixes that allowed you to replace disks of equivalent
sizes, but 40 MB is probably beyond that window.

Yes, you can do #2 below and the pool size will be adjusted down to the
smaller size. Before you do this, I would check the sizes of both
spares.

If both spares are "equivalent" smaller sizes, you could use those to
build the replacement pool with the larger disks and then put the extra
larger disks on the shelf.

Thanks,

Cindy



On 03/04/11 09:22, Robert Hartzell wrote:
In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz 
storage pool and then shelved the other two for spares. One of the disks 
failed last night so I shut down the server and replaced it with a spare. 
When I tried to zpool replace the disk I get:


zpool replace tank c10t0d0
cannot replace c10t0d0 with c10t0d0: device is too small


The 4 original disk partition tables look like this:

Current partition table (original):
Total disk sectors available: 312560317 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34      149.04GB          312560350
  1 unassigned    wm                0             0                  0
  2 unassigned    wm                0             0                  0
  3 unassigned    wm                0             0                  0
  4 unassigned    wm                0             0                  0
  5 unassigned    wm                0             0                  0
  6 unassigned    wm                0             0                  0
  8   reserved    wm        312560351        8.00MB          312576734


Spare disk partition table looks like this:

Current partition table (original):
Total disk sectors available: 312483549 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34      149.00GB          312483582
  1 unassigned    wm                0             0                  0
  2 unassigned    wm                0             0                  0
  3 unassigned    wm                0             0                  0
  4 unassigned    wm                0             0                  0
  5 unassigned    wm                0             0                  0
  6 unassigned    wm                0             0                  0
  8   reserved    wm        312483583        8.00MB          312499966
So it seems that two of the disks are slightly different models and are
about 40 MB smaller than the original disks.
I know I can just add a larger disk but I would rather use the hardware I
have if possible.

1) Is there any way to replace the failed disk with one of the spares?
2) Can I recreate the zpool using 3 of the original disks and one of the 
slightly smaller spares? Will zpool/zfs adjust its size to the smaller 
disk?
3) If #2 is possible would I still be able to use the last still shelved 
disk as a spare?


If #2 is possible I would probably recreate the zpool as raidz2 instead of 
the current raidz1.


Any info/comments would be greatly appreciated.

Robert
--
Robert Hartzell
b...@rwhartzell.net
RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Joerg Schilling
Cindy Swearingen  wrote:

> Hi Robert,
>
> We integrated some fixes that allowed you to replace disks of equivalent
> sizes, but 40 MB is probably beyond that window.

In former times, similar problems applied to partitioned disks with UFS. Back
then, we checked the market for the lowest disk size in a given disk class and
shipped disks with partitions limited to that lowest size, in order to be able
to easily replace customer disks.

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
        j...@cs.tu-berlin.de (uni)
        joerg.schill...@fokus.fraunhofer.de (work)
 Blog:  http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Cindy Swearingen

Hi Robert,

We integrated some fixes that allowed you to replace disks of equivalent
sizes, but 40 MB is probably beyond that window.

Yes, you can do #2 below and the pool size will be adjusted down to the
smaller size. Before you do this, I would check the sizes of both
spares.
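
A quick way to compare the raw sizes is the prtvtoc output for each candidate
disk (a sketch; c10t4d0 and c10t5d0 are hypothetical names for the two shelved
spares):

# Compare the accessible-sector counts reported for both spares.
prtvtoc /dev/rdsk/c10t4d0s2
prtvtoc /dev/rdsk/c10t5d0s2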

If both spares are "equivalent" smaller sizes, you could use those to
build the replacement pool with the larger disks and then put the extra
larger disks on the shelf.

Thanks,

Cindy



On 03/04/11 09:22, Robert Hartzell wrote:

In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz 
storage pool and then shelved the other two for spares. One of the disks failed 
last night so I shut down the server and replaced it with a spare. When I tried 
to zpool replace the disk I get:

zpool replace tank c10t0d0 
cannot replace c10t0d0 with c10t0d0: device is too small


The 4 original disk partition tables look like this:

Current partition table (original):
Total disk sectors available: 312560317 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34      149.04GB          312560350
  1 unassigned    wm                0             0                  0
  2 unassigned    wm                0             0                  0
  3 unassigned    wm                0             0                  0
  4 unassigned    wm                0             0                  0
  5 unassigned    wm                0             0                  0
  6 unassigned    wm                0             0                  0
  8   reserved    wm        312560351        8.00MB          312576734


Spare disk partition table looks like this:

Current partition table (original):
Total disk sectors available: 312483549 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34      149.00GB          312483582
  1 unassigned    wm                0             0                  0
  2 unassigned    wm                0             0                  0
  3 unassigned    wm                0             0                  0
  4 unassigned    wm                0             0                  0
  5 unassigned    wm                0             0                  0
  6 unassigned    wm                0             0                  0
  8   reserved    wm        312483583        8.00MB          312499966
 
So it seems that two of the disks are slightly different models and are about
40 MB smaller than the original disks.


I know I can just add a larger disk but I would rather use the hardware I have
if possible.
1) Is there any way to replace the failed disk with one of the spares?
2) Can I recreate the zpool using 3 of the original disks and one of the 
slightly smaller spares? Will zpool/zfs adjust its size to the smaller disk?
3) If #2 is possible would I still be able to use the last still shelved disk 
as a spare?

If #2 is possible I would probably recreate the zpool as raidz2 instead of the 
current raidz1.

Any info/comments would be greatly appreciated.

Robert
  
--   
   Robert Hartzell

b...@rwhartzell.net
 RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell

On Mar 4, 2011, at 10:01 AM, Tim Cook wrote:

> 
> 
> On Fri, Mar 4, 2011 at 10:22 AM, Robert Hartzell  wrote:
> In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz 
> storage pool and then shelved the other two for spares. One of the disks 
> failed last night so I shut down the server and replaced it with a spare. 
> When I tried to zpool replace the disk I get:
> 
> zpool replace tank c10t0d0
> cannot replace c10t0d0 with c10t0d0: device is too small
> 
> The 4 original disk partition tables look like this:
> 
> Current partition table (original):
> Total disk sectors available: 312560317 + 16384 (reserved sectors)
> 
> Part      Tag    Flag     First Sector        Size        Last Sector
>   0        usr    wm               34      149.04GB          312560350
>   1 unassigned    wm                0             0                  0
>   2 unassigned    wm                0             0                  0
>   3 unassigned    wm                0             0                  0
>   4 unassigned    wm                0             0                  0
>   5 unassigned    wm                0             0                  0
>   6 unassigned    wm                0             0                  0
>   8   reserved    wm        312560351        8.00MB          312576734
> 
> Spare disk partition table looks like this:
> 
> Current partition table (original):
> Total disk sectors available: 312483549 + 16384 (reserved sectors)
> 
> Part      Tag    Flag     First Sector        Size        Last Sector
>   0        usr    wm               34      149.00GB          312483582
>   1 unassigned    wm                0             0                  0
>   2 unassigned    wm                0             0                  0
>   3 unassigned    wm                0             0                  0
>   4 unassigned    wm                0             0                  0
>   5 unassigned    wm                0             0                  0
>   6 unassigned    wm                0             0                  0
>   8   reserved    wm        312483583        8.00MB          312499966
> 
> So it seems that two of the disks are slightly different models and are about
> 40 MB smaller than the original disks.
> 
> I know I can just add a larger disk but I would rather use the hardware I
> have if possible.
> 1) Is there any way to replace the failed disk with one of the spares?
> 2) Can I recreate the zpool using 3 of the original disks and one of the 
> slightly smaller spares? Will zpool/zfs adjust its size to the smaller disk?
> 3) If #2 is possible would I still be able to use the last still shelved disk 
> as a spare?
> 
> If #2 is possible I would probably recreate the zpool as raidz2 instead of 
> the current raidz1.
> 
> Any info/comments would be greatly appreciated.
> 
> Robert
> 
> 
> 
> 
> You cannot.  That's why I suggested two years ago that they chop off 1% from 
> the end of the disk at install time to equalize drive sizes.  That way you
> wouldn't run into this problem trying to replace disks from a different
> vendor or different batch.  The response was that Sun makes sure all drives 
> are exactly the same size (although I do recall someone on this forum having 
> this issue with Sun OEM disks as well).  It's ridiculous they don't take into 
> account the slight differences in drive sizes from vendor to vendor.  Forcing 
> you to single-source your disks is a bad habit to get into IMO.
> 
> --Tim
> 


Well, that sucks... So I guess the only option is to replace the disk with a
larger one? Or are you saying that's not possible either?
I can upgrade to larger disks but then there is no guarantee that I can even 
buy 4 identical disks off the shelf at any one time.

Thanks for the info

--   
   Robert Hartzell
b...@rwhartzell.net
 RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [illumos-Developer] ZFS spare disk usage issue

2011-03-04 Thread Garrett D'Amore
On Fri, 2011-03-04 at 18:03 +0100, Roy Sigurd Karlsbakk wrote:
> So should I post a bug, or is there one there already?
> 
> Btw, I can't reach http://bugs.illumos.org/ - it times out

Try again in a few minutes... the server just got rebooted.

- Garrett
> 
> roy
> 
> - Original Message -
> > We've talked about this, and I will be putting together a fix for this
> > incorrect state handling. :-)
> > 
> > - Garrett
> > 
> > On Fri, 2011-03-04 at 11:50 -0500, Eric Schrock wrote:
> > > This looks like a pretty simple bug. The issue is that the state of
> > > the SPARE vdev is being reported as REMOVED instead of DEGRADED. If
> > > it were the latter (as it should be), then everything would work
> > > just
> > > fine. Please file a bug at bugs.illumos.org.
> > >
> > >
> > > On a side note, this continues to expose the overly simplistic vdev
> > > state model used by ZFS (one which I can take the bulk of the
> > > responsibility for). Back before the days of ditto blocks and
> > > SPA3.0,
> > > it was sufficient to model state as a fairly binary proposition. But
> > > this now has ramifications that don't necessarily make sense. For
> > > example, one may be able to open a pool even if a toplevel vdev is
> > > faulted. And even when a spare has finished resilvering, it's left
> > > in
> > > the DEGRADED state, which has implications for allocation policies
> > > (though I remember discussions around changing this). But the pool
> > > state is derived directly from the toplevel vdev state, so if you
> > > switch spares to be ONLINE, then 'zpool status' would think your
> > > pool
> > > is perfectly healthy. In this case it's true from a data protection
> > > standpoint, but not necessarily from an "all is well in the world"
> > > standpoint, as you are down one spare, and that spare may not have
> > > the
> > > same RAS properties as other devices in your RAID-Z stripe (it may
> > > put
> > > 3 disks on the same controller in one stripe, for example).
> > >
> > >
> > > - Eric
> > >
> > > On Fri, Mar 4, 2011 at 7:06 AM, Roy Sigurd Karlsbakk
> > >  wrote:
> > > Hi all
> > >
> > > I just did a small test on RAIDz2 to check whether my
> > > suspicion was right about ZFS not treating spares as
> > > replicas/copies of drives, and I think I've found it to be true. The
> > > short story: If two spares replace two drives in raidz2,
> > > losing a third drive, even with the spares active, makes the
> > > pool unavailable. See full report on
> > >
> > > ODT: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.odt
> > > PDF: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.pdf
> > >
> > > Vennlige hilsener / Best regards
> > >
> > > roy
> > > --
> > > Roy Sigurd Karlsbakk
> > > (+47) 97542685
> > > r...@karlsbakk.net
> > > http://blogg.karlsbakk.net/
> > > --
> > > In all pedagogy it is essential that the curriculum be presented
> > > intelligibly. It is an elementary imperative for all pedagogues to
> > > avoid excessive use of idioms of foreign origin. In most cases,
> > > adequate and relevant synonyms exist in Norwegian.
> > >
> > > ___
> > > Developer mailing list
> > > develo...@lists.illumos.org
> > > http://lists.illumos.org/m/listinfo/developer
> > >
> > >
> > >
> > > --
> > > Eric Schrock
> > > Delphix
> > >
> > >
> > > 275 Middlefield Road, Suite 50
> > > Menlo Park, CA 94025
> > > http://www.delphix.com
> > >
> > >
> > > ___
> > > Developer mailing list
> > > develo...@lists.illumos.org
> > > http://lists.illumos.org/m/listinfo/developer
> 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [illumos-Developer] ZFS spare disk usage issue

2011-03-04 Thread Roy Sigurd Karlsbakk
So should I post a bug, or is there one there already?

Btw, I can't reach http://bugs.illumos.org/ - it times out

roy

- Original Message -
> We've talked about this, and I will be putting together a fix for this
> incorrect state handling. :-)
> 
> - Garrett
> 
> On Fri, 2011-03-04 at 11:50 -0500, Eric Schrock wrote:
> > This looks like a pretty simple bug. The issue is that the state of
> > the SPARE vdev is being reported as REMOVED instead of DEGRADED. If
> > it were the latter (as it should be), then everything would work
> > just
> > fine. Please file a bug at bugs.illumos.org.
> >
> >
> > On a side note, this continues to expose the overly simplistic vdev
> > state model used by ZFS (one which I can take the bulk of the
> > responsibility for). Back before the days of ditto blocks and
> > SPA3.0,
> > it was sufficient to model state as a fairly binary proposition. But
> > this now has ramifications that don't necessarily make sense. For
> > example, one may be able to open a pool even if a toplevel vdev is
> > faulted. And even when a spare has finished resilvering, it's left
> > in
> > the DEGRADED state, which has implications for allocation policies
> > (though I remember discussions around changing this). But the pool
> > state is derived directly from the toplevel vdev state, so if you
> > switch spares to be ONLINE, then 'zpool status' would think your
> > pool
> > is perfectly healthy. In this case it's true from a data protection
> > standpoint, but not necessarily from an "all is well in the world"
> > standpoint, as you are down one spare, and that spare may not have
> > the
> > same RAS properties as other devices in your RAID-Z stripe (it may
> > put
> > 3 disks on the same controller in one stripe, for example).
> >
> >
> > - Eric
> >
> > On Fri, Mar 4, 2011 at 7:06 AM, Roy Sigurd Karlsbakk
> >  wrote:
> > Hi all
> >
> > I just did a small test on RAIDz2 to check whether my
> > suspicion was right about ZFS not treating spares as
> > replicas/copies of drives, and I think I've found it to be true. The
> > short story: If two spares replace two drives in raidz2,
> > losing a third drive, even with the spares active, makes the
> > pool unavailable. See full report on
> >
> > ODT: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.odt
> > PDF: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.pdf
> >
> > Vennlige hilsener / Best regards
> >
> > roy
> > --
> > Roy Sigurd Karlsbakk
> > (+47) 97542685
> > r...@karlsbakk.net
> > http://blogg.karlsbakk.net/
> > --
> > In all pedagogy it is essential that the curriculum be presented
> > intelligibly. It is an elementary imperative for all pedagogues to
> > avoid excessive use of idioms of foreign origin. In most cases,
> > adequate and relevant synonyms exist in Norwegian.
> >
> > ___
> > Developer mailing list
> > develo...@lists.illumos.org
> > http://lists.illumos.org/m/listinfo/developer
> >
> >
> >
> > --
> > Eric Schrock
> > Delphix
> >
> >
> > 275 Middlefield Road, Suite 50
> > Menlo Park, CA 94025
> > http://www.delphix.com
> >
> >
> > ___
> > Developer mailing list
> > develo...@lists.illumos.org
> > http://lists.illumos.org/m/listinfo/developer

-- 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly.
It is an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases, adequate and relevant synonyms exist
in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Tim Cook
On Fri, Mar 4, 2011 at 10:22 AM, Robert Hartzell wrote:

> In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz
> storage pool and then shelved the other two for spares. One of the disks
> failed last night so I shut down the server and replaced it with a spare.
> When I tried to zpool replace the disk I get:
>
> zpool replace tank c10t0d0
> cannot replace c10t0d0 with c10t0d0: device is too small
>
> The 4 original disk partition tables look like this:
>
> Current partition table (original):
> Total disk sectors available: 312560317 + 16384 (reserved sectors)
>
> Part      Tag    Flag     First Sector        Size        Last Sector
>   0        usr    wm               34      149.04GB          312560350
>   1 unassigned    wm                0             0                  0
>   2 unassigned    wm                0             0                  0
>   3 unassigned    wm                0             0                  0
>   4 unassigned    wm                0             0                  0
>   5 unassigned    wm                0             0                  0
>   6 unassigned    wm                0             0                  0
>   8   reserved    wm        312560351        8.00MB          312576734
>
> Spare disk partition table looks like this:
>
> Current partition table (original):
> Total disk sectors available: 312483549 + 16384 (reserved sectors)
>
> Part      Tag    Flag     First Sector        Size        Last Sector
>   0        usr    wm               34      149.00GB          312483582
>   1 unassigned    wm                0             0                  0
>   2 unassigned    wm                0             0                  0
>   3 unassigned    wm                0             0                  0
>   4 unassigned    wm                0             0                  0
>   5 unassigned    wm                0             0                  0
>   6 unassigned    wm                0             0                  0
>   8   reserved    wm        312483583        8.00MB          312499966
>
> So it seems that two of the disks are slightly different models and are
> about 40 MB smaller than the original disks.
>
> I know I can just add a larger disk but I would rather use the hardware I
> have if possible.
> 1) Is there any way to replace the failed disk with one of the spares?
> 2) Can I recreate the zpool using 3 of the original disks and one of the
> slightly smaller spares? Will zpool/zfs adjust its size to the smaller disk?
> 3) If #2 is possible would I still be able to use the last still shelved
> disk as a spare?
>
> If #2 is possible I would probably recreate the zpool as raidz2 instead of
> the current raidz1.
>
> Any info/comments would be greatly appreciated.
>
> Robert
>
>
>

You cannot.  That's why I suggested two years ago that they chop off 1% from
the end of the disk at install time to equalize drive sizes.  That way you
wouldn't run into this problem trying to replace disks from a different
vendor or different batch.  The response was that Sun makes sure all drives
are exactly the same size (although I do recall someone on this forum having
this issue with Sun OEM disks as well).  It's ridiculous they don't take
into account the slight differences in drive sizes from vendor to vendor.
 Forcing you to single-source your disks is a bad habit to get into IMO.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [illumos-Developer] ZFS spare disk usage issue

2011-03-04 Thread Eric Schrock
This looks like a pretty simple bug.  The issue is that the state of the
SPARE vdev is being reported as REMOVED instead of DEGRADED.  If it were the
latter (as it should be), then everything would work just fine.  Please file
a bug at bugs.illumos.org.

On a side note, this continues to expose the overly simplistic vdev state
model used by ZFS (one which I can take the bulk of the responsibility for).
 Back before the days of ditto blocks and SPA3.0, it was sufficient to model
state as a fairly binary proposition.  But this now has ramifications that
don't necessarily make sense.  For example, one may be able to open a pool even
if a toplevel vdev is faulted.  And even when a spare has finished
resilvering, it's left in the DEGRADED state, which has implications for
allocation policies (though I remember discussions around changing this).
 But the pool state is derived directly from the toplevel vdev state, so if
you switch spares to be ONLINE, then 'zpool status' would think your pool is
perfectly healthy.  In this case it's true from a data protection
standpoint, but not necessarily from an "all is well in the world"
standpoint, as you are down one spare, and that spare may not have the same
RAS properties as other devices in your RAID-Z stripe (it may put 3 disks on
the same controller in one stripe, for example).

- Eric

On Fri, Mar 4, 2011 at 7:06 AM, Roy Sigurd Karlsbakk wrote:

> Hi all
>
> I just did a small test on RAIDz2 to check whether my suspicion was right
> about ZFS not treating spares as replicas/copies of drives, and I think I've
> found it to be true. The short story: If two spares replace two drives in raidz2,
> losing a third drive, even with the spares active, makes the pool
> unavailable. See full report on
>
> ODT: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.odt
> PDF: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.pdf
>
> Vennlige hilsener / Best regards
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 97542685
> r...@karlsbakk.net
> http://blogg.karlsbakk.net/
> --
> In all pedagogy it is essential that the curriculum be presented intelligibly.
> It is an elementary imperative for all pedagogues to avoid excessive use of
> idioms of foreign origin. In most cases, adequate and relevant synonyms exist
> in Norwegian.
>
> ___
> Developer mailing list
> develo...@lists.illumos.org
> http://lists.illumos.org/m/listinfo/developer
>



-- 
Eric Schrock
Delphix

275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [illumos-Developer] ZFS spare disk usage issue

2011-03-04 Thread Roy Sigurd Karlsbakk
I understand that some of it may be a simple bug, but should it hang _all_ the 
pools? That's what happens when the third drive is removed... 

roy 

- Original Message -


This looks like a pretty simple bug. The issue is that the state of the SPARE 
vdev is being reported as REMOVED instead of DEGRADED. If it were the latter 
(as it should be), then everything would work just fine. Please file a bug at 
bugs.illumos.org.


On a side note, this continues to expose the overly simplistic vdev state model 
used by ZFS (one which I can take the bulk of the responsibility for). Back
before the days of ditto blocks and SPA3.0, it was sufficient to model state as 
a fairly binary proposition. But this now has ramifications that don't 
necessarily make sense. For example, one may be able to open a pool even if a
toplevel vdev is faulted. And even when a spare has finished resilvering, it's 
left in the DEGRADED state, which has implications for allocation policies 
(though I remember discussions around changing this). But the pool state is 
derived directly from the toplevel vdev state, so if you switch spares to be 
ONLINE, then 'zpool status' would think your pool is perfectly healthy. In this 
case it's true from a data protection standpoint, but not necessarily from an
"all is well in the world" standpoint, as you are down one spare, and that
spare may not have the same RAS properties as other devices in your RAID-Z 
stripe (it may put 3 disks on the same controller in one stripe, for example). 


- Eric 


On Fri, Mar 4, 2011 at 7:06 AM, Roy Sigurd Karlsbakk < r...@karlsbakk.net > 
wrote: 


Hi all 

I just did a small test on RAIDz2 to check whether my suspicion was right about 
ZFS not treating spares as replicas/copies of drives, and I think I've found it
to be true. The short story: If two spares replace two drives in raidz2, losing a
third drive, even with the spares active, makes the pool unavailable. See full 
report on 

ODT: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.odt 
PDF: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.pdf 

Vennlige hilsener / Best regards 

roy 
-- 
Roy Sigurd Karlsbakk 
(+47) 97542685 
r...@karlsbakk.net 
http://blogg.karlsbakk.net/ 
-- 
In all pedagogy it is essential that the curriculum be presented intelligibly.
It is an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases, adequate and relevant synonyms exist
in Norwegian.

___ 
Developer mailing list 
develo...@lists.illumos.org 
http://lists.illumos.org/m/listinfo/developer 



-- 

Eric Schrock 
Delphix 


275 Middlefield Road, Suite 50 
Menlo Park, CA 94025 
http://www.delphix.com 



-- 
Vennlige hilsener / Best regards 

roy 
-- 
Roy Sigurd Karlsbakk 
(+47) 97542685 
r...@karlsbakk.net 
http://blogg.karlsbakk.net/ 
-- 
In all pedagogy it is essential that the curriculum be presented intelligibly.
It is an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases, adequate and relevant synonyms exist
in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell
In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz 
storage pool and then shelved the other two for spares. One of the disks failed 
last night so I shut down the server and replaced it with a spare. When I tried 
to zpool replace the disk I get:

zpool replace tank c10t0d0 
cannot replace c10t0d0 with c10t0d0: device is too small

The 4 original disk partition tables look like this:

Current partition table (original):
Total disk sectors available: 312560317 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34      149.04GB          312560350
  1 unassigned    wm                0             0                  0
  2 unassigned    wm                0             0                  0
  3 unassigned    wm                0             0                  0
  4 unassigned    wm                0             0                  0
  5 unassigned    wm                0             0                  0
  6 unassigned    wm                0             0                  0
  8   reserved    wm        312560351        8.00MB          312576734

Spare disk partition table looks like this:

Current partition table (original):
Total disk sectors available: 312483549 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34      149.00GB          312483582
  1 unassigned    wm                0             0                  0
  2 unassigned    wm                0             0                  0
  3 unassigned    wm                0             0                  0
  4 unassigned    wm                0             0                  0
  5 unassigned    wm                0             0                  0
  6 unassigned    wm                0             0                  0
  8   reserved    wm        312483583        8.00MB          312499966
 
So it seems that two of the disks are slightly different models and are about
40 MB smaller than the original disks.
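
The difference is easy to verify from the two sector totals above, assuming
512-byte sectors:

echo $(( (312560317 - 312483549) * 512 / 1048576 ))    # 37 MiB, i.e. ~40 MB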

I know I can just add a larger disk but I would rather use the hardware I have
if possible.
1) Is there any way to replace the failed disk with one of the spares?
2) Can I recreate the zpool using 3 of the original disks and one of the 
slightly smaller spares? Will zpool/zfs adjust its size to the smaller disk?
3) If #2 is possible would I still be able to use the last still shelved disk 
as a spare?

If #2 is possible I would probably recreate the zpool as raidz2 instead of the 
current raidz1.
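
If #2 is the route, the rebuild might look roughly like this (a sketch only;
the device names are hypothetical, and everything must be backed up first
since this destroys the pool):

# After backing up the data and destroying the old pool:
zpool destroy tank

# Recreate as raidz2 from three originals plus one smaller spare;
# ZFS sizes the vdev down to the smallest member disk.
zpool create tank raidz2 c10t1d0 c10t2d0 c10t3d0 c10t4d0

# The last shelved disk can still serve as a hot spare (question #3).
zpool add tank spare c10t5d0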

Any info/comments would be greatly appreciated.

Robert
  
--   
   Robert Hartzell
b...@rwhartzell.net
 RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [illumos-Developer] ZFS spare disk usage issue

2011-03-04 Thread Roy Sigurd Karlsbakk
- Original Message -
> Hi all
> 
> I just did a small test on RAIDz2 to check whether my suspicion was
> right about ZFS not treating spares as replicas/copies of drives, and
> I think I've found it to be true. The short story: If two spares replace
> two drives in raidz2, losing a third drive, even with the spares
> active, makes the pool unavailable. See full report on

Update 2011-03-04 14:15 CET
I just tested on another system. This one, not in production yet, has a
mirrored rpool and a 14-drive RAID10 pool named tos-data. I started a copy
from a Windows machine into this CIFS share just to generate some traffic.
Then I did a zfs detach of one side of each of the mirrors for tos-data and
created a new 5-drive raidz2 pool named jalla with two dedicated spares. I
started a dd to fill it up and pulled one drive, waited for it to resilver,
and pulled another; again waited for the resilver to finish and pulled the
third. The server now hangs on all pools. I've also tested removing drives
from mirrors and waiting for them to resilver to spares. This seems to work
as expected, although I doubt booting from one will work without grub being
installed.

> ODT: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.odt
> PDF: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.pdf

These are now updated as well.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly.
It is an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases, adequate and relevant synonyms exist
in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS spare disk usage issue

2011-03-04 Thread Roy Sigurd Karlsbakk
Hi all

I just did a small test on RAIDz2 to check whether my suspicion was right about 
ZFS not treating spares as replicas/copies of drives, and I think I've found it
to be true. The short story: If two spares replace two drives in raidz2, losing a
third drive, even with the spares active, makes the pool unavailable. See full 
report on

ODT: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.odt
PDF: http://karlsbakk.net/ZFS/ZFS%20Spare%20disk%20usage.pdf
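
The setup is easy to reproduce with file-backed vdevs instead of real disks
(a sketch; the mkfile sizes and paths are arbitrary, and a physically pulled
drive - the trigger in the report above - can't be imitated exactly this way):

# Five raidz2 members plus two dedicated spares, all 256 MB files.
mkfile 256m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5
mkfile 256m /var/tmp/s1 /var/tmp/s2
zpool create testz2 raidz2 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 \
    /var/tmp/d4 /var/tmp/d5 spare /var/tmp/s1 /var/tmp/s2

# Promote both spares in place of two "failed" members, then inspect
# how the SPARE vdev state is reported.
zpool offline testz2 /var/tmp/d1
zpool replace testz2 /var/tmp/d1 /var/tmp/s1
zpool offline testz2 /var/tmp/d2
zpool replace testz2 /var/tmp/d2 /var/tmp/s2
zpool status testz2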

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly.
It is an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases, adequate and relevant synonyms exist
in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss