Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-11-12 Thread Jeroen Roodhart
 I'm running nv126 XvM right now. I haven't tried it
 without XvM.

Without XvM we do not see these issues. We're running the VMs through NFS now 
(using ESXi)...


Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-12 Thread Peter Eriksson
Have you tried wrapping your disks inside LVM metadevices and then used those 
for your ZFS pool?
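Something along these lines is what I had in mind -- just a rough SVM sketch, 
with disk and metadevice names purely illustrative:

# metadb -a -f c1t0d0s7          (state database replicas, first time only)
# metainit d10 1 1 c1t1d0s0      (one simple metadevice per disk)
# metainit d11 1 1 c1t2d0s0
# metainit d12 1 1 c1t3d0s0
# zpool create testpool raidz /dev/md/dsk/d10 /dev/md/dsk/d11 /dev/md/dsk/d12

That way the pool sits on the metadevices rather than on the raw disks.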


Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-12 Thread Peter Eriksson
What type of disks are you using?


Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-12 Thread M P
I was just looking to see if it is a known problem before I submit it as a bug. 
What would be the best category to submit the bug under? I am not sure if it is 
a driver/kernel issue. I would be more than glad to help. One of the machines is 
a test environment and I can run any dumps/debug versions you want.

The issue is reproducible on the two servers (Sun and Dell) and with different 
SAS JBOD storage. 

The systems consist of a raidz2 pool made from 11 large SATA disks (1.5TB 
Seagate). The pool is 60% or so full.

The easiest way to reproduce it is to run the bacula client to back up the whole 
pool overnight. After a couple of hours the issue manifests: the machine just 
prints these messages and does not respond to any connections, not even the keyboard.
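In case it helps anyone trying to reproduce this without bacula, a crude way to 
generate a comparable sustained read load over the pool (dataset path 
illustrative) is something like:

# find /tank/data -type f -exec dd if={} of=/dev/null bs=1024k \;

which just reads every file back without writing anything.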

I was looking into one other machine that we have – a relatively old custom-built 
machine with 11 1TB (Western Digital) disks connected to an 8-port SATA 
controller (+3 ports from the motherboard). I noticed that there are similar 
messages for the disks there. That machine doesn't lock up, it just prints the 
messages when under heavy load (backup); see below:



Operating System: Solaris 10 8/07 s10x_u4wos_12b X86

Adapter: 8 port SATA: 
http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm

Oct 21 17:47:22 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 21 17:47:22 mirror  port 1: link lost
Oct 21 17:47:22 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 21 17:47:22 mirror  port 1: link established
Oct 21 17:47:22 mirror marvell88sx: [ID 812950 kern.warning] WARNING: 
marvell88sx0: error on port 1:
Oct 21 17:47:22 mirror marvell88sx: [ID 517869 kern.info]   device 
disconnected
Oct 21 17:47:22 mirror marvell88sx: [ID 517869 kern.info]   device connected
Oct 21 17:47:22 mirror scsi: [ID 107833 kern.warning] WARNING: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6/d...@1,0 (sd2):
Oct 21 17:47:22 mirror  Error for Command: read(10)Error Level: 
Retryable
Oct 21 17:47:22 mirror scsi: [ID 107833 kern.notice]Requested Block: 
178328863 Error Block: 178328863
Oct 21 17:47:22 mirror scsi: [ID 107833 kern.notice]Vendor: ATA 
   Serial Number:
Oct 21 17:47:22 mirror scsi: [ID 107833 kern.notice]Sense Key: No 
Additional Sense
Oct 21 17:47:22 mirror scsi: [ID 107833 kern.notice]ASC: 0x0 (no additional 
sense info), ASCQ: 0x0, FRU: 0x0
Oct 21 17:58:51 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 21 17:58:51 mirror  port 0: device reset
Oct 21 17:58:51 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 21 17:58:51 mirror  port 0: device reset
Oct 21 17:58:51 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 21 17:58:51 mirror  port 0: link lost
Oct 21 17:58:51 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 21 17:58:51 mirror  port 0: link established
Oct 21 17:58:51 mirror marvell88sx: [ID 812950 kern.warning] WARNING: 
marvell88sx0: error on port 0:
Oct 21 17:58:51 mirror marvell88sx: [ID 517869 kern.info]   device 
disconnected
Oct 21 17:58:51 mirror marvell88sx: [ID 517869 kern.info]   device connected
Oct 21 17:58:51 mirror scsi: [ID 107833 kern.warning] WARNING: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6/d...@0,0 (sd1):
Oct 21 17:58:51 mirror  Error for Command: read(10)Error Level: 
Retryable
Oct 21 17:58:51 mirror scsi: [ID 107833 kern.notice]Requested Block: 
929071121 Error Block: 929071121
Oct 21 17:58:51 mirror scsi: [ID 107833 kern.notice]Vendor: ATA 
   Serial Number:
Oct 21 17:58:51 mirror scsi: [ID 107833 kern.notice]Sense Key: No 
Additional Sense
Oct 21 17:58:51 mirror scsi: [ID 107833 kern.notice]ASC: 0x0 (no additional 
sense info), ASCQ: 0x0, FRU: 0x0
Oct 21 18:02:10 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 21 18:02:10 mirror  port 4: device reset
Oct 21 18:02:10 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 21 18:02:10 mirror  port 4: device reset
Oct 21 18:02:10 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 29 00:03:24 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 29 00:03:24 mirror  port 5: device reset
Oct 29 00:03:24 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 29 00:03:24 mirror  port 5: device reset
Oct 29 00:03:24 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 29 00:03:24 mirror  port 5: link lost
Oct 29 00:03:24 mirror sata: [ID 801593 kern.notice] NOTICE: 
/p...@0,0/pci10de,2...@10/pci11ab,1...@6:
Oct 29 00:03:24 

Re: [zfs-discuss] zfs eradication

2009-11-12 Thread Darren J Moffat

Miles Nordin wrote:

djm == Darren J Moffat darr...@opensolaris.org writes:


  encrypted blocks is much better, even though
  encrypted blocks may be subject to freeze-spray attack if the
  whole computer is compromised 


the idea of crypto deletion is to use many keys to encrypt the drive,
and encrypt keys with other keys.  When you want to delete something,
forget the key that encrypted it. 


Yes, I know; remember, I designed ZFS crypto and this is exactly one of 
the use cases.



   djm Much better for jurisdictions that allow for that, but not all
   djm do.  I know of at least one that wants even ciphertext blocks
   djm to be overwritten.

The appropriate answer depends on when they want it done, though.  Do
they want it done continuously while the machine is running whenever
someone rm's something?  Or is it about ``losing'' the data, about
media containing encrypted blocks passing outside the campus, or just
not knowing where something physically is at all times?


I'm not in a position to discuss this jurisdiction's requirements and 
rationale on a public mailing list.  All I'm saying is that data 
destruction based only on key destruction/unavailability is not 
considered enough in some cases.


--
Darren J Moffat


Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-12 Thread Travis Tabbal
On Wed, Nov 11, 2009 at 10:25 PM, James C. McPherson
j...@opensolaris.org wrote:


 The first step towards acknowledging that there is a problem
 is you logging a bug in bugs.opensolaris.org. If you don't, we
 don't know that there might be a problem outside of the ones
 that we identify.



I apologize if I offended by not knowing the protocol. I thought that
posting in the forums was watched and the bug tracker updated by people at
Sun. I didn't think normal users had access to submit bugs. Thank you for
the reply. I have submitted a bug on the issue with all the information I
think might be useful. If someone at Sun would like more information, output
from commands, or testing, I would be happy to help.

I was not provided with a bug number by the system. I assume that those are
given out if the bug is deemed worthy of further consideration.


Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-12 Thread Travis Tabbal
I submitted a bug on this issue. It looks like you can reference other bugs 
when you submit one, so everyone having this issue could link mine and 
submit their own hardware config. It sounds like it's widespread, though, so I'm 
not sure whether that would help or hinder. I'd hate to bury the developers/QA team 
under a mountain of duplicate requests. 

CR 6900767


Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-12 Thread Travis Tabbal
 What type of disks are you using?

I'm using SATA disks with SAS-SATA breakout cables. I've tried different cables 
as I have a couple spares. 

mpt0 has 4x1.5TB Samsung Green drives. 
mpt1 has 4x400GB Seagate 7200 RPM drives.

I get errors from both adapters. Each adapter has an unused SAS channel 
available. If I can get this fixed, I'm planning to populate those as well.


Re: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-12 Thread Travis Tabbal
 Have you tried wrapping your disks inside LVM
 metadevices and then used those for your ZFS pool?

I have not tried that. I could try it with my spare disks I suppose. I avoided 
LVM as it didn't seem to offer me anything ZFS/ZPOOL didn't.


Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-11-12 Thread Travis Tabbal
  I'm running nv126 XvM right now. I haven't tried it
  without XvM.
 
 Without XvM we do not see these issues. We're running
 the VMs through NFS now (using ESXi)...

Interesting. It sounds like it might be an XvM specific bug. I'm glad I 
mentioned that in my bug report to Sun. Hopefully they can duplicate it. I'd 
like to stick with XvM as I've spent a fair amount of time getting things 
working well under it. 

How did your migration to ESXi go? Are you using it on the same hardware or did 
you just switch that server to an NFS server and run the VMs on another box?


[zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-12 Thread Edward Ned Harvey
I built a fileserver on solaris 10u6 (10/08) intending to back it up to
another server via zfs send | ssh othermachine 'zfs receive'

However, the new server is too new for 10u6 (10/08) and requires a later
version of Solaris; the latest presently available is 10u8 (10/09).

 

Is it crazy for me to try the send/receive with these two different versions
of OSes?

 

Is it possible the underlying ZFS versions would be compatible?  Is there any way
for me to know?

 

Thanks for the help.



Re: [zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-12 Thread David Dyer-Bennet

On Thu, November 12, 2009 13:36, Edward Ned Harvey wrote:
 I built a fileserver on solaris 10u6 (10/08) intending to back it up to
 another server via zfs send | ssh othermachine 'zfs receive'

 However, the new server is too new for 10u6 (10/08) and requires a later
 version of solaris . presently available is 10u8 (10/09)

 Is it crazy for me to try the send/receive with these two different
 versions of OSes?

It says at the end of the zfs send section of the man page: "The format of
the stream is committed. You will be able to receive your streams on
future versions of ZFS."

That would seem to be a rather strong general commitment.  That makes it
IMHO at least worth experimenting with the case you need, to see if it
accepts the stream.  It should, according to the man page.
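A quick way to run that experiment with a throwaway dataset, and to compare
what each box claims to support (pool/dataset names illustrative):

# zfs upgrade -v        (run on each host; lists the fs versions it supports)
# zpool upgrade -v      (same for pool versions)
# zfs snapshot tank/test@compat
# zfs send tank/test@compat | ssh othermachine zfs receive backup/test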
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-12 Thread Erik Trimble

David Dyer-Bennet wrote:

On Thu, November 12, 2009 13:36, Edward Ned Harvey wrote:
  

I built a fileserver on solaris 10u6 (10/08) intending to back it up to
another server via zfs send | ssh othermachine 'zfs receive'

However, the new server is too new for 10u6 (10/08) and requires a later
version of solaris . presently available is 10u8 (10/09)

Is it crazy for me to try the send/receive with these two different
versions of OSes?



It says at the end of the zfs send section of the man page The format of
the stream is committed. You will be able to receive your streams on
future versions of ZFS.

That would seem to be a rather strong general commitment.  That makes it
IMHO at least worth experimenting with the case you need, see if it
accepts the stream.  It should, according to the man page.
  


Like so many other things in Solaris,  older revs of ZFS are supported 
in newer releases of Solaris.  :-)


When you create a zfs filesystem on a given version of Solaris, by default it 
is created with the latest ZFS filesystem version supported by that Solaris release.


HOWEVER, you can explicitly create a zfs filesystem with a backrev 
version, and it works fine (you just don't get the latest features).  
Look at the ZFS man page for creating a filesystem with a version other 
than the default.


This works for 'zfs send|receive', with this caveat: the receiving 
filesystem will be created with the zfs filesystem version of the SENDER.


So, it's possible to send/receive a ZFS filesystem from an OLDER version 
of Solaris to a NEWER version, but NOT vice versa (unless the zfs 
filesystem on the newer Solaris was explicitly created with a backrev 
version that the older Solaris understands).


An example:  (and, I'm sure I don't have the ZFS version numbers right, 
so check the man pages)


Say 10u6 supports ZFS version 10, and 10u8 supports ZFS version 12.

By default, a 10u6 machine creates v10 ZFS filesystems, and 10u8 creates 
v12 filesystems.  But, a 10u8 system can also create v10 filesystems.  
So, you can send a 10u6 ZFS filesystem to a 10u8 machine, resulting in 
creating a new v10 filesystem on the 10u8 machine.  However, you can't 
send a v12 filesystem from the 10u8 machine to the 10u6 machine.  If you 
explicitly create a v10 filesystem on the 10u8 machine, you can send 
that filesystem to the 10u6 machine.
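In command form, that might look something like this (pool and dataset names 
illustrative; check the zfs man page for the exact version-property syntax on 
your release):

# zfs upgrade -v                         (on each host: supported fs versions)
# zfs snapshot tank/data@nightly         (on the 10u6 sender)
# zfs send tank/data@nightly | ssh newhost zfs receive backup/data
# zfs create -o version=10 tank/forold   (on the 10u8 box, only needed if you
                                          later want to send back to 10u6)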



I hope that's clear.

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



[zfs-discuss] zpool not growing after drive upgrade

2009-11-12 Thread Tim Cook
So I've finally finished swapping out my old 300GB drives.  The end result
is one large raidz2 pool. 10+2 with one hot spare.

The drives are:
7x500GB
4x1TB
2x1.5TB

One of the 1.5TB is the hot spare.  zpool list is still showing capacity of
3.25TB (the 1TB drives replaced 300GB drives).  I've tried exporting and
importing the pool, and it doesn't make a difference.

NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
fserv  3.25T  2.73T   532G  84%  ONLINE  -


--Tim


Re: [zfs-discuss] zfs eradication

2009-11-12 Thread Bob Friesenhahn

On Wed, 11 Nov 2009, David Magda wrote:


There seem to be 'secure erase' methods available for some SSDs:


Unless the hardware and firmware of these devices has been inspected 
and validated by a certified third party which is well-versed in such 
analysis, I would not trust such devices with significant phrases 
like "tritium core".  Some secrets are very great and should not be 
trusted to a marketing department.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] dedupe question

2009-11-12 Thread Frank Middleton

Got some out-of-curiosity questions for the gurus if they
have time to answer:

Isn't dedupe in some ways the antithesis of setting copies > 1?
We go to a lot of trouble to create redundancy (n-way mirroring,
raidz-n, copies=n, etc) to make things as robust as possible and
then we reduce redundancy with dedupe and compression :-).

What would be the difference in MTTDL between a scenario where
dedupe ratio is exactly two and you've set copies=2 vs. no dedupe
and copies=1?  Intuitively MTTDL would be better because of the
copies=2, but you'd lose twice the data when DL eventually happens.

Similarly, if hypothetically dedupe ratio = 1.5 and you have a
two-way mirror, vs. no dedupe and a 3 disk raidz1,  which would
be more reliable? Again intuition says the mirror because there's
one less device to fail, but device failure isn't the only consideration.

In both cases it sounds like you might gain a bit in performance,
especially if the dedupe ratio is high because you don't have to
write the actual duplicated blocks on a write and on a read you
are more likely to have the data blocks in cache. Does this make
sense?

Maybe there are too many variables, but it would be so interesting
to hear of possible decision making algorithms.  A similar discussion
applies to compression, although that seems to defeat redundancy
more directly.  This analysis requires good statistical maths skills!
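For anyone who wants to experiment, the two hypothetical configurations are 
easy to set up side by side (dataset names illustrative; dedup obviously needs 
a build that has it):

# zfs create -o copies=2 -o dedup=on  tank/caseA
# zfs create -o copies=1 -o dedup=off tank/caseB
# zpool list tank        (the DEDUP column shows the pool-wide ratio)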

Thanks -- Frank


Re: [zfs-discuss] zpool not growing after drive upgrade

2009-11-12 Thread Cindy Swearingen

Hi Tim,

In a pool with mixed disk sizes, ZFS can use only the amount of disk
space that is equal to the smallest disk and spares aren't included in
pool size until they are used.

In your RAIDZ-2 pool, this is equivalent to 10 x 500 GB disks, which
should be about 5 TB.

I think you are running a current Nevada release. Did you try setting
the autoexpand property to on?

See the example below: I created a RAIDZ-2 pool with two 68 GB disks and one 
136 GB disk. I replaced the two 68 GB disks with two 136 GB disks and set 
autoexpand to on. My pool space increased from 204 GB to 410 GB on
Nevada, build 127, which sounds about right. The autoexpand property
integrated into build 117.

Cindy

# zpool create test raidz2 c2t2d0 c2t3d0 c0t5d0
invalid vdev specification
use '-f' to override the following errors:
raidz contains devices of different sizes
# zpool create -f test raidz2 c2t2d0 c2t3d0 c0t5d0
# zpool list test
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
test   204G   286K   204G   0%  1.00x  ONLINE  -
# zpool replace test c2t2d0 c0t6d0
# zpool replace test c2t3d0 c0t7d0
# zpool list test
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
test   204G   464K   204G   0%  1.00x  ONLINE  -
# zpool set autoexpand=on test
# zpool list test
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
test   410G   423K   410G   0%  1.00x  ONLINE  -



On 11/12/09 14:10, Tim Cook wrote:
So I've finally finished swapping out my old 300GB drives.  The end 
result is one large raidz2 pool. 10+2 with one hot spare.


The drives are:
7x500GB
4x1TB
2x1.5TB

One of the 1.5TB is the hot spare.  zpool list is still showing capacity 
of 3.25TB (the 1TB drives replaced 300GB drives).  I've tried exporting 
and importing the pool, and it doesn't make a difference.


NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
fserv  3.25T  2.73T   532G  84%  ONLINE  -


--Tim






Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-11-12 Thread James C. McPherson

Travis Tabbal wrote:

I'm running nv126 XvM right now. I haven't tried it
without XvM.

Without XvM we do not see these issues. We're running
the VMs through NFS now (using ESXi)...


Interesting. It sounds like it might be an XvM specific bug. I'm glad I mentioned that in my bug report to Sun. Hopefully they can duplicate it. I'd like to stick with XvM as I've spent a fair amount of time getting things working well under it. 


How did your migration to ESXi go? Are you using it on the same hardware or did 
you just switch that server to an NFS server and run the VMs on another box?



Hi Travis,
your bug showed up - it's 6900767. Since bugs.opensolaris.org
isn't a live system, you won't be able to see it at

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6900767

until tomorrow.


cheers,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] zpool not growing after drive upgrade

2009-11-12 Thread Tim Cook
On Thu, Nov 12, 2009 at 4:05 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:

 Hi Tim,

 In a pool with mixed disk sizes, ZFS can use only the amount of disk
 space that is equal to the smallest disk and spares aren't included in
 pool size until they are used.

 In your RAIDZ-2 pool, this is equivalent to 10 500 GB disks, which
 should be about 5 TBs.

 I think you are running a current Nevada release. Did you try setting
 the autoexpand property to on?

 See the example below, I created a RAIDZ-2 pool with 2 68 GB disks and 1
 136 GB disk. I replaced the 2 68 GB disks with 2 136 GB disks and set
 autoexpand to on. My pool space increased from 204 GB to 410 GB on
 Nevada, build 127, which sounds about right. The autoexpand property
 integrated into build 117.

 Cindy

 # zpool create test raidz2 c2t2d0 c2t3d0 c0t5d0
 invalid vdev specification
 use '-f' to override the following errors:
 raidz contains devices of different sizes
 # zpool create -f test raidz2 c2t2d0 c2t3d0 c0t5d0
 # zpool list test
 NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
 test   204G   286K   204G   0%  1.00x  ONLINE  -
 # zpool replace test c2t2d0 c0t6d0
 # zpool replace test c2t3d0 c0t7d0
 # zpool list test
 NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
 test   204G   464K   204G   0%  1.00x  ONLINE  -
 # zpool set autoexpand=on test
 # zpool list test
 NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
 test   410G   423K   410G   0%  1.00x  ONLINE  -



That did it, thank you.  Didn't the pools expand automatically on an
export/import before, or am I crazy?  I swore that's all I had to do last
time.

--Tim


Re: [zfs-discuss] zpool not growing after drive upgrade

2009-11-12 Thread Cindy Swearingen

Previous behavior was hard to predict. :-)

It worked for a while, then a bug prevented it from working so that you
had to export/import the pool to see the expanded space.

The export/import thing was a temporary workaround until the autoexpand
feature integrated.


cs

On 11/12/09 15:23, Tim Cook wrote:



On Thu, Nov 12, 2009 at 4:05 PM, Cindy Swearingen 
cindy.swearin...@sun.com wrote:


Hi Tim,

In a pool with mixed disk sizes, ZFS can use only the amount of disk
space that is equal to the smallest disk and spares aren't included in
pool size until they are used.

In your RAIDZ-2 pool, this is equivalent to 10 500 GB disks, which
should be about 5 TBs.

I think you are running a current Nevada release. Did you try setting
the autoexpand property to on?

See the example below, I created a RAIDZ-2 pool with 2 68 GB disks
and 1 136 GB disk. I replaced the 2 68 GB disks with 2 136 GB disks
and set
autoexpand to on. My pool space increased from 204 GB to 410 GB on
Nevada, build 127, which sounds about right. The autoexpand property
integrated into build 117.

Cindy

# zpool create test raidz2 c2t2d0 c2t3d0 c0t5d0
invalid vdev specification
use '-f' to override the following errors:
raidz contains devices of different sizes
# zpool create -f test raidz2 c2t2d0 c2t3d0 c0t5d0
# zpool list test
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
test   204G   286K   204G   0%  1.00x  ONLINE  -
# zpool replace test c2t2d0 c0t6d0
# zpool replace test c2t3d0 c0t7d0
# zpool list test
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
test   204G   464K   204G   0%  1.00x  ONLINE  -
# zpool set autoexpand=on test
# zpool list test
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
test   410G   423K   410G   0%  1.00x  ONLINE  -



That did it, thank you.  Didn't the pools expand automatically on an 
export/import before, or am I crazy?  I swore that's all I had to do 
last time.


--Tim



Re: [zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-12 Thread Edward Ned Harvey
 *snip*
 I hope that's clear.

Yes, perfectly clear, and very helpful.  Thank you very much.



Re: [zfs-discuss] dedupe question

2009-11-12 Thread Richard Elling

On Nov 12, 2009, at 1:36 PM, Frank Middleton wrote:


Got some out-of-curiosity questions for the gurus if they
have time to answer:

Isn't dedupe in some ways the antithesis of setting copies > 1?
We go to a lot of trouble to create redundancy (n-way mirroring,
raidz-n, copies=n, etc) to make things as robust as possible and
then we reduce redundancy with dedupe and compression :-).

What would be the difference in MTTDL between a scenario where
dedupe ratio is exactly two and you've set copies=2 vs. no dedupe
and copies=1?  Intuitively MTTDL would be better because of the
copies=2, but you'd lose twice the data when DL eventually happens.


The MTTDL models I've used consider any loss a complete loss.
But there are some interesting wrinkles to explore here... :-)


Similarly, if hypothetically dedupe ratio = 1.5 and you have a
two-way mirror, vs. no dedupe and a 3 disk raidz1,  which would
be more reliable? Again intuition says the mirror because there's
one less device to fail, but device failure isn't the only  
consideration.


In both cases it sounds like you might gain a bit in performance,
especially if the dedupe ratio is high because you don't have to
write the actual duplicated blocks on a write and on a read you
are more likely to have the data blocks in cache. Does this make
sense?

Maybe there are too many variables, but it would be so interesting
to hear of possible decision making algorithms.  A similar discussion
applies to compression, although that seems to defeat redundancy
more directly.  This analysis requires good statistical maths skills!


There are several dimensions here. But I'm not yet convinced there is
a configuration decision point to consume a more detailed analysis.
In other words, if you could decide between two or more possible
configurations, what would you wish to consider to improve the
outcome?  Thoughts?
 -- richard






Re: [zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-12 Thread Phil Harman

On 12 Nov 2009, at 19:54, David Dyer-Bennet d...@dd-b.net wrote:



On Thu, November 12, 2009 13:36, Edward Ned Harvey wrote:
I built a fileserver on solaris 10u6 (10/08) intending to back it  
up to

another server via zfs send | ssh othermachine 'zfs receive'

However, the new server is too new for 10u6 (10/08) and requires a  
later

version of solaris . presently available is 10u8 (10/09)

Is it crazy for me to try the send/receive with these two different
versions of OSes?


It says at the end of the zfs send section of the man page: "The
format of the stream is committed. You will be able to receive your
streams on future versions of ZFS."


'Twas not always so. It used to say "The format of the stream is
evolving. No backwards compatibility is guaranteed. You may not be
able to receive your streams on future versions of ZFS."


See http://hub.opensolaris.org/bin/view/Community+Group+on/2008042301

However, the above states that you're ok within Solaris 10 (which got  
a usable ZFS quite late in the day - you were very brave if you used  
the s10u3 implementation), and I've only fallen foul of the issue with  
old Nevada and OpenSolaris versions.



That would seem to be a rather strong general commitment.  That  
makes it

IMHO at least worth experimenting with the case you need, see if it
accepts the stream.  It should, according to the man page.
--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
