Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-29 Thread Kenny
To All...

Problem solved.  Operator error on my part.  (but I did learn something!!  
grin)

Thank you all very much!


--Kenny


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-28 Thread Kenny
Bob,  Thanks for the reply.  Yes, I did read your white paper and am using it!!
Thanks again!!

I used zpool iostat -v and it didn't give the information as advertised... see
below


bash-3.00# zpool iostat -v

                                         capacity     operations    bandwidth
pool                                   used  avail   read  write   read  write
-------------------------------------  -----  -----  -----  -----  -----  -----
log_data                                147K  9.81G      0      0      0      4
  raidz1                                147K  9.81G      0      0      0      4
    c6t600A0B800049F93C030A48B3EA2Cd0      -      -      0      0      0     22
    c6t600A0B800049F93C030D48B3EAB6d0      -      -      0      0      0     22
    c6t600A0B800049F93C031C48B3EC76d0      -      -      0      0      0     22
    c6t600A0B800049F93C031F48B3ECA8d0      -      -      0      0      0     22
    c6t600A0B800049F93C030448B3CDEEd0      -      -      0      0      0     22
    c6t600A0B800049F93C030748B3E9F0d0      -      -      0      0      0     22
    c6t600A0B800049F93C031048B3EB44d0      -      -      0      0      0     22
    c6t600A0B800049F93C031348B3EB94d0      -      -      0      0      0     22
    c6t600A0B800049F93C031648B3EBE4d0      -      -      0      0      0     22
    c6t600A0B800049F93C031948B3EC28d0      -      -      0      0      0     22
    c6t600A0B800049F93C032248B3ECDEd0      -      -      0      0      0     22
-------------------------------------  -----  -----  -----  -----  -----  -----

(sorry but I can't get the horizontal format to set the columns correctly...)
 
 


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-28 Thread Kenny
Tim,

Per your request...

df -h

bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10         98G   4.2G    92G     5%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                     32G   1.4M    32G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
/platform/SUNW,SPARC-Enterprise-T5220/lib/libc_psr/libc_psr_hwcap1.so.1
                         98G   4.2G    92G     5%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T5220/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                         98G   4.2G    92G     5%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
/dev/md/dsk/d50          19G   4.3G    15G    23%    /var
swap                    512M   112K   512M     1%    /tmp
swap                     32G    40K    32G     1%    /var/run
/dev/md/dsk/d30         9.6G   1.5G   8.1G    16%    /opt
/dev/md/dsk/d40         1.9G   142M   1.7G     8%    /export/home
/vol/dev/dsk/c0t0d0/fm540cd3
                        591M   591M     0K   100%    /cdrom/fm540cd3
log_data                8.8G    44K   8.8G     1%    /log_data



zpool status

bash-3.00# zpool status   
  pool: log_data
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
log_data   ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
c6t600A0B800049F93C030A48B3EA2Cd0  ONLINE   0 0 0
c6t600A0B800049F93C030D48B3EAB6d0  ONLINE   0 0 0
c6t600A0B800049F93C031C48B3EC76d0  ONLINE   0 0 0
c6t600A0B800049F93C031F48B3ECA8d0  ONLINE   0 0 0
c6t600A0B800049F93C030448B3CDEEd0  ONLINE   0 0 0
c6t600A0B800049F93C030748B3E9F0d0  ONLINE   0 0 0
c6t600A0B800049F93C031048B3EB44d0  ONLINE   0 0 0
c6t600A0B800049F93C031348B3EB94d0  ONLINE   0 0 0
c6t600A0B800049F93C031648B3EBE4d0  ONLINE   0 0 0
c6t600A0B800049F93C031948B3EC28d0  ONLINE   0 0 0
c6t600A0B800049F93C032248B3ECDEd0  ONLINE   0 0 0

errors: No known data errors



format

bash-3.00# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c1t0d0 SUN146G cyl 14087 alt 2 hd 24 sec 848
  /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 SUN146G cyl 14087 alt 2 hd 24 sec 848
  /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c6t600A0B800049F93C030A48B3EA2Cd0 SUN-LCSM100_F-0670-931.01GB
  /scsi_vhci/[EMAIL PROTECTED]
   3. c6t600A0B800049F93C030D48B3EAB6d0 SUN-LCSM100_F-0670-931.01MB
  /scsi_vhci/[EMAIL PROTECTED]
   4. c6t600A0B800049F93C031C48B3EC76d0 SUN-LCSM100_F-0670-931.01MB
  /scsi_vhci/[EMAIL PROTECTED]
   5. c6t600A0B800049F93C031F48B3ECA8d0 SUN-LCSM100_F-0670-931.01GB
  /scsi_vhci/[EMAIL PROTECTED]
   6. c6t600A0B800049F93C030448B3CDEEd0 SUN-LCSM100_F-0670-931.01GB
  /scsi_vhci/[EMAIL PROTECTED]
   7. c6t600A0B800049F93C030748B3E9F0d0 SUN-LCSM100_F-0670-931.01GB
  /scsi_vhci/[EMAIL PROTECTED]
   8. c6t600A0B800049F93C031048B3EB44d0 SUN-LCSM100_F-0670-931.01MB
  /scsi_vhci/[EMAIL PROTECTED]
   9. c6t600A0B800049F93C031348B3EB94d0 SUN-LCSM100_F-0670-931.01GB
  /scsi_vhci/[EMAIL PROTECTED]
  10. c6t600A0B800049F93C031648B3EBE4d0 SUN-LCSM100_F-0670-931.01GB
  /scsi_vhci/[EMAIL PROTECTED]
  11. c6t600A0B800049F93C031948B3EC28d0 SUN-LCSM100_F-0670-931.01GB
  /scsi_vhci/[EMAIL PROTECTED]
  12. c6t600A0B800049F93C032248B3ECDEd0 SUN-LCSM100_F-0670-931.01GB
  /scsi_vhci/[EMAIL PROTECTED]
Specify disk (enter its number):
 
 


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-28 Thread Daniel Rock
Kenny schrieb:
2. c6t600A0B800049F93C030A48B3EA2Cd0 SUN-LCSM100_F-0670-931.01GB
   /scsi_vhci/[EMAIL PROTECTED]
3. c6t600A0B800049F93C030D48B3EAB6d0 SUN-LCSM100_F-0670-931.01MB
   /scsi_vhci/[EMAIL PROTECTED]

Disk 2: 931GB
Disk 3: 931MB

Do you see the difference?



Daniel


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-28 Thread Kyle McDonald
Daniel Rock wrote:

 Kenny schrieb:
 2. c6t600A0B800049F93C030A48B3EA2Cd0 
 SUN-LCSM100_F-0670-931.01GB
/scsi_vhci/[EMAIL PROTECTED]
 3. c6t600A0B800049F93C030D48B3EAB6d0 
 SUN-LCSM100_F-0670-931.01MB
/scsi_vhci/[EMAIL PROTECTED]

 Disk 2: 931GB
 Disk 3: 931MB

 Do you see the difference?

Not just disk 3:

 AVAILABLE DISK SELECTIONS:
3. c6t600A0B800049F93C030D48B3EAB6d0 SUN-LCSM100_F-0670-931.01MB
   /scsi_vhci/[EMAIL PROTECTED]
4. c6t600A0B800049F93C031C48B3EC76d0 SUN-LCSM100_F-0670-931.01MB
   /scsi_vhci/[EMAIL PROTECTED]
8. c6t600A0B800049F93C031048B3EB44d0 SUN-LCSM100_F-0670-931.01MB
   /scsi_vhci/[EMAIL PROTECTED]
   
This all makes sense now, since a RAIDZ (or RAIDZ2) vdev can only be as 
big as its *smallest* component device.
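
(Rough numbers, assuming the sizes in the format listing are what ZFS saw:
with every member of the 11-wide raidz1 truncated to the smallest device, the
pool gets roughly 11 x 931 MB, on the order of 10 GB of raw space, which lines
up with the ~9.81G that zpool iostat reports instead of the expected ~10 TB.)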

   -Kyle



 Daniel




Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-28 Thread Tim
exactly :)



On 8/28/08, Kyle McDonald [EMAIL PROTECTED] wrote:
 Daniel Rock wrote:

 Kenny schrieb:
 2. c6t600A0B800049F93C030A48B3EA2Cd0
 SUN-LCSM100_F-0670-931.01GB
/scsi_vhci/[EMAIL PROTECTED]
 3. c6t600A0B800049F93C030D48B3EAB6d0
 SUN-LCSM100_F-0670-931.01MB
/scsi_vhci/[EMAIL PROTECTED]

 Disk 2: 931GB
 Disk 3: 931MB

 Do you see the difference?

 Not just disk 3:

 AVAILABLE DISK SELECTIONS:
3. c6t600A0B800049F93C030D48B3EAB6d0
 SUN-LCSM100_F-0670-931.01MB
   /scsi_vhci/[EMAIL PROTECTED]
4. c6t600A0B800049F93C031C48B3EC76d0
 SUN-LCSM100_F-0670-931.01MB
   /scsi_vhci/[EMAIL PROTECTED]
8. c6t600A0B800049F93C031048B3EB44d0
 SUN-LCSM100_F-0670-931.01MB
   /scsi_vhci/[EMAIL PROTECTED]

 This all makes sense now, since a RAIDZ (or RAIDZ2) vdev can only be as
 big as its *smallest* component device.

-Kyle



 Daniel



Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-28 Thread Bob Friesenhahn
On Thu, 28 Aug 2008, Kenny wrote:
   2. c6t600A0B800049F93C030A48B3EA2Cd0 SUN-LCSM100_F-0670-931.01GB
  /scsi_vhci/[EMAIL PROTECTED]

Good.

   3. c6t600A0B800049F93C030D48B3EAB6d0 SUN-LCSM100_F-0670-931.01MB
  /scsi_vhci/[EMAIL PROTECTED]

Oops!  Oops!  Oops!

It seems that some of your drives have the full 931.01GB exported 
while others have only 931.01MB exported.  The smallest device size 
will be used to size the vdev in your pool.  I sense a user error in 
the tedious CAM interface.  CAM is slow so you need to be patient and 
take extra care when configuring the 2540 volumes.
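
One quick way to sanity-check all of the LUN sizes from the host after
re-doing the volumes in CAM (the grep pattern here is just the array model
string from the format listing, so adjust as needed):

  echo | format | grep LCSM100

Every one of the 11 lines should end in 931.01GB before the pool is
re-created.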

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-28 Thread Kenny
Ok so I knew it had to be operator headspace...  grin

I found my error and have fixed it in CAM.  Thanks to all for helping my 
education!!  

However I do have a question.  And pardon if it's a 101 type...

How did you determine from the format output the GB vs MB amount??

Where do you compute 931 GB vs 931 MB from this??

2. c6t600A0B800049F93C030A48B3EA2Cd0 /scsi_vhci/[EMAIL PROTECTED]

3. c6t600A0B800049F93C030D48B3EAB6d0
/scsi_vhci/[EMAIL PROTECTED]

Please educate me!!  grin

Thanks again!

--Kenny




Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-28 Thread Kyle McDonald
Kenny wrote:

 How did you determine from the format output the GB vs MB amount??

 Where do you compute 931 GB vs 931 MB from this??

 2. c6t600A0B800049F93C030A48B3EA2Cd0 /scsi_vhci/[EMAIL PROTECTED]

 3. c6t600A0B800049F93C030D48B3EAB6d0
 /scsi_vhci/[EMAIL PROTECTED]

It's in the part you didn't cut and paste:

AVAILABLE DISK SELECTIONS:
3. c6t600A0B800049F93C030D48B3EAB6d0 SUN-LCSM100_F-0670-931.01MB
   /scsi_vhci/[EMAIL PROTECTED]
4. c6t600A0B800049F93C031C48B3EC76d0 SUN-LCSM100_F-0670-931.01MB
   /scsi_vhci/[EMAIL PROTECTED]
8. c6t600A0B800049F93C031048B3EB44d0 SUN-LCSM100_F-0670-931.01MB
   /scsi_vhci/[EMAIL PROTECTED]
   

Look at the label:

SUN-LCSM100_F-0670-931.01MB

The last field.
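
If the label in format ever looks ambiguous, something like iostat -En will
also print a per-device 'Size:' line with the exact byte count (the device
name below is just one of the WWNs from the listing above):

  iostat -En c6t600A0B800049F93C030D48B3EAB6d0

A ~931 MB LUN is hard to miss once you see the raw byte count.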


 Please educate me!!  grin

No problem. Things like this have happened to me from time to time.

   -Kyle

 Thanks again!

 --Kenny



Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Tim
On Wed, Aug 27, 2008 at 1:08 PM, Kenny [EMAIL PROTECTED] wrote:

 Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?

 I've created 11 LUNs from a Sun 2540 Disk array (approx 1 TB each).  The
 host system (a Sun SPARC Enterprise T5220) recognizes the disks as each having
 931GB space.  So that should be 10+ TB in size total.  However when I zpool
 them together (using raidz) the zpool status reports 9GB instead of 9TB.

 Does ZFS have a problem reporting TB and default to GB instead??  Is my pool
 really TB in size??

 I've read in the best-practices wiki that splitting them into smaller pools is
 recommended.  Any recommendation on this??  I'm desperate to keep as much
 space usable as possible.




OS version and zfs version would be helpful.
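
For example:

  cat /etc/release
  zpool upgrade

The first shows the exact Solaris update, and 'zpool upgrade' with no
arguments prints the ZFS version the system is running and whether any pools
are behind it.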


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Aaron Blew
Couple of questions,
What version of Solaris are you using? (cat /etc/release)
If you're exposing each disk individually through a LUN/2540 Volume, you
don't really gain anything by having a spare on the 2540 (which I assume
you're doing by only exposing 11 LUNs instead of 12).  Your best bet is to
set no spares on the 2540 and then set one of the LUNs as a spare via ZFS.
How will you be using the storage?  This will help determine how your zpool
should be structured.
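
A minimal sketch of that, assuming the log_data pool name used elsewhere in
the thread and a 12th drive exposed as one more LUN (the WWN below is a
placeholder, not one of the real device names):

  zpool add log_data spare c6t<WWN-of-12th-LUN>d0

ZFS can then pull the spare in on its own if a member of the raidz vdev fails.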

-Aaron


On Wed, Aug 27, 2008 at 11:08 AM, Kenny [EMAIL PROTECTED] wrote:

 Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?

 I've created 11 LUNs from a Sun 2540 Disk array (approx 1 TB each).  The
 host system (a Sun SPARC Enterprise T5220) recognizes the disks as each having
 931GB space.  So that should be 10+ TB in size total.  However when I zpool
 them together (using raidz) the zpool status reports 9GB instead of 9TB.

 Does ZFS have a problem reporting TB and default to GB instead??  Is my pool
 really TB in size??

 I've read in the best-practices wiki that splitting them into smaller pools is
 recommended.  Any recommendation on this??  I'm desperate to keep as much
 space usable as possible.

 Thanks   --Kenny





Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Claus Guttesen
 Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?

 I've created 11 LUNs from a Sun 2540 Disk array (approx 1 TB each).  The host
 system (a Sun SPARC Enterprise T5220) recognizes the disks as each having 931GB
 space.  So that should be 10+ TB in size total.  However when I zpool them
 together (using raidz) the zpool status reports 9GB instead of 9TB.

 Does ZFS have a problem reporting TB and default to GB instead??  Is my pool
 really TB in size??

 I've read in the best-practices wiki that splitting them into smaller pools is
 recommended.  Any recommendation on this??  I'm desperate to keep as much
 space usable as possible.

This is from a zpool with three disks at 1 metric TB (= 931 GB) using raidz.

[EMAIL PROTECTED]:~# zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
ef1   2.72T  2.65T  67.0G    97%  ONLINE  -
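
(That SIZE column is raw space including parity, so 3 x 931 GB comes to
roughly 2.72 TB, which matches what the pool reports.  An 11-disk raidz1 built
from the same kind of drives should therefore show on the order of 10 TB,
not 10 GB.)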

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Kenny
Tcook - Sorry bout that...

Solaris 10 (8/07 I think)
ZFS version 4

How can I upgrade ZFS w/o having to rebuild with Sol 10 5/08?

Thanks   --Kenny
 
 


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Kenny
Claus - Thanks!!  At least I know I'm not going crazy!!

Yes, I've got 11 metric 1 TB disks and would like 10TB useable (end game...)

--Kenny
 
 


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Kenny
Aaron,

Thanks...  Yes, I did reserve one as a hot spare on the hardware side.  Guess
I can change that thinking.  grin

Solaris 10 8/07 is my OS.

This storage is to become our syslog repository for approx 20 servers.  We have 
approx 3TB of data now and wanted space to grow and keep more online for 
research before moving items off to tape.

Thanks  --Kenny
 
 


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Kenny
Claus,  Thanks for the sanity check...  I thought I wasn't crazy.  Now on to
find out why my 9TB turned into 9GB...  grin

Thanks again

--Kenny
 
 


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Richard Elling
Kenny wrote:
 Aaron,

 Thanks...  Yes, I did reserve one as a hot spare on the hardware side.
 Guess I can change that thinking.  grin

 Solaris 10 8/07 is my OS.

 This storage is to become our syslog repository for approx 20 servers.  We 
 have approx 3TB of data now and wanted space to grow and keep more online for 
 research before moving items off to tape.
   

That will compress rather nicely!  IMHO, you should enable ZFS
compression.
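
A minimal example, assuming the log_data dataset name used elsewhere in the
thread (the default lzjb algorithm is a good fit for text-heavy syslog data):

  zfs set compression=on log_data
  zfs get compression,compressratio log_data

Only blocks written after the property is set get compressed, so it is worth
enabling before the bulk of the data lands.
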
 -- richard



Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Bob Friesenhahn
On Wed, 27 Aug 2008, Kenny wrote:

 Tcook - Sorry bout that...

 Solaris 10 (8/07 I think)
 ZFS version 4

 How can I upgrade ZFS w/o having to rebuild with Sol 10 5/08?

You can use 'smpatch' to apply patches to your system so that,
kernel/ZFS-wise, it is essentially Sol 10 5/08.  However, I have never
heard of this sort of problem before.  Perhaps there is user error.
Perhaps you accidentally did something silly like create an 11 disk 
mirror.  Or maybe you thought you configured the StorageTek 2540 to 
export the entire drive as a volume but got a smaller allocation 
instead (been there, done that). Using CAM is pretty tedious so you 
could do the right thing for one disk and accidentally use the minimum 
default size for the others.
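
The usual sequence is something like this ('smpatch analyze' lists the
applicable patches, 'smpatch update' downloads and applies them, and the
kernel patches need a reboot to take effect):

  smpatch analyze
  smpatch update

Once the newer bits are running, 'zpool upgrade -a' will move existing pools
to the newer on-disk version if you want the newer features (note the pool
upgrade is one-way).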

You said that 'zpool status' reported only 9GB but there is no size 
output produced by 'zpool status'.  You can use 'zpool iostat' to see 
the space available.  With 'zpool iostat -v' you can see how much 
space zfs is obtaining from each device.

If you can post the output of 'zpool iostat -v' then people here can 
help you further.
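
For example (pool name taken from elsewhere in the thread):

  zpool list
  zpool iostat -v log_data

'zpool list' prints the pool's total SIZE, USED and AVAIL, and the -v
breakdown shows the capacity ZFS sees on every member device.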

While I don't have 1TB disks and did not use raidz, I have done much 
of what you are attempting to do.  You can read about what I did at 
http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Tim
On Wed, Aug 27, 2008 at 1:51 PM, Kenny [EMAIL PROTECTED] wrote:

 Tcook - Sorry bout that...

 Solaris 10 (8/07 I think)
 ZFS version 4

 How can I upgrade ZFS w/o having to rebuild with Sol 10 5/08?

 Thanks   --Kenny


Please paste the output of df, zpool status, and format so we can verify
what you're seeing. :)

--Tim


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Bob Friesenhahn
On Wed, 27 Aug 2008, Kenny wrote:

 Thanks...  Yes I did reserve one for Hot spare on the hardware 
 side  Guess I can change that thinking.  grin

Disks in the 2540 are expensive.  The hot spare does not need to be in 
the 2540.  You could also use a suitably large disk (1TB) installed in your 
server as the hot spare.  This assumes that disks in the server are 
cheaper than in the 2540.  With this approach you can then use all 12 
disks in your 2540 and configure them as two raidz2 vdevs in one pool.
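
A sketch of that layout, with placeholder device names rather than the real
WWNs, and a hypothetical internal server disk as the spare:

  zpool create log_data \
      raidz2 c6tAAAAd0 c6tBBBBd0 c6tCCCCd0 c6tDDDDd0 c6tEEEEd0 c6tFFFFd0 \
      raidz2 c6tGGGGd0 c6tHHHHd0 c6tIIIId0 c6tJJJJd0 c6tKKKKd0 c6tLLLLd0 \
      spare c1t2d0

Two 6-wide raidz2 vdevs give 8 disks' worth of usable space and survive two
failures per vdev, versus 10 disks' worth with single-parity protection in
the original 11-wide raidz1.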

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
