To All...
Problem solved. Operator error on my part. (but I did learn something!!
grin)
Thank you all very much!
--Kenny
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Bob, Thanks for the reply. Yes I did read your white paper and am using it!!
Thanks again!!
I used zpool iostat -v and it didn't give the information as advertised... see below
bash-3.00# zpool iostat -v
               capacity     operations
Tim,
Per your request...
df -h
bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10         98G   4.2G    92G     5%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
Kenny schrieb:
2. c6t600A0B800049F93C030A48B3EA2Cd0 SUN-LCSM100_F-0670-931.01GB
/scsi_vhci/[EMAIL PROTECTED]
3. c6t600A0B800049F93C030D48B3EAB6d0 SUN-LCSM100_F-0670-931.01MB
/scsi_vhci/[EMAIL PROTECTED]
Disk 2: 931GB
Disk 3: 931MB
Do you see the
Daniel Rock wrote:
Kenny schrieb:
2. c6t600A0B800049F93C030A48B3EA2Cd0
SUN-LCSM100_F-0670-931.01GB
/scsi_vhci/[EMAIL PROTECTED]
3. c6t600A0B800049F93C030D48B3EAB6d0
SUN-LCSM100_F-0670-931.01MB
/scsi_vhci/[EMAIL PROTECTED]
Disk 2: 931GB
exactly :)
On 8/28/08, Kyle McDonald [EMAIL PROTECTED] wrote:
Daniel Rock wrote:
Kenny schrieb:
2. c6t600A0B800049F93C030A48B3EA2Cd0
SUN-LCSM100_F-0670-931.01GB
/scsi_vhci/[EMAIL PROTECTED]
3. c6t600A0B800049F93C030D48B3EAB6d0
On Thu, 28 Aug 2008, Kenny wrote:
2. c6t600A0B800049F93C030A48B3EA2Cd0 SUN-LCSM100_F-0670-931.01GB
/scsi_vhci/[EMAIL PROTECTED]
Good.
3. c6t600A0B800049F93C030D48B3EAB6d0 SUN-LCSM100_F-0670-931.01MB
/scsi_vhci/[EMAIL PROTECTED]
Oops! Oops! Oops!
It
Ok so I knew it had to be operator headspace... grin
I found my error and have fixed it in CAM. Thanks to all for helping my
education!!
However I do have a question. And pardon if it's a 101 type...
How did you determine from the format output the GB vs MB amount??
Where do you compute
Kenny wrote:
How did you determine from the format output the GB vs MB amount??
Where do you compute 931 GB vs 931 MB from this??
2. c6t600A0B800049F93C030A48B3EA2Cd0 /scsi_vhci/[EMAIL PROTECTED]
3. c6t600A0B800049F93C030D48B3EAB6d0
/scsi_vhci/[EMAIL PROTECTED]
It's in the part
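For what it's worth, the unit is printed right in the drive's label string: 931.01GB on disk 2 versus 931.01MB on disk 3, a factor of 1024. A quick sketch of the arithmetic (`to_bytes` is a made-up helper for illustration, not a Solaris tool; the size strings are copied from the format listing above):

```shell
# Convert the size suffix from a `format` drive label into bytes.
# to_bytes is a hypothetical helper, not part of Solaris.
to_bytes() {
  case "$1" in
    *GB) awk -v n="${1%GB}" 'BEGIN { printf "%.0f\n", n * 1024 * 1024 * 1024 }' ;;
    *MB) awk -v n="${1%MB}" 'BEGIN { printf "%.0f\n", n * 1024 * 1024 }' ;;
  esac
}

to_bytes 931.01GB   # disk 2: roughly a terabyte
to_bytes 931.01MB   # disk 3: 1024 times smaller -- the CAM slip
```

Same digits, wildly different sizes, which is why the MB suffix is easy to miss in the format listing.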
On Wed, Aug 27, 2008 at 1:08 PM, Kenny [EMAIL PROTECTED] wrote:
Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?
I've created 11 LUNs from a Sun 2540 disk array (approx 1 TB each). The
host system (Sun Enterprise 5220) recognizes the disks as each having
931GB
Couple of questions,
What version of Solaris are you using? (cat /etc/release)
If you're exposing each disk individually through a LUN/2540 Volume, you
don't really gain anything by having a spare on the 2540 (which I assume
you're doing by only exposing 11 LUNs instead of 12). Your best bet is
Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?
I've created 11 LUNs from a Sun 2540 disk array (approx 1 TB each). The host
system (Sun Enterprise 5220) recognizes the disks as each having 931GB
space. So that should be 10+ TB in size total. However when I
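As an aside on the numbers: the 2540 counts a terabyte as 10^12 bytes, while format reports binary gigabytes (2^30 bytes), which is why each "1 TB" LUN shows up as 931GB. A back-of-the-envelope check (plain arithmetic, not output from the thread):

```shell
# A 1 TB LUN (decimal, 10^12 bytes) expressed in the binary "GB"
# that `format` reports.
awk 'BEGIN { printf "%.2f\n", 10^12 / 2^30 }'

# Raw capacity of 11 such LUNs, converted back to decimal TB:
# about 11 TB raw, so a single-parity raidz over all 11 would
# leave roughly the 10 TB of usable space expected here.
awk 'BEGIN { printf "%.0f\n", 11 * 931.32 * 2^30 / 10^12 }'
```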
Tcook - Sorry bout that...
Solaris 10 (8/07 I think)
ZFS version 4
How can I upgrade ZFS w/o having to rebuild with Sol 10 5/08?
Thanks --Kenny
Claus - Thanks!! At least I know I'm not going crazy!!
Yes, I've got 11 metric 1 TB disks and would like 10TB useable (end game...)
--Kenny
Arron,
Thanks... Yes, I did reserve one for a hot spare on the hardware side. Guess
I can change that thinking. grin
Solaris 10 8/07 is my OS.
This storage is to become our syslog repository for approx 20 servers. We have
approx 3TB of data now and wanted space to grow and keep more
Claus, thanks for the sanity check... I thought I wasn't crazy. Now on to
find out why my 9TB turned into 9GB... grin
Thanks again
--Kenny
Kenny wrote:
Arron,
Thanks... Yes, I did reserve one for a hot spare on the hardware side.
Guess I can change that thinking. grin
Solaris 10 8/07 is my OS.
This storage is to become our syslog repository for approx 20 servers. We
have approx 3TB of data now and wanted space to grow
On Wed, 27 Aug 2008, Kenny wrote:
Tcook - Sorry bout that...
Solaris 10 (8/07 I think)
ZFS version 4
How can I upgrade ZFS w/o having to rebuild with Sol 10 5/08?
You can use 'smpatch' to apply patches to your system so that,
kernel/ZFS-wise, it is essentially Sol 10 5/08. However, I have
On Wed, Aug 27, 2008 at 1:51 PM, Kenny [EMAIL PROTECTED] wrote:
Tcook - Sorry bout that...
Solaris 10 (8/07 I think)
ZFS version 4
How can I upgrade ZFS w/o having to rebuild with Sol 10 5/08?
Thanks --Kenny
Please paste the output of df, zpool status, and format so we can verify
what
On Wed, 27 Aug 2008, Kenny wrote:
Thanks... Yes, I did reserve one for a hot spare on the hardware
side. Guess I can change that thinking. grin
Disks in the 2540 are expensive. The hot spare does not need to be in
the 2540. You can also use a suitably large disk (1TB) installed in your
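That ZFS-managed-spare approach might look roughly like this. The pool name and device names below are invented placeholders (the real c6t...d0 WWN names are much longer), raidz is just one plausible layout, and the command is echoed rather than executed so the sketch is safe to run anywhere:

```shell
# Sketch: expose all 12 LUNs and let ZFS manage the spare instead of the 2540.
# Pool name and c#t#d# device names are hypothetical placeholders.
POOL=tank
DISKS="c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0 c6t7d0 c6t8d0 c6t9d0 c6t10d0"
SPARE=c6t11d0

# Echoed rather than run; drop the echo on a real system with these devices.
cmd="zpool create $POOL raidz $DISKS spare $SPARE"
echo "$cmd"
```

This way a failed LUN is resilvered by ZFS itself, and the spare is visible in `zpool status` instead of being hidden inside the array.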