Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-15 Thread Jim Klimov
On 2012-12-14 17:03, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

Suspicion and conjecture only: I think format uses an fdisk label, which has a
2 TB limit.



Technically, fdisk is a program; the labels (partition tables)
are MBR and EFI/GPT :)

And fdisk, at least in OpenIndiana, can explicitly label a disk as
EFI, similar to what ZFS does when given a whole disk for a pool.

You might also have luck with GNU parted, though I've had older
builds (e.g. in SXCE) crash on 3 TB disks too, including one that was
labeled as EFI and used in a pool on that same SXCE. There were no
such problems with the newer build of parted in OI, so that disk was
in fact labeled for SXCE while the box was booted from the OI LiveCD.

HTH,
//Jim Klimov

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread sol
Here it is:

# pstack core.format1
core 'core.format1' of 3351:    format
-  lwp# 1 / thread# 1  
 0806de73 can_efi_disk_be_expanded (0, 1, 0, ) + 7
 08066a0e init_globals (8778708, 0, f416c338, 8068a38) + 4c2
 08068a41 c_disk   (4, 806f250, 0, 0, 0, 0) + 48d
 0806626b main     (1, f416c3b0, f416c3b8, f416c36c) + 18b
 0805803d _start   (1, f416c47c, 0, f416c483, f416c48a, f416c497) + 7d
-  lwp# 2 / thread# 2  
 eed690b1 __door_return (0, 0, 0, 0) + 21
 eed50668 door_create_func (0, eee02000, eea1efe8, eed643e9) + 32
 eed6443c _thrp_setup (ee910240) + 9d
 eed646e0 _lwp_start (ee910240, 0, 0, 0, 0, 0)
-  lwp# 3 / thread# 3  
 eed6471b __lwp_park (8780880, 8780890) + b
 eed5e0d3 cond_wait_queue (8780880, 8780890, 0, eed5e5f0) + 63
 eed5e668 __cond_wait (8780880, 8780890, ee90ef88, eed5e6b1) + 89
 eed5e6bf cond_wait (8780880, 8780890, 208, eea740ad) + 27
 eea740f8 subscriber_event_handler (8778dd0, eee02000, ee90efe8, eed643e9) + 5c
 eed6443c _thrp_setup (ee910a40) + 9d
 eed646e0 _lwp_start (ee910a40, 0, 0, 0, 0, 0)

 From: John D Groenveld jdg...@elvis.arl.psu.edu
# pstack core


Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of sol
 
 I added a 3TB Seagate disk (ST3000DM001) and ran the 'format' command but
 it crashed and dumped core.
 
 However the zpool 'create' command managed to create a pool on the whole
 disk (2.68 TB space).
 
 I hope that's only a problem with the format command and not with zfs or
 any other part of the kernel.

Suspicion and conjecture only: I think format uses an fdisk label, which has a
2 TB limit.

Normally it's advised to give zpool the whole disk directly anyway, so
hopefully that's a good solution for you.
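The 2 TB figure follows from the MBR on-disk format: an MBR partition entry stores the start LBA and the sector count as 32-bit fields, so with 512-byte sectors the addressable limit is just under 2 TiB. A quick sanity check (plain arithmetic, not tied to the format(1M) source):

```python
SECTOR = 512
max_sectors = 2**32 - 1              # 32-bit LBA/size field in an MBR entry
limit_bytes = max_sectors * SECTOR
print(limit_bytes)                   # 2199023255040, just under 2 TiB
print(3 * 10**12 > limit_bytes)      # True: a 3 TB disk overflows the field
```

EFI/GPT labels use 64-bit LBAs, which is why giving the whole disk to zpool (which labels it EFI) sidesteps the limit.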


Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread Cindy Swearingen

Hey Sol,

Can you send me the core file, please?

I would like to file a bug for this problem.

Thanks, Cindy



[zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-13 Thread sol
Hi

I added a 3TB Seagate disk (ST3000DM001) and ran the 'format' command but it 
crashed and dumped core.


However the zpool 'create' command managed to create a pool on the whole disk 
(2.68 TB space).

I hope that's only a problem with the format command and not with zfs or any 
other part of the kernel.

(Solaris 11.1 by the way)
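For what it's worth, the 2.68 T figure is mostly the decimal-to-binary unit difference: disk vendors count 1 TB as 10^12 bytes, while zpool reports in binary (2^40-byte) units. A quick check (plain arithmetic, nothing Solaris-specific):

```python
# A "3 TB" disk is 3 * 10^12 bytes as the vendor counts it,
# but zpool reports sizes in binary units (1 T = 2^40 bytes).
vendor_bytes = 3 * 10**12
binary_t = vendor_bytes / 2**40
print(round(binary_t, 2))   # 2.73
```

The remaining gap down to 2.68 T would presumably come from EFI label/reserved areas and ZFS metadata, though that breakdown is an assumption here, not something the pool output states.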


Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-13 Thread John D Groenveld
# pstack core

John
groenv...@acm.org
