Björn, thank you for the command; I was curious why I didn't have a 
kernel.log.  I found nothing ZFS-related in the logs when running the 
failed zpool create command.
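
(For anyone else looking, the kind of search Björn suggests below is:

MacPro:~ bump$ syslog -k Sender kernel | grep -5 -e zfs -e ZFS

which in my case turned up nothing ZFS-related around the time of the 
failed create.)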

As a follow-up to my last post: the low-level format finished on the drive, 
but zpool create still returns the I/O error when I try to create a pool on 
the drive with the 4K block size.
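
For reference, even a minimal attempt against just one of the 4K drives 
fails the same way (disk numbers here are from my setup):

MacPro:~ bump$ sudo zpool create -o ashift=12 Data disk5s2
cannot create 'Data': I/O error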

I failed to mention previously that the drives in question are Hitachi 
HDS72202 2TB.  Hitachi has a tool that can re-align the drives to 4K, but I 
haven't tried it yet.

So for anyone else running into this issue, you might check your device 
block size with the diskutil info command:

*This one works:*

MacPro:~ bump$ diskutil info /dev/disk4
   Device Identifier:        disk4
   Device Node:              /dev/disk4
   Part of Whole:            disk4
   Device / Media Name:      ATTO 100 Media

   Volume Name:              Not applicable (no file system)

   Mounted:                  Not applicable (no file system)

   File System:              None

   Content (IOContent):      None
   OS Can Be Installed:      No
   Media Type:               Generic
   Protocol:                 SAS
   SMART Status:             Not Supported

   Total Size:               2.0 TB (2000348512256 Bytes) (exactly 3906930688 512-Byte-Blocks)
   Volume Free Space:        Not applicable (no file system)
   *Device Block Size:        512 Bytes*

   Read-Only Media:          No
   Read-Only Volume:         Not applicable (no file system)
   Ejectable:                Yes

   Whole:                    Yes
   Internal:                 No
   OS 9 Drivers:             No
   Low Level Format:         Not supported

*This one doesn't work:*

MacPro:~ bump$ diskutil info /dev/disk5
   Device Identifier:        disk5
   Device Node:              /dev/disk5
   Part of Whole:            disk5
   Device / Media Name:      ATTO 200 Media

   Volume Name:              Not applicable (no file system)

   Mounted:                  Not applicable (no file system)

   File System:              None

   Content (IOContent):      None
   OS Can Be Installed:      No
   Media Type:               Generic
   Protocol:                 SAS
   SMART Status:             Not Supported

   Total Size:               2.0 TB (2000348512256 Bytes) (exactly 3906930688 512-Byte-Blocks)
   Volume Free Space:        Not applicable (no file system)
   *Device Block Size:        4096 Bytes*

   Read-Only Media:          No
   Read-Only Volume:         Not applicable (no file system)
   Ejectable:                Yes

   Whole:                    Yes
   Internal:                 No
   OS 9 Drivers:             No
   Low Level Format:         Not supported
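
If you have a stack of drives to check, a quick shell loop (just a sketch; 
substitute your own disk identifiers) will pull out only the block-size 
lines:

MacPro:~ bump$ for d in 4 5 6 7 8 9 10 11; do echo "disk$d:"; diskutil info /dev/disk$d | grep "Device Block Size"; done

On my setup, the drive reporting 4096 Bytes is the one zpool create fails 
on.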

Steve


On Monday, February 24, 2014 3:45:09 PM UTC-7, Bjoern Kahl wrote:
>
>
> On 24.02.14 at 22:35, Gregg Wonderly wrote: 
> > What about doing it without the partitions?  I thought that it was 
> > best to use the whole disk anyway. 
> > 
> > sudo zpool create -o ashift=12 Data raidz2 disk4 disk5 disk6 disk7 
> > disk8 disk9 disk10 spare disk11 
>
>  IIRC, the recommended way on Mac OS X is to use a partition, and to 
>  create it using diskutil.  That way one meets the expectations of Mac 
>  OS X's disk management.  One _can_ go the whole-disk route, but it is 
>  not the default method for MacZFS-74.x. 
>
>  In any case, I doubt that using whole disk access will help. 
>
>  @ m...@saudette.net : 
>
>  Can you check if there is anything MacZFS-related in 
>  /var/log/kernel.log or /var/log/system.log around your attempt 
>  to create the raidz2?  You may need to try "syslog -k Sender kernel | 
>  grep -5 -e zfs -e ZFS" if no /var/log/kernel.log exists on your system. 
>
>
>
>  Björn 
>
>
> > On 2/24/2014 11:09 AM, m...@saudette.net wrote: 
> >> *Summary:*  I'm trying to create a pool with my iStoragePro 
> >> enclosure with 8x2TB connected to an ATTO ExpressSAS R380 RAID 
> >> card, but it always fails with an I/O error: 
> >> 
> >> sudo zpool create -o ashift=12 Data raidz2 disk4s2 disk5s2 disk6s2 disk7s2 disk8s2 disk9s2 disk10s2 spare disk11s2 
> >> "cannot create 'Data': I/O error" 
> >> 
> >> Could be related to this old topic 
> >> <https://groups.google.com/forum/#%21topic/zfs-macos/kZQTiICDZR0> 
> >> and this closed (invalid) bug 
> >> <https://code.google.com/p/maczfs/issues/detail?id=107>. 
> >> 
> >> 
> >> *Details:* 
> >> 
> >> I'm running MacZFS-74.3.3 on Mountain Lion 10.8.5. I've created a 
> >> few other pools on this system already using internal drives, so 
> >> I know my install is good. 
> >> 
> >> The hardware is a MacPro 2,1 8-core, 32GB RAM with an 
> >> iStoragePro enclosure with 8x2TB connected to an ATTO ExpressSAS 
> >> R380 RAID card. I've been running this setup as a RAID-6 for over 
> >> a year without a problem. 
> >> 
> >> I wanted to convert to a RAIDZ2, so I switched the enclosure to 
> >> JBOD.  I can see all 8 disks and format them individually with Disk 
> >> Utility, so everything is working there, but if I try to create a 
> >> pool I get "cannot create 'Data': I/O error": 
> >> 
> >> sudo zpool create -o ashift=12 Data raidz2 disk4s2 disk5s2 disk6s2 disk7s2 disk8s2 disk9s2 disk10s2 spare disk11s2 
> >> "cannot create 'Data': I/O error" 
> >> 
> >> I've tried the following, without success... 
> >> 
> >> * adding /dev/ before all the disks 
> >> * creating a pool with just a single disk 
> >> * creating a mirror with just two of the disks 
> >> * not using the ashift parameter 
> >> * picking a different pool name (tank, puddle, etc.) 
> >> 
> >> 
> >> Anyone have any ideas? 
> >> 
> >> Regards, Steve 
> >> 
> >> 
> >> MacPro:~ bump$ diskutil list 
> >> /dev/disk0 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *120.0 GB   disk0 
> >>    1:                        EFI                         209.7 MB   disk0s1 
> >>    2:                  Apple_HFS Macintosh HD            119.2 GB   disk0s2 
> >> /dev/disk1 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *3.0 TB     disk1 
> >>    1:                        EFI                         209.7 MB   disk1s1 
> >>    2:                        ZFS pool                    3.0 TB     disk1s2 
> >> /dev/disk2 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *3.0 TB     disk2 
> >>    1:                        EFI                         209.7 MB   disk2s1 
> >>    2:                        ZFS pool                    3.0 TB     disk2s2 
> >> /dev/disk3 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *3.0 TB     disk3 
> >>    1:                        EFI                         209.7 MB   disk3s1 
> >>    2:                        ZFS pool                    3.0 TB     disk3s2 
> >> /dev/disk4 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *2.0 TB     disk4 
> >>    1:                        EFI                         314.6 MB   disk4s1 
> >>    2:                        ZFS                         2.0 TB     disk4s2 
> >> /dev/disk5 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *2.0 TB     disk5 
> >>    1:                        EFI                         314.6 MB   disk5s1 
> >>    2:                        ZFS                         2.0 TB     disk5s2 
> >> /dev/disk6 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *2.0 TB     disk6 
> >>    1:                        EFI                         314.6 MB   disk6s1 
> >>    2:                        ZFS                         2.0 TB     disk6s2 
> >> /dev/disk7 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *2.0 TB     disk7 
> >>    1:                        EFI                         314.6 MB   disk7s1 
> >>    2:                        ZFS                         2.0 TB     disk7s2 
> >> /dev/disk8 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *2.0 TB     disk8 
> >>    1:                        EFI                         314.6 MB   disk8s1 
> >>    2:                        ZFS                         2.0 TB     disk8s2 
> >> /dev/disk9 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *2.0 TB     disk9 
> >>    1:                        EFI                         314.6 MB   disk9s1 
> >>    2:                        ZFS                         2.0 TB     disk9s2 
> >> /dev/disk10 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *2.0 TB     disk10 
> >>    1:                        EFI                         314.6 MB   disk10s1 
> >>    2:                        ZFS                         2.0 TB     disk10s2 
> >> /dev/disk11 
> >>    #:                       TYPE NAME                    SIZE       IDENTIFIER 
> >>    0:      GUID_partition_scheme                        *2.0 TB     disk11 
> >>    1:                        EFI                         314.6 MB   disk11s1 
> >>    2:                        ZFS                         2.0 TB     disk11s2 
> >> 
> >> 
> >> MacPro:~ bump$ zpool status -v 
> >>   pool: pool 
> >>  state: ONLINE 
> >>  scrub: none requested 
> >> config: 
> >> 
> >>         NAME        STATE     READ WRITE CKSUM 
> >>         pool        ONLINE       0     0     0 
> >>           disk2s2   ONLINE       0     0     0 
> >>           disk3s2   ONLINE       0     0     0 
> >>           disk1s2   ONLINE       0     0     0 
> >> 
> >> errors: No known data errors 
>
>
> -- 
> |     Bjoern Kahl   +++   Siegburg   +++    Germany     | 
> | "googlelogin@-my-domain-"   +++   www.bjoern-kahl.de  | 
> | Languages: German, English, Ancient Latin (a bit :-)) | 
>
