[zfs-discuss] poor CIFS and NFS performance

2012-12-30 Thread Eugen Leitl

Happy $holidays,

I have a pool of 8x ST31000340AS drives on an LSI 8-port adapter,
configured as raidz3 (no compression or dedup), with reasonable
bonnie++ 1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU and
291 MByte/s Seq-Read @ 53% CPU. It scrubs at 230+ MByte/s with
reasonable system load. No hybrid pools yet. This is the latest
beta napp-it on an OpenIndiana 151a5 server, living on a dedicated
64 GByte SSD.

The system is an MSI E350DM-E33 with 8 GByte PC1333 DDR3 memory,
no ECC. All the systems have Intel NICs with mtu 9000 enabled,
including all switches in the path.

My problem is pretty poor network throughput. An NFS or CIFS mount
on Ubuntu 12.04 64-bit (mtu 9000) reads at about 23 MByte/s.
Windows 7 64-bit (also jumbo frames) reads at about 65 MByte/s.
The highest transfer speed on Windows just touches 90 MByte/s
before falling back to the usual 60-70 MByte/s.

I kinda can live with the above values, but I have a feeling the
setup should be able to saturate GBit Ethernet with large file
transfers, especially on Linux (20 MByte/s is nothing to write
home about).
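(For scale: GBit Ethernet is at most 125 MByte/s on the wire, and
after TCP/IP and Ethernet framing overhead roughly 110-118 MByte/s
of payload is realistic, so the pool's ~291 MByte/s local sequential
read should leave plenty of headroom.)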

Does anyone have any suggestions on how to debug/optimize
throughput?

Thanks, and happy 2013.

P.S. Not sure whether this is pathological, but the system does
produce occasional soft errors, e.g. these from dmesg:

Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0    Error Block: 0
Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA    Serial Number:
Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (vendor unique code 0x0), ASCQ: 0x1d, FRU: 0x0
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g5000c50009c72c48 (sd9):
Dec 30 17:45:01 oizfs   Error for Command: undecoded cmd 0xa1    Error Level: Recovered
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0    Error Block: 0
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA    Serial Number:
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (vendor unique code 0x0), ASCQ: 0x1d, FRU: 0x0
Dec 30 17:45:01 oizfs pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata) instance 0 irq 0xe vector 0x45 ioapic 0x3 intin 0xe is bound to cpu 0
Dec 30 17:45:01 oizfs pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata) instance 0 irq 0xe vector 0x45 ioapic 0x3 intin 0xe is bound to cpu 1
Dec 30 17:45:01 oizfs pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata) instance 0 irq 0xe vector 0x45 ioapic 0x3 intin 0xe is bound to cpu 0
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g5000c50009c73968 (sd4):
Dec 30 17:45:01 oizfs   Error for Command: undecoded cmd 0xa1    Error Level: Recovered
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0    Error Block: 0
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA    Serial Number:
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (vendor unique code 0x0), ASCQ: 0x1d, FRU: 0x0
Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g5000c500098be9dd (sd10):
Dec 30 17:45:03 oizfs   Error for Command: undecoded cmd 0xa1    Error Level: Recovered
Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0    Error Block: 0
Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA    Serial Number:
Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (vendor unique code 0x0), ASCQ: 0x1d, FRU: 0x0
Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci1462,7720@11/disk@3,0 (sd8):
Dec 30 17:45:04 oizfs   Error for Command: undecoded cmd 0xa1    Error Level: Recovered
Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0    Error Block: 0
Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA    Serial Number:
Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (no additional sense info), ASCQ: 0x0, FRU: 0x0

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] poor CIFS and NFS performance

2012-12-30 Thread Richard Elling
On Dec 30, 2012, at 9:02 AM, Eugen Leitl eu...@leitl.org wrote:

 
 Happy $holidays,
 
 I have a pool of 8x ST31000340AS drives on an LSI 8-port adapter,
 configured as raidz3 (no compression or dedup), with reasonable
 bonnie++ 1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU and
 291 MByte/s Seq-Read @ 53% CPU. It scrubs at 230+ MByte/s with
 reasonable system load. No hybrid pools yet. This is the latest
 beta napp-it on an OpenIndiana 151a5 server, living on a dedicated
 64 GByte SSD.
 
 The system is an MSI E350DM-E33 with 8 GByte PC1333 DDR3 memory,
 no ECC. All the systems have Intel NICs with mtu 9000 enabled,
 including all switches in the path.

Does it work faster with the default MTU?
Also check for retrans and errors, using the usual network performance
debugging checks.
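
A minimal first round of such checks might look like this (host and
interface names are examples -- this assumes the Intel NICs show up
as e1000g on the OpenIndiana side and eth0 on the Ubuntu side):

   # on the OpenIndiana server: TCP retransmit counters
   netstat -s -P tcp | grep -i retrans

   # link state, speed/duplex, flow control negotiation
   dladm show-ether

   # NIC error counters
   kstat -p e1000g:0 | grep -i err

   # on the Ubuntu client: NIC error/drop counters
   ethtool -S eth0 | grep -i -E 'err|drop'

   # verify jumbo frames really pass end-to-end
   # (8972 = 9000 minus 28 bytes of IP+ICMP header)
   ping -M do -s 8972 oizfs

   # raw TCP throughput with ZFS out of the picture
   iperf -s                 # on the server
   iperf -c oizfs -t 30     # on the client

If iperf already tops out near 23 MByte/s, the bottleneck is in the
network path rather than in NFS/CIFS or ZFS.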

 
 My problem is pretty poor network throughput. An NFS or CIFS mount
 on Ubuntu 12.04 64-bit (mtu 9000) reads at about 23 MByte/s.
 Windows 7 64-bit (also jumbo frames) reads at about 65 MByte/s.
 The highest transfer speed on Windows just touches 90 MByte/s
 before falling back to the usual 60-70 MByte/s.
 
 I kinda can live with the above values, but I have a feeling the
 setup should be able to saturate GBit Ethernet with large file
 transfers, especially on Linux (20 MByte/s is nothing to write
 home about).
 
 Does anyone have any suggestions on how to debug/optimize
 throughput?
 
 Thanks, and happy 2013.
 
 P.S. Not sure whether this is pathological, but the system does
 produce occasional soft errors, e.g. these from dmesg:

More likely these are due to SMART commands not being properly handled
for SATA devices. They are harmless.
 -- richard
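
(Opcode 0xa1 is the ATA PASS-THROUGH(12) command that SMART tooling
issues through the SCSI layer, which fits the explanation above: the
sd driver simply doesn't decode it. To confirm the drives themselves
are healthy -- assuming smartmontools is installed, and with the
device name adjusted to match -- something like

   smartctl -d sat -a /dev/rdsk/c3t0d0

will print the SMART attributes; reallocated and pending sector
counts are the ones to watch.)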

 
 Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0
  Error Block: 0
 Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA   
  Serial Number:  
 Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
 Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (vendor 
 unique code 0x0), ASCQ: 0x1d, FRU: 0x0
 Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.warning] WARNING: 
 /scsi_vhci/disk@g5000c50009c72c48 (sd9):
 Dec 30 17:45:01 oizfs   Error for Command: undecoded cmd 0xa1Error 
 Level: Recovered
 Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0
  Error Block: 0
 Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA   
  Serial Number:  
 Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
 Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (vendor 
 unique code 0x0), ASCQ: 0x1d, FRU: 0x0
 Dec 30 17:45:01 oizfs pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata) 
 instance 0 irq 0xe vector 0x45 ioapic 0x3 intin 0xe is bound to cpu 0
 Dec 30 17:45:01 oizfs pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata) 
 instance 0 irq 0xe vector 0x45 ioapic 0x3 intin 0xe is bound to cpu 1
 Dec 30 17:45:01 oizfs pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata) 
 instance 0 irq 0xe vector 0x45 ioapic 0x3 intin 0xe is bound to cpu 0
 Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.warning] WARNING: 
 /scsi_vhci/disk@g5000c50009c73968 (sd4):
 Dec 30 17:45:01 oizfs   Error for Command: undecoded cmd 0xa1Error 
 Level: Recovered
 Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0
  Error Block: 0
 Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA   
  Serial Number:  
 Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
 Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (vendor 
 unique code 0x0), ASCQ: 0x1d, FRU: 0x0
 Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.warning] WARNING: 
 /scsi_vhci/disk@g5000c500098be9dd (sd10):
 Dec 30 17:45:03 oizfs   Error for Command: undecoded cmd 0xa1Error 
 Level: Recovered
 Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0
  Error Block: 0
 Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA   
  Serial Number:  
 Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
 Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (vendor 
 unique code 0x0), ASCQ: 0x1d, FRU: 0x0
 Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci1462,7720@11/disk@3,0 (sd8):
 Dec 30 17:45:04 oizfs   Error for Command: undecoded cmd 0xa1Error 
 Level: Recovered
 Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0
  Error Block: 0
 Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA   
  Serial Number:  
 Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
 Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice]  

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-30 Thread cindy swearingen
Existing Solaris 10 releases are not impacted. S10u11 isn't released
yet, so I think we can assume that this upcoming Solaris 10 release
will include a preventative fix.

Thanks, Cindy

On Thu, Dec 27, 2012 at 11:11 PM, Andras Spitzer wsen...@gmail.com wrote:

 Josh,

 You mention that Oracle is preparing patches for both Solaris 11.2 and
 S10u11; does that mean that the bug exists in Solaris 10 as well? I may be
 wrong, but Cindy mentioned the bug is only in Solaris 11.

 Regards,
 sendai





[zfs-discuss] Expanding a raidz vdev in zpool

2012-12-30 Thread Curtis Schiewek
Hello All,

I have a zpool that consists of 2 raidz vdevs (raidz1-0 and raidz1-1). The
first vdev is 4 1.5TB drives. The second was 4 500GB drives. I replaced the
4 500GB drives with 4 3TB drives.

I replaced them one at a time and resilvered each. Now that the process
is complete, I expected to have an extra 10TB (4*2.5TB) of raw space,
but it's still the same amount of space.

I did an export and import, which I have read might be required before
you'd see the extra space, but the extra space still hasn't appeared.

What am I missing? What can I do to get the extra space?

Thanks,

Curtis


Re: [zfs-discuss] Expanding a raidz vdev in zpool

2012-12-30 Thread Dan Swartzendruber
Did you set the autoexpand property?


Re: [zfs-discuss] Expanding a raidz vdev in zpool

2012-12-30 Thread Curtis Schiewek
I set it after I replaced and resilvered the drives, but before I did
the export/import.


On Sun, Dec 30, 2012 at 6:27 PM, Dan Swartzendruber dswa...@druber.com wrote:

 Did you set the autoexpand property?



Re: [zfs-discuss] Expanding a raidz vdev in zpool

2012-12-30 Thread Jimmy Olgeni
On 12/31/2012 00:22, Curtis Schiewek wrote:
 I did an export and import, which I have read might be required before
 you'd see the extra space, but the extra space still hasn't appeared.

Same here - I had to run zpool online -e my_pool my_device for each
device, then the new space came up.
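
In full, the sequence is something like this (pool and device names
here are placeholders):

   # let vdevs grow automatically on future replacements
   zpool set autoexpand=on my_pool

   # expand each already-replaced device in place
   zpool online -e my_pool my_device

   # verify the new capacity
   zpool list my_pool

Note that autoexpand=on only affects expansions that happen after it
is set, which is why devices replaced earlier still need the explicit
zpool online -e.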

-- 
jimmy



Re: [zfs-discuss] Expanding a raidz vdev in zpool

2012-12-30 Thread Curtis Schiewek
Thanks Jimmy, worked like a charm!


On Sun, Dec 30, 2012 at 7:07 PM, Jimmy Olgeni olg...@olgeni.com wrote:

 On 12/31/2012 00:22, Curtis Schiewek wrote:
  I did an export and import, which I have read might be required before
  you'd see the extra space, but the extra space still hasn't appeared.

 Same here - I had to run zpool online -e my_pool my_device for each
 device, then the new space came up.

 --
 jimmy

