On 23.09.15 at 10:51, Martin Truhlář wrote:
Tests revealed that the problem is somewhere in the disk array itself. Write 
performance of a disk connected directly (via iSCSI) to KVM is poor as well, and 
even write performance measured on OmniOS itself is very poor. So the loop is 
tightening, but there are still a lot of possible culprits.
I strove to use professional hardware (disks included), so I would look for the 
error in the software setup first. Do you have any ideas where to search first 
(and second, third...)?
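(For reference, a quick local sequential-write check on the pool itself can look roughly like the following; the /dpool mountpoint and the file name are only illustrative, zeroes may be compressed away if compression is enabled on the dataset, and the ARC can absorb part of an asynchronous write, so treat the result as a rough ballpark only:

   # write 4 GB sequentially to a scratch file on the pool, then clean up
   time dd if=/dev/zero of=/dpool/ddtest bs=1048576 count=4096
   rm /dpool/ddtest
)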

FYI, mirror-5 was added recently to the running pool.

pool: dpool
  state: ONLINE
   scan: scrub repaired 0 in 5h33m with 0 errors on Sun Sep 20 00:33:15 2015
config:

        NAME                       STATE     READ WRITE CKSUM      CAP       Product /napp-it   IOstat mess
        dpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c1t50014EE00400FA16d0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE2B40F14DBd0  ONLINE       0     0     0      1 TB      WDC WD1003FBYX-0   S:0 H:0 T:0
          mirror-1                 ONLINE       0     0     0
            c1t50014EE05950B131d0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE2B5E5A6B8d0  ONLINE       0     0     0      1 TB      WDC WD1003FBYZ-0   S:0 H:0 T:0
          mirror-2                 ONLINE       0     0     0
            c1t50014EE05958C51Bd0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE0595617ACd0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
          mirror-3                 ONLINE       0     0     0
            c1t50014EE0AEAE7540d0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE0AEAE9B65d0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
          mirror-5                 ONLINE       0     0     0
            c1t50014EE0AEABB8E7d0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
            c1t50014EE0AEB44327d0  ONLINE       0     0     0      1 TB      WDC WD1002F9YZ-0   S:0 H:0 T:0
        logs
          mirror-4                 ONLINE       0     0     0
            c1t55CD2E404B88ABE1d0  ONLINE       0     0     0      120 GB    INTEL SSDSC2BW12   S:0 H:0 T:0
            c1t55CD2E404B88E4CFd0  ONLINE       0     0     0      120 GB    INTEL SSDSC2BW12   S:0 H:0 T:0
        cache
          c1t55CD2E4000339A59d0    ONLINE       0     0     0      180 GB    INTEL SSDSC2BW18   S:0 H:0 T:0
        spares
          c2t2d0                   AVAIL                          1 TB      WDC WD10EFRX-68F   S:0 H:0 T:0

errors: No known data errors

Martin


-----Original Message-----
From: Dan McDonald [mailto:dan...@omniti.com]
Sent: Wednesday, September 16, 2015 1:51 PM
To: Martin Truhlář
Cc: omnios-discuss@lists.omniti.com; Dan McDonald
Subject: Re: [OmniOS-discuss] iSCSI poor write performance


On Sep 16, 2015, at 4:04 AM, Martin Truhlář <martin.truh...@archcon.cz> wrote:

Yes, I'm aware that the problem can be hidden in many places.
MTU is 1500. All NICs and their setup are included in this email.

Start by making your 10GigE network use 9000 MTU.  You'll need to configure 
this on both ends (is this directly-attached 10GigE?  Or over a switch?).
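(On the OmniOS side, that would look roughly like this; the link name ixgbe0 is only an assumption, check dladm show-link for the actual 10GigE link, and the IP interface on it may need to be unplumbed before the MTU can be changed:

   dladm show-link
   dladm set-linkprop -p mtu=9000 ixgbe0

The initiator host and any switch ports in the path need the same 9000-byte MTU, otherwise jumbo frames will not make it through end to end.)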

Dan

To understand what might be going on with your zpool, I'd monitor the disks using iostat -xme 5 and keep an eye on the error counters and svc_t. Just today I had an issue where one of my OmniOS boxes showed incredibly high svc_t for all of its zpools, although the drives themselves showed only moderate values. The impact was a very high load on the initiators that were connected to the targets exported from those zpools.
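(Roughly like this, with dpool being the pool name from the zpool status above:

   # per-disk latency (svc_t) and error counters, refreshed every 5 seconds
   iostat -xme 5
   # per-vdev view of the same pool, to spot a single slow mirror
   zpool iostat -v dpool 5
)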

As I couldn't figure out what was going on, I decided to reboot that box, and afterwards things returned to normal. Luckily, it was only one side of an ASM mirror, so bouncing the box didn't matter.

Also, when you say that mirror-5 has been added recently: how is the data spread across the vdevs? If the other vdevs were already quite full, that could also lead to significant performance issues.
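(A quick way to check that is the per-vdev capacity listing, for example:

   zpool list -v dpool

If the older mirrors are close to full while mirror-5 is still mostly empty, ZFS will steer most new writes to the single new mirror, which can noticeably limit write throughput.)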

In any case, you will need to get the performance of your zpools straight first, before even beginning to think about how to tweak the performance over the network.

Cheers,
stephan