Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-23 Thread Hanno Hirschberger

Hi Martin,

On 23.09.2015 10:51, Martin Truhlář wrote:

Tests revealed that the problem is somewhere in the disk array itself.


Are you familiar with the ashift problem on 4K drives? My best guess 
would be that the 1 TB WD drives are emulating a block size of 512 bytes 
while using 4K sectors internally. OmniOS then uses an ashift value of 9 
and aligns the data to 512-byte sectors. This slows the whole pool down; 
I had the same problem before. The ashift value has to be 12 on 4K drives!


Try the command 'zdb' to gather the values for your drives. Look for 
'ashift: 9' or 'ashift: 12'.
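
To see which ashift each vdev got, a grep over the zdb output is enough. A minimal, self-contained sketch; the `zdb_excerpt` variable below is a hypothetical sample standing in for real `zdb` output, so the pipeline can be shown end to end:

```shell
# On the OmniOS box itself you would simply run:  zdb | grep ashift
# Here a hypothetical excerpt of zdb output stands in for the real thing:
zdb_excerpt="
        children[0]:
            type: 'mirror'
            ashift: 9
        children[5]:
            type: 'mirror'
            ashift: 12
"
# ashift: 9 means 512-byte alignment, ashift: 12 means 4 KiB alignment
printf '%s\n' "$zdb_excerpt" | grep 'ashift'
```

Any mirror of 4K drives that reports `ashift: 9` is the suspect.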


Regards,

Hanno
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-23 Thread Steffen Wagner
Hi Hanno,

how do you calculate the best ashift value?

Thanks,
Steffen
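
For reference, ashift is simply the base-2 logarithm of the sector size the vdev should align to: 2^9 = 512 bytes, 2^12 = 4096 bytes. A minimal sketch of the calculation (the `ashift_for` helper is made up here for illustration):

```shell
# ashift = log2(sector size in bytes): 512 -> 9, 4096 -> 12
ashift_for() {
  size=$1
  shift_val=0
  while [ "$size" -gt 1 ]; do
    size=$((size / 2))
    shift_val=$((shift_val + 1))
  done
  echo "$shift_val"
}

ashift_for 512   # -> 9
ashift_for 4096  # -> 12
```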

-Original Message-
From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf 
Of Hanno Hirschberger
Sent: Wednesday, September 23, 2015 2:43 PM
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] iSCSI poor write performance

Hi Martin,

On 23.09.2015 10:51, Martin Truhlář wrote:
> Tests revealed that the problem is somewhere in the disk array itself.

Are you familiar with the ashift problem on 4K drives? My best guess 
would be that the 1 TB WD drives are emulating a block size of 512 bytes 
while using 4K sectors internally. OmniOS then uses an ashift value of 9 
and aligns the data to 512-byte sectors. This slows the whole pool down; 
I had the same problem before. The ashift value has to be 12 on 4K drives!

Try the command 'zdb' to gather the values for your drives. Look for 
'ashift: 9' or 'ashift: 12'.

Regards,

Hanno

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-23 Thread Michael Rasmussen
 path: '/dev/dsk/c1t50014EE0AEAE9B65d0s0'
>devid: 'id1,sd@n50014ee0aeae9b65/a'
>phys_path: '/scsi_vhci/disk@g50014ee0aeae9b65:a'
>whole_disk: 1
>DTL: 491
>create_txg: 119
>children[4]:
>type: 'mirror'
>id: 4
>guid: 13450996153705674574
>metaslab_array: 45
>metaslab_shift: 30
>ashift: 9
>asize: 120020795392
>is_log: 1
>create_txg: 172
>children[0]:
>type: 'disk'
>id: 0
>guid: 642840549260709901
>path: '/dev/dsk/c1t55CD2E404B88ABE1d0s0'
>devid: 'id1,sd@n55cd2e404b88abe1/a'
>phys_path: '/scsi_vhci/disk@g55cd2e404b88abe1:a'
>whole_disk: 1
>DTL: 494
>create_txg: 172
>children[1]:
>type: 'disk'
>id: 1
>guid: 17473204952243782915
>path: '/dev/dsk/c1t55CD2E404B88E4CFd0s0'
>devid: 'id1,sd@n55cd2e404b88e4cf/a'
>phys_path: '/scsi_vhci/disk@g55cd2e404b88e4cf:a'
>whole_disk: 1
>DTL: 493
>create_txg: 172
>children[5]:
>type: 'mirror'
>id: 5
>guid: 6461803899340698053
>metaslab_array: 520
>metaslab_shift: 33
>ashift: 12
>asize: 1000191557632
>is_log: 0
>create_txg: 422833
>children[0]:
>type: 'disk'
>id: 0
>guid: 15790186799979059305
>path: '/dev/dsk/c1t50014EE0AEABB8E7d0s0'
>devid: 'id1,sd@n50014ee0aeabb8e7/a'
>phys_path: '/scsi_vhci/disk@g50014ee0aeabb8e7:a'
>whole_disk: 1
>create_txg: 422833
>children[1]:
>type: 'disk'
>id: 1
>guid: 3033691275784652782
>path: '/dev/dsk/c1t50014EE0AEB44327d0s0'
>devid: 'id1,sd@n50014ee0aeb44327/a'
>phys_path: '/scsi_vhci/disk@g50014ee0aeb44327:a'
>whole_disk: 1
>create_txg: 422833
>features_for_read:
>com.delphix:hole_birth
>com.delphix:embedded_data
>
>
>-Original Message-
>From: Hanno Hirschberger [mailto:hannohirschber...@googlemail.com] 
>Sent: Wednesday, September 23, 2015 2:43 PM
>To: omnios-discuss@lists.omniti.com
>Subject: Re: [OmniOS-discuss] iSCSI poor write performance
>
>Hi Martin,
>
>On 23.09.2015 10:51, Martin Truhlář wrote:
>> Tests revealed that the problem is somewhere in the disk array itself.
>
>Are you familiar with the ashift problem on 4K drives? My best guess
>would be that the 1 TB WD drives are emulating a block size of 512
>bytes while using 4K sectors internally. OmniOS then uses an ashift
>value of 9 and aligns the data to 512-byte sectors. This slows the
>whole pool down; I had the same problem before. The ashift value has
>to be 12 on 4K drives!
>
>Try the command 'zdb' to gather the values for your drives. Look for
>'ashift: 9' or 'ashift: 12'.
>
>Regards,
>
>Hanno
>___
>OmniOS-discuss mailing list
>OmniOS-discuss@lists.omniti.com
>http://lists.omniti.com/mailman/listinfo/omnios-discuss

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.



___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-23 Thread Guenther Alka

Poor write performance is often related to sync writes.
Enable the write-back cache for your logical units (and disable the ZFS 
sync property on the filesystem for file-based LUs), then redo some 
performance tests.
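
On an illumos-based target both knobs live in stmfadm and zfs. A sketch, assuming a file-based LU backed by a filesystem named dpool/iscsi (both the LU GUID and the dataset name are hypothetical); note that sync=disabled trades crash consistency for speed:

```shell
# List logical units and their current settings (look at the
# "Writeback Cache" line in the verbose output)
stmfadm list-lu -v

# Enable the write-back cache on a LU. The property is wcd, short for
# "write cache disabled", so wcd=false turns the write-back cache ON.
# The GUID below is a hypothetical placeholder.
stmfadm modify-lu -p wcd=false 600144F000000000000000000000CAFE

# For file-based LUs, additionally disable synchronous writes on the
# backing filesystem. WARNING: a power loss or crash can then drop the
# last few seconds of acknowledged writes.
zfs set sync=disabled dpool/iscsi
```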


Gea


On 23.09.2015 at 10:51, Martin Truhlář wrote:

Tests revealed that the problem is somewhere in the disk array itself. Write 
performance of a disk connected directly (via iSCSI) to KVM is poor as well, 
and even write performance measured on OmniOS itself is very poor. So the loop 
is tightening, but there are still a lot of possible causes left.
I strove to use professional hardware (disks included), so I would look for 
the error in the software setup first. Do you have any ideas where to search 
first (and second, third...)?

FYI, mirror-5 was added to the running pool recently.

pool: dpool
 state: ONLINE
  scan: scrub repaired 0 in 5h33m with 0 errors on Sun Sep 20 00:33:15 2015
config:

        NAME                       STATE     READ WRITE CKSUM  CAP     Product /napp-it  IOstat mess
        dpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c1t50014EE00400FA16d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B40F14DBd0  ONLINE       0     0     0  1 TB    WDC WD1003FBYX-0  S:0 H:0 T:0
          mirror-1                 ONLINE       0     0     0
            c1t50014EE05950B131d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B5E5A6B8d0  ONLINE       0     0     0  1 TB    WDC WD1003FBYZ-0  S:0 H:0 T:0
          mirror-2                 ONLINE       0     0     0
            c1t50014EE05958C51Bd0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0595617ACd0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
          mirror-3                 ONLINE       0     0     0
            c1t50014EE0AEAE7540d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0AEAE9B65d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
          mirror-5                 ONLINE       0     0     0
            c1t50014EE0AEABB8E7d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0AEB44327d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
        logs
          mirror-4                 ONLINE       0     0     0
            c1t55CD2E404B88ABE1d0  ONLINE       0     0     0  120 GB  INTEL SSDSC2BW12  S:0 H:0 T:0
            c1t55CD2E404B88E4CFd0  ONLINE       0     0     0  120 GB  INTEL SSDSC2BW12  S:0 H:0 T:0
        cache
          c1t55CD2E4000339A59d0    ONLINE       0     0     0  180 GB  INTEL SSDSC2BW18  S:0 H:0 T:0
        spares
          c2t2d0                   AVAIL                      1 TB    WDC WD10EFRX-68F  S:0 H:0 T:0

errors: No known data errors

Martin


-Original Message-
From: Dan McDonald [mailto:dan...@omniti.com]
Sent: Wednesday, September 16, 2015 1:51 PM
To: Martin Truhlář
Cc: omnios-discuss@lists.omniti.com; Dan McDonald
Subject: Re: [OmniOS-discuss] iSCSI poor write performance



On Sep 16, 2015, at 4:04 AM, Martin Truhlář <martin.truh...@archcon.cz> wrote:

Yes, I'm aware that the problem can be hidden in many places.
The MTU is 1500. All NICs and their settings are attached to this email.

Start by making your 10GigE network use 9000 MTU.  You'll need to configure 
this on both ends (is this directly-attached 10GigE?  Or over a switch?).
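
On the OmniOS side the MTU is a dladm link property. A sketch, assuming the 10GbE NIC shows up as ixgbe0 (the link name is an assumption; check `dladm show-phys`):

```shell
# Check the current MTU of the 10GbE link
dladm show-linkprop -p mtu ixgbe0

# Raise it to jumbo frames; the IP interface plumbed on the link
# usually has to be taken down before the link MTU can change
dladm set-linkprop -p mtu=9000 ixgbe0

# The KVM host's NIC and every switch port in the path must also be
# set to MTU 9000, or large frames will be silently dropped.
```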

Dan

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-23 Thread Stephan Budach

On 23.09.15 at 10:51, Martin Truhlář wrote:

Tests revealed that the problem is somewhere in the disk array itself. Write 
performance of a disk connected directly (via iSCSI) to KVM is poor as well, 
and even write performance measured on OmniOS itself is very poor. So the loop 
is tightening, but there are still a lot of possible causes left.
I strove to use professional hardware (disks included), so I would look for 
the error in the software setup first. Do you have any ideas where to search 
first (and second, third...)?

FYI, mirror-5 was added to the running pool recently.

pool: dpool
 state: ONLINE
  scan: scrub repaired 0 in 5h33m with 0 errors on Sun Sep 20 00:33:15 2015
config:

        NAME                       STATE     READ WRITE CKSUM  CAP     Product /napp-it  IOstat mess
        dpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c1t50014EE00400FA16d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B40F14DBd0  ONLINE       0     0     0  1 TB    WDC WD1003FBYX-0  S:0 H:0 T:0
          mirror-1                 ONLINE       0     0     0
            c1t50014EE05950B131d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B5E5A6B8d0  ONLINE       0     0     0  1 TB    WDC WD1003FBYZ-0  S:0 H:0 T:0
          mirror-2                 ONLINE       0     0     0
            c1t50014EE05958C51Bd0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0595617ACd0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
          mirror-3                 ONLINE       0     0     0
            c1t50014EE0AEAE7540d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0AEAE9B65d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
          mirror-5                 ONLINE       0     0     0
            c1t50014EE0AEABB8E7d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0AEB44327d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
        logs
          mirror-4                 ONLINE       0     0     0
            c1t55CD2E404B88ABE1d0  ONLINE       0     0     0  120 GB  INTEL SSDSC2BW12  S:0 H:0 T:0
            c1t55CD2E404B88E4CFd0  ONLINE       0     0     0  120 GB  INTEL SSDSC2BW12  S:0 H:0 T:0
        cache
          c1t55CD2E4000339A59d0    ONLINE       0     0     0  180 GB  INTEL SSDSC2BW18  S:0 H:0 T:0
        spares
          c2t2d0                   AVAIL                      1 TB    WDC WD10EFRX-68F  S:0 H:0 T:0

errors: No known data errors

Martin


-Original Message-
From: Dan McDonald [mailto:dan...@omniti.com]
Sent: Wednesday, September 16, 2015 1:51 PM
To: Martin Truhlář
Cc: omnios-discuss@lists.omniti.com; Dan McDonald
Subject: Re: [OmniOS-discuss] iSCSI poor write performance



On Sep 16, 2015, at 4:04 AM, Martin Truhlář <martin.truh...@archcon.cz> wrote:

Yes, I'm aware that the problem can be hidden in many places.
The MTU is 1500. All NICs and their settings are attached to this email.

Start by making your 10GigE network use 9000 MTU.  You'll need to configure 
this on both ends (is this directly-attached 10GigE?  Or over a switch?).

Dan

To understand what might be going on with your zpool, I'd monitor the 
disks using iostat -xme 5 and keep an eye on the errors and svc_t. Just 
today I had an issue where the zpools on one of my OmniOS boxes showed 
incredible svc_t values, although the drives themselves showed only 
moderate ones. The impact was a very high load on the initiators that 
were connected to the targets exported from those zpools.
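
The monitoring loop described above boils down to a single command (flags as given in the mail; exact column names vary slightly between illumos releases):

```shell
# Extended per-device statistics with error counters, refreshed
# every 5 seconds
iostat -xme 5

# Watch the svc_t column (service time in milliseconds) and the error
# counters: one slow or failing disk drags down the whole mirror vdev
# it sits in, and ZFS won't necessarily flag it as faulted.
```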


As I couldn't figure out what was going on, I decided to reboot that box, 
and afterwards things returned to normal. Luckily, this was only one side 
of an ASM mirror, so bouncing the box didn't matter.


Also, when you say that mirror-5 was added recently: how is the data 
spread across the vdevs? If the other vdevs were already quite full, 
then that could also lead to significant performance issues.
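
A quick way to check how I/O and capacity are spread across the vdevs (pool name taken from the status output earlier in the thread):

```shell
# Per-vdev capacity and I/O breakdown, refreshed every 5 seconds.
# A freshly added, nearly empty mirror will absorb a disproportionate
# share of new writes while the older, fuller vdevs contribute little.
zpool iostat -v dpool 5
```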


In any case, you will need to get the performance of your zpools 
straight first, before even beginning to think about how to tweak 
performance over the network.


Cheers,
stephan
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-23 Thread Martin Truhlář
'
devid: 'id1,sd@n55cd2e404b88abe1/a'
phys_path: '/scsi_vhci/disk@g55cd2e404b88abe1:a'
whole_disk: 1
DTL: 494
create_txg: 172
children[1]:
type: 'disk'
id: 1
guid: 17473204952243782915
path: '/dev/dsk/c1t55CD2E404B88E4CFd0s0'
devid: 'id1,sd@n55cd2e404b88e4cf/a'
phys_path: '/scsi_vhci/disk@g55cd2e404b88e4cf:a'
whole_disk: 1
DTL: 493
create_txg: 172
children[5]:
type: 'mirror'
id: 5
guid: 6461803899340698053
metaslab_array: 520
metaslab_shift: 33
ashift: 12
asize: 1000191557632
is_log: 0
create_txg: 422833
children[0]:
type: 'disk'
id: 0
guid: 15790186799979059305
path: '/dev/dsk/c1t50014EE0AEABB8E7d0s0'
devid: 'id1,sd@n50014ee0aeabb8e7/a'
phys_path: '/scsi_vhci/disk@g50014ee0aeabb8e7:a'
whole_disk: 1
create_txg: 422833
children[1]:
type: 'disk'
id: 1
guid: 3033691275784652782
path: '/dev/dsk/c1t50014EE0AEB44327d0s0'
devid: 'id1,sd@n50014ee0aeb44327/a'
phys_path: '/scsi_vhci/disk@g50014ee0aeb44327:a'
whole_disk: 1
create_txg: 422833
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data


-Original Message-
From: Hanno Hirschberger [mailto:hannohirschber...@googlemail.com] 
Sent: Wednesday, September 23, 2015 2:43 PM
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] iSCSI poor write performance

Hi Martin,

On 23.09.2015 10:51, Martin Truhlář wrote:
> Tests revealed that the problem is somewhere in the disk array itself.

Are you familiar with the ashift problem on 4K drives? My best guess would be 
that the 1 TB WD drives are emulating a block size of 512 bytes while using 4K 
sectors internally. OmniOS then uses an ashift value of 9 and aligns the data 
to 512-byte sectors. This slows the whole pool down; I had the same problem 
before. The ashift value has to be 12 on 4K drives!

Try the command 'zdb' to gather the values for your drives. Look for
'ashift: 9' or 'ashift: 12'.

Regards,

Hanno
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-23 Thread Michael Rasmussen
On Wed, 23 Sep 2015 17:23:24 +0200
Stephan Budach  wrote:

> 
> In any case, you will need to get the performance of your zpools straight 
> first, before even beginning to think about how to tweak performance over 
> the network.
> 
Since his pool is comprised of vdev mirror pairs where one disk is local
and the other disk is attached via iSCSI, solving network performance is
also part of solving the pool performance.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
This fortune is encrypted -- get your decoder rings ready!


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-23 Thread Stephan Budach

On 23.09.15 at 18:59, Michael Rasmussen wrote:

On Wed, 23 Sep 2015 17:23:24 +0200
Stephan Budach  wrote:


In any case, you will need to get the performance of your zpools straight 
first, before even beginning to think about how to tweak performance over 
the network.


Since his pool is comprised of vdev mirror pairs where one disk is local
and the other disk is attached via iSCSI, solving network performance is
also part of solving the pool performance.

Huh? Where did that escape me? I don't think that the pool layout showed 
any remote disks; they all seemed to be attached to the same controller, 
didn't they? And even if that were the case, one would still start at the 
zpool and work one's way up from there, no?


Cheers,
Stephan
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-23 Thread Martin Truhlář
Tests revealed that the problem is somewhere in the disk array itself. Write 
performance of a disk connected directly (via iSCSI) to KVM is poor as well, 
and even write performance measured on OmniOS itself is very poor. So the loop 
is tightening, but there are still a lot of possible causes left.
I strove to use professional hardware (disks included), so I would look for 
the error in the software setup first. Do you have any ideas where to search 
first (and second, third...)?

FYI, mirror-5 was added to the running pool recently.

pool: dpool
 state: ONLINE
  scan: scrub repaired 0 in 5h33m with 0 errors on Sun Sep 20 00:33:15 2015
config:

        NAME                       STATE     READ WRITE CKSUM  CAP     Product /napp-it  IOstat mess
        dpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c1t50014EE00400FA16d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B40F14DBd0  ONLINE       0     0     0  1 TB    WDC WD1003FBYX-0  S:0 H:0 T:0
          mirror-1                 ONLINE       0     0     0
            c1t50014EE05950B131d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE2B5E5A6B8d0  ONLINE       0     0     0  1 TB    WDC WD1003FBYZ-0  S:0 H:0 T:0
          mirror-2                 ONLINE       0     0     0
            c1t50014EE05958C51Bd0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0595617ACd0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
          mirror-3                 ONLINE       0     0     0
            c1t50014EE0AEAE7540d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0AEAE9B65d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
          mirror-5                 ONLINE       0     0     0
            c1t50014EE0AEABB8E7d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
            c1t50014EE0AEB44327d0  ONLINE       0     0     0  1 TB    WDC WD1002F9YZ-0  S:0 H:0 T:0
        logs
          mirror-4                 ONLINE       0     0     0
            c1t55CD2E404B88ABE1d0  ONLINE       0     0     0  120 GB  INTEL SSDSC2BW12  S:0 H:0 T:0
            c1t55CD2E404B88E4CFd0  ONLINE       0     0     0  120 GB  INTEL SSDSC2BW12  S:0 H:0 T:0
        cache
          c1t55CD2E4000339A59d0    ONLINE       0     0     0  180 GB  INTEL SSDSC2BW18  S:0 H:0 T:0
        spares
          c2t2d0                   AVAIL                      1 TB    WDC WD10EFRX-68F  S:0 H:0 T:0

errors: No known data errors

Martin


-Original Message-
From: Dan McDonald [mailto:dan...@omniti.com] 
Sent: Wednesday, September 16, 2015 1:51 PM
To: Martin Truhlář
Cc: omnios-discuss@lists.omniti.com; Dan McDonald
Subject: Re: [OmniOS-discuss] iSCSI poor write performance


> On Sep 16, 2015, at 4:04 AM, Martin Truhlář <martin.truh...@archcon.cz> wrote:
> 
> Yes, I'm aware that the problem can be hidden in many places.
> The MTU is 1500. All NICs and their settings are attached to this email.

Start by making your 10GigE network use 9000 MTU.  You'll need to configure 
this on both ends (is this directly-attached 10GigE?  Or over a switch?).

Dan

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-23 Thread Michael Rasmussen
On Wed, 23 Sep 2015 19:56:26 +0200
Stephan Budach  wrote:

> Huh? Where did that escape me? I don't think that the pool layout showed
> any remote disks, they all 
Sorry, I read too hastily. I misread the phys_path:
'/scsi_vhci/disk@g50014ee00400fa16:a'

as iSCSI.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Whatever doesn't succeed in two months and a half in California will
never succeed.
-- Rev. Henry Durant, founder of the University of
California


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-16 Thread Martin Truhlář
Yes, I'm aware that the problem can be hidden in many places.
The MTU is 1500. All NICs and their settings are attached to this email.

Martin

-Original Message-
From: Dan McDonald [mailto:dan...@omniti.com] 
Sent: Wednesday, September 09, 2015 6:32 PM
To: Martin Truhlář
Cc: omnios-discuss@lists.omniti.com; Dan McDonald
Subject: Re: [OmniOS-discuss] iSCSI poor write performance


> On Sep 9, 2015, at 12:24 PM, Martin Truhlář <martin.truh...@archcon.cz> wrote:
> 
> Hello everybody,
>  
> I have a problem here that I can't get past. My Windows server runs as a 
> virtual machine under KVM. I'm using a 10GB network card. On this hardware 
> configuration I expect much better performance than I'm getting. Two less 
> important disks use the KVM cache, which improves performance a bit. But I 
> don't want to use KVM's cache for the system and database disks, and there 
> I'm getting 6 MB/s for writes. 4K writes are also slow, even with the KVM cache.


So you have Windows on KVM, and KVM is using iSCSI to speak to OmniOS?  That's 
a lot of indirection...

Question:  What's the MTU on the 10Gig Link?

Dan


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] iSCSI poor write performance

2015-09-16 Thread Dan McDonald

> On Sep 16, 2015, at 4:04 AM, Martin Truhlář  wrote:
> 
> Yes, I'm aware that the problem can be hidden in many places.
> The MTU is 1500. All NICs and their settings are attached to this email.

Start by making your 10GigE network use 9000 MTU.  You'll need to configure 
this on both ends (is this directly-attached 10GigE?  Or over a switch?).

Dan

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss