Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Michael Rasmussen
That might explain the difference.

On October 14, 2016 12:15:42 PM GMT+02:00, Andreas Steinel 
 wrote:
>On Fri, Oct 14, 2016 at 12:08 PM, datanom.net  wrote:
>
>> On 2016-10-14 11:13, Andreas Steinel wrote:
>>>
>>> So, what was your test environment? How big was the difference?
>>>
>>> Are you running your ZFS pool on the proxmox node?
>
>
>Yes, everything local on the node itself.
>___
>pve-devel mailing list
>pve-devel@pve.proxmox.com
>http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Andreas Steinel
On Fri, Oct 14, 2016 at 12:08 PM, datanom.net  wrote:

> On 2016-10-14 11:13, Andreas Steinel wrote:
>>
>> So, what was your test environment? How big was the difference?
>>
>> Are you running your ZFS pool on the proxmox node?


Yes, everything local on the node itself.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread datanom.net

On 2016-10-14 11:13, Andreas Steinel wrote:

Hi Mir,

On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen  
wrote:

I use virtio-scsi-single exclusively because of the huge performance
gain in comparison to virtio-scsi, so I can concur with that.


I just benchmarked it on a full-SSD ZFS system of mine and got the
reverse results.
I used 4 cores, 512 MB RAM (fio 2.1.11, qd32, direct, libaio) and got
these results:

Test                  | sequential 8K | randread 4K | randrw 4K 50/50
----------------------+---------------+-------------+----------------
virtio-scsi           | 53k           | 57k         | 11k
virtio-scsi-single    | 35k           | 41k         | 11k
virtio-scsi IO/Thread | 29k           | 43k         | 11k
virtio-scsi-single IO | 29k           | 44k         | 11k


So, what was your test environment? How big was the difference?


Are you running your ZFS pool on the proxmox node?
My benchmarks were made using ZFS over iSCSI.

--
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--



This mail was virus scanned and spam checked before delivery.
This mail is also DKIM signed. See header dkim-signature.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Alexandre DERUMIER
>>So, what was your test environment? How big was the difference?

That's strange; there are technical differences between virtio-scsi &&
virtio-scsi-single.

with virtio-scsi-single you have 1 virtio-scsi controller per disk.


for iothread, you should see a difference with multiple disks in 1 VM.
This needs virtio-scsi-single, because the iothread is mapped to the controller,
not to the disk.
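
For illustration, the difference corresponds roughly to QEMU arguments like
the following (a sketch only; the exact arguments PVE generates may differ,
and the ids here are just placeholders):

  # virtio-scsi: the disks share one controller
  -device virtio-scsi-pci,id=scsihw0
  -device scsi-hd,bus=scsihw0.0,scsi-id=0,drive=drive-scsi0
  -device scsi-hd,bus=scsihw0.0,scsi-id=1,drive=drive-scsi1

  # virtio-scsi-single (+ iothread): one controller, and optionally one
  # iothread, per disk
  -object iothread,id=iothread-virtioscsi0
  -device virtio-scsi-pci,id=virtioscsi0,iothread=iothread-virtioscsi0
  -device scsi-hd,bus=virtioscsi0.0,scsi-id=0,drive=drive-scsi0
  -object iothread,id=iothread-virtioscsi1
  -device virtio-scsi-pci,id=virtioscsi1,iothread=iothread-virtioscsi1
  -device scsi-hd,bus=virtioscsi1.0,scsi-id=1,drive=drive-scsi1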



- Original message -
From: "Andreas Steinel" <a.stei...@gmail.com>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, 14 October 2016 11:13:34
Subject: Re: [pve-devel] pve-manager and disk IO monitoring

Hi Mir, 

On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen <m...@datanom.net> wrote: 
> I use virtio-scsi-single exclusively because of the huge performance 
> gain in comparison to virtio-scsi, so I can concur with that. 

I just benchmarked it on a full-SSD ZFS system of mine and got the reverse 
results. 
I used 4 cores, 512 MB RAM (fio 2.1.11, qd32, direct, libaio) and got these 
results: 

Test                  | sequential 8K | randread 4K | randrw 4K 50/50
----------------------+---------------+-------------+----------------
virtio-scsi           | 53k           | 57k         | 11k
virtio-scsi-single    | 35k           | 41k         | 11k
virtio-scsi IO/Thread | 29k           | 43k         | 11k
virtio-scsi-single IO | 29k           | 44k         | 11k


So, what was your test environment? How big was the difference? 

Best, 
LnxBil 
___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Andreas Steinel
Hi Mir,

On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen  wrote:
> I use virtio-scsi-single exclusively because of the huge performance
> gain in comparison to virtio-scsi, so I can concur with that.

I just benchmarked it on a full-SSD ZFS system of mine and got the reverse
results.
I used 4 cores, 512 MB RAM (fio 2.1.11, qd32, direct, libaio) and got these
results:

Test                  | sequential 8K | randread 4K | randrw 4K 50/50
----------------------+---------------+-------------+----------------
virtio-scsi           | 53k           | 57k         | 11k
virtio-scsi-single    | 35k           | 41k         | 11k
virtio-scsi IO/Thread | 29k           | 43k         | 11k
virtio-scsi-single IO | 29k           | 44k         | 11k
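
For anyone who wants to reproduce this, a fio job file matching the
parameters above (libaio, direct, iodepth 32) could look roughly like the
following; the file name, size and runtime are assumptions, since the exact
job files are not shown here:

  [global]
  ioengine=libaio
  direct=1
  iodepth=32
  size=3G
  runtime=60
  filename=/root/fio-testfile   ; placeholder test file inside the guest

  [seq-8k]
  rw=read
  bs=8k
  stonewall

  [randread-4k]
  rw=randread
  bs=4k
  stonewall

  [randrw-4k-5050]
  rw=randrw
  rwmixread=50
  bs=4k
  stonewall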


So, what was your test environment? How big was the difference?

Best,
LnxBil
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Michael Rasmussen
On Fri, 14 Oct 2016 07:42:38 +0200 (CEST)
Alexandre DERUMIER  wrote:

> 
> Also, currently, we have virtio-scsi-single. I don't know if a lot of users 
> already use it,
> but maybe it would be better to expose it as an option   
> scsihw:virtio-scsi,type=generic|block,x=single
> 
> ?
I use virtio-scsi-single exclusively because of the huge performance
gain in comparison to virtio-scsi, so I can concur with that.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
It's gonna be alright,
It's almost midnight,
And I've got two more bottles of wine.


pgph3ARjw3Rlx.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-13 Thread Alexandre DERUMIER
>> maybe we could add an option on scsihw ? 
>> scsihw:virtio-scsi,type=generic|block 
>> ?

>>would be great if somebody provides a patch...

Also, currently, we have virtio-scsi-single. I don't know if a lot of users 
already use it,
but maybe it would be better to expose it as an option   
scsihw:virtio-scsi,type=generic|block,x=single

?



- Original message -
From: "dietmar" <diet...@proxmox.com>
To: "datanom.net" <m...@datanom.net>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, 14 October 2016 05:50:28
Subject: Re: [pve-devel] pve-manager and disk IO monitoring

> > @Alexandre: This was for performance reasons? 
> > 
> Any decisions made yet to revert this patch? 


I want to use scsi-block by default, but the suggestion was to 
provide a way to switch back to scsi-generic. 

Alexandre suggested: 
> maybe we could add an option on scsihw ? 
> scsihw:virtio-scsi,type=generic|block 
> ? 

would be great if somebody provides a patch... 



___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-13 Thread Dietmar Maurer
> > @Alexandre: This was for performance reasons?
> > 
> Any decisions made yet to revert this patch?


I want to use scsi-block by default, but the suggestion was to
provide a way to switch back to scsi-generic.

Alexandre suggested:
> maybe we could add an option on scsihw ? scsihw:virtio-scsi,type=generic|block
>  ?

would be great if somebody provides a patch...
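
As a starting point, such a property string could be described with the
existing schema helpers along these lines (a rough sketch only, not an actual
patch; the option names and enum values are assumptions):

  use PVE::JSONSchema;

  # proposed "scsihw: virtio-scsi-pci,type=generic|block[,single=1]"
  my $scsihw_fmt = {
      controller => {
          type => 'string',
          enum => ['lsi', 'virtio-scsi-pci', 'megasas', 'pvscsi'],
          default_key => 1,
          description => "SCSI controller model.",
      },
      type => {
          type => 'string',
          enum => ['block', 'generic'],
          optional => 1,
          description => "Device type to use for (lib)iscsi volumes.",
      },
      single => {
          type => 'boolean',
          optional => 1,
          description => "One controller (and optional iothread) per disk.",
      },
  };

  my $value = 'virtio-scsi-pci,type=block,single=1';
  my $opts  = PVE::JSONSchema::parse_property_string($scsihw_fmt, $value);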

 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Alexandre DERUMIER
>>To sum it all up: I think scsi-generic should be reserved for
>>situations where you pass an HBA/RAID controller through to a VM,
>>or if the device is a scsi-CD, scsi-tape, or scsi-backplane.

maybe we could add an option on scsihw ? scsihw:virtio-scsi,type=generic|block  
?

- Original message -
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, 30 September 2016 08:48:51
Subject: Re: [pve-devel] pve-manager and disk IO monitoring

On Fri, 30 Sep 2016 07:50:11 +0200 (CEST) 
Dietmar Maurer <diet...@proxmox.com> wrote: 

> > So my question is: Why use scsi-generic instead of scsi-block when 
> > scsi-generic prevents blockstats? 
> 
> commit d454d040338a6216c8d3e5cc9623d6223476cb5a 
> Author: Alexandre Derumier <aderum...@odiso.com> 
> Date: Tue Aug 28 12:46:07 2012 +0200 
> 
> use scsi-generic by default with libiscsi 
> 
> This add scsi passthrough with libiscsi 
> 
> Signed-off-by: Alexandre Derumier <aderum...@odiso.com> 
> 
> 
> @Alexandre: This was for performance reasons? 
> 
Remember, this was 4 years ago, with many iterations and releases since. 
It could be that at that time scsi-generic gave better performance, but today 
the situation seems to have changed in favor of scsi-block. And like 
scsi-generic, scsi-block also passes through scsi-unmap, so trim 
works with scsi-block as well. 

To sum it all up: I think scsi-generic should be reserved for 
situations where you pass an HBA/RAID controller through to a VM, 
or if the device is a scsi-CD, scsi-tape, or scsi-backplane. 

-- 
Hilsen/Regards 
Michael Rasmussen 

Get my public GnuPG keys: 
michael  rasmussen  cc 
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E 
mir  datanom  net 
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C 
mir  miras  org 
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917 
-- 
/usr/games/fortune -es says: 
Delta: We're Amtrak with wings. -- David Letterman 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Alexandre DERUMIER

99.00th=[ 8768] (usec)
99.00th=[ 8] (msec)

>>So this is quite the same?

I meant the 99.5th percentiles and above:


99.50th=[13120], 99.90th=[52992], 99.95th=[103936], 99.99th=[536576] 
99.50th=[ 11],   99.90th=[ 23],   99.95th=[ 63],99.99th=[ 153] 


I know it's marginal, but I just wonder how it performs with something like 
100k iops.


- Original message -
From: "dietmar" <diet...@proxmox.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, 30 September 2016 08:14:42
Subject: Re: [pve-devel] pve-manager and disk IO monitoring

> On September 30, 2016 at 8:00 AM Alexandre DERUMIER <aderum...@odiso.com> 
> wrote: 
> 
> 
> >>Where do you see that 11% difference? 
> 
> oh, sorry, my fault, I read the wrong line... 
> Jobs: 1 (f=1): [m(1)] [100.0% done] [64339KB/16908KB/0KB /s] [15.2K/3816/0 
> iops] [eta 00m:00s] 
> Jobs: 1 (f=1): [m(1)] [100.0% done] [73928KB/19507KB/0KB /s] [17.3K/4381/0 
> iops] [eta 00m:00s] 
> 
> But check the latencies (>95th are twice lower) : 

Where do you see that? I see that 


99.00th=[ 8768] (usec) 
99.00th=[ 8] (msec) 

So this is quite the same? 

> 
> read 
> - 
> 
> block: 
> read : io=2454.9MB, bw=87501KB/s, iops=14328, runt= 28728msec 
> clat percentiles (usec): 
> | 1.00th=[ 1768], 5.00th=[ 2480], 10.00th=[ 2640], 20.00th=[ 2864], 
> | 30.00th=[ 2960], 40.00th=[ 3056], 50.00th=[ 3088], 60.00th=[ 3152], 
> | 70.00th=[ 3248], 80.00th=[ 3376], 90.00th=[ 3824], 95.00th=[ 4448], 
> | 99.00th=[ 8768], 99.50th=[13120], 99.90th=[52992], 99.95th=[103936], 
> | 99.99th=[536576] 
> 
> generic: 
> 
> read : io=2454.9MB, bw=88384KB/s, iops=14473, runt= 28441msec 
> slat (usec): min=5, max=5814, avg=10.86, stdev=21.71 
> clat (usec): min=459, max=885935, avg=3451.71, stdev=3297.21 
> lat (usec): min=526, max=885944, avg=3462.97, stdev=3297.14 
> clat percentiles (msec): 
> | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 
> | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 
> | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 
> | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 23], 99.95th=[ 63], 
> | 99.99th=[ 153] 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Michael Rasmussen
On Fri, 30 Sep 2016 07:50:11 +0200 (CEST)
Dietmar Maurer  wrote:

> > So my question is: Why use scsi-generic instead of scsi-block when
> > scsi-generic prevents blockstats?  
> 
> commit d454d040338a6216c8d3e5cc9623d6223476cb5a
> Author: Alexandre Derumier 
> Date:   Tue Aug 28 12:46:07 2012 +0200
> 
> use scsi-generic by default with libiscsi
> 
> This add scsi passthrough with libiscsi
> 
> Signed-off-by: Alexandre Derumier 
> 
> 
> @Alexandre: This was for performance reasons?
> 
Remember, this was 4 years ago, with many iterations and releases since.
It could be that at that time scsi-generic gave better performance, but today
the situation seems to have changed in favor of scsi-block. And like
scsi-generic, scsi-block also passes through scsi-unmap, so trim
works with scsi-block as well.

To sum it all up: I think scsi-generic should be reserved for
situations where you pass an HBA/RAID controller through to a VM,
or if the device is a scsi-CD, scsi-tape, or scsi-backplane.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
Delta: We're Amtrak with wings.-- David Letterman


pgpjHJkXHtNbL.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Dmitry Petuhov

30.09.2016 09:18, Dietmar Maurer wrote:
> This is not really true - scsi-block and scsi-generic seem to be about the
> same speed.

So we could use iscsi-inq or iscsi-readcapacity16 to see what the volume
actually is (a block device or, say, a streamer) and select the appropriate
device type for qemu.

Also, with iscsi-readcapacity16 we could read the physical and logical block
sizes to pass to qemu for volumes with a non-scsi interface.
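
For example, roughly like this (the portal address and IQN below are
placeholders, and the exact options depend on the libiscsi utilities
installed):

  # query inquiry data and capacity directly with libiscsi's helper tools
  iscsi-inq iscsi://192.0.2.10/iqn.2001-04.com.example:storage.tank1/0
  iscsi-readcapacity16 iscsi://192.0.2.10/iqn.2001-04.com.example:storage.tank1/0

  # ...and, for a non-pass-through device, hand the discovered sizes to qemu:
  -device scsi-hd,drive=drive-scsi0,logical_block_size=512,physical_block_size=4096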


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Michael Rasmussen
On Fri, 30 Sep 2016 08:17:58 +0200
Michael Rasmussen  wrote:

> 
> I will run another test now.
> 
New test run. Here scsi-generic loses, but again I cannot run a clinical
test. My best guess is that if you ran a number of tests on identical
hardware and under similar conditions and averaged the results, the two
would show more or less identical performance, and therefore either one
could be a candidate, except for one crucial thing: with scsi-generic
you have no disk IO stats, which in my book disqualifies scsi-generic.

After this discovery I will change the code locally so that I use
scsi-block, since disk IO stats for VMs are important in my book.
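
For anyone wanting to do the same, the local change roughly amounts to this
in PVE::QemuServer (a sketch based on the snippet quoted elsewhere in this
thread; surrounding code omitted):

  my $devicetype = 'hd';
  if ($path =~ m/^iscsi\:\/\//) {
      $devicetype = 'block';   # was 'generic'
  }
  $device = "scsi-$devicetype ...";   # rest of the device string unchanged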

scsi-generic
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, 
iodepth=64
fio-2.1.11
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 3072MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [25688KB/6534KB/0KB /s] [6096/1549/0 iops] 
[eta 00m:00s] 
iometer: (groupid=0, jobs=1): err= 0: pid=692: Fri Sep 30 08:29:45 2016
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=2454.9MB, bw=61632KB/s, iops=10092, runt= 40786msec
slat (usec): min=5, max=4132, avg=11.21, stdev=12.77
clat (usec): min=20, max=14294K, avg=3429.47, stdev=66357.97
 lat (usec): min=181, max=14295K, avg=3441.05, stdev=66357.98
clat percentiles (usec):
 |  1.00th=[  237],  5.00th=[  290], 10.00th=[  334], 20.00th=[  430],
 | 30.00th=[  900], 40.00th=[ 2544], 50.00th=[ 2832], 60.00th=[ 3088],
 | 70.00th=[ 3312], 80.00th=[ 3472], 90.00th=[ 3824], 95.00th=[ 8896],
 | 99.00th=[20608], 99.50th=[23680], 99.90th=[36608], 99.95th=[42240],
 | 99.99th=[905216]
bw (KB  /s): min= 8261, max=159707, per=100.00%, avg=62673.16, 
stdev=38084.27
  write: io=631998KB, bw=15495KB/s, iops=2529, runt= 40786msec
slat (usec): min=6, max=22420, avg=13.61, stdev=74.39
clat (usec): min=550, max=10671K, avg=11538.39, stdev=51649.55
 lat (msec): min=1, max=10671, avg=11.55, stdev=51.65
clat percentiles (msec):
 |  1.00th=[3],  5.00th=[4], 10.00th=[4], 20.00th=[4],
 | 30.00th=[4], 40.00th=[4], 50.00th=[5], 60.00th=[   10],
 | 70.00th=[   16], 80.00th=[   21], 90.00th=[   24], 95.00th=[   27],
 | 99.00th=[   42], 99.50th=[   47], 99.90th=[   61], 99.95th=[   80],
 | 99.99th=[  922]
bw (KB  /s): min= 2100, max=43759, per=100.00%, avg=15752.12, stdev=9676.98
lat (usec) : 50=0.01%, 100=0.01%, 250=1.30%, 500=18.10%, 750=3.84%
lat (usec) : 1000=1.12%
lat (msec) : 2=3.06%, 4=53.90%, 10=7.08%, 20=6.08%, 50=5.44%
lat (msec) : 100=0.05%, 500=0.01%, 750=0.01%, 1000=0.01%, 2000=0.01%
lat (msec) : >=2000=0.01%
  cpu  : usr=6.63%, sys=20.68%, ctx=449891, majf=0, minf=8
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
 issued: total=r=411627/w=103162/d=0, short=r=0/w=0/d=0
 latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=2454.9MB, aggrb=61632KB/s, minb=61632KB/s, maxb=61632KB/s, 
mint=40786msec, maxt=40786msec
  WRITE: io=631997KB, aggrb=15495KB/s, minb=15495KB/s, maxb=15495KB/s, 
mint=40786msec, maxt=40786msec

Disk stats (read/write):
  sda: ios=411628/103171, merge=0/37, ticks=1378732/1182232, in_queue=2727776, 
util=99.81%

Disk stats (read/write):
  sda: ios=412899/103503, merge=0/13, ticks=3375120/854608, in_queue=4309832, 
util=99.92%

scsi-block
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, 
iodepth=64
fio-2.1.11
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 3072MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [64417KB/17015KB/0KB /s] [14.1K/3777/0 
iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=693: Fri Sep 30 08:26:45 2016
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=2454.9MB, bw=90185KB/s, iops=14767, runt= 27873msec
slat (usec): min=5, max=2673, avg=10.15, stdev=11.94
clat (usec): min=205, max=2095.6K, avg=3410.58, stdev=12296.52
 lat (usec): min=220, max=2095.6K, avg=3421.09, stdev=12296.53
clat percentiles (usec):
 |  1.00th=[ 1864],  5.00th=[ 2480], 10.00th=[ 2736], 20.00th=[ 2896],
 | 30.00th=[ 2992], 40.00th=[ 3056], 50.00th=[ 3120], 60.00th=[ 3184],
 | 70.00th=[ 3248], 80.00th=[ 3344], 90.00th=[ 3664], 95.00th=[ 4192],
 | 99.00th=[ 7072], 99.50th=[ 9536], 99.90th=[34048], 99.95th=[51968],
 | 99.99th=[415744]
bw (KB  /s): min=56041, max=146699, per=100.00%, avg=90641.29, 
stdev=22841.18
  write: io=631998KB, bw=22674KB/s, iops=3701, runt= 27873msec
slat (usec): min=6, max=4419, avg=12.22, stdev=21.60
clat (usec): min=214, max=1518.1K, avg=3618.09, stdev=9459.93
 lat (usec): min=225, max=1518.1K, avg=3630.69, stdev=9459.99
  

Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Dietmar Maurer
> >>@Alexandre: This was for performance reasons?
> 
> yes, I think. (don't remember exactly).
> see the original post from stefan
> 
> http://pve.proxmox.com/pipermail/pve-devel/2012-August/003347.html
> 
> "Hello list,
> 
> right now when you select SCSI proxmox always use scsi-hd for device. 
> With virtio-scsi-pci as scsihw we can also select scsi-block or 
> scsi-generic.
> 
> With scsi-block and scsi-generic you can bypass qemu scsi emulation and 
> use trim / discard support as the guest can talk directly to the 
> underlying storage.
> 
> Also scsi-generic (needs guest kernel 3.4) is a lot faster than scsi-hd 
> or scsi-block.

This is not really true - scsi-block and scsi-generic seem to be about the same
speed.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Michael Rasmussen
On Fri, 30 Sep 2016 08:11:00 +0200 (CEST)
Dietmar Maurer  wrote:

> 
> Is there a reasonable explanation for that?
> 
> @mir: can you reproduce those results reliably?
> 
First a comment on the numbers: the tests were made on a production
setup, so conditions could vary slightly. Having that in mind, I would
consider scsi-generic and scsi-block to be more or less on par
performance-wise.

Given the above, losing the ability to have realtime disk IO numbers
when using scsi-generic does not speak in favor of scsi-generic.

I will run another test now.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
Formatted to fit your screen.


pgpJJEt_OLZhf.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Dietmar Maurer


> On September 30, 2016 at 8:00 AM Alexandre DERUMIER 
> wrote:
> 
> 
> >>Where do you see that 11% difference? 
> 
> oh, sorry, my fault, I read the wrong line...
> Jobs: 1 (f=1): [m(1)] [100.0% done] [64339KB/16908KB/0KB /s] [15.2K/3816/0
> iops] [eta 00m:00s] 
> Jobs: 1 (f=1): [m(1)] [100.0% done] [73928KB/19507KB/0KB /s] [17.3K/4381/0
> iops] [eta 00m:00s] 
> 
> But check the latencies (>95th are twice lower) :

Where do you see that? I see that 


99.00th=[ 8768] (usec)
99.00th=[ 8] (msec)

So this is quite the same?

> 
> read
> -
> 
> block:
> read : io=2454.9MB, bw=87501KB/s, iops=14328, runt= 28728msec 
> clat percentiles (usec): 
> | 1.00th=[ 1768], 5.00th=[ 2480], 10.00th=[ 2640], 20.00th=[ 2864], 
> | 30.00th=[ 2960], 40.00th=[ 3056], 50.00th=[ 3088], 60.00th=[ 3152], 
> | 70.00th=[ 3248], 80.00th=[ 3376], 90.00th=[ 3824], 95.00th=[ 4448], 
> | 99.00th=[ 8768], 99.50th=[13120], 99.90th=[52992], 99.95th=[103936], 
> | 99.99th=[536576] 
> 
> generic:
> 
> read : io=2454.9MB, bw=88384KB/s, iops=14473, runt= 28441msec 
> slat (usec): min=5, max=5814, avg=10.86, stdev=21.71 
> clat (usec): min=459, max=885935, avg=3451.71, stdev=3297.21 
> lat (usec): min=526, max=885944, avg=3462.97, stdev=3297.14 
> clat percentiles (msec): 
> | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 
> | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 
> | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 
> | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 23], 99.95th=[ 63], 
> | 99.99th=[ 153] 
>

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Dietmar Maurer
> >>Where do you see that 11% difference? 
> 
> oh, sorry, my fault, I read the wrong line...
> Jobs: 1 (f=1): [m(1)] [100.0% done] [64339KB/16908KB/0KB /s] [15.2K/3816/0
> iops] [eta 00m:00s] 
> Jobs: 1 (f=1): [m(1)] [100.0% done] [73928KB/19507KB/0KB /s] [17.3K/4381/0
> iops] [eta 00m:00s] 
> 
> But check the latencies (>95th are twice lower) :

Is there a reasonable explanation for that?

@mir: can you reproduce those results reliably?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Alexandre DERUMIER
>>@Alexandre: This was for performance reasons?

yes, I think. (don't remember exactly).
see the original post from stefan

http://pve.proxmox.com/pipermail/pve-devel/2012-August/003347.html

"Hello list,

right now when you select SCSI proxmox always use scsi-hd for device. 
With virtio-scsi-pci as scsihw we can also select scsi-block or 
scsi-generic.

With scsi-block and scsi-generic you can bypass qemu scsi emulation and 
use trim / discard support as the guest can talk directly to the 
underlying storage.

Also scsi-generic (needs guest kernel 3.4) is a lot faster than scsi-hd 
or scsi-block.

What would be the expected way to integrate a selection between them in 
proxmox?

"



- Original message -
From: "dietmar" <diet...@proxmox.com>
To: "datanom.net" <m...@datanom.net>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, 30 September 2016 07:50:11
Subject: Re: [pve-devel] pve-manager and disk IO monitoring

> So my question is: Why use scsi-generic instead of scsi-block when 
> scsi-generic prevents blockstats? 

commit d454d040338a6216c8d3e5cc9623d6223476cb5a 
Author: Alexandre Derumier <aderum...@odiso.com> 
Date: Tue Aug 28 12:46:07 2012 +0200 

use scsi-generic by default with libiscsi 

This add scsi passthrough with libiscsi 

Signed-off-by: Alexandre Derumier <aderum...@odiso.com> 


@Alexandre: This was for performance reasons? 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-30 Thread Alexandre DERUMIER
>>Where do you see that 11% difference? 

oh, sorry, my fault, I read the wrong line...
Jobs: 1 (f=1): [m(1)] [100.0% done] [64339KB/16908KB/0KB /s] [15.2K/3816/0 
iops] [eta 00m:00s] 
Jobs: 1 (f=1): [m(1)] [100.0% done] [73928KB/19507KB/0KB /s] [17.3K/4381/0 
iops] [eta 00m:00s] 

But check the latencies (>95th are twice lower) :

read
-

block:
read : io=2454.9MB, bw=87501KB/s, iops=14328, runt= 28728msec 
clat percentiles (usec): 
| 1.00th=[ 1768], 5.00th=[ 2480], 10.00th=[ 2640], 20.00th=[ 2864], 
| 30.00th=[ 2960], 40.00th=[ 3056], 50.00th=[ 3088], 60.00th=[ 3152], 
| 70.00th=[ 3248], 80.00th=[ 3376], 90.00th=[ 3824], 95.00th=[ 4448], 
| 99.00th=[ 8768], 99.50th=[13120], 99.90th=[52992], 99.95th=[103936], 
| 99.99th=[536576] 

generic:

read : io=2454.9MB, bw=88384KB/s, iops=14473, runt= 28441msec 
slat (usec): min=5, max=5814, avg=10.86, stdev=21.71 
clat (usec): min=459, max=885935, avg=3451.71, stdev=3297.21 
lat (usec): min=526, max=885944, avg=3462.97, stdev=3297.14 
clat percentiles (msec): 
| 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 
| 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 
| 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 
| 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 23], 99.95th=[ 63], 
| 99.99th=[ 153] 



write
-
block

bw (KB /s): min= 7148, max=193016, per=100.00%, avg=87866.39, stdev=28395.12 
write: io=631998KB, bw=21999KB/s, iops=3590, runt= 28728msec 
slat (usec): min=4, max=9301, avg=12.69, stdev=33.41 
clat (usec): min=299, max=778312, avg=3871.08, stdev=7378.66 
lat (usec): min=305, max=778320, avg=3884.17, stdev=7378.66 
clat percentiles (msec): 
| 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 
| 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 
| 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 5], 95.00th=[ 7], 
| 99.00th=[ 13], 99.50th=[ 19], 99.90th=[ 55], 99.95th=[ 101], 
| 99.99th=[ 537] 

generic

write: io=631998KB, bw=1KB/s, iops=3627, runt= 28441msec 
slat (usec): min=6, max=3864, avg=12.96, stdev=24.18 
clat (usec): min=582, max=156777, avg=3801.87, stdev=3128.06 
lat (usec): min=610, max=156789, avg=3815.24, stdev=3128.36 
clat percentiles (msec): 
| 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 
| 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 
| 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 5], 95.00th=[ 7], 
| 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 49], 99.95th=[ 74], 
| 99.99th=[ 153] 
- Original message -
From: "dietmar" <diet...@proxmox.com>
To: "aderumier" <aderum...@odiso.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, 30 September 2016 07:38:43
Subject: Re: [pve-devel] pve-manager and disk IO monitoring

> On September 30, 2016 at 7:03 AM Alexandre DERUMIER <aderum...@odiso.com> 
> wrote: 
> 
> 
> "Running a fio test also only shows marginal performance difference 
> between scsi-block and scsi-generic" 
> 
> I think that 11% difference is not so marginal. 

Where do you see that 11% difference? 

-device scsi-block 

READ: io=2454.9MB, aggrb=87501KB/s, minb=87501KB/s, maxb=87501KB/s, 
mint=28728msec, maxt=28728msec 
WRITE: io=631997KB, aggrb=21999KB/s, minb=21999KB/s, maxb=21999KB/s, 
mint=28728msec, maxt=28728msec 

-device scsi-generic 
READ: io=2454.9MB, aggrb=88384KB/s, minb=88384KB/s, maxb=88384KB/s, 
mint=28441msec, maxt=28441msec 
WRITE: io=631997KB, aggrb=1KB/s, minb=1KB/s, maxb=1KB/s, 
mint=28441msec, maxt=28441msec 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-29 Thread Dietmar Maurer
> So my question is: Why use scsi-generic instead of scsi-block when
> scsi-generic prevents blockstats?

commit d454d040338a6216c8d3e5cc9623d6223476cb5a
Author: Alexandre Derumier 
Date:   Tue Aug 28 12:46:07 2012 +0200

use scsi-generic by default with libiscsi

This add scsi passthrough with libiscsi

Signed-off-by: Alexandre Derumier 


@Alexandre: This was for performance reasons?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-29 Thread Alexandre DERUMIER
"Running a fio test also only shows marginal performance difference
between scsi-block and scsi-generic"

I think that an 11% difference is not so marginal.
I'm curious to see the difference with a full-flash array, and whether we have the same 
cpu/iothread bottleneck as with ceph, with scsi-block vs scsi-generic.

Maybe we can add an option to choose between scsi-block && scsi-generic.




- Original message -
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, 30 September 2016 01:23:20
Subject: Re: [pve-devel] pve-manager and disk IO monitoring

On Fri, 30 Sep 2016 00:51:06 +0200 
Michael Rasmussen <m...@datanom.net> wrote: 

> 
> So my question is: Why use scsi-generic instead of scsi-block when 
> scsi-generic prevents blockstats? 
> 
Running a fio test also only shows marginal performance difference 
between scsi-block and scsi-generic 

-device scsi-block 
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, 
iodepth=64 
fio-2.1.11 
Starting 1 process 
iometer: Laying out IO file(s) (1 file(s) / 3072MB) 
Jobs: 1 (f=1): [m(1)] [100.0% done] [73928KB/19507KB/0KB /s] [17.3K/4381/0 
iops] [eta 00m:00s] 
iometer: (groupid=0, jobs=1): err= 0: pid=1568: Fri Sep 30 01:17:05 2016 
Description : [Emulation of Intel IOmeter File Server Access Pattern] 
read : io=2454.9MB, bw=87501KB/s, iops=14328, runt= 28728msec 
slat (usec): min=2, max=4703, avg=10.47, stdev=16.99 
clat (usec): min=315, max=1505.6K, avg=3479.55, stdev=8270.22 
lat (usec): min=321, max=1505.6K, avg=3490.40, stdev=8270.14 
clat percentiles (usec): 
| 1.00th=[ 1768], 5.00th=[ 2480], 10.00th=[ 2640], 20.00th=[ 2864], 
| 30.00th=[ 2960], 40.00th=[ 3056], 50.00th=[ 3088], 60.00th=[ 3152], 
| 70.00th=[ 3248], 80.00th=[ 3376], 90.00th=[ 3824], 95.00th=[ 4448], 
| 99.00th=[ 8768], 99.50th=[13120], 99.90th=[52992], 99.95th=[103936], 
| 99.99th=[536576] 
bw (KB /s): min= 7148, max=193016, per=100.00%, avg=87866.39, stdev=28395.12 
write: io=631998KB, bw=21999KB/s, iops=3590, runt= 28728msec 
slat (usec): min=4, max=9301, avg=12.69, stdev=33.41 
clat (usec): min=299, max=778312, avg=3871.08, stdev=7378.66 
lat (usec): min=305, max=778320, avg=3884.17, stdev=7378.66 
clat percentiles (msec): 
| 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 
| 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 
| 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 5], 95.00th=[ 7], 
| 99.00th=[ 13], 99.50th=[ 19], 99.90th=[ 55], 99.95th=[ 101], 
| 99.99th=[ 537] 
bw (KB /s): min= 1524, max=46713, per=100.00%, avg=22089.18, stdev=7184.64 
lat (usec) : 500=0.01%, 750=0.03%, 1000=0.06% 
lat (msec) : 2=1.29%, 4=88.78%, 10=8.94%, 20=0.56%, 50=0.22% 
lat (msec) : 100=0.05%, 250=0.05%, 500=0.01%, 750=0.01%, 1000=0.01% 
lat (msec) : 2000=0.01% 
cpu : usr=8.24%, sys=28.49%, ctx=451227, majf=0, minf=8 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
issued : total=r=411627/w=103162/d=0, short=r=0/w=0/d=0 
latency : target=0, window=0, percentile=100.00%, depth=64 

Run status group 0 (all jobs): 
READ: io=2454.9MB, aggrb=87501KB/s, minb=87501KB/s, maxb=87501KB/s, 
mint=28728msec, maxt=28728msec 
WRITE: io=631997KB, aggrb=21999KB/s, minb=21999KB/s, maxb=21999KB/s, 
mint=28728msec, maxt=28728msec 

Disk stats (read/write): 
sda: ios=407383/102110, merge=123/54, ticks=1413272/456272, in_queue=1869620, 
util=99.71% 

-device scsi-generic 
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, 
iodepth=64 
fio-2.1.11 
Starting 1 process 
iometer: Laying out IO file(s) (1 file(s) / 3072MB) 
Jobs: 1 (f=1): [m(1)] [100.0% done] [64339KB/16908KB/0KB /s] [15.2K/3816/0 
iops] [eta 00m:00s] 
iometer: (groupid=0, jobs=1): err= 0: pid=701: Fri Sep 30 01:20:45 2016 
Description : [Emulation of Intel IOmeter File Server Access Pattern] 
read : io=2454.9MB, bw=88384KB/s, iops=14473, runt= 28441msec 
slat (usec): min=5, max=5814, avg=10.86, stdev=21.71 
clat (usec): min=459, max=885935, avg=3451.71, stdev=3297.21 
lat (usec): min=526, max=885944, avg=3462.97, stdev=3297.14 
clat percentiles (msec): 
| 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 
| 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 
| 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 
| 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 23], 99.95th=[ 63], 
| 99.99th=[ 153] 
bw (KB /s): min=46295, max=139025, per=100.00%, avg=88833.25, stdev=22609.61 
write: io=631998KB, bw=1KB/s, iops=3627, runt= 28441msec 
slat (usec): min=6, max=3864, avg=12.96, stdev=24.18 
clat (usec): min=582, max=156777, avg=3801.87, stdev=3128.06 
lat (usec): min=610, max=156789, avg=3815.24, stdev=3128.36 
clat percentiles (msec): 
| 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 
| 30.00th=[ 4], 40.00th=[ 4], 50.00th

Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-29 Thread Michael Rasmussen
On Fri, 30 Sep 2016 00:51:06 +0200
Michael Rasmussen  wrote:

> 
> So my question is: Why use scsi-generic instead of scsi-block when
> scsi-generic prevents blockstats?
> 
Running a fio test also only shows marginal performance difference
between scsi-block and scsi-generic

-device scsi-block
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, 
iodepth=64
fio-2.1.11
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 3072MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [73928KB/19507KB/0KB /s] [17.3K/4381/0 
iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=1568: Fri Sep 30 01:17:05 2016
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=2454.9MB, bw=87501KB/s, iops=14328, runt= 28728msec
slat (usec): min=2, max=4703, avg=10.47, stdev=16.99
clat (usec): min=315, max=1505.6K, avg=3479.55, stdev=8270.22
 lat (usec): min=321, max=1505.6K, avg=3490.40, stdev=8270.14
clat percentiles (usec):
 |  1.00th=[ 1768],  5.00th=[ 2480], 10.00th=[ 2640], 20.00th=[ 2864],
 | 30.00th=[ 2960], 40.00th=[ 3056], 50.00th=[ 3088], 60.00th=[ 3152],
 | 70.00th=[ 3248], 80.00th=[ 3376], 90.00th=[ 3824], 95.00th=[ 4448],
 | 99.00th=[ 8768], 99.50th=[13120], 99.90th=[52992], 99.95th=[103936],
 | 99.99th=[536576]
bw (KB  /s): min= 7148, max=193016, per=100.00%, avg=87866.39, 
stdev=28395.12
  write: io=631998KB, bw=21999KB/s, iops=3590, runt= 28728msec
slat (usec): min=4, max=9301, avg=12.69, stdev=33.41
clat (usec): min=299, max=778312, avg=3871.08, stdev=7378.66
 lat (usec): min=305, max=778320, avg=3884.17, stdev=7378.66
clat percentiles (msec):
 |  1.00th=[3],  5.00th=[3], 10.00th=[3], 20.00th=[3],
 | 30.00th=[4], 40.00th=[4], 50.00th=[4], 60.00th=[4],
 | 70.00th=[4], 80.00th=[4], 90.00th=[5], 95.00th=[7],
 | 99.00th=[   13], 99.50th=[   19], 99.90th=[   55], 99.95th=[  101],
 | 99.99th=[  537]
bw (KB  /s): min= 1524, max=46713, per=100.00%, avg=22089.18, stdev=7184.64
lat (usec) : 500=0.01%, 750=0.03%, 1000=0.06%
lat (msec) : 2=1.29%, 4=88.78%, 10=8.94%, 20=0.56%, 50=0.22%
lat (msec) : 100=0.05%, 250=0.05%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2000=0.01%
  cpu  : usr=8.24%, sys=28.49%, ctx=451227, majf=0, minf=8
  IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
 issued: total=r=411627/w=103162/d=0, short=r=0/w=0/d=0
 latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=2454.9MB, aggrb=87501KB/s, minb=87501KB/s, maxb=87501KB/s, 
mint=28728msec, maxt=28728msec
  WRITE: io=631997KB, aggrb=21999KB/s, minb=21999KB/s, maxb=21999KB/s, 
mint=28728msec, maxt=28728msec

Disk stats (read/write):
  sda: ios=407383/102110, merge=123/54, ticks=1413272/456272, in_queue=1869620, 
util=99.71%

-device scsi-generic
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, 
iodepth=64
fio-2.1.11
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 3072MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [64339KB/16908KB/0KB /s] [15.2K/3816/0 
iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=701: Fri Sep 30 01:20:45 2016
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=2454.9MB, bw=88384KB/s, iops=14473, runt= 28441msec
slat (usec): min=5, max=5814, avg=10.86, stdev=21.71
clat (usec): min=459, max=885935, avg=3451.71, stdev=3297.21
 lat (usec): min=526, max=885944, avg=3462.97, stdev=3297.14
clat percentiles (msec):
 |  1.00th=[3],  5.00th=[3], 10.00th=[3], 20.00th=[4],
 | 30.00th=[4], 40.00th=[4], 50.00th=[4], 60.00th=[4],
 | 70.00th=[4], 80.00th=[4], 90.00th=[4], 95.00th=[5],
 | 99.00th=[8], 99.50th=[   11], 99.90th=[   23], 99.95th=[   63],
 | 99.99th=[  153]
bw (KB  /s): min=46295, max=139025, per=100.00%, avg=88833.25, 
stdev=22609.61
  write: io=631998KB, bw=1KB/s, iops=3627, runt= 28441msec
slat (usec): min=6, max=3864, avg=12.96, stdev=24.18
clat (usec): min=582, max=156777, avg=3801.87, stdev=3128.06
 lat (usec): min=610, max=156789, avg=3815.24, stdev=3128.36
clat percentiles (msec):
 |  1.00th=[3],  5.00th=[3], 10.00th=[3], 20.00th=[4],
 | 30.00th=[4], 40.00th=[4], 50.00th=[4], 60.00th=[4],
 | 70.00th=[4], 80.00th=[4], 90.00th=[5], 95.00th=[7],
 | 99.00th=[   11], 99.50th=[   15], 99.90th=[   49], 99.95th=[   74],
 | 99.99th=[  153]
bw (KB  /s): min=11151, max=36378, per=100.00%, avg=22332.46, stdev=5869.71
lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.67%, 4=90.61%, 10=8.03%, 

Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-29 Thread Michael Rasmussen
On Thu, 29 Sep 2016 07:38:09 +0200 (CEST)
Alexandre DERUMIER  wrote:

> iostats are coming from qemu.
> 
> what is the output of monitor "info blockstats" for the vm where you don't 
> have stats ?
> 
I have just tested replacing -device scsi-generic with -device
scsi-block. The machine boots and seems to work and, lo and behold, I have disk IO 
stats again!
# info blockstats
drive-ide2: rd_bytes=152 wr_bytes=0 rd_operations=4 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=95326 
flush_total_time_ns=0 rd_merged=0 wr_merged=0 idle_time_ns=258695709709
drive-scsi0: rd_bytes=266729984 wr_bytes=1690120192 rd_operations=15168 
wr_operations=6182 flush_operations=0 wr_total_time_ns=512105318651 
rd_total_time_ns=61640872040 flush_total_time_ns=0 rd_merged=0 wr_merged=0 
idle_time_ns=6506064324

So my question is: Why use scsi-generic instead of scsi-block when
scsi-generic prevents blockstats?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
The last thing one knows in constructing a work is what to put first.
-- Blaise Pascal


pgpgsYMEmW_Jp.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-29 Thread Michael Rasmussen
On Thu, 29 Sep 2016 09:41:35 +0300
Dmitry Petuhov  wrote:

> In QemuServer.pm (some code omitted):
> 
> if ($drive->{interface} eq 'scsi')
> my $devicetype = 'hd';
>  if($path =~ m/^iscsi\:\/\//){
>  $devicetype = 'generic';
>  }
> $device = "scsi-$devicetype ...
> 
> So usually if drive interface is scsi, PVE uses fully-emulated qemu device 
> 'scsi-hd'. But for iscsi: volumes (iscsi direct and zfs over iscsi) it uses 
> 'scsi-generic' device, which just proxies scsi commands between guest OS and 
> your SAN's iscsi target.
> 
I see. So currently, by using scsi-generic, you sort of disable all
qemu block features like monitoring etc.?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
There is nothing wrong with Southern California that a rise in the
ocean level wouldn't cure.
-- Ross MacDonald


pgpUqvBZbELeE.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-29 Thread Dmitry Petuhov

29.09.2016 09:21, Michael Rasmussen wrote:

On Thu, 29 Sep 2016 09:17:56 +0300
Dmitry Petuhov  wrote:


It's a side effect of scsi pass-through, which is used by default for 
[libi]scsi volumes with the scsi VM disk interface. QEMU is just not aware of VM 
block IO in that case. Also, cache settings for volumes are ineffective, 
because qemu is just proxying raw scsi commands to the backing storage, so caching 
is impossible.

Do you use PVE backups (vzdump)? Does it work for the machines without stats? I 
think it should also not work with pass-through.


What do you mean by pass-through? (no pass-through is happening here
since the storage resides on a SAN)

In QemuServer.pm (some code omitted):

if ($drive->{interface} eq 'scsi')
    my $devicetype = 'hd';
    if ($path =~ m/^iscsi\:\/\//) {
        $devicetype = 'generic';
    }
    $device = "scsi-$devicetype ...

So usually if drive interface is scsi, PVE uses fully-emulated qemu 
device 'scsi-hd'. But for iscsi: volumes (iscsi direct and zfs over 
iscsi) it uses 'scsi-generic' device, which just proxies scsi commands 
between guest OS and your SAN's iscsi target.


BTW, I began writing code to turn pass-through on|off in the storage's config, so 
that we could force it off even if it could be used. If developers are 
interested, I can find it.
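
Not Dmitry's actual code, but as an illustration such a switch could be a
boolean property in the storage plugin (the name 'nopassthrough' is just an
assumption) that QemuServer.pm consults when picking the device type:

  sub properties {
      return {
          nopassthrough => {
              description => "Do not use scsi-generic pass-through for volumes on this storage.",
              type => 'boolean',
          },
      };
  }

  # in QemuServer.pm, when building the drive device:
  # $devicetype = 'block' if $scfg->{nopassthrough};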


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-29 Thread Michael Rasmussen
On Thu, 29 Sep 2016 09:17:56 +0300
Dmitry Petuhov  wrote:

> It's a side effect of scsi pass-through, which is used by default for 
> [libi]scsi volumes with the scsi VM disk interface. QEMU is just not aware of VM 
> block IO in that case. Also, cache settings for volumes are ineffective, 
> because qemu is just proxying raw scsi commands to the backing storage, so 
> caching is impossible.
> 
> Do you use PVE backups (vzdump)? Does it work for the machines without stats? I 
> think it should also not work with pass-through.
> 
What do you mean by pass-through? (no pass-through is happening here
since the storage resides on a SAN)

And yes, vzdump works for these machines.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
Mad, adj.:
Affected with a high degree of intellectual independence ...
-- Ambrose Bierce, "The Devil's Dictionary"


pgp3SVtE9ervA.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-29 Thread Dmitry Petuhov

29.09.2016 09:05, Michael Rasmussen wrote:

On Thu, 29 Sep 2016 07:38:09 +0200 (CEST)
Alexandre DERUMIER  wrote:


iostats are coming from qemu.

what is the output of monitor "info blockstats" for the vm where you don't have 
stats ?



Two examples below:
# info blockstats
drive-ide2: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=0 flush_total_time_ns=0 
rd_merged=0 wr_merged=0 idle_time_ns=0
drive-scsi0: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=0 flush_total_time_ns=0 
rd_merged=0 wr_merged=0 idle_time_ns=0
drive-scsi1: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=0 flush_total_time_ns=0 
rd_merged=0 wr_merged=0 idle_time_ns=0
# info blockstats
drive-ide2: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=0 flush_total_time_ns=0 
rd_merged=0 wr_merged=0 idle_time_ns=0
drive-scsi0: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=0 flush_total_time_ns=0 
rd_merged=0 wr_merged=0 idle_time_ns=0
It's a side effect of scsi pass-through, which is used by default 
for [libi]scsi volumes with the scsi VM disk interface. QEMU is just not 
aware of VM block IO in that case. Also, cache settings for volumes are 
ineffective, because qemu is just proxying raw scsi commands to the backing 
storage, so caching is impossible.


Do you use PVE backups (vzdump)? Does it work for the machines without stats? 
I think it should also not work with pass-through.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-29 Thread Michael Rasmussen
On Thu, 29 Sep 2016 07:38:09 +0200 (CEST)
Alexandre DERUMIER  wrote:

> iostats are coming from qemu.
> 
> what is the output of monitor "info blockstats" for the vm where you don't 
> have stats ?
> 
> 
Two examples below:
# info blockstats
drive-ide2: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=0 flush_total_time_ns=0 
rd_merged=0 wr_merged=0 idle_time_ns=0
drive-scsi0: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=0 flush_total_time_ns=0 
rd_merged=0 wr_merged=0 idle_time_ns=0
drive-scsi1: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=0 flush_total_time_ns=0 
rd_merged=0 wr_merged=0 idle_time_ns=0
# info blockstats
drive-ide2: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=0 flush_total_time_ns=0 
rd_merged=0 wr_merged=0 idle_time_ns=0
drive-scsi0: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0 
flush_operations=0 wr_total_time_ns=0 rd_total_time_ns=0 flush_total_time_ns=0 
rd_merged=0 wr_merged=0 idle_time_ns=0


-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
If you sell diamonds, you cannot expect to have many customers.
But a diamond is a diamond even if there are no customers.
-- Swami Prabhupada


pgpWhr8UEs41J.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-28 Thread Alexandre DERUMIER
iostats are coming from qemu.

what is the output of monitor "info blockstats" for the vm where you don't have 
stats ?
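
On a PVE host this can be checked, for example, with the built-in monitor
(replace <vmid> with the VM id):

  # qm monitor <vmid>
  qm> info blockstats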



- Original message -
From: "datanom.net" 
To: "pve-devel" 
Sent: Wednesday, 28 September 2016 22:46:25
Subject: [pve-devel] pve-manager and disk IO monitoring

Hi all, 

I have for some time been wondering why the disk IO graphs show no IO for 
some VMs. I think I have found the cause. It seems that disk IO is 
only monitored on the first disk, and I tend to split OS and data onto 
different disks, which sort of renders the disk IO monitoring in 
pve-manager useless. 

So is this a feature or a bug? 

-- 
Hilsen/Regards 
Michael Rasmussen 

Get my public GnuPG keys: 
michael  rasmussen  cc 
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E 
mir  datanom  net 
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C 
mir  miras  org 
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917 
-- 
/usr/games/fortune -es says: 
Oh Dad! We're ALL Devo! 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-28 Thread Michael Rasmussen
On Wed, 28 Sep 2016 23:37:40 +0200
Michael Rasmussen  wrote:

> My assumption seems to be wrong. Rather it seems that disk IO is not
> monitored if the disk type is scsi. virtio seems to work fine.
> 
Or more precisely:

Disk type: scsi
Storage: zfs over iscsi (libiscsi)
does not work.

Disk type: virtio
Storage: zfs over iscsi (libiscsi)
works.

Can somebody confirm this?

Forgot to mention:
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-1 (running version: 4.3-1/e7cdc165)

But it was the same for 4.1 and 4.2.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
Only a fool fights in a burning house.
-- Kank the Klingon, "Day of the Dove", stardate unknown


pgp0G3N0wQtbS.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pve-manager and disk IO monitoring

2016-09-28 Thread Michael Rasmussen
On Wed, 28 Sep 2016 22:46:25 +0200
Michael Rasmussen  wrote:

> Hi all,
> 
> I have for some time been wondering why the disk IO graphs show no IO for
> some VMs. I think I have found the cause. It seems that disk IO is
> only monitored on the first disk, and I tend to split OS and data onto
> different disks, which sort of renders the disk IO monitoring in
> pve-manager useless.
> 
> So is this a feature or a bug?
> 
My assumption seems to be wrong. Rather it seems that disk IO is not
monitored if the disk type is scsi. virtio seems to work fine.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
Only a fool fights in a burning house.
-- Kank the Klingon, "Day of the Dove", stardate unknown


pgpMGJH9M6WnF.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel