That might explain the difference.
On October 14, 2016 12:15:42 PM GMT+02:00, Andreas Steinel
wrote:
>On Fri, Oct 14, 2016 at 12:08 PM, datanom.net wrote:
>
>> On 2016-10-14 11:13, Andreas Steinel wrote:
>>>
>>> So, what was your test environment? How big was the difference?
>>>
>>> Are you running your ZFS pool on the proxmox node?
On Fri, Oct 14, 2016 at 12:08 PM, datanom.net wrote:
> On 2016-10-14 11:13, Andreas Steinel wrote:
>>
>> So, what was your test environment? How big was the difference?
>>
>> Are you running your ZFS pool on the proxmox node?
Yes, everything local on the node itself.
On 2016-10-14 11:13, Andreas Steinel wrote:
Hi Mir,
On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen
wrote:
I use virtio-scsi-single exclusively because of the huge performance
gain in comparison to virtio-scsi so I can concur to that.
I just benchmarked it on a full-SSD-ZFS system of mine and got reverse results.
[...] multiple disks in 1 VM.
This needs virtio-scsi-single, because the iothread is mapped on the controller,
not on the disk.
- Original Message -
From: "Andreas Steinel"
To: "pve-devel"
Sent: Friday, 14 October 2016 11:13:34
Subject: Re: [pve-devel] pve-manager and disk IO monitoring
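For illustration, the iothread point above comes down to where the iothread object is attached on the QEMU command line; a rough sketch with made-up ids and the -drive definition omitted, not the exact arguments PVE generates:

  -object iothread,id=iothread-scsi0 \
  -device virtio-scsi-pci,id=scsihw0,iothread=iothread-scsi0 \
  -device scsi-hd,bus=scsihw0.0,drive=drive-scsi0

With virtio-scsi-single each disk gets its own controller (and so its own iothread); with plain virtio-scsi all disks hang off one shared controller.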
Hi Mir,
On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen wrote:
> I use virtio-scsi-single exclusively because of the huge performance
> gain in comparison to virtio-scsi so I can concur to that.
I just benchmarked it on a full-SSD-ZFS system of mine and got reverse
results.
I used 4 cores,
On Fri, 14 Oct 2016 07:42:38 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> Also, currently, we have virtio-scsi-single. I don't know if a lot of users
> already use it,
> but maybe it could be better to use it as an option
> scsihw:virtio-scsi,type=generic|block,x=single
>
> ?
I use virtio-scsi-single exclusively because of the huge performance
gain in comparison to virtio-scsi so I can concur to that.
better to use it as an option
scsihw:virtio-scsi,type=generic|block,x=single
?
- Original Message -
From: "dietmar"
To: "datanom.net", "pve-devel"
Sent: Friday, 14 October 2016 05:50:28
Subject: Re: [pve-devel] pve-manager and disk IO monitoring
> > @Alexandre: This was for performance reasons?
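To make the proposed syntax concrete, a guest config line using it might look like the following; this is purely hypothetical, the option was only suggested in this thread:

  scsihw: virtio-scsi,type=block,x=single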
> > @Alexandre: This was for performance reasons?
> >
> Any decisions made yet to revert this patch?
I want to use scsi-block by default, but the suggestion was to
provide a way to switch back to scsi-generic.
Alexandre suggested:
> maybe we could add an option on scsihw ? scsihw:virtio-scsi,type=generic|block ?
On Fri, 30 Sep 2016 07:50:11 +0200 (CEST)
Dietmar Maurer wrote:
> > So my question is: Why use scsi-generic instead of scsi-block when
> > scsi-generic prevents blockstats?
>
> commit d454d040338a6216c8d3e5cc9623d6223476cb5a
> Author: Alexandre Derumier
> Date: Tue Aug 28 12:46:07 2012 +0200
type=generic|block
?
- Original Message -
From: "datanom.net"
To: "pve-devel"
Sent: Friday, 30 September 2016 08:48:51
Subject: Re: [pve-devel] pve-manager and disk IO monitoring
On Fri, 30 Sep 2016 07:50:11 +0200 (CEST)
Dietmar Maurer wrote:
> > So my question is: Why use scsi-generic instead of scsi-block when
> > scsi-generic prevents blockstats?
it performs at something like
100k iops.
- Original Message -
From: "dietmar"
To: "aderumier"
Cc: "pve-devel"
Sent: Friday, 30 September 2016 08:14:42
Subject: Re: [pve-devel] pve-manager and disk IO monitoring
> On September 30, 2016 at 8:00 AM Alexandre DERUMIER wrote:
On Fri, 30 Sep 2016 07:50:11 +0200 (CEST)
Dietmar Maurer wrote:
> > So my question is: Why use scsi-generic instead of scsi-block when
> > scsi-generic prevents blockstats?
>
> commit d454d040338a6216c8d3e5cc9623d6223476cb5a
> Author: Alexandre Derumier
> Date: Tue Aug 28 12:46:07 2012 +0200
30.09.2016 09:18, Dietmar Maurer wrote:
This is not really true - it seems scsi-block and scsi-generic are about the same
speed.
So we could use iscsi-inq or iscsi-readcapacity16 to see what the volume
actually is (a block device or, say, a streamer) and select the appropriate
device type for qemu.
Also, with
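The probing described above could presumably be done with the libiscsi command-line utilities; a rough example, with placeholder portal, IQN and LUN:

  iscsi-inq iscsi://192.0.2.10/iqn.2016-09.example.com:storage/0
  iscsi-readcapacity16 iscsi://192.0.2.10/iqn.2016-09.example.com:storage/0

The INQUIRY peripheral device type shows whether the LUN is a direct-access block device (0x00) or something else such as a tape, and readcapacity16 only makes sense for block devices.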
On Fri, 30 Sep 2016 08:17:58 +0200
Michael Rasmussen wrote:
>
> I will run another test now.
>
New test run. Here scsi-generic loses but again I cannot run a clinical
test. My best guess is that if you run a number of tests on equal
hardware and under similar conditions and make an average calc
> >>@Alexandre: This was for performance reasons?
>
> yes, I think. (don't remember exactly).
> see the original post from stefan
>
> http://pve.proxmox.com/pipermail/pve-devel/2012-August/003347.html
>
> "Hello list,
>
> right now when you select SCSI proxmox always uses scsi-hd for the device.
>
On Fri, 30 Sep 2016 08:11:00 +0200 (CEST)
Dietmar Maurer wrote:
>
> Is there a reasonable explanation for that?
>
> @mir: can you reproduce those results reliably?
>
First a comment on the numbers: The tests were made on a production
setup so conditions could vary slightly. Having that in mind
> On September 30, 2016 at 8:00 AM Alexandre DERUMIER
> wrote:
>
>
> >>Where do you see that 11% difference?
>
> oh, sorry, my fault, I read the wrong line...
> Jobs: 1 (f=1): [m(1)] [100.0% done] [64339KB/16908KB/0KB /s] [15.2K/3816/0
> iops] [eta 00m:00s]
> Jobs: 1 (f=1): [m(1)] [100.0% done] [73928KB/19507KB/0KB /s] [17.3K/4381/0
> iops] [eta 00m:00s]
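Status lines like the ones quoted come from fio's live progress display; a minimal mixed random read/write job that produces comparable output would look roughly like this (device path, block size, queue depth and runtime are illustrative, not the job actually used in this thread):

  fio --name=randrw --filename=/dev/sdb --direct=1 --ioengine=libaio \
      --rw=randrw --rwmixread=75 --bs=4k --iodepth=32 \
      --runtime=60 --time_based --group_reporting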
> >>Where do you see that 11% difference?
>
> oh, sorry, my fault, I read the wrong line...
> Jobs: 1 (f=1): [m(1)] [100.0% done] [64339KB/16908KB/0KB /s] [15.2K/3816/0
> iops] [eta 00m:00s]
> Jobs: 1 (f=1): [m(1)] [100.0% done] [73928KB/19507KB/0KB /s] [17.3K/4381/0
> iops] [eta 00m:00s]
>
>
scsi-hd
or scsi-block.
What would be the expected way to integrate a selection between them in
proxmox?
"
- Original Message -
From: "dietmar"
To: "datanom.net", "pve-devel"
Sent: Friday, 30 September 2016 07:50:11
Subject: Re: [pve-devel] pve-manager and disk IO monitoring
90th=[ 49], 99.95th=[ 74],
| 99.99th=[ 153]
- Original Message -
From: "dietmar"
To: "aderumier", "pve-devel"
Sent: Friday, 30 September 2016 07:38:43
Subject: Re: [pve-devel] pve-manager and disk IO monitoring
> On September 30, 2016 at 7:03 AM Alexandre DERUMIER wrote:
> So my question is: Why use scsi-generic instead of scsi-block when
> scsi-generic prevents blockstats?
commit d454d040338a6216c8d3e5cc9623d6223476cb5a
Author: Alexandre Derumier
Date: Tue Aug 28 12:46:07 2012 +0200
use scsi-generic by default with libiscsi
This adds scsi passthrough
> On September 30, 2016 at 7:03 AM Alexandre DERUMIER
> wrote:
>
>
> "Running a fio test also only shows marginal performance difference
> between scsi-block and scsi-generic"
>
> I think that 11% difference is not so marginal.
Where do you see that 11% difference?
-device scsi-block vs scsi-generic.
Maybe we can add an option to choose between scsi-block && scsi-generic
- Original Message -
From: "datanom.net"
To: "pve-devel"
Sent: Friday, 30 September 2016 01:23:20
Subject: Re: [pve-devel] pve-manager and disk IO monitoring
On Fri, 30 Sep 2016 00:51:06 +0200
On Fri, 30 Sep 2016 00:51:06 +0200
Michael Rasmussen wrote:
>
> So my question is: Why use scsi-generic instead of scsi-block when
> scsi-generic prevents blockstats?
>
Running a fio test also only shows marginal performance difference
between scsi-block and scsi-generic
-device scsi-block
iom
On Thu, 29 Sep 2016 07:38:09 +0200 (CEST)
Alexandre DERUMIER wrote:
> iostats are coming from qemu.
>
> what is the output of monitor "info blockstats" for the vm where you don't
> have stats ?
>
I have just tested with replacing -device scsi-generic with -device
scsi-block. Machine boots and
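For anyone reproducing this, the monitor output can be pulled from the host with the Proxmox CLI (VMID 100 is just an example); the same data is available over QMP as query-blockstats:

  # qm monitor 100
  qm> info blockstats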
On Thu, 29 Sep 2016 09:41:35 +0300
Dmitry Petuhov wrote:
> In QemuServer.pm (some code omitted):
>
> if ($drive->{interface} eq 'scsi') {
>     my $devicetype = 'hd';
>     if ($path =~ m/^iscsi\:\/\//) {
>         $devicetype = 'generic';
>     }
>     $device = "scsi-$devicetype ...
>
> So usually if
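In practice the two branches of that snippet end up as different -device arguments on the QEMU command line; roughly (bus and drive names are illustrative, and PVE passes more properties than shown):

  -device scsi-hd,bus=scsihw0.0,drive=drive-scsi0        (regular volumes)
  -device scsi-generic,bus=scsihw0.0,drive=drive-scsi0   (iscsi:// paths, pass-through)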
29.09.2016 09:21, Michael Rasmussen wrote:
On Thu, 29 Sep 2016 09:17:56 +0300
Dmitry Petuhov wrote:
It's a side effect of scsi pass-through, which is being used by default for
[libi]scsi volumes with the scsi VM disk interface. QEMU is just not aware of VM
block IO in that case. Also, cache settings for volumes are ineffective,
On Thu, 29 Sep 2016 09:17:56 +0300
Dmitry Petuhov wrote:
> It's a side effect of scsi pass-through, which is being used by default for
> [libi]scsi volumes with the scsi VM disk interface. QEMU is just not aware of VM
> block IO in that case. Also, cache settings for volumes are ineffective,
> because
29.09.2016 09:05, Michael Rasmussen wrote:
On Thu, 29 Sep 2016 07:38:09 +0200 (CEST)
Alexandre DERUMIER wrote:
iostats are coming from qemu.
what is the output of monitor "info blockstats" for the vm where you don't have
stats ?
Two examples below:
# info blockstats
drive-ide2: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0
On Thu, 29 Sep 2016 07:38:09 +0200 (CEST)
Alexandre DERUMIER wrote:
> iostats are coming from qemu.
>
> what is the output of monitor "info blockstats" for the vm where you don't
> have stats ?
>
>
Two examples below:
# info blockstats
drive-ide2: rd_bytes=0 wr_bytes=0 rd_operations=0 wr_operations=0
iostats are coming from qemu.
what is the output of monitor "info blockstats" for the vm where you don't have
stats ?
- Original Message -
From: "datanom.net"
To: "pve-devel"
Sent: Wednesday, 28 September 2016 22:46:25
Subject: [pve-devel] pve-manager and disk IO monitoring
Hi all,
I have
On Wed, 28 Sep 2016 23:37:40 +0200
Michael Rasmussen wrote:
> My assumption seems to be wrong. Rather it seems that disk IO is not
> monitored if disk type is scsi. virtio seems to work fine.
>
Or more precisely:
Disk type: scsi
Storage: zfs over iscsi (libiscsi)
does not work:
Disk type: virtio
On Wed, 28 Sep 2016 22:46:25 +0200
Michael Rasmussen wrote:
> Hi all,
>
> I have for some time been wondering why disk IO graphs are showing no IO for
> some VMs. I think I have found the cause. It seems that disk IO is
> only monitored on the first disk and I tend to split OS and data on
> different