[ovirt-users] Re: oVirt 4.4.0 Alpha release refresh is now available for testing

2020-03-07 Thread Martin Tessun
Hi Mathieu,

> Hi
> Am Fr., 6. März 2020 um 10:50 Uhr schrieb Sandro Bonazzola <
> sbonazzo(a)redhat.com:
>
> Speaking in terms of RHV (explicitly not oVirt), this means that since 4.3
> has no EUS support (only 4.2), all systems on 4.3 will have to upgrade to
> 4.4 pretty much instantly after RHV 4.4 becomes available in order to
> receive further security updates, if I understand this correctly.

RHV 4.3 will have EUS, precisely because RHV 4.4 moves both the hosts and
the Engine to RHEL 8. RHV 4.3 also has the SAP HANA MultiVM certification
(which RHV 4.4 does not have yet).

So there is no need to upgrade to RHV 4.4 as soon as it is released.

Does that help?
Cheers,
Martin

> (I guess that those with hosted engine will be in an easier position.)
>
> Thanks for the clarification.
>
> Regards
> Mathieu

-- 
Martin Tessun
Principal Technical Product Manager Virtualization, Red Hat GmbH (Munich Office)

mobile  +49.173.6595494
desk    +49.89.205071-107
fax     +49.89.205071-111

GPG Fingerprint: EDBB 7C6A B5FE 9199 B861  478D 3526 E46D 0D8B 44F0

Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, 
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Laurie Krebs, Michael O'Neill, Thomas 
Savage


[ovirt-users] Re: oVirt 4.4.0 Alpha release refresh is now available for testing

2020-03-07 Thread it9exm
Hi
Is this also working on CentOS Stream?


[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-07 Thread Jayme
No worries at all about the length of the email; the details are highly
appreciated. You've given me lots to look into and consider.
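For reference, the gluster volume profiling and the virt volume group that
Strahil mentions further down can be driven roughly like this (the volume name
"myvol" is a placeholder; profiling is most useful while a representative
workload is running):

# gluster volume profile myvol start
# gluster volume profile myvol info
# gluster volume profile myvol stop
# gluster volume set myvol group virt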



On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov 
wrote:

> On March 7, 2020 1:12:58 PM GMT+02:00, Jayme  wrote:
> >Thanks again for the info. You’re probably right about the testing
> >method.
> >Though the reason I’m down this path in the first place is because I’m
> >seeing a problem in real world work loads. Many of my vms are used in
> >development environments where working with small files is common such
> >as
> >npm installs working with large node_module folders, ci/cd doing lots
> >of
> >mixed operations io and compute.
> >
> >I started testing some of these things by comparing side to side with a
> >vm
> >using same specs only difference being gluster vs nfs storage. Nfs
> >backed
> >storage is performing about 3x better real world.
> >
> >Gluster version is stock that comes with 4.3.7. I haven’t attempted
> >updating it outside of official ovirt updates.
> >
> >I’d like to see if I could improve it to handle my workloads better. I
> >also
> >understand that replication adds overhead.
> >
> >I do wonder how much difference in performance there would be with
> >replica
> >3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
> >perhaps not by a considerable difference.
> >
> >I will check into c states as well
> >
> >On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
> >wrote:
> >
> >> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
> >wrote:
> >> >Strahil,
> >> >
> >> >Thanks for your suggestions. The config is pretty standard HCI setup
> >> >with
> >> >cockpit and hosts are oVirt node. XFS was handled by the deployment
> >> >automatically. The gluster volumes were optimized for virt store.
> >> >
> >> >I tried noop on the SSDs, that made zero difference in the tests I
> >was
> >> >running above. I took a look at the random-io-profile and it looks
> >like
> >> >it
> >> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5
> >--
> >> >my
> >> >hosts already appear to have those sysctl values, and by default are
> >> >using virtual-host tuned profile.
> >> >
> >> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
> >> >count=1000 oflag=dsync" on one of your VMs would show for results?
> >> >
> >> >I haven't done much with gluster profiling but will take a look and
> >see
> >> >if
> >> >I can make sense of it. Otherwise, the setup is pretty stock oVirt
> >HCI
> >> >deployment with SSD backed storage and 10Gbe storage network.  I'm
> >not
> >> >coming anywhere close to maxing network throughput.
> >> >
> >> >The NFS export I was testing was an export from a local server
> >> >exporting a
> >> >single SSD (same type as in the oVirt hosts).
> >> >
> >> >I might end up switching storage to NFS and ditching gluster if
> >> >performance
> >> >is really this much better...
> >> >
> >> >
> >> >On Fri, Mar 6, 2020 at 5:06 PM Strahil Nikolov
> >
> >> >wrote:
> >> >
> >> >> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme 
> >> >wrote:
> >> >> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
> >> >> >disks).
> >> >> >Small file performance inner-vm is pretty terrible compared to a
> >> >> >similar
> >> >> >spec'ed VM using NFS mount (10GBe network, SSD disk)
> >> >> >
> >> >> >VM with gluster storage:
> >> >> >
> >> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >> >1000+0 records in
> >> >> >1000+0 records out
> >> >> >512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
> >> >> >
> >> >> >VM with NFS:
> >> >> >
> >> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >> >1000+0 records in
> >> >> >1000+0 records out
> >> >> >512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
> >> >> >
> >> >> >This is a very big difference, 2 seconds to copy 1000 files on
> >NFS
> >> >VM
> >> >> >VS 53
> >> >> >seconds on the other.
> >> >> >
> >> >> >Aside from enabling libgfapi is there anything I can tune on the
> >> >> >gluster or
> >> >> >VM side to improve small file performance? I have seen some
> >guides
> >> >by
> >> >> >Redhat in regards to small file performance but I'm not sure
> >what/if
> >> >> >any of
> >> >> >it applies to oVirt's implementation of gluster in HCI.
> >> >>
> >> >> You can use the rhgs-random-io tuned  profile from
> >> >>
> >> >
> >>
> >
> ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm
> >> >> and try with that on your hosts.
> >> >> In my case, I have  modified  it so it's a mixture between
> >> >rhgs-random-io
> >> >> and the profile for Virtualization Host.
> >> >>
> >> >> Also,ensure that your bricks are  using XFS with relatime/noatime
> >> >mount
> >> >> option and your scheduler for the SSDs is either  'noop' or 'none'
> >> >.The
> >> >> default  I/O scheduler for RHEL7 is deadline which is giving
> >> >preference to
> >> >> reads and  your  workload  is  definitely 'write'.
> >> >>
> >> >> 

[ovirt-users] Re: upgrade from 4.38 to 4.39

2020-03-07 Thread Strahil Nikolov
On March 7, 2020 10:11:13 PM GMT+02:00, eev...@digitaldatatechs.com wrote:
>The upgrade went successfully; however, I have lost the ability to
>migrate VMs manually.
>The engine log:
>2020-03-07 15:05:32,826-05 INFO 
>[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
>(EE-ManagedThreadFactory-engineScheduled-Thread-58) [] Fetched 0 VMs
>from VDS '3e4e1779-045d-4cee-be45-99a80471a1e4'
>2020-03-07 15:05:59,675-05 WARN 
>[org.ovirt.engine.core.bll.SearchQuery] (default task-5)
>[f2f9faa4-41ac-475b-ad57-cb17792d4c77]
>ResourceManager::searchBusinessObjects - Invalid search text - ''VMs :
>id=''
>2020-03-07 15:07:31,640-05 WARN 
>[org.ovirt.engine.core.bll.SearchQuery] (default task-5)
>[edce2f10-d670-4e7e-a69a-937d91882146]
>ResourceManager::searchBusinessObjects - Invalid search text - ''VMs :
>id=''
>If I put a host into maintenance, the VMs migrate automatically. But
>manual migration is broken for some reason.
>Any ideas? 

Check the libvirt log.
Sometimes it gives more clues:

/var/log/libvirt/qemu/.log

Also, check whether the migration can be done via the API.
If the API works, core functionality is OK and only the UI is affected.
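For example, a minimal sketch using curl against the REST API (the engine
FQDN, the credentials and the VM id are placeholders):

# curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
  -X POST -d '<action/>' \
  https://engine.example.com/ovirt-engine/api/vms/VM_ID/migrate

If the engine accepts the action here, the migration path itself is working
and the problem is likely limited to the web UI.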

Best Regards,
Strahil Nikolov


[ovirt-users] Re: upgrade from 4.38 to 4.39

2020-03-07 Thread eevans
The upgrade went successfully; however, I have lost the ability to migrate VMs
manually.
The engine log:
2020-03-07 15:05:32,826-05 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] 
(EE-ManagedThreadFactory-engineScheduled-Thread-58) [] Fetched 0 VMs from VDS 
'3e4e1779-045d-4cee-be45-99a80471a1e4'
2020-03-07 15:05:59,675-05 WARN  [org.ovirt.engine.core.bll.SearchQuery] 
(default task-5) [f2f9faa4-41ac-475b-ad57-cb17792d4c77] 
ResourceManager::searchBusinessObjects - Invalid search text - ''VMs : id=''
2020-03-07 15:07:31,640-05 WARN  [org.ovirt.engine.core.bll.SearchQuery] 
(default task-5) [edce2f10-d670-4e7e-a69a-937d91882146] 
ResourceManager::searchBusinessObjects - Invalid search text - ''VMs : id=''
If I put a host into maintenance, the VMs migrate automatically. But manual
migration is broken for some reason.
Any ideas?


[ovirt-users] Re: upgrade from 4.38 to 4.39

2020-03-07 Thread Strahil Nikolov
On March 7, 2020 6:07:51 PM GMT+02:00, eev...@digitaldatatechs.com wrote:
>I read documentation on upgrade procedures for minor upgrades.
>Before I begin, I wanted to know of any issues or special procedures I
>need to follow.
>Please advise.
>I appreciate any advice or guidance.
>Thank you.
>Eric

Hi Eric,

If you use gluster for your HostedEngine's storage domain (and you didn't
add any VMs there), shut down the VM and make a snapshot before the patch,
just in case.
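If that means a gluster volume snapshot, a minimal sketch would be something
like the following (snapshot and volume names are placeholders, and the bricks
must sit on thin-provisioned LVM for gluster snapshots to work):

# gluster snapshot create engine-pre-upgrade engine
# gluster snapshot list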

Patch the engine, then the hosts.
If you have Intel-based CPU hosts, you might hit some Spectre fixes.
Once a node is patched, do at least a basic test: stop a VM, start a VM,
make a snapshot, delete the snapshot, live-migrate a VM... something like that.

If it's OK, you can proceed with the next node and repeat.
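As a rough outline of the minor-update commands (for a 4.3.x engine on
CentOS/RHEL 7; double-check against the oVirt upgrade guide before running
anything):

On the engine:
# engine-upgrade-check
# yum update ovirt\*setup\*
# engine-setup
# yum update

On each host, one at a time (put it into Maintenance first, or use the host
Upgrade action in the Administration Portal instead):
# yum update
# reboot    (if a new kernel or vdsm was pulled in)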

Best Regards,
Strahil Nikolov


[ovirt-users] upgrade from 4.38 to 4.39

2020-03-07 Thread eevans
I have read the documentation on upgrade procedures for minor upgrades.
Before I begin, I wanted to know about any issues or special procedures I need
to follow.
Please advise.
I appreciate any advice or guidance.
Thank you.
Eric


[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-07 Thread Strahil Nikolov
On March 7, 2020 1:12:58 PM GMT+02:00, Jayme  wrote:
>Thanks again for the info. You’re probably right about the testing
>method.
>Though the reason I’m down this path in the first place is because I’m
>seeing a problem in real world work loads. Many of my vms are used in
>development environments where working with small files is common such
>as
>npm installs working with large node_module folders, ci/cd doing lots
>of
>mixed operations io and compute.
>
>I started testing some of these things by comparing side to side with a
>vm
>using same specs only difference being gluster vs nfs storage. Nfs
>backed
>storage is performing about 3x better real world.
>
>Gluster version is stock that comes with 4.3.7. I haven’t attempted
>updating it outside of official ovirt updates.
>
>I’d like to see if I could improve it to handle my workloads better. I
>also
>understand that replication adds overhead.
>
>I do wonder how much difference in performance there would be with
>replica
>3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
>perhaps not by a considerable difference.
>
>I will check into c states as well
>
>On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
>wrote:
>
>> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
>wrote:
>> >Strahil,
>> >
>> >Thanks for your suggestions. The config is pretty standard HCI setup
>> >with
>> >cockpit and hosts are oVirt node. XFS was handled by the deployment
>> >automatically. The gluster volumes were optimized for virt store.
>> >
>> >I tried noop on the SSDs, that made zero difference in the tests I
>was
>> >running above. I took a look at the random-io-profile and it looks
>like
>> >it
>> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5
>--
>> >my
>> >hosts already appear to have those sysctl values, and by default are
>> >using virtual-host tuned profile.
>> >
>> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
>> >count=1000 oflag=dsync" on one of your VMs would show for results?
>> >
>> >I haven't done much with gluster profiling but will take a look and
>see
>> >if
>> >I can make sense of it. Otherwise, the setup is pretty stock oVirt
>HCI
>> >deployment with SSD backed storage and 10Gbe storage network.  I'm
>not
>> >coming anywhere close to maxing network throughput.
>> >
>> >The NFS export I was testing was an export from a local server
>> >exporting a
>> >single SSD (same type as in the oVirt hosts).
>> >
>> >I might end up switching storage to NFS and ditching gluster if
>> >performance
>> >is really this much better...
>> >
>> >
>> >On Fri, Mar 6, 2020 at 5:06 PM Strahil Nikolov
>
>> >wrote:
>> >
>> >> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme 
>> >wrote:
>> >> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
>> >> >disks).
>> >> >Small file performance inner-vm is pretty terrible compared to a
>> >> >similar
>> >> >spec'ed VM using NFS mount (10GBe network, SSD disk)
>> >> >
>> >> >VM with gluster storage:
>> >> >
>> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
>> >> >1000+0 records in
>> >> >1000+0 records out
>> >> >512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
>> >> >
>> >> >VM with NFS:
>> >> >
>> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
>> >> >1000+0 records in
>> >> >1000+0 records out
>> >> >512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
>> >> >
>> >> >This is a very big difference, 2 seconds to copy 1000 files on
>NFS
>> >VM
>> >> >VS 53
>> >> >seconds on the other.
>> >> >
>> >> >Aside from enabling libgfapi is there anything I can tune on the
>> >> >gluster or
>> >> >VM side to improve small file performance? I have seen some
>guides
>> >by
>> >> >Redhat in regards to small file performance but I'm not sure
>what/if
>> >> >any of
>> >> >it applies to oVirt's implementation of gluster in HCI.
>> >>
>> >> You can use the rhgs-random-io tuned  profile from
>> >>
>> >
>>
>ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm
>> >> and try with that on your hosts.
>> >> In my case, I have  modified  it so it's a mixture between
>> >rhgs-random-io
>> >> and the profile for Virtualization Host.
>> >>
>> >> Also,ensure that your bricks are  using XFS with relatime/noatime
>> >mount
>> >> option and your scheduler for the SSDs is either  'noop' or 'none'
>> >.The
>> >> default  I/O scheduler for RHEL7 is deadline which is giving
>> >preference to
>> >> reads and  your  workload  is  definitely 'write'.
>> >>
>> >> Ensure that the virt settings are  enabled for your gluster
>volumes:
>> >> 'gluster volume set  group virt'
>> >>
>> >> Also, are you running  on fully allocated disks for the VM or you
>> >started
>> >> thin ?
>> >> I'm asking as creation of new shards  at gluster  level is a slow
>> >task.
>> >>
>> >> Have you checked  gluster  profiling the volume?  It can clarify
>what
>> >is
>> >> going on.
>> >>
>> >>
>> >> Also are you comparing apples to apples ?
>> >> For 

[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-07 Thread Jayme
Thanks again for the info. You're probably right about the testing method.
Though the reason I'm down this path in the first place is because I'm
seeing a problem in real-world workloads. Many of my VMs are used in
development environments where working with small files is common, such as
npm installs working with large node_modules folders, and CI/CD doing lots of
mixed I/O and compute operations.

I started testing some of these things by comparing side by side with a VM
using the same specs, the only difference being Gluster vs NFS storage.
NFS-backed storage is performing about 3x better in real-world use.

The Gluster version is the stock one that comes with 4.3.7. I haven't attempted
updating it outside of official oVirt updates.

I'd like to see if I could improve it to handle my workloads better. I also
understand that replication adds overhead.

I do wonder how much difference in performance there would be with replica
3 vs replica 3 arbiter. I'd assume an arbiter setup would be faster, but
perhaps not by a considerable difference.

I will check into C-states as well.
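A few quick read-only checks along those lines (the device name sdb is a
placeholder; paths can differ on oVirt Node):

# cat /sys/block/sdb/queue/scheduler
# tuned-adm active
# sysctl vm.dirty_background_ratio vm.dirty_ratio
# cat /sys/module/intel_idle/parameters/max_cstate
# cpupower idle-info

Limiting C-states typically means adding something like
processor.max_cstate=1 intel_idle.max_cstate=1 to the kernel command line and
rebooting, so it is worth benchmarking before and after.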

On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
wrote:

> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme  wrote:
> >Strahil,
> >
> >Thanks for your suggestions. The config is pretty standard HCI setup
> >with
> >cockpit and hosts are oVirt node. XFS was handled by the deployment
> >automatically. The gluster volumes were optimized for virt store.
> >
> >I tried noop on the SSDs, that made zero difference in the tests I was
> >running above. I took a look at the random-io-profile and it looks like
> >it
> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5 --
> >my
> >hosts already appear to have those sysctl values, and by default are
> >using virtual-host tuned profile.
> >
> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
> >count=1000 oflag=dsync" on one of your VMs would show for results?
> >
> >I haven't done much with gluster profiling but will take a look and see
> >if
> >I can make sense of it. Otherwise, the setup is pretty stock oVirt HCI
> >deployment with SSD backed storage and 10Gbe storage network.  I'm not
> >coming anywhere close to maxing network throughput.
> >
> >The NFS export I was testing was an export from a local server
> >exporting a
> >single SSD (same type as in the oVirt hosts).
> >
> >I might end up switching storage to NFS and ditching gluster if
> >performance
> >is really this much better...
> >
> >
> >On Fri, Mar 6, 2020 at 5:06 PM Strahil Nikolov 
> >wrote:
> >
> >> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme 
> >wrote:
> >> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
> >> >disks).
> >> >Small file performance inner-vm is pretty terrible compared to a
> >> >similar
> >> >spec'ed VM using NFS mount (10GBe network, SSD disk)
> >> >
> >> >VM with gluster storage:
> >> >
> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >1000+0 records in
> >> >1000+0 records out
> >> >512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
> >> >
> >> >VM with NFS:
> >> >
> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
> >> >1000+0 records in
> >> >1000+0 records out
> >> >512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
> >> >
> >> >This is a very big difference, 2 seconds to copy 1000 files on NFS
> >VM
> >> >VS 53
> >> >seconds on the other.
> >> >
> >> >Aside from enabling libgfapi is there anything I can tune on the
> >> >gluster or
> >> >VM side to improve small file performance? I have seen some guides
> >by
> >> >Redhat in regards to small file performance but I'm not sure what/if
> >> >any of
> >> >it applies to oVirt's implementation of gluster in HCI.
> >>
> >> You can use the rhgs-random-io tuned  profile from
> >>
> >
> ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm
> >> and try with that on your hosts.
> >> In my case, I have  modified  it so it's a mixture between
> >rhgs-random-io
> >> and the profile for Virtualization Host.
> >>
> >> Also,ensure that your bricks are  using XFS with relatime/noatime
> >mount
> >> option and your scheduler for the SSDs is either  'noop' or 'none'
> >.The
> >> default  I/O scheduler for RHEL7 is deadline which is giving
> >preference to
> >> reads and  your  workload  is  definitely 'write'.
> >>
> >> Ensure that the virt settings are  enabled for your gluster volumes:
> >> 'gluster volume set  group virt'
> >>
> >> Also, are you running  on fully allocated disks for the VM or you
> >started
> >> thin ?
> >> I'm asking as creation of new shards  at gluster  level is a slow
> >task.
> >>
> >> Have you checked  gluster  profiling the volume?  It can clarify what
> >is
> >> going on.
> >>
> >>
> >> Also are you comparing apples to apples ?
> >> For example, 1 ssd  mounted  and exported  as NFS and a replica 3
> >volume
> >> of the same type of ssd ? If not,  the NFS can have more iops due to
> >> multiple disks behind it, while Gluster has to write the