[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-09-18 Thread Karli Sjöberg
fre 2019-09-13 klockan 12:52 + skrev Mikael Öhman:
> Perhaps it will help future sysadmins learn from my mistake.
> I also saw very poor upload speeds (~30MB/s) 

Bits and bytes man :)

/K

> no matter what I tried. I went through the whole route with unix-
> sockets and whatnot.
> 
> But, in the end, it just turned out that the glusterfs itself was the
> bottleneck; abysmal performance for small block sizes.
> 
> I found the list of suggested performance tweaks that RHEL suggests.
> In particular, it was the "network.remote-dio=on" setting that made
> all the difference. Almost 10x faster.
> 
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/configuring_red_hat_enterprise_virtualization_with_red_hat_gluster_storage/chap-hosting_virtual_machine_images_on_red_hat_storage_volumes

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/SWMIXKECCTJCYLJMIJFSSWEM6BVZV4NQ/


[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-09-15 Thread Nir Soffer
On Sun, Sep 15, 2019 at 8:37 PM Mikael Öhman  wrote:

> > What do you mean by "small block sizes"?
>
> inside a VM, or directly on the mounted glusterfs;
> dd if=/dev/zero of=tmpfile  bs=1M count=100 oflag=direct
> of course, a terrible way to write data; but also things like compiling
> software inside one of the VMs was terribly slow, 5-10x slower than bare
> metal, with the CPUs almost entirely idle.
>
> Uploading disk images never got above 30MB/s.
> (and I did try all options I could find; using upload_disk.py on one of
> the hosts, even through a unix socket or with -d option, tweaking buffer
> size, all of which made no difference).
> Adding an NFS volume and uploading to it I reach +200MB/s.
>
> I tried tuning a few parameters on glusterfs but saw no improvements until
> I got to network.remote-dio, which made everything listed above really fast.
>
> > Note that network.remote-dio is not the recommended configuration
> > for ovirt, in particular on a hyperconverged setup, where it can be
> > harmful by delaying sanlock I/O.
> >
> >
> https://github.com/oVirt/ovirt-site/blob/4a9b28aac48870343c5ea4d1e83a63c1.
> ..
> > (Patch in discussion)
>
> Oh, I had seen this page, thanks. Is "remote-dio=enabled" harmful as in
> things breaking, or just worse performance?
>

I think the issue is that writes to storage are delayed until you call
fsync() on the node. The kernel may then try to flush too much data at once,
which can cause delays in other processes.

Many years ago we had such issues causing sanlock failures in renewing
leases, which can end in terminating vdsm. This is the reason we always use
direct I/O when copying disks. If you have a hyperconverged setup and use
network.remote-dio, you may have the same problems.

Sahina worked on a bug that was related to network.remote-dio; I hope she
can add more details.


> I was a bit reluctant to turn it on, but after seeing it was part of the
> virt group I thought it must have been safe.
>

I think it should be safe for general virt usage, but you may need to tweak
some host settings to avoid large delays when using fsync().
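The thread does not say which host settings are meant; one plausible knob,
purely as an assumption on my part, is capping the kernel's dirty page cache
so a later fsync() never has to flush gigabytes in one go:

```shell
# Hypothetical sysctl tuning (values are illustrative, not recommendations):
# limit how much dirty data the page cache may accumulate, so fsync() flushes
# stay short and latency-sensitive I/O such as sanlock's is not starved.
sysctl -w vm.dirty_background_bytes=$((64 * 1024 * 1024))   # start background writeback at 64 MiB
sysctl -w vm.dirty_bytes=$((256 * 1024 * 1024))             # throttle writers beyond 256 MiB dirty
```

To persist across reboots the same keys would go in /etc/sysctl.d/.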


> Perhaps some of the other options like "performance.strict-o-direct on"
> would solve my performance issues in a nicer way (I will test it out first
> thing on Monday).
>

This is not likely to improve performance, but it seems to be required if
you use network.remote-dio = off. Without it, direct I/O does not behave in
a predictable way and may cause failures in qemu, vdsm and ovirt-imageio.

Nir


[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-09-15 Thread Mikael Öhman
> What do you mean by "small block sizes"?

inside a VM, or directly on the mounted glusterfs;
dd if=/dev/zero of=tmpfile  bs=1M count=100 oflag=direct
of course, a terrible way to write data; but also things like compiling
software inside one of the VMs was terribly slow, 5-10x slower than bare
metal, with the CPUs almost entirely idle.

Uploading disk images never got above 30MB/s. 
(and I did try all options I could find; using upload_disk.py on one of the 
hosts, even through a unix socket or with -d option, tweaking buffer size, all 
of which made no difference).
Adding an NFS volume and uploading to it I reach +200MB/s.
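The ad-hoc dd test above can be turned into a small block-size sweep (a
sketch only; TARGET is a placeholder for a directory on the mounted gluster
volume, and oflag=direct needs a filesystem that supports O_DIRECT):

```shell
# Sketch: compare direct-I/O write throughput at several block sizes.
# Small block sizes are where gluster without remote-dio tends to collapse.
TARGET="${TARGET:-.}"
for bs in 4k 64k 1M; do
    printf 'bs=%s: ' "$bs"
    # dd reports throughput on its last status line
    dd if=/dev/zero of="$TARGET/dd-bench.tmp" bs="$bs" count=64 oflag=direct 2>&1 | tail -n 1
done
rm -f "$TARGET/dd-bench.tmp"
```

Running the same sweep on the NFS volume gives a like-for-like comparison.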

I tried tuning a few parameters on glusterfs but saw no improvements until I 
got to network.remote-dio, which made everything listed above really fast.

> Note that network.remote-dio is not the recommended configuration
> for ovirt, in particular on a hyperconverged setup, where it can be harmful
> by delaying sanlock I/O.
> 
> https://github.com/oVirt/ovirt-site/blob/4a9b28aac48870343c5ea4d1e83a63c1...
> (Patch in discussion)

Oh, I had seen this page, thanks. Is "remote-dio=enabled" harmful as in things 
breaking, or just worse performance?
I was a bit reluctant to turn it on, but after seeing it was part of the virt 
group I thought it must have been safe.
Perhaps some of the other options like "performance.strict-o-direct on" would 
solve my performance issues in a nicer way (I will test it out first thing on 
Monday).

Thanks (I'm not much of a filesystem guy, thanks for putting up with my 
ignorance)


[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-09-15 Thread Nir Soffer
On Fri, Sep 13, 2019 at 3:54 PM Mikael Öhman  wrote:

> Perhaps it will help future sysadmins learn from my mistake.
> I also saw very poor upload speeds (~30MB/s) no matter what I tried. I
> went through the whole route with unix-sockets and whatnot.
>
> But, in the end, it just turned out that the glusterfs itself was the
> bottleneck; abysmal performance for small block sizes.
>

What do you mean by "small block sizes"?

> I found the list of performance tweaks that Red Hat suggests. In
> particular, it was the "network.remote-dio=on" setting that made all the
> difference. Almost 10x faster.
>

Note that network.remote-dio is not the recommended configuration
for ovirt, in particular on a hyperconverged setup, where it can be harmful
by delaying sanlock I/O.

https://github.com/oVirt/ovirt-site/blob/4a9b28aac48870343c5ea4d1e83a63c10a1cbfa2/source/documentation/admin-guide/chap-Working_with_Gluster_Storage.md#options-set-on-gluster-storage-volumes-to-store-virtual-machine-images
(Patch in discussion)

Nir


[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-09-13 Thread Mikael Öhman
Perhaps it will help future sysadmins learn from my mistake.
I also saw very poor upload speeds (~30MB/s) no matter what I tried. I went 
through the whole route with unix-sockets and whatnot.

But, in the end, it just turned out that the glusterfs itself was the 
bottleneck; abysmal performance for small block sizes.

I found the list of performance tweaks that Red Hat suggests. In particular, 
it was the "network.remote-dio=on" setting that made all the difference. 
Almost 10x faster.

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/configuring_red_hat_enterprise_virtualization_with_red_hat_gluster_storage/chap-hosting_virtual_machine_images_on_red_hat_storage_volumes
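For reference, options like these are applied per volume with the gluster
CLI (a sketch; "myvol" is a placeholder volume name, and as the rest of this
thread discusses, network.remote-dio has trade-offs on hyperconverged
setups):

```shell
# Sketch with a placeholder volume name ("myvol"): apply the option that made
# the difference here, then read back what is actually configured.
gluster volume set myvol network.remote-dio on
gluster volume get myvol network.remote-dio
```

The same syntax covers the other virt-oriented options mentioned later in
the thread, such as performance.strict-o-direct.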


[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-06-05 Thread Nir Soffer
On Wed, Apr 24, 2019 at 12:13 AM Dev Ops  wrote:

> I am working on integrating a backup solution for our ovirt environment
> and having issues with the time it takes to back up the VMs. This backup
> solution is simply taking a snapshot and making a clone and backing the
> clone up to a backup server.
>
> A VM that is 100 gig takes 52 minutes to back up. The same VM doing a file
> backup using the same product, and bypassing their rhv plugin, takes 14
> minutes. So the throughput is there, but the ovirt-imageio-proxy process
> seems to be what manages how images are uploaded, and it is officially my
> bottleneck. Load is not high on the engine or kvm hosts.

> I had bumped up the Upload image size from 100MB to 10gig weeks ago and
> that didn't seem to help.
>
> [root@blah-lab-engine ~]# engine-config -a |grep Upload
> UploadImageChunkSizeKB: 1024 version: general
>

This will not help, 100 MiB should be big enough.

> [root@bgl-vms-engine ~]# rpm -qa |grep ovirt-image
> ovirt-imageio-proxy-1.4.6-1.el7.noarch
> ovirt-imageio-common-1.4.6-1.el7.x86_64
> ovirt-imageio-proxy-setup-1.4.6-1.el7.noarch
>
> I have seen bugs reported to redhat about this but I am running above the
> affected releases.
>
> engine software is 4.2.8.2-1.el7
>
> Any idea what we can tweak to open up this bottleneck?
>

The proxy is not meant to be used for backup and restore. It was created to
allow easy upload or download from the UI.

For backup applications you should use upload and download directly from the
host.

When you create an image transfer, you should use the image transfer
"transfer_url" instead of the "proxy_url".

When you upload or download directly from the host, you can take advantage
of keep-alive connections and fast zero support.

If your upload or download program is running on the host, you can take
advantage of unix socket support for even faster transfers.

For upload, you should use the ovirt_imageio_common.client module, using all
available features:
https://github.com/oVirt/ovirt-imageio/blob/master/examples/upload

The client does not provide an easy-to-use helper for download yet. A future
version will provide one.

Please see the documentation:
http://ovirt.github.io/ovirt-imageio/random-io.html
http://ovirt.github.io/ovirt-imageio/unix-socket.html

And SDK examples:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk.py

If direct download is still too slow, please file a bug and provide the
imageio daemon logs.

Nir


[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-06-04 Thread Punaatua PK
ovirt-engine-4.2.8.2-1.el7.noarch
ovirt-imageio-proxy-1.4.6-1.el7.noarch
ovirt-imageio-daemon-1.4.6-1.el7.noarch


[ovirt-users] Re: ovirt-imagio-proxy upload speed slow

2019-06-04 Thread Punaatua PK
Hello,

We have the same problem. It seems that ovirt-imageio-proxy is the
bottleneck in our setup. We use vProtect to back up VMs via the API.

We asked the vProtect team for support and are currently tuning some kernel
parameters.

We have a full 10G network in our setup.

Here are the kernel parameters we set. Did you manage to make any progress
on the upload speed?

# Maximum receive socket buffer size
net.core.rmem_max = 134217728 

# Maximum send socket buffer size
net.core.wmem_max = 134217728 

# Minimum, initial and max TCP Receive buffer size in Bytes
net.ipv4.tcp_rmem = 4096 87380 134217728 

# Minimum, initial and max buffer space allocated
net.ipv4.tcp_wmem = 4096 65536 134217728 

# Maximum number of packets queued on the input side
net.core.netdev_max_backlog = 30 

# Auto tuning
net.ipv4.tcp_moderate_rcvbuf = 1

# Don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1

# The Hamilton TCP (H-TCP) algorithm is a packet-loss-based congestion
# control that is more aggressive in pushing up to the maximum bandwidth
# (total BDP) and favors flows with lower RTT.
net.ipv4.tcp_congestion_control=htcp

# If you are using jumbo frames set this to avoid MTU black holes.
net.ipv4.tcp_mtu_probing = 1