[ovirt-users] Re: Slow vm transfer speed from vmware esxi 5

2018-09-14 Thread Bernhard Dick

Hi,

it took some time to answer because of other things, but now I have had the 
time to look into it.


Am 21.08.2018 um 17:02 schrieb Michal Skrivanek:

[...]

Hi Bernhard,

With the latest version of the ovirt-imageio and the v2v we are 
performing quite nicely, and without specifying


the difference is that with the integrated v2v you don’t use any of 
that. It’s going through the vCenter server, which is the major slowdown.
With 10 MB/s I do not expect that the bottleneck is on our side in any way. 
After all, the integrated v2v writes locally, directly to the prepared target 
volume, so it’s probably even faster than imageio.


the “new” virt-v2v -o rhv-upload method is not integrated into the GUI, but 
it supports VDDK and SSH access methods, both of which should be faster

you could try that, but you would need to run it from the command line
I first tried the SSH method, which already improved the speed. Afterwards I 
did some more experiments and ended up using vmfs-tools to mount the VMware 
datastore directly, and I now see transfer speeds of ~50-60 MB/s when 
transferring to an oVirt export domain. This seems to be the maximum the 
system in use can handle with the vmfs-fuse approach. That is fast enough in 
my case (and a huge improvement).
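The rough shape of this, as a sketch only (the device, mount point, datastore 
path and export-domain address below are placeholders, not my actual setup):

   # mount the VMFS LUN read-only via vmfs-tools (vmfs6-tools for VMFS6)
   vmfs-fuse /dev/mapper/vmware-lun /mnt/vmfs
   # convert the guest straight from its .vmx into the oVirt export domain
   virt-v2v -i vmx /mnt/vmfs/datastore1/guest/guest.vmx \
       -o rhv -os nfs.example.com:/export/ovirt-export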


However, I cannot use the rhv-upload method, because my storage domain is 
iSCSI and I get an error that sparse file types are not allowed (as described 
at https://bugzilla.redhat.com/show_bug.cgi?id=1600547 ). The workaround from 
the bug does not help either, because then I immediately get an error message 
saying that I would need to use -oa sparse when using rhv-upload. This happens 
with the development version 1.39.9 of libguestfs as well as with the git 
master branch. Do you have any advice on how to fix this, or which version to 
use?


  Regards
Bernhard

https://github.com/oVirt/ovirt-ansible-v2v-conversion-host/ might help you 
use it a bit more conveniently


Thanks,
michal

number I can tell you that the weakest link is the read rate from the 
VMware datastore. In our lab we roughly peak at ~40 MiB/s reading a single 
VM, and the rest of our components (after the read from the VMware 
datastore) have no problem dealing with that, i.e. buffering -> converting 
-> writing to imageio -> writing to storage


So, in short, examine the read rate from the VM datastore, let us know, 
and please specify the versions you are using.



--
Dipl.-Inf. Bernhard Dick
Auf dem Anger 24
DE-46485 Wesel
www.BernhardDick.de

jabber: bernh...@jabber.bdick.de

Tel : +49.2812068620
Mobil : +49.1747607927
FAX : +49.2812068621
USt-IdNr.: DE274728845
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CSIMWXZL744WEMIBPRWNZHLQLYCYCMHZ/


[ovirt-users] moving disks around gluster domain is failing

2018-09-14 Thread g . vasilopoulos
Moving a disk from one gluster domain to another fails, whether the VM is 
running or down.
It strikes me that it says: File 
"/usr/lib64/python2.7/site-packages/libvirt.py", line 718, in blockCopy
if ret == -1: raise libvirtError ('virDomainBlockCopy() failed', dom=self)
I am sending the relevant piece of the log.

But it should be a file copy since it's gluster, am I right?
The gluster volumes are on thick LVM and have different shard sizes.

2018-09-14 15:05:53,325+0300 ERROR (jsonrpc/2) [virt.vm] 
(vmId='f90f6533-9d71-4102-9cd6-2d9960a4e585') Unable to start replication for 
sda to {u'domainID': u'd07231ca-89b8-490a
-819d-8542e1eaee19', 'volumeInfo': {'path': 
u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-48d446b3eba6/5716acc8-7ee7-4235-aad6-345e565f3073',
 'type
': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': 
'10.252.166.129'}, {'port': '0', 'transport': 'tcp', 'name': '10.252.166.130'}, 
{'port': '0', 'transport': 'tc
p', 'name': '10.252.166.128'}], 'protocol': 'gluster'}, 'format': 'cow', 
u'poolID': u'90946184-a7bd-11e8-950b-00163e11b631', u'device': 'disk', 
'protocol': 'gluster', 'propagat
eErrors': 'off', u'diskType': u'network', 'cache': 'none', u'volumeID': 
u'5716acc8-7ee7-4235-aad6-345e565f3073', u'imageID': 
u'3d95e237-441c-4b41-b823-48d446b3eba6', 'hosts': [
{'port': '0', 'transport': 'tcp', 'name': '10.252.166.129'}], 'path': 
u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-48d446b3eba6/5716acc8-7ee7-4235
-aad6-345e565f3073', 'volumeChain': [{'domainID': 
u'd07231ca-89b8-490a-819d-8542e1eaee19', 'leaseOffset': 0, 'path': 
u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237
-441c-4b41-b823-48d446b3eba6/26214e9d-1126-42a0-85e3-c21f182b582f', 'volumeID': 
u'26214e9d-1126-42a0-85e3-c21f182b582f', 'leasePath': 
u'/rhev/data-center/mnt/glusterSD/10.252.1
66.129:_vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-48d446b3eba6/26214e9d-1126-42a0-85e3-c21f182b582f.lease',
 'imageID': u'3d95e237-441c-4b41-b823-
48d446b3eba6'}, {'domainID': u'd07231ca-89b8-490a-819d-8542e1eaee19', 
'leaseOffset': 0, 'path': 
u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-48d44
6b3eba6/2c6c8b45-7d70-4eff-9ea5-f2a377f0ef64', 'volumeID': 
u'2c6c8b45-7d70-4eff-9ea5-f2a377f0ef64', 'leasePath': 
u'/rhev/data-center/mnt/glusterSD/10.252.166.129:_vol3/d07231ca
-89b8-490a-819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-48d446b3eba6/2c6c8b45-7d70-4eff-9ea5-f2a377f0ef64.lease',
 'imageID': u'3d95e237-441c-4b41-b823-48d446b3eba6'}, {'dom
ainID': u'd07231ca-89b8-490a-819d-8542e1eaee19', 'leaseOffset': 0, 'path': 
u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-48d446b3eba6/5716acc8-7ee7
-4235-aad6-345e565f3073', 'volumeID': u'5716acc8-7ee7-4235-aad6-345e565f3073', 
'leasePath': 
u'/rhev/data-center/mnt/glusterSD/10.252.166.129:_vol3/d07231ca-89b8-490a-819d-8542e
1eaee19/images/3d95e237-441c-4b41-b823-48d446b3eba6/5716acc8-7ee7-4235-aad6-345e565f3073.lease',
 'imageID': u'3d95e237-441c-4b41-b823-48d446b3eba6'}, {'domainID': u'd07231ca-89
b8-490a-819d-8542e1eaee19', 'leaseOffset': 0, 'path': 
u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-48d446b3eba6/579e0033-4b94-4675-af78-d017ed2698
e9', 'volumeID': u'579e0033-4b94-4675-af78-d017ed2698e9', 'leasePath': 
u'/rhev/data-center/mnt/glusterSD/10.252.166.129:_vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-48d446b3eba6/579e0033-4b94-4675-af78-d017ed2698e9.lease',
 'imageID': u'3d95e237-441c-4b41-b823-48d446b3eba6'}]} (vm:4710)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4704, in 
diskReplicateStart
self._startDriveReplication(drive)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4843, in 
_startDriveReplication
self._dom.blockCopy(drive.name, destxml, flags=flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", 
line 130, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in 
wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 718, in blockCopy
if ret == -1: raise libvirtError ('virDomainBlockCopy() failed', dom=self)
libvirtError: argument unsupported: non-file destination not supported yet
2018-09-14 15:05:53,328+0300 INFO  (jsonrpc/2) [api.virt] FINISH 
diskReplicateStart return={'status': {'message': 'Drive replication error', 
'code': 55}} from=:::10.252.162.200,38050, 
flow_id=faecfa3d-a7e1-4882-a815-dbadc1f30813, 
vmId=f90f6533-9d71-4102-9cd6-2d9960a4e585 (api:52)
2018-09-14 15:05:53,328+0300 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call 
VM.diskReplicateStart failed (error 55) in 0.11 seconds (__init__:573)
2018-09-14 

[ovirt-users] Re: NFS Multipathing

2018-09-14 Thread spfma . tech
Hi, it should be possible, as oVirt is able to support NFS 4.1. I have a 
Synology NAS which also supports this version of the protocol, but I never 
found the time to set this up and test it until now. Regards

On 30-Aug-2018 12:16:32 +0200, xrs...@xrs444.net wrote: 
  Hello all,
  I've been looking around but I've not found anything definitive on whether 
oVirt can do NFS multipathing, and if so how? Does anyone have any good 
how-tos or configuration guides?
  Thanks, Thomas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PMJP2W2ZBWEIBZ3SG3QYUOYKXHCLBCFD/


[ovirt-users] HowTo trace errors or issues ?

2018-09-14 Thread Sven Achtelik
Hi All,

I would like to understand how to find the point of failure, starting from the 
event shown in the GUI. With that event I get a correlation ID; how would I 
trace all the following tasks, actions or events that are connected to that 
correlation ID?

Is it something like CorrelationID -> TaskID -> CommandID, or how do things 
connect? Is there some documentation I could look at?
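For illustration, grepping the logs for the ID by hand looks roughly like this 
(the ID is a placeholder; engine.log shows it in square brackets, vdsm.log as 
flow_id= on the host that ran the command):

   grep '<correlation-id>' /var/log/ovirt-engine/engine.log
   grep 'flow_id=<correlation-id>' /var/log/vdsm/vdsm.log

But I would like to understand the intended way to follow it across components.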

Thank you,
Sven
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEN6UMFTPMRV6WHGJYRQ5C3FPNTULSLR/


[ovirt-users] Re: HostedEngine disk size not updated after manually extending it

2018-09-14 Thread Pötter , Ulrich
Hi,

thanks for your answer. I tried letting it restart automatically but it makes 
no difference.

- vdsm-client Volume getInfo shows the correct values
- manually looking at the metadata file on the storage domain shows the 
  correct values
- ssh into the machine and lsblk shows the correct value
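For completeness, the getInfo call above is roughly of this form (the UUIDs 
are placeholders):

   vdsm-client Volume getInfo storagepoolID=<pool-uuid> \
       storagedomainID=<domain-uuid> imageID=<image-uuid> volumeID=<volume-uuid>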

Only the oVirt GUI (Virtual Machines -> HostedEngine -> Disks) shows the wrong 
(= the old) virtual size. That's what I'd like to fix. Everything else works 
fine.
Isn't there any command or anything else that I can do to force the engine to 
refresh the metadata?

Regards,
Ulrich


From: Gianluca Cecchi [gianluca.cec...@gmail.com]
Sent: Friday, 14 September 2018 09:05
To: Pötter, Ulrich
Cc: users
Subject: Re: [ovirt-users] HostedEngine disk size not updated after manually 
extending it

On Thu, Sep 13, 2018 at 5:19 PM Pötter, Ulrich 
<ulrich.poet...@hhi.fraunhofer.de> wrote:


This worked. The VM now has a larger disk and the metadata on the storage 
domain shows the new value (vdsm-client Volume getInfo ... too).
Unfortunately the virtual size of the disk shown in the oVirt GUI is still the 
old value.
How can I get the engine to update those values?

Just a guess, I have not tried it myself.
Have you tried putting the environment in global maintenance, shutting down 
the hosted engine VM and then exiting global maintenance, to see whether the 
engine detects its own new disk setup when it restarts?

After your change, restarting the engine VM and verifying that all is OK is 
definitely worth doing anyway.
And in my opinion this should first be done on a test environment where you 
can tolerate possible problems.
HIH,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KI7M446WP67KEAKEB26JLFAZFLY7OBRO/


[ovirt-users] Re: Failed to synchronize networks of Provider ovirt-provider-ovn

2018-09-14 Thread Dominik Holler
On Thu, 13 Sep 2018 11:08:28 +0200
Robert O'Kane  wrote:

> Hello,
> 
> I have a simmilar issue with ovirt-provider-ovn.
> 
> But in my config I see:
> 
> ovirt-sso-client-secret=to_be_set
> 
> Where do I find / how do I generate this token?
> 

Usually engine-setup will generate an appropriate secret automatically in
/etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf.

If you want to (or have to) generate the client secret manually, follow
these steps:

1. Run /usr/share/ovirt-engine/bin/ovirt-register-sso-client-tool.sh 
   with
   Client Id: ovirt-provider-ovn
   Client CA Certificate File Location: /etc/pki/ovirt-engine/certs/engine.cer
   Callback Prefix URL: https://:443/ovirt-engine/
2. Use the SSO_CLIENT_SECRET from the outfile produced by the previous
   command in
   /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
3. Restart ovirt-engine and ovirt-provider-ovn
   systemctl restart ovirt-engine
   systemctl restart ovirt-provider-ovn
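
The relevant part of that file should then look roughly like this (a sketch; 
the secret is the value taken from the outfile):

   [OVIRT]
   ovirt-sso-client-id=ovirt-provider-ovn
   ovirt-sso-client-secret=<SSO_CLIENT_SECRET from the outfile>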


> Thanks,
> 
> Robert O'Kane
> 
> 
> 
> On 09/12/2018 04:42 PM, m...@set-pro.net wrote:
> > I have the same issue with the OVN provider and SSL, but certificate
> > changes did not help to resolve it. I followed
> > https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/administration_guide/appe-red_hat_enterprise_virtualization_and_ssl#Replacing_the_Manager_SSL_Certificate
> > to replace my cert, and after a reboot I get this error.
> > ovirt-ca-file= is the same SSL file that the WebUI uses.
> > I restarted ovirt-provider-ovn, I restarted the engine, I restarted
> > everything I could restart. Nothing helps...
> > 
> > Logs below.
> > 
> > [root@engine ~]# tail -n 50 /var/log/ovirt-provider-ovn.log
> > 2018-09-12 14:10:23,828 root [SSL: CERTIFICATE_VERIFY_FAILED]
> > certificate verify failed (_ssl.c:579) Traceback (most recent call
> > last): File
> > "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 133,
> > in _handle_request method, path_parts, content File
> > "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line
> > 175, in handle_request return self.call_response_handler(handler,
> > content, parameters) File
> > "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in
> > call_response_handler return response_handler(content, parameters)
> > File
> > "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py",
> > line 62, in post_tokens user_password=user_password) File
> > "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in
> > create_token return auth.core.plugin.create_token(user_at_domain,
> > user_password) File
> > "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line
> > 48, in create_token timeout=self._timeout()) File
> > "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 75,
> > in create_token username, password, engine_url, ca_file, timeout)
> > File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py",
> > line 91, in _get_sso_token timeout=timeout File
> > "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54,
> > in wrapper response = func(*args, **kwargs) File
> > "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47,
> > in wrapper raise BadGateway(e) BadGateway: [SSL:
> > CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
> > 
> > 
> > [root@engine ~]# tail -n 20 /var/log/ovirt-engine/engine.log
> > 2018-09-12 14:10:23,773+03 INFO
> > [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> > (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
> > Acquired to object
> > 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
> > sharedLocks=''}' 2018-09-12 14:10:23,778+03 INFO
> > [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> > (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
> > Running command: SyncNetworkProviderCommand internal: true.
> > 2018-09-12 14:10:23,836+03 ERROR
> > [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> > (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
> > Command
> > 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
> > failed: EngineException: (Failed with error Bad Gateway and code
> > 5050) 2018-09-12 14:10:23,837+03 INFO
> > [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> > (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
> > freed to object
> > 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
> > sharedLocks=''}' 2018-09-12 14:14:12,477+03 INFO
> > [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default
> > task-6) [] User admin@internal successfully logged in with scopes:
> > ovirt-app-admin ovirt-app-api ovirt-app-portal
> > ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all
> > ovirt-ext=token-info:authz-search
> > ovirt-ext=token-info:public-authz-search
> > ovirt-ext=token-info:validate 

[ovirt-users] Re: VM poor iops

2018-09-14 Thread Paolo Margara
Hi,

but isn't performance.strict-o-direct one of the options enabled by
gdeploy during installation, because it's supposed to give some sort of
benefit?

Paolo


Il 14/09/2018 11:34, Leo David ha scritto:
> performance.strict-o-direct:  on
> This was the bloody option that created the bottleneck ! It was ON.
> So now I get an average of 17k random writes,  which is not bad at
> all. Below,  the volume options that worked for me:
>
> performance.strict-write-ordering: off
> performance.strict-o-direct: off
> server.event-threads: 4
> client.event-threads: 4
> performance.read-ahead: off
> network.ping-timeout: 30
> performance.quick-read: off
> cluster.eager-lock: enable
> performance.stat-prefetch: on
> performance.low-prio-threads: 32
> network.remote-dio: off
> user.cifs: off
> performance.io-cache: off
> server.allow-insecure: on
> features.shard: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> nfs.disable: on
>
> If any other tweaks can be done,  please let me know.
> Thank you !
>
> Leo
>
>
> On Fri, Sep 14, 2018 at 12:01 PM, Leo David  > wrote:
>
> Hi Everyone,
> So i have decided to take out all of the gluster volume custom
> options,  and add them one by one while activating/deactivating
> the storage domain & rebooting one vm after each  added option :(
>
> The default options that giving bad iops ( ~1-2k) performance are :
>
> performance.stat-prefetch on
> cluster.eager-lock enable
> performance.io-cache off
> performance.read-ahead off
> performance.quick-read off
> user.cifs off
> network.ping-timeout 30
> network.remote-dio off
> performance.strict-o-direct on
> performance.low-prio-threads 32
>
> After adding only:
>
>
> server.allow-insecure on
> features.shard on
> storage.owner-gid 36
> storage.owner-uid 36
> transport.address-family inet
> nfs.disable on
>
> The performance increased to 7k-10k iops.
>
> The problem is that i don't know if that's sufficient ( maybe it
> can be more improved ) , or even worse than this there might be
> chances to into different volume issues by taking out some volume
> really needed options... 
>
> If would have handy the default options that are applied to
> volumes as optimization in a 3way replica, I think that might help..
>
> Any thoughts ?
>
> Thank you very much !
>
>
> Leo
>
>
>
>
>
>
> On Fri, Sep 14, 2018 at 8:54 AM, Leo David  > wrote:
>
> Any thoughs on these ? Is that UI optimization only a gluster
> volume custom configuration ? If so, i guess it can be done
> from cli, but I am not aware of the corect optimized
> parameters of the volume
>
>
> On Thu, Sep 13, 2018, 18:25 Leo David  > wrote:
>
> Thank you Jayme. I am trying to do this, but I am getting
> an error, since the volume is replica 1 distribute, and it
> seems that oVirt expects a replica 3 volume.
> Would it be another way to optimize the volume in this
> situation ?
>
>
> On Thu, Sep 13, 2018, 17:49 Jayme  > wrote:
>
> I had similar problems until I clicked "optimize
> volume for vmstore" in the admin GUI for each data
> volume.  I'm not sure if this is what is causing your
> problem here but I'd recommend trying that first.  It
> is suppose to be optimized by default but for some
> reason my ovirt 4.2 cockpit deploy did not apply those
> settings automatically. 
>
> On Thu, Sep 13, 2018 at 10:21 AM Leo David
> mailto:leoa...@gmail.com>> wrote:
>
> Hi Everyone,
> I am encountering the following issue on a single
> instance hyper-converged 4.2 setup.
> The following fio test was done:
>
> fio --randrepeat=1 --ioengine=libaio --direct=1
> --gtod_reduce=1 --name=test --filename=test
> --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
> The results are very poor doing the test inside of
> a vm with a prealocated disk on the ssd store: 
> ~2k IOPS
> Same test done on the oVirt node directly on the
> mounted ssd_lvm: ~30k IOPS
> Same test done, this time on the gluster mount
> path: ~20K IOPS 
>
> What could be the issue that the vms have this
> slow hdd performance ( 2k on ssd !! )?
> Thank you very much !
>
>
>
>
> -- 
> Best regards, Leo David
> 

[ovirt-users] Re: VM poor iops

2018-09-14 Thread Leo David
performance.strict-o-direct: on
This was the bloody option that created the bottleneck! It was ON.
So now I get an average of 17k random writes, which is not bad at all.
Below are the volume options that worked for me:

performance.strict-write-ordering: off
performance.strict-o-direct: off
server.event-threads: 4
client.event-threads: 4
performance.read-ahead: off
network.ping-timeout: 30
performance.quick-read: off
cluster.eager-lock: enable
performance.stat-prefetch: on
performance.low-prio-threads: 32
network.remote-dio: off
user.cifs: off
performance.io-cache: off
server.allow-insecure: on
features.shard: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
nfs.disable: on
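
For reference, I toggled them one at a time with commands of this form (the 
volume name is a placeholder):

gluster volume set <volname> performance.strict-o-direct off
gluster volume get <volname> performance.strict-o-direct    # check the active value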

If any other tweaks can be done,  please let me know.
Thank you !

Leo


On Fri, Sep 14, 2018 at 12:01 PM, Leo David  wrote:

> Hi Everyone,
> So i have decided to take out all of the gluster volume custom options,
> and add them one by one while activating/deactivating the storage domain &
> rebooting one vm after each  added option :(
>
> The default options that giving bad iops ( ~1-2k) performance are :
>
> performance.stat-prefetch on
> cluster.eager-lock enable
> performance.io-cache off
> performance.read-ahead off
> performance.quick-read off
> user.cifs off
> network.ping-timeout 30
> network.remote-dio off
> performance.strict-o-direct on
> performance.low-prio-threads 32
>
> After adding only:
>
>
> server.allow-insecure on
> features.shard on
> storage.owner-gid 36
> storage.owner-uid 36
> transport.address-family inet
> nfs.disable on
> The performance increased to 7k-10k iops.
>
> The problem is that i don't know if that's sufficient ( maybe it can be
> more improved ) , or even worse than this there might be chances to into
> different volume issues by taking out some volume really needed options...
>
> If would have handy the default options that are applied to volumes as
> optimization in a 3way replica, I think that might help..
>
> Any thoughts ?
>
> Thank you very much !
>
>
> Leo
>
>
>
>
>
> On Fri, Sep 14, 2018 at 8:54 AM, Leo David  wrote:
>
>> Any thoughs on these ? Is that UI optimization only a gluster volume
>> custom configuration ? If so, i guess it can be done from cli, but I am not
>> aware of the corect optimized parameters of the volume
>>
>>
>> On Thu, Sep 13, 2018, 18:25 Leo David  wrote:
>>
>>> Thank you Jayme. I am trying to do this, but I am getting an error,
>>> since the volume is replica 1 distribute, and it seems that oVirt expects a
>>> replica 3 volume.
>>> Would it be another way to optimize the volume in this situation ?
>>>
>>>
>>> On Thu, Sep 13, 2018, 17:49 Jayme  wrote:
>>>
 I had similar problems until I clicked "optimize volume for vmstore" in
 the admin GUI for each data volume.  I'm not sure if this is what is
 causing your problem here but I'd recommend trying that first.  It is
 suppose to be optimized by default but for some reason my ovirt 4.2 cockpit
 deploy did not apply those settings automatically.

 On Thu, Sep 13, 2018 at 10:21 AM Leo David  wrote:

> Hi Everyone,
> I am encountering the following issue on a single instance
> hyper-converged 4.2 setup.
> The following fio test was done:
>
> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
> --name=test --filename=test --bs=4k --iodepth=64 --size=4G
> --readwrite=randwrite
> The results are very poor doing the test inside of a vm with a
> prealocated disk on the ssd store:  ~2k IOPS
> Same test done on the oVirt node directly on the mounted ssd_lvm: ~30k
> IOPS
> Same test done, this time on the gluster mount path: ~20K IOPS
>
> What could be the issue that the vms have this slow hdd performance (
> 2k on ssd !! )?
> Thank you very much !
>
>
>
>
> --
> Best regards, Leo David
>

>
>
> --
> Best regards, Leo David
>



-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5HKQQKX3BJLZ3HQ5SCHPLPON24OGMGSS/


[ovirt-users] Re: VM poor iops

2018-09-14 Thread Leo David
Hi Everyone,
So I have decided to take out all of the gluster volume custom options
and add them back one by one, while activating/deactivating the storage domain
and rebooting one VM after each added option :(

The default options that were giving bad IOPS (~1-2k) performance are:

performance.stat-prefetch on
cluster.eager-lock enable
performance.io-cache off
performance.read-ahead off
performance.quick-read off
user.cifs off
network.ping-timeout 30
network.remote-dio off
performance.strict-o-direct on
performance.low-prio-threads 32

After adding only:


server.allow-insecure on
features.shard on
storage.owner-gid 36
storage.owner-uid 36
transport.address-family inet
nfs.disable on
The performance increased to 7k-10k iops.

The problem is that I don't know whether that's sufficient (maybe it can be
improved further), or, even worse, there might be a chance of running into
different volume issues by taking out some options the volume really needs...

If I had handy the default options that are applied to volumes as
optimization in a 3-way replica, I think that might help.
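
In the meantime I guess I can at least dump the full option list of the volume 
and compare it against another setup (a sketch, the volume name is a 
placeholder):

gluster volume get <volname> all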

Any thoughts ?

Thank you very much !


Leo





On Fri, Sep 14, 2018 at 8:54 AM, Leo David  wrote:

> Any thoughs on these ? Is that UI optimization only a gluster volume
> custom configuration ? If so, i guess it can be done from cli, but I am not
> aware of the corect optimized parameters of the volume
>
>
> On Thu, Sep 13, 2018, 18:25 Leo David  wrote:
>
>> Thank you Jayme. I am trying to do this, but I am getting an error, since
>> the volume is replica 1 distribute, and it seems that oVirt expects a
>> replica 3 volume.
>> Would it be another way to optimize the volume in this situation ?
>>
>>
>> On Thu, Sep 13, 2018, 17:49 Jayme  wrote:
>>
>>> I had similar problems until I clicked "optimize volume for vmstore" in
>>> the admin GUI for each data volume.  I'm not sure if this is what is
>>> causing your problem here but I'd recommend trying that first.  It is
>>> suppose to be optimized by default but for some reason my ovirt 4.2 cockpit
>>> deploy did not apply those settings automatically.
>>>
>>> On Thu, Sep 13, 2018 at 10:21 AM Leo David  wrote:
>>>
 Hi Everyone,
 I am encountering the following issue on a single instance
 hyper-converged 4.2 setup.
 The following fio test was done:

 fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
 --name=test --filename=test --bs=4k --iodepth=64 --size=4G
 --readwrite=randwrite
 The results are very poor doing the test inside of a vm with a
 prealocated disk on the ssd store:  ~2k IOPS
 Same test done on the oVirt node directly on the mounted ssd_lvm: ~30k
 IOPS
 Same test done, this time on the gluster mount path: ~20K IOPS

 What could be the issue that the vms have this slow hdd performance (
 2k on ssd !! )?
 Thank you very much !




 --
 Best regards, Leo David

>>>


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P6D6P47677TPRUHUXS2V2VB2WTQSIGIQ/


[ovirt-users] Re: [ANN] oVirt Engine 4.2.6 async update is now available

2018-09-14 Thread p . staniforth
I've managed to upgrade them now by removing logical volumes. Usually it's just 
/dev/onn/home, but on one host I had to keep reinstalling to see where it failed, so I had to run:

lvremove /dev/onn/ovirt-node-ng-4.2.6.1-0.20180913.0+1
lvremove /dev/onn/var_crash
lvremove /dev/onn/var_log
lvremove /dev/onn/var_log_audit

Removing one of them was troublesome because it failed while the volume was in 
use: there was an abrt process holding onto the mount.
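
In case it helps anyone else, finding the process that held the mount was 
something along these lines (the path is whichever mount lvremove complained 
about):

fuser -vm /var/log/audit     # list the processes using that mount
lsof /var/log/audit          # alternative view of the open files on it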

Thanks,
 Paul S.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LAS4IEPE2IF53QJVPMJEFLU2Q76AWQRD/


[ovirt-users] Re: Gluster Issues

2018-09-14 Thread Paolo Margara
Hi, there was a memory leak in the gluster client that is fixed in
release 3.12.13
(https://github.com/gluster/glusterdocs/blob/master/docs/release-notes/3.12.13.md).
What version of gluster are you using?
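
Something like this on the hosts will show the client bits in use (a rough 
sketch):

rpm -q glusterfs glusterfs-fuse glusterfs-client-xlators
gluster --version | head -1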


Paolo


Il 11/09/2018 16:51, Endre Karlson ha scritto:
> Hi, we are seeing some issues where our hosts oom kill glusterd after
> a while but there's plenty of memory?
>
> Running Centos 7,4.x and Ovirt 4.2.x
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3XKH2HIVLY77WKCLJCRJMRPTMOBJ5LSA/


[ovirt-users] Re: HostedEngine disk size not updated after manually extending it

2018-09-14 Thread Gianluca Cecchi
On Thu, Sep 13, 2018 at 5:19 PM Pötter, Ulrich <
ulrich.poet...@hhi.fraunhofer.de> wrote:

>
>
> This worked. The VM now has a larger disk and the metadata on the storage
> domain shows the new value (vdsm-client Volume getInfo ... too).
> Unfortunately the virtual size of the disk shown in the oVirt GUI is still
> the old value.
> How can I get the engine to update those values?
>
Just a guess, I have not tried it myself.
Have you tried putting the environment in global maintenance, shutting down
the hosted engine VM and then exiting global maintenance, to see whether the
engine detects its own new disk setup when it restarts?
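
Something along these lines from one of the hosts (a sketch, I have not 
verified it on 4.2):

hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown                    # wait until the engine VM is down
hosted-engine --set-maintenance --mode=none    # the HA agents should then start it again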

After your change, restarting the engine VM and verifying that all is OK is
definitely worth doing anyway.
And in my opinion this should first be done on a test environment where you
can tolerate possible problems.
HIH,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L3BLKQOXYPU3K6EXSRYXSS5WHRLF5QI2/


[ovirt-users] Re: Best way to update dns config of ovirt node

2018-09-14 Thread Gianluca Cecchi
On Thu, Sep 13, 2018 at 7:53 AM Edward Haas  wrote:

>
>
> On Mon, Sep 10, 2018 at 5:53 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> supposing to have ovirt-ng node 4.2.6 and that ovirtmgmt config regarding
>> DNS servers has to be updated, what is the correct way to proceed?
>>
>
> DNS should be editable from the network attachment window.
>

Thanks for your answer, Edward.
What do you mean by "network attachment window"?
a) Network > Networks, then select ovirtmgmt line and edit, putting DNS
info (that is empty right now)
or
b) Compute > Hosts, then select host1 line, click on host1 name, then
Network Interfaces and Setup Host Networks. then edit ovirtmgmt > DNS
Configuration (that is empty right now)
or what?
In my case they are both empty, even though I configured them when I installed 
the hosts.


> They are applied only for the network which "owns" the default-route for
> that host.
>

This is typically ovirtmgmt network, as in my case, I think.


> The values are added through ifcfg, therefore its logic of adding it to
> resolv.conf is used.
>

I have also opened a case for another environment where I'm using RHV-H
hosts, which should be quite similar to ovirt-node-ng.
In that environment one of the original Microsoft-based DNS servers was changed
(that was the one I had as primary), creating some latency problems.
Using "Setup Host Networks" caused big problems, especially when doing it on
the host where the hosted engine was running.
I verified that a safe way to modify DNS config is:
- evacuate VMs from host
- put host into maintenance
- change dns settings in setup host networks
- reboot the host
- activate the host

This way the "persistent" files are initially updated and then at the
reboot the ifcfg and resolv.conf files are correctly updated
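
After that, the relevant pieces end up roughly like this (the addresses are 
placeholders):

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt (excerpt)
DNS1=192.0.2.10
DNS2=192.0.2.11

# /etc/resolv.conf
nameserver 192.0.2.10
nameserver 192.0.2.11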
If you are interested my case is 02179117

Also in that environment the config pages (hosts and logical network of
cluster) had empty DNS entries

I have not understood whether the empty DNS config for the hosts in "Setup
Host Networks" (and also for the ovirtmgmt logical network) is a bug, or
whether it is me not understanding the workflow when adding a host to the
oVirt environment.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OOW6CSGLH2BWESKCYJNJTUYGOOB3O2HI/


[ovirt-users] Re: Unable to start VM - Image is not a legal chain

2018-09-14 Thread Alex K
Hi all,

coming back to this, I experienced the issue again and the referred link
was useful to restore the VMs.

https://www.mail-archive.com/users@ovirt.org/msg49300.html

I followed the steps to delete the illegal snapshot while the VM was down.
I now have the issue again with three other VMs which are currently running
(critical production ones).
Is it possible to follow the same steps and delete the illegal snapshot while
the VMs are running?

Thanx,
Alex



On Thu, Jul 12, 2018 at 10:40 AM Alex K  wrote:

> Thank you Nicolas.
> I will keep this for future reference in case I encounter this again. Now
> I don't have the corrupted VM to experiment.
>
> Thanx,
> Alex
>
> On Wed, Jul 11, 2018 at 11:28 AM,  wrote:
>
>> Hi Alex,
>>
>> We had a bigger problem recently which involved the error you mention. I
>> sent it to the mail list and you can find the final solution we chose at
>> [1]. Not the cleanest solution of course, but we managed to recover all
>> VMs... I think in your case the relevant part is the one that mention the
>> "vdsClient setVolumeLegality" command, although I don't know the root
>> reason why you're getting the error (might be a corrupt snapshot, as in our
>> case)...
>>
>> Hope this helps.
>>
>>   [1]: https://www.mail-archive.com/users@ovirt.org/msg49300.html
>>
>>
>> El 2018-07-11 09:16, Alex K escribió:
>>
>>> Due to urgency of the case, I fetched the backup copy from weekend and
>>> proceeded to push missing data to VM (the VM is a git repo). I lost
>>> few notes, though not much damage was done...
>>> I'm starting to feel uncomfortable with this solution though and might
>>> switch (at least the production VMs) to plain KVM where I had never
>>> experienced such issues.
>>>
>>> Alex
>>>
>>> On Wed, Jul 11, 2018 at 7:27 AM, Yedidyah Bar David 
>>> wrote:
>>>
>>> (Changing subject, adding Freddy)

 On Tue, Jul 10, 2018 at 8:06 PM, Alex K 
 wrote:

 Hi all,
>
> I did a routine maintenance today (updating the hosts) to ovirt
> cluster (4.2) and I have one VM that was complaining about an
> invalid snapshot. After shutdown of VM the VM is not able to start
> again, giving the error:
>
> VM Gitlab is down with error. Exit message: Bad volume
> specification {'serial': 'b6af2856-a164-484a-afe5-9836bbdd14e8',
> 'index': 0, 'iface': 'virtio', 'apparentsize': '51838976',
> 'specParams': {}, 'cache': 'none', 'imageID':
> 'b6af2856-a164-484a-afe5-9836bbdd14e8', 'truesize': '52011008',
> 'type': 'disk', 'domainID':
> '142bbde6-ef9d-4a52-b9da-2de533c1f1bd', 'reqsize': '0', 'format':
> 'cow', 'poolID': '0001-0001-0001-0001-0311', 'device':
> 'disk', 'path':
>
>

>>> '/rhev/data-center/0001-0001-0001-0001-0311/142bbde6-ef9d-4a52-b9da-2de533c1f1bd/images/b6af2856-a164-484a-afe5-9836bbdd14e8/f3125f62-c909-472f-919c-844e0b8c156d',
>>>
 'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1',
> 'volumeID': 'f3125f62-c909-472f-919c-844e0b8c156d', 'diskType':
> 'file', 'alias': 'ua-b6af2856-a164-484a-afe5-9836bbdd14e8',
> 'discard': False}.
>
> I see also the following error:
>
> VDSM command CopyImageVDS failed: Image is not a legal chain:
> (u'b6af2856-a164-484a-afe5-9836bbdd14e8',)
>

 This error appears a few more times in the list's archive, all of
 which seem to be related to rather-old bugs (3.5/3.6 times) or
 storage problems. I assume you use 4.2. Are you sure the corruption
 happened only now? Did working with snapshots worked well before the
 upgrade?



 Seems as a corrupt VM disk?
>

 Seems so to me, but I am not a storage expert.



 The VM had 3 snapshots. I was able to delete one from GUI then am
> not able to delete the other two as the task fails. Generally I am
> not allowed to clone, export or do sth to the VM.
>
>

 Have you encountered sth similar. Any advice?
>

 The lastest post, from 2016, included a workaround, you might (very
 carefully!) try that.

 I suggest to also open a bug and attach all relevant logs (engine,
 vdsm from all relevant hosts, including SPMs at time of snapshot
 operations and any other host that ran the VM), and try to give
 accurate reproduction steps.

 Best regards,
 --

 Didi

>>>
>>>