[ovirt-users] Iso upload success, no GUI popup option

2018-03-19 Thread Jamie Lawrence
Hello,

I'm trying to iron out the last few oddities of this setup, and one of them is 
the iso upload process. This worked in the last rebuild, but... well.

So, uploading from one of the hosts to an ISO domain claims success, and 
manually checking shows the ISO uploaded just fine, perms set correctly to 
36:36. But it doesn't appear in the GUI popup when creating a new VM.

Verified that the VDSM user can fully traverse the directory path - presumably 
that was tested by uploading it in the first place, but I double-checked. 
Looked in various logs, but didn't see any action in ovirt-imageio-daemon or 
-proxy. Didn't see anything in engine.log that looked relevant.

What is the troubleshooting method for this? Googling, it seemed most folks' 
problems were related to permissions. I scanned DB table names for something 
that seemed like it might have ISO-related info in it, but couldn't find 
anything, and am not sure what else to check.
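For reference, the manual check described above amounts to something like this (mount point and domain UUID are placeholders for this setup; the all-ones UUID is the images subdirectory an ISO domain uses, and the domain has to be attached and active for the engine to list its files):

ISO_MNT=/rhev/data-center/mnt/<nfs_server>:_<export_path>
SD_UUID=<iso-domain-uuid>
# ISOs are expected under the "all ones" images directory of the ISO domain
ls -l $ISO_MNT/$SD_UUID/images/11111111-1111-1111-1111-111111111111/
# ownership should be 36:36 (vdsm:kvm) and the ISO readable
stat -c '%u:%g %a %n' $ISO_MNT/$SD_UUID/images/11111111-1111-1111-1111-111111111111/*.iso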

Thanks,

-j
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Major Performance Issues with gluster

2018-03-19 Thread Donny Davis
Try hitting the "Optimize for Virt Store" option in the Volumes tab in oVirt for
this volume.

This might help with some of it, but that should have been done before you
connected it as a storage domain. The sharding feature helps with
performance, and so do some of the other options that are present on your
other volumes.
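That UI action is, roughly, applying the gluster "virt" option group plus the 36:36 storage owner; a command-line sketch against the volume from the output below (the exact options applied depend on the group file shipped on the hosts):

gluster volume set data-hdd group virt
gluster volume set data-hdd storage.owner-uid 36
gluster volume set data-hdd storage.owner-gid 36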

On Mon, Mar 19, 2018, 12:28 PM Jim Kusznir  wrote:

> Here's gluster volume info:
>
> [root@ovirt2 ~]# gluster volume info
>
> Volume Name: data
> Type: Replicate
> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
> Options Reconfigured:
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> server.allow-insecure: on
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 8
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: data-hdd
> Type: Replicate
> Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 172.172.1.11:/gluster/brick3/data-hdd
> Brick2: 172.172.1.12:/gluster/brick3/data-hdd
> Brick3: 172.172.1.13:/gluster/brick3/data-hdd
> Options Reconfigured:
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> transport.address-family: inet
> performance.readdir-ahead: on
>
> Volume Name: engine
> Type: Replicate
> Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
> Options Reconfigured:
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: iso
> Type: Replicate
> Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
> Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
> Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> --
>
> When I try and turn on profiling, I get:
>
> [root@ovirt2 ~]# gluster volume profile data-hdd start
> Another transaction is in progress for data-hdd. Please try again after
> sometime.
>
> I don't know what that other transaction is, but I am having some "odd
> behavior" this morning, like a VM disk move between data and data-hdd that
> got stuck at 84% overnight.
>
> I've been asking on IRC how to "un-stick" this transfer, as the VM cannot
> be started, and I can't seem to do anything about it.
>
> --Jim
>
> On Mon, Mar 19, 2018 at 2:14 AM, Sahina Bose  wrote:
>
>>
>>
>> On Mon, Mar 19, 2018 at 7:39 AM, Jim Kusznir  wrote:
>>
>>> 

[ovirt-users] Change ovirtmgmt ip from dhcp to static in a

2018-03-19 Thread zois roupas
Hello everyone


I've made a rookie mistake by installing oVirt 4.2 on CentOS 7 with DHCP 
instead of a static IP configuration. Both engine and host are on the same 
machine because of limited resources, and I was so happy that everything worked so 
well that I kept configuring and installing VMs, adding local and NFS storage 
and setting up the backup!

As you understand, I must change the configuration to a static IP and I can't find 
any guide describing the correct procedure. Is there an official guide to 
changing the configuration without causing any trouble?

I've found this thread 
http://lists.ovirt.org/pipermail/users/2014-May/024432.html but this is for a 
hosted engine and doesn't help when both engine and host are on the same machine.


Thanx in advance

Best Regards

Zois
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine deployment error

2018-03-19 Thread spfma . tech
Hi,
 Thanks for your answer. No, it was configured with a static IP. I checked the 
answer file from the first install, I used the same options. Regards

 On 19-Mar-2018 17:48:41 +0100, stira...@redhat.com wrote:   

 On Mon, Mar 19, 2018 at 4:56 PM,  wrote:

 Hi,
  I wanted to rebuild a new hosted engine setup, as the old one was corrupted (too 
many violent power-offs!)   So the server was not reinstalled, I just ran 
"ovirt-hosted-engine-cleanup". The network setup generated by vdsm seems to be 
still in place, so I haven't changed anything there.   Then I decided to update 
the packages to the latest versions available, rebooted the server and ran 
"ovirt-hosted-engine-setup".   But the process never succeeds, as I get an 
error after a long time spent in "[ INFO ] TASK [Wait for the host to be up]"   
  [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": 
[{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [], 
"auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc", 
"subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster": {"href": 
"/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", "id": 
"d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": {"speed": 0.0, 
"topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], 
"external_network_provider_configurations": [], "external_status": "ok", 
"hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": 
"/ovirt-engine/api/hosts/542566c4-fc85-4398-9402-10c8adaa9554", "id": 
"542566c4-fc85-4398-9402-10c8adaa9554", "katello_errata": [], "kdump_status": 
"unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, 
"name": "pfm-srv-virt-1.pfm-ad.pfm.loc", "network_attachments": [], "nics": [], 
"numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, 
"permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": 
true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": 
"stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": 
{"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", "port": 
22}, "statistics": [], "status": "non_responsive", 
"storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
"transparent_huge_pages": {"enabled": false}, "type": "rhel", 
"unmanaged_networks": [], "update_available": false}]}, "attempts": 120, 
"changed": false}
[ INFO ] TASK [Remove local vm dir]
[ INFO ] TASK [Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system 
may not be provisioned according to the playbook results: please check the logs 
for the issue, fix accordingly or re-deploy from scratch.\n"} I made 
another try with Cockpit, it is the same.   Am I doing something wrong or is 
there a bug?   I suppose that your host was configured with DHCP; if so it's 
this one: https://bugzilla.redhat.com/1549642   The fix will come with 4.2.2.   
   Regards 

-
FreeMail powered by mail.fr 
___
 Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

-
FreeMail powered by mail.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine deployment error

2018-03-19 Thread Simone Tiraboschi
On Mon, Mar 19, 2018 at 4:56 PM,  wrote:

> Hi,
>
> I wanted to rebuild a new hosted engine setup, as the old one was corrupted
> (too many violent power-offs!)
>
> So the server was not reinstalled, I just ran
> "ovirt-hosted-engine-cleanup". The network setup generated by vdsm seems to
> be still in place, so I haven't changed anything there.
>
> Then I decided to update the packages to the latest versions available,
> rebooted the server and ran "ovirt-hosted-engine-setup".
>
> But the process never succeeds, as I get an error after a long time spent
> in "[ INFO ] TASK [Wait for the host to be up]"
>
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts":
> [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [],
> "auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc",
> "subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster":
> {"href": "/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701",
> "id": "d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu":
> {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false},
> "devices": [], "external_network_provider_configurations": [],
> "external_status": "ok", "hardware_information": {"supported_rng_sources":
> []}, "hooks": [], "href": "/ovirt-engine/api/hosts/
> 542566c4-fc85-4398-9402-10c8adaa9554", "id": 
> "542566c4-fc85-4398-9402-10c8adaa9554",
> "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false},
> "max_scheduling_memory": 0, "memory": 0, "name": 
> "pfm-srv-virt-1.pfm-ad.pfm.loc",
> "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported":
> false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port":
> 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false,
> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp",
> "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh":
> {"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8",
> "port": 22}, "statistics": [], "status": "non_responsive",
> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [],
> "transparent_huge_pages": {"enabled": false}, "type": "rhel",
> "unmanaged_networks": [], "update_available": false}]}, "attempts": 120,
> "changed": false}
> [ INFO ] TASK [Remove local vm dir]
> [ INFO ] TASK [Notify the user about a failure]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
> system may not be provisioned according to the playbook results: please
> check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
>
>
> I made another try with Cockpit, it is the same.
>
> Am I doing something wrong or is there a bug ?
>

I suppose that your host was configured with DHCP; if so it's this one:
https://bugzilla.redhat.com/1549642

The fix will come with 4.2.2.


>
> Regards
>
>
>
> --
> FreeMail powered by mail.fr
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration oVirt Engine from Dedicated Host to Self-Hosted Engine

2018-03-19 Thread FERNANDO FREDIANI
Just to add up, for the second question I am following this URL:

https://ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/

So the question is more of anything else that may be good to take in
attention other than what is already there.

Thanks
Fernando
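For reference, the backup/restore pair that migration leans on is engine-backup; a minimal sketch, assuming the procedure in the link above is followed (file names are placeholders and the exact restore flags can vary between versions):

# on the current engine VM
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=backup.log

# on the new self-hosted engine VM, before engine-setup is run
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log --provision-db --restore-permissions
engine-setup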

2018-03-19 13:38 GMT-03:00 FERNANDO FREDIANI :

> Hello folks
>
> I currently have a oVirt Engine which runs in a Dedicated Virtual Machine
> in another ans separate environment. It is very nice to have it like that
> because every time I do a oVirt Version Upgrade I take a snapshot before
> and if it failed (and it did failed in the past several times) I just go
> back in time before the snapshot and all comes back to normal.
>
> Two quick questions:
>
> - Going to a Self-Hosted Engine will snapshots or recoverable ways be
> possible ?
>
> - To migrate the Engine from the current environment to the self-hosted
> engine is it just a question to backup the Database, restore it into the
> self-hosted engine keeping it with the same IP address ? Are there any
> special points to take in consideration when doing this migration ?
>
> Thanks
> Fernando
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Major Performance Issues with gluster

2018-03-19 Thread Jim Kusznir
Here's gluster volume info:

[root@ovirt2 ~]# gluster volume info

Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
server.allow-insecure: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: data-hdd
Type: Replicate
Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.172.1.11:/gluster/brick3/data-hdd
Brick2: 172.172.1.12:/gluster/brick3/data-hdd
Brick3: 172.172.1.13:/gluster/brick3/data-hdd
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
performance.readdir-ahead: on

Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

--

When I try and turn on profiling, I get:

[root@ovirt2 ~]# gluster volume profile data-hdd start
Another transaction is in progress for data-hdd. Please try again after
sometime.

I don't know what that other transaction is, but I am having some "odd
behavior" this morning, like a VM disk move between data and data-hdd that
got stuck at 84% overnight.

I've been asking on IRC how to "un-stick" this transfer, as the VM cannot
be started, and I can't seem to do anything about it.
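"Another transaction is in progress" usually means another gluster management operation (or a stale cluster-wide lock left behind by one) is holding the volume lock. A sketch of things worth checking before retrying (volume name from above; restarting glusterd is only a guess at clearing a stale lock):

gluster volume status data-hdd
gluster volume heal data-hdd info
# if nothing else is actually running, the stale cluster lock is usually
# released by restarting glusterd on the node that issued the stuck operation
# (this does not restart the brick processes or interrupt client I/O)
systemctl restart glusterd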

--Jim

On Mon, Mar 19, 2018 at 2:14 AM, Sahina Bose  wrote:

>
>
> On Mon, Mar 19, 2018 at 7:39 AM, Jim Kusznir  wrote:
>
>> Hello:
>>
>> This past week, I created a new gluster store, as I was running out of
>> disk space on my main, SSD-backed storage pool.  I used 2TB Seagate
>> FireCuda drives (hybrid SSD/spinning).  Hardware is Dell R610's with
>> integral PERC/6i cards.  I placed one disk per machine, exported the disk
>> as a single disk volume from the raid controller, formatted it XFS, mounted
>> it, and dedicated it to a new replica 3 gluster volume.
>>
>> Since doing so, I've been having major performance problems.  One of my
>> Windows VMs sits at 100% disk utilization nearly continuously, and it's
>> painful to do anything on it.  A Zabbix install on CentOS using MySQL as
>> the backing has 70%+ 

[ovirt-users] Hosted engine deployment error

2018-03-19 Thread spfma . tech
Hi,
   I wanted to rebuild a new hosted engine setup, as the old one was corrupted (too 
many violent power-offs!)   So the server was not reinstalled, I just ran 
"ovirt-hosted-engine-cleanup". The network setup generated by vdsm seems to be 
still in place, so I haven't changed anything there.   Then I decided to update 
the packages to the latest versions available, rebooted the server and ran 
"ovirt-hosted-engine-setup".   But the process never succeeds, as I get an 
error after a long time spent in "[ INFO ] TASK [Wait for the host to be up]"   
  [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": 
[{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [], 
"auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc", 
"subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster": {"href": 
"/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", "id": 
"d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": {"speed": 0.0, 
"topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], 
"external_network_provider_configurations": [], "external_status": "ok", 
"hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": 
"/ovirt-engine/api/hosts/542566c4-fc85-4398-9402-10c8adaa9554", "id": 
"542566c4-fc85-4398-9402-10c8adaa9554", "katello_errata": [], "kdump_status": 
"unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, 
"name": "pfm-srv-virt-1.pfm-ad.pfm.loc", "network_attachments": [], "nics": [], 
"numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, 
"permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": 
true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": 
"stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": 
{"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", "port": 
22}, "statistics": [], "status": "non_responsive", 
"storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
"transparent_huge_pages": {"enabled": false}, "type": "rhel", 
"unmanaged_networks": [], "update_available": false}]}, "attempts": 120, 
"changed": false}
[ INFO ] TASK [Remove local vm dir]
[ INFO ] TASK [Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system 
may not be provisioned according to the playbook results: please check the logs 
for the issue, fix accordingly or re-deploy from scratch.\n"} I made 
another try with Cockpit, it is the same.   Am I doing something wrong or is 
there a bug ?   Regards 

-
FreeMail powered by mail.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network issues with oVirt 4.2 and cloud-init

2018-03-19 Thread Luca 'remix_tj' Lorenzetto
Hello Sandy,

I had the same issue and the cause was cloud-init running again at
boot even if Run Once hasn't been selected as the boot option. The way I'm
solving the problem is to remove cloud-init after the first
run, since we don't need it anymore.

In case disabling it is also enough for you:

touch /etc/cloud/cloud-init.disabled
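And the removal approach, for completeness (package name assumed to be the stock cloud-init package from the distro repos):

yum remove -y cloud-init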

Luca

On Mon, Mar 19, 2018 at 2:17 PM, Berger, Sandy  wrote:
> We’re using cloud-init to customize VMs built from a template. We’re using
> static IPV4 settings so we’re specifying an IP address, subnet mask, and
> gateway. There is apparently a bug in the current version of cloud-init
> shipping as part of CentOS 7.4
> (https://bugzilla.redhat.com/show_bug.cgi?id=1492726) that fails to set the
> gateway properly. In the description of the bug, it says it is fixed in RHEL
> 7.5 but also says one can use
> https://people.redhat.com/rmccabe/cloud-init/cloud-init-0.7.9-20.el7.x86_64.rpm
> which is what we’re doing.
>
>
>
> When the new VM first boots, the 3 IPv4 settings are all set correctly.
> Reboots of the VM maintain the settings properly. But, if the VM is shut
> down and started again via the oVirt GUI, all of the IPV4 settings on the
> eth0 virtual NIC are lost and the /etc/sysconfig/network-scripts/ifcfg-eth0
> shows that the NIC is now set up for DHCP.
>
>
>
> Are we doing something incorrectly?
>
>
>
> Sandy Berger
>
> IT – Infrastructure Engineer II
>
>
>
> Quad/Graphics
>
> Performance through Innovation
>
>
>
> Sussex, Wisconsin
>
> 414.566.2123 phone
>
> 414.566.4010/2123 pager/PIN
>
>
>
> sandy.ber...@qg.com
>
> www.QG.com
>
>
>
> Follow Us: Facebook | Twitter | LinkedIn | YouTube
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
"It is absurd to employ men of excellent intelligence to do calculations
that could be entrusted to anyone if machines were used"
Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)

"The Internet is the largest library in the world.
But the problem is that the books are all scattered on the floor"
John Allen Paulos, Mathematician (b. 1945)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Sizing hardware for hyperconverged with Gluster?

2018-03-19 Thread Chris Adams
I have a reasonable feel for how to size hardware for an oVirt cluster
with external storage (our current setups all use iSCSI to talk to a
SAN).  I'm looking at a hyperconverged oVirt+Gluster setup; are there
guides for figuring out the additional Gluster resource requirements?  I
assume I need to allow for additional CPU and RAM, I just don't know how
to size it (based on I/O I guess?).

-- 
Chris Adams 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CD drive not showing

2018-03-19 Thread Alex Crow

Maybe try removing the ISO domain and then importing it.

Alex


On 19/03/18 08:17, Junaid Jadoon wrote:

Hi,
Cd drive is not showing in windows 7 VM.

Please help me out???


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
This message is intended only for the addressee and may contain
confidential information. Unless you are that person, you may not
disclose its contents or use it in any way and are requested to delete
the message along with any attachments and notify us immediately.
This email is not intended to, nor should it be taken to, constitute advice.
The information provided is correct to our knowledge & belief and must not
be used as a substitute for obtaining tax, regulatory, investment, legal or
any other appropriate advice.

"Transact" is operated by Integrated Financial Arrangements Ltd.
29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608 5300.
(Registered office: as above; Registered in England and Wales under
number: 3727592). Authorised and regulated by the Financial Conduct
Authority (entered on the Financial Services Register; no. 190856).___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot start VM after live storage migration - Bad volume specification

2018-03-19 Thread Bruckner, Simone
Hi,

  it seems that there is a broken chain - we see two "empty" parent_ids in the 
database:

engine=# SELECT b.disk_alias, s.description,s.snapshot_id, i.creation_date, 
s.status, i.imagestatus, i.size,i.parentid,i.image_group_id, i.vm_snapshot_id, 
i.image_guid, i.parentid, i.active FROM images as i JOIN snapshots AS s ON 
(i.vm_snapshot_id = s.snapshot_id) LEFT JOIN vm_static AS v ON (s.vm_id = 
v.vm_guid) JOIN base_disks AS b ON (i.image_group_id = b.disk_id) WHERE 
v.vm_name = 'VMNAME' and disk_alias = 'VMNAME_Disk2' ORDER BY creation_date, 
description, disk_alias
;
disk_alias | description | snapshot_id | creation_date | status | imagestatus | size | parentid | image_group_id | vm_snapshot_id | image_guid | parentid | active
------------------------------------------------------------------------------
VMNAME_Disk2 | tmp | 3920a4e1-fc3f-45e2-84d5-0d3f1b8ad608 | 2018-01-28 10:09:37+01 | OK | 1 | 1979979923456 | ---- | c1a05108-90d7-421d-a9b4-d4cc65c48429 | 3920a4e1-fc3f-45e2-84d5-0d3f1b8ad608 | 946ee7b7-0770-49c9-ac76-0ce95a433d0f | ---- | f
VMNAME_Disk2 | VMNAME_Disk2 Auto-generated for Live Storage Migration | 51f68304-e1a9-4400-aabc-8e3341d55fdc | 2018-03-16 15:07:35+01 | OK | 1 | 1979979923456 | ---- | c1a05108-90d7-421d-a9b4-d4cc65c48429 | 51f68304-e1a9-4400-aabc-8e3341d55fdc | 4c6475b1-352a-4114-b647-505cccbe6663 | ---- | f
VMNAME_Disk2 | Active VM | d59a9f9d-f0dc-48ec-97e8-9e7a8b81d76d | 2018-03-18 20:54:23+01 | OK | 1 | 1979979923456 | 946ee7b7-0770-49c9-ac76-0ce95a433d0f | c1a05108-90d7-421d-a9b4-d4cc65c48429 | d59a9f9d-f0dc-48ec-97e8-9e7a8b81d76d | 4659b5e0-93c1-478d-97d0-ec1cf4052028 | 946ee7b7-0770-49c9-ac76-0ce95a433d0f | t

Is there a way to recover that disk?

All the best,
Simone

From: users-boun...@ovirt.org On behalf of Bruckner, Simone
Sent: Sunday, 18 March 2018 22:15
To: users@ovirt.org
Subject: [ovirt-users] Cannot start VM after live storage migration - Bad volume specification

Hi all,

  we did a live storage migration of one of three disks of a VM that failed 
because the VM became unresponsive when deleting the auto-snapshot:

2018-03-16 15:07:32.084+01 |0 | Snapshot 'VMNAME_Disk2 Auto-generated 
for Live Storage Migration' creation for VM 'VMNAME' was initiated by xxx
2018-03-16 15:07:32.097+01 |0 | User xxx moving disk VMNAME_Disk2 to 
domain VMHOST_LUN_211.
2018-03-16 15:08:56.304+01 |0 | Snapshot 'VMNAME_Disk2 Auto-generated 
for Live Storage Migration' creation for VM 'VMNAME' has been completed.
2018-03-16 16:40:48.89+01  |0 | Snapshot 'VMNAME_Disk2 Auto-generated 
for Live Storage Migration' deletion for VM 'VMNAME' was initiated by xxx.
2018-03-16 16:44:44.813+01 |1 | VM VMNAME is not responding.
2018-03-18 18:40:51.258+01 |2 | Failed to delete snapshot 'VMNAME_Disk2 
Auto-generated for Live Storage Migration' for VM 'VMNAME'.
2018-03-18 18:40:54.506+01 |1 | Possible failure while deleting 
VMNAME_Disk2 from the source Storage Domain VMHOST_LUN_211 during the move 
operation. The Storage Domain may be manually cleaned-up from possible leftover
s (User:xxx).

Now we cannot start the vm anymore as long as this disk is online. Error 
message is "VM VMNAME is down with error. Exit message: Bad volume 
specification {'index': 2, 'domainID': 'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 
'reqsize': '0', 'name': 'vdc', 'truesize': '2147483648', 'format': 'cow', 
'discard': False, 'volumeID': '4659b5e0-93c1-478d-97d0-ec1cf4052028', 
'apparentsize': '2147483648', 'imageID': 
'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'specParams': {}, 'iface': 'virtio', 
'cache': 'none', 'propagateErrors': 'off', 'poolID': 
'5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': 
'/rhev/data-center/mnt/blockSD/ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a/images/c1a05108-90d7-421d-a9b4-d4cc65c48429/4659b5e0-93c1-478d-97d0-ec1cf4052028',
 'serial': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'diskType': 'block', 'type': 
'block'}."

vdsm.log:
2018-03-18 21:53:33,815+0100 ERROR (vm/7d05e511) [storage.TaskManager.Task] 
(Task='fc3bac16-64f3-4910-8bc4-6cfdd4d270da') Unexpected error 

Re: [ovirt-users] Ovirt with ZFS+ Gluster

2018-03-19 Thread Darrell Budic
Most of this is still valid if getting a bit long in the tooth: 
https://docs.gluster.org/en/latest/Administrator%20Guide/Gluster%20On%20ZFS/

I’ve got it running on several production clusters. I’m using the zfsol 0.7.6 
kmod installation myself. I use a zvol per brick, and only one brick per 
machine from the zpool per gluster volume. If I had more disks, I might have 
two zvols with a brick each per gluster volume, but not now. My local settings:

# zfs get all v0 | grep local
v0    compression  lz4       local
v0    xattr        sa        local
v0    acltype      posixacl  local
v0    relatime     on        local
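For anyone reproducing those settings, the equivalent commands (assuming a pool/dataset named v0, as in the output above) would be:

zfs set compression=lz4 v0
zfs set xattr=sa v0
zfs set acltype=posixacl v0
zfs set relatime=on v0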


> From: Karli Sjöberg 
> Subject: Re: [ovirt-users] Ovirt with ZFS+ Gluster
> Date: March 19, 2018 at 3:36:41 AM CDT
> To: Tal Bar-Or; users
> 
> On Sun, 2018-03-18 at 14:01 +0200, Tal Bar-Or wrote:
>> Hello,
>> 
>> I started to do some new modest system planning, and the system will be
>> mounted on top of 3~4 Dell R720s, each with 2x E5-2640 v2, 128GB
>> memory, 12x SAS 10k 1.2TB and 3x SSDs.
>> My plan is to use ZFS on top of GlusterFS, and my question, since
>> I didn't see any doc on it:
>> Has this kind of deployment been done in the past, and is it recommended?
>> Anyway, if yes, is there any doc on how to?
>> Thanks 
>> 
>> 
>> -- 
>> Tal Bar-or
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> 
> There isn't any specific documentation about using ZFS underneath
> Gluster together with oVirt, but there's nothing wrong IMO with using
> ZFS with Gluster. E.g. 45 Drives are using it and posting really funny
> videos about it:
> 
> https://www.youtube.com/watch?v=A0wV4k58RIs
> 
> Are you planning this as a standalone Gluster cluster or do you want to
> use it hyperconverged?
> 
> /K___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GlusterFS performance with only one drive per host?

2018-03-19 Thread Jayme
I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB.  I'm
considering storage options.  I don't have a requirement for high amounts
of storage, I have a little over 1TB to store but want some overhead so I'm
thinking 2TB of usable space would be sufficient.

I've been doing some research on Micron 1100 2TB ssd's and they seem to
offer a lot of value for the money.  I'm considering using smaller cheaper
SSDs for boot drives and using one 2TB micron SSD in each host for a
glusterFS replica 3 setup (on the fence about using an arbiter, I like the
extra redundancy replicate 3 will give me).

My question is, would I see a performance hit using only one drive in each
host with glusterFS or should I try to add more physical disks.  Such as 6
1TB drives instead of 3 2TB drives?

Also one other question.  I've read that gluster can only be done in groups
of three.  Meaning you need 3, 6, or 9 hosts.  Is this true?  If I had an
operational replicate 3 glusterFS setup and wanted to add more capacity I
would have to add 3 more hosts, or is it possible for me to add a 4th host
in to the mix for extra processing power down the road?

Thanks!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt with ZFS+ Gluster

2018-03-19 Thread Tal Bar-Or
wow, that's a nice demonstration
Thanks

On Mon, Mar 19, 2018 at 10:36 AM, Karli Sjöberg  wrote:

> On Sun, 2018-03-18 at 14:01 +0200, Tal Bar-Or wrote:
> > Hello,
> >
> > I started to do some new modest system planning, and the system will be
> > mounted on top of 3~4 Dell R720s, each with 2x E5-2640 v2, 128GB
> > memory, 12x SAS 10k 1.2TB and 3x SSDs.
> > My plan is to use ZFS on top of GlusterFS, and my question, since
> > I didn't see any doc on it:
> > Has this kind of deployment been done in the past, and is it recommended?
> > Anyway, if yes, is there any doc on how to?
> > Thanks
> >
> >
> > --
> > Tal Bar-or
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> There isn't any specific documentation about using ZFS underneath
> Gluster together with oVirt, but there's nothing wrong IMO with using
> ZFS with Gluster. E.g. 45 Drives are using it and posting really funny
> videos about it:
>
> https://www.youtube.com/watch?v=A0wV4k58RIs
>
> Are you planning this as a standalone Gluster cluster or do you want to
> use it hyperconverged?
>
> /K




-- 
Tal Bar-or
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage domain ovirt-image-repository doesn't work

2018-03-19 Thread Daniel Erez
Hi Nicolas,

Can you please try navigating to "Administration -> Providers", selecting the
"ovirt-image-repository" provider and clicking the "Edit" button.
Make sure that "Requires Authentication" isn't checked, and click the
"Test" button - is it accessing the provider successfully?

On Wed, Mar 14, 2018 at 1:45 AM Nicolas Vaye 
wrote:

> the logs during the test of the ovirt-image-repository provider :
>
>
> 2018-03-14 10:39:43,337+11 INFO
> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
> (default task-17) [6c8c6a9f-2c24-4a77-af75-47352c6df887] Running command:
> TestProviderConnectivityCommand internal: false. Entities affected :  ID:
> aaa0----123456789aaa Type: SystemAction group
> CREATE_STORAGE_POOL with role type ADMIN
> 2018-03-14 10:41:30,465+11 INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default
> task-27) [42cb88a3-2614-4aa9-a3bf-b56102a83c35] transaction rolled back
> 2018-03-14 10:41:30,465+11 ERROR
> [org.ovirt.engine.core.bll.storage.repoimage.GetImagesListQuery] (default
> task-27) [42cb88a3-2614-4aa9-a3bf-b56102a83c35] Failed to retrieve image
> list: Connection timed out (Connection timed out)
> 2018-03-14 10:41:50,560+11 ERROR
> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
> (default task-17) [6c8c6a9f-2c24-4a77-af75-47352c6df887] Command
> 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand'
> failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050)
>
>
>
>
>
>  Original message 
>
> Date: Tue, 13 Mar 2018 23:36:06 +
> Subject: Re: [ovirt-users] storage domain ovirt-image-repository doesn't work
> Cc: users@ovirt.org
> To: ish...@redhat.com
> Reply-to: Nicolas Vaye
> From: Nicolas Vaye
>
> Hi Idan,
>
> here are the logs requested :
>
> 2018-03-14 10:25:52,097+11 INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default
> task-6) [61b5b46f-0ea3-496a-af90-bf82e7d204f3] transaction rolled back
> 2018-03-14 10:25:52,097+11 ERROR
> [org.ovirt.engine.core.bll.storage.repoimage.GetImagesListQuery] (default
> task-6) [61b5b46f-0ea3-496a-af90-bf82e7d204f3] Failed to retrieve image
> list: Connection timed out (Connection timed out)
> 2018-03-14 10:25:57,083+11 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'commandCoordinator' is using 0 threads out of 10 and 10 tasks are waiting
> in the queue.
> 2018-03-14 10:25:57,083+11 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue.
> 2018-03-14 10:25:57,083+11 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0
> tasks in queue.
> 2018-03-14 10:25:57,084+11 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engineScheduled' is using 0 threads out of 100 and 100 tasks are waiting
> in the queue.
> 2018-03-14 10:25:57,084+11 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are
> waiting in the queue.
> 2018-03-14 10:25:57,084+11 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'hostUpdatesChecker' is using 0 threads out of 5 and 4 tasks are waiting in
> the queue.
>
>
> Connection timed out seems to indicate that it doesn't use the proxy to
> get web access ? or a firewall issue ?
>
> but on each ovirt node, i try to curl the url and the result is OK :
>
> curl http://glance.ovirt.org:9292/
>
> {"versions": [{"status": "CURRENT", "id": "v2.3", "links": [{"href":
> "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status":
> "SUPPORTED", "id": "v2.2", "links": [{"href":
> "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status":
> "SUPPORTED", "id": "v2.1", "links": [{"href":
> "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status":
> "SUPPORTED", "id": "v2.0", "links": [{"href":
> "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status":
> "SUPPORTED", "id": "v1.1", "links": [{"href":
> "http://glance.ovirt.org:9292/v1/", "rel": "self"}]}, {"status":
> "SUPPORTED", "id": "v1.0", "links": [{"href":
> 

Re: [ovirt-users] VM has been paused due to NO STORAGE SPACE ERROR ?!?!?!?!

2018-03-19 Thread Enrico Becchetti

On 16/03/2018 15:48, Alex Crow wrote:

On 16/03/18 13:46, Nicolas Ecarnot wrote:

On 16/03/2018 at 13:28, Karli Sjöberg wrote:



On 16 March 2018 12:26, Enrico Becchetti wrote:


   Dear All,
    Has anyone seen that error?


Yes, I experienced it dozens of times on 3.6 (my 4.2 setup has 
insufficient workload to trigger such an event).

And in every case, there was no actual lack of space.


    Enrico Becchetti Servizio di Calcolo e Reti
I think I remember something to do with thin provisioning and the disk not 
being able to grow fast enough, so it runs out of space. Are the VM's disks 
thick or thin?


All our storage domains are thin-prov. and served by iSCSI 
(Equallogic PS6xxx and 4xxx).


Enrico, do you know if a bug has been filed about this?

Did the VM remain paused? In my experience the VM just gets 
temporarily paused while the storage is expanded. RH confirmed to me 
in a ticket that this is expected behaviour.


If you need high write performance, your VM disks should always be 
preallocated. We only use thin provisioning for VMs where we know that 
disk writes are low (e.g. network services, CPU-bound apps, etc.).
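If some disks have to stay thin-provisioned on block storage, the pauses can also be made less likely by letting VDSM extend the thin LVs earlier and in larger chunks. A sketch of the relevant host-side knobs (option names and defaults assumed from vdsm's [irs] section; verify against your vdsm version before applying, on each host):

# append to /etc/vdsm/vdsm.conf (assumed defaults are 50 / 1024)
cat >> /etc/vdsm/vdsm.conf <<'EOF'
[irs]
volume_utilization_percent = 25
volume_utilization_chunk_mb = 2048
EOF
systemctl restart vdsmd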



Thanks a lot !!!
Best Regards
Enrico


Alex
--
This message is intended only for the addressee and may contain
confidential information. Unless you are that person, you may not
disclose its contents or use it in any way and are requested to delete
the message along with any attachments and notify us immediately.
This email is not intended to, nor should it be taken to, constitute 
advice.
The information provided is correct to our knowledge & belief and must 
not
be used as a substitute for obtaining tax, regulatory, investment, 
legal or

any other appropriate advice.

"Transact" is operated by Integrated Financial Arrangements Ltd.
29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 
7608 5300.

(Registered office: as above; Registered in England and Wales under
number: 3727592). Authorised and regulated by the Financial Conduct
Authority (entered on the Financial Services Register; no. 190856).
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
___

Enrico Becchetti    Servizio di Calcolo e Reti

Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica  06123 Perugia (ITALY)
Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it
__

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failing to upload qcow2 disk image

2018-03-19 Thread Eyal Shenitzky
Hi Idan,

Can you please take a look?

On Mon, Mar 19, 2018 at 11:07 AM, Anantha Raghava <
rag...@exzatechconsulting.com> wrote:

> Hi,
>
> I am trying to upload the disk image which is in qcow2 format. After
> uploading about 38 GB the status turns to "Paused by system" and it does
> not resume at all. Any attempt to manually resume it will just put it back in
> paused status.
>
> Ovirt engine version : 4.2.1.6-1.el7.centos
>
> Any guidance to finish this upload task?
>
> --
>
> Thanks & Regards,
>
>
> Anantha Raghava
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Workflow after restoring engine from backup

2018-03-19 Thread Yedidyah Bar David
On Mon, Mar 19, 2018 at 11:03 AM, Sven Achtelik  wrote:
> Hi Didi,
>
> my backups were taken with the engine-backup utility. I have 3 data centers,
> two of them with just one host and the third one with 3 hosts running the
> engine.  The backup, three days old, was taken on engine version 4.1 (4.1.7)
> and the restored engine is running on 4.1.9.

Since the bug I mentioned was fixed in 4.1.3, you should be covered.

> I have three HA VMs that would
> be affected. All others are just normal VMs. Sounds like it would be the
> safest to shut down the HA VMs to make sure that nothing happens?

If you can have downtime, I agree it sounds safer to shut down the VMs.

> Or can I
> disable the HA action in the DB for now ?

No need to. If you restored with 4.1.9 engine-backup, it should have done
this for you. If you still have the restore log, you can verify this by
checking it. It should contain 'Resetting HA VM status', and then the result
of the sql that it ran.
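A quick way to check that on the machine where the restore ran, as a sketch (the log file name is whatever was passed to --log, shown here as a placeholder):

grep -A2 'Resetting HA VM status' <path-to-restore-log>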

Best regards,

>
> Thank you,
>
> Sven
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>  Original message 
> From: Yedidyah Bar David
> Date: 19.03.18 07:33 (GMT+01:00)
> To: Sven Achtelik
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] Workflow after restoring engine from backup
>
> On Sun, Mar 18, 2018 at 11:45 PM, Sven Achtelik 
> wrote:
>> Hi All,
>>
>>
>>
>> I had an issue with the storage that hosted my engine VM. The disk got
>> corrupted and I needed to restore the engine from a backup.
>
> How did you backup, and how did you restore?
>
> Which version was used for each?
>
>> That worked as
>> expected, I just didn’t start the engine yet.
>
> OK.
>
>> I know that after the backup
>> was taken some machines were migrated around before the engine disks
>> failed.
>
> Are these machines HA?
>
>> My question is what will happen once I start the engine service
>> which has the restored backup on it ? Will it query the hosts for the
>> running VMs
>
> It will, but HA machines are handled differently.
>
> See also:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1441322
> https://bugzilla.redhat.com/show_bug.cgi?id=1446055
>
>> or will it assume that the VMs are still on the hosts as they
>> resided at the point of backup ?
>
> It does, initially, but then updates status according to what it
> gets from hosts.
>
> But polling the hosts takes time, especially if you have many, and
> HA policy might require faster handling. So if it polls first a
> host that had a machine on it during backup, and sees that it's
> gone, and didn't yet poll the new host, HA handling starts immediately,
> which eventually might lead to starting the VM on another host.
>
> To prevent that, the fixes to above bugs make the restore process
> mark HA VMs that do not have leases on the storage as "dead".
>
>> Would I need to change the DB manual to let
>> the engine know where VMs are up at this point ?
>
> You might need to, if you have HA VMs and a too-old version of restore.
>
>> What will happen to HA VMs?
>> I feel that it might try to start them a second time.  My biggest issue is
>> that I can't get a service window to shut down all VMs and then let them
>> be restarted by the engine.
>>
>>
>>
>> Is there a known workflow for that ?
>
> I am not aware of a tested procedure for handling above if you have
> a too-old version, but you can check the patches linked from above bugs
> and manually run the SQL command(s) they include. They are essentially
> comment 4 of the first bug.
>
> Good luck and best regards,
> --
> Didi



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Major Performance Issues with gluster

2018-03-19 Thread Sahina Bose
On Mon, Mar 19, 2018 at 7:39 AM, Jim Kusznir  wrote:

> Hello:
>
> This past week, I created a new gluster store, as I was running out of
> disk space on my main, SSD-backed storage pool.  I used 2TB Seagate
> FireCuda drives (hybrid SSD/spinning).  Hardware is Dell R610's with
> integral PERC/6i cards.  I placed one disk per machine, exported the disk
> as a single disk volume from the raid controller, formatted it XFS, mounted
> it, and dedicated it to a new replica 3 gluster volume.
>
> Since doing so, I've been having major performance problems.  One of my
> Windows VMs sits at 100% disk utilization nearly continuously, and it's
> painful to do anything on it.  A Zabbix install on CentOS using MySQL as
> the backing has 70%+ iowait nearly all the time, and I can't seem to get
> graphs loaded from the web console.  It's also always spewing errors that
> ultimately come down to insufficient disk performance issues.
>
> All of this was working OK before the changes.  There are two:
>
> Old storage was SSD backed, Replica 2 + arb, and running on the same GigE
> network as management and main VM network.
>
> New storage was created using the dedicated Gluster network (running on
> em4 on these servers, completely different subnet (174.x vs 192.x), and was
> created replica 3 (no arb), on the FireCuda disks (seem to be the fastest I
> could afford for non-SSD, as I needed a lot more storage).
>
> My attempts to watch so far have NOT shown maxed network interfaces (using
> bwm-ng on the command line); in fact, the gluster interface is usually
> below 20% utilized.
>
> I'm not sure how to meaningfully measure the performance of the disk
> itself; I'm not sure what else to look at.  My cluster is not very usable
> currently, though.  IOWait on my hosts appears to be below 0.5%, usually
> 0.0 to 0.1.  Inside the VMs is a whole different story.
>
> My cluster is currently running ovirt 4.1.  I'm interested in going to
> 4.2, but I think I need to fix this first.
>


Can you provide the info of the volume using "gluster volume info" and also
profile the volume while running the tests where you experience the
performance issue, and share results?

For info on how to profile (server-side profiling) -
https://docs.gluster.org/en/latest/Administrator%20Guide/Performance%20Testing/
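For completeness, the server-side profiling from that guide boils down to something like this (volume name taken from the thread; reproduce the slow workload between start and info):

gluster volume profile data-hdd start
# ... run the workload that shows the problem ...
gluster volume profile data-hdd info > /tmp/profile-data-hdd.txt
gluster volume profile data-hdd stop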


> Thanks!
> --Jim
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Workflow after restoring engine from backup

2018-03-19 Thread Sven Achtelik
Hi Didi,

my backups were taken with the engine-backup utility. I have 3 data centers, two 
of them with just one host and the third one with 3 hosts running the engine.  
The backup, three days old, was taken on engine version 4.1 (4.1.7) and the 
restored engine is running on 4.1.9. I have three HA VMs that would be 
affected. All others are just normal VMs. Sounds like it would be the safest to 
shut down the HA VMs to make sure that nothing happens? Or can I disable the 
HA action in the DB for now?

Thank you,

Sven



Sent from my Samsung Galaxy smartphone.


 Original message 
From: Yedidyah Bar David
Date: 19.03.18 07:33 (GMT+01:00)
To: Sven Achtelik
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Workflow after restoring engine from backup

On Sun, Mar 18, 2018 at 11:45 PM, Sven Achtelik  wrote:
> Hi All,
>
>
>
> I had an issue with the storage that hosted my engine VM. The disk got
> corrupted and I needed to restore the engine from a backup.

How did you backup, and how did you restore?

Which version was used for each?

> That worked as
> expected, I just didn’t start the engine yet.

OK.

> I know that after the backup
> was taken some machines were migrated around before the engine disks
> failed.

Are these machines HA?

> My question is what will happen once I start the engine service
> which has the restored backup on it ? Will it query the hosts for the
> running VMs

It will, but HA machines are handled differently.

See also:

https://bugzilla.redhat.com/show_bug.cgi?id=1441322
https://bugzilla.redhat.com/show_bug.cgi?id=1446055

> or will it assume that the VMs are still on the hosts as they
> resided at the point of backup ?

It does, initially, but then updates status according to what it
gets from hosts.

But polling the hosts takes time, especially if you have many, and
HA policy might require faster handling. So if it polls first a
host that had a machine on it during backup, and sees that it's
gone, and didn't yet poll the new host, HA handling starts immediately,
which eventually might lead to starting the VM on another host.

To prevent that, the fixes to above bugs make the restore process
mark HA VMs that do not have leases on the storage as "dead".

> Would I need to change the DB manual to let
> the engine know where VMs are up at this point ?

You might need to, if you have HA VMs and a too-old version of restore.

> What will happen to HA VMs?
> I feel that it might try to start them a second time.  My biggest issue is
> that I can't get a service window to shut down all VMs and then let them
> be restarted by the engine.
>
>
>
> Is there a known workflow for that ?

I am not aware of a tested procedure for handling above if you have
a too-old version, but you can check the patches linked from above bugs
and manually run the SQL command(s) they include. They are essentially
comment 4 of the first bug.

Good luck and best regards,
--
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Open source backup!

2018-03-19 Thread Alex K
I was testing Open Bacchus and backups were ok.
One issue that I see is that one cannot define how many backup copies to
retain, unless I missed something.

Alex

On Mon, Mar 5, 2018 at 5:25 PM, Niyazi Elvan  wrote:

> Hi,
>
> If you are looking for VM image backup, you may have a look at Open
> Bacchus https://github.com/openbacchus/bacchus
>
> Bacchus is backing up VMs using the oVirt Python API and the final image will
> reside on the Export domain (which is an NFS share or glusterfs) in your
> environment. It does not support moving the images to tapes at the moment.
> You need to use another tool to stage your backups to tape.
>
> Hope this helps.
>
>
> On 5 Mar 2018 Mon at 17:31 Nasrum Minallah Manzoor <
> nasrumminall...@hotmail.com> wrote:
>
>> Hi,
>> Can you please suggest any open source backup solution for oVirt
>> virtual machines?
>> My backup media is an FC tape library which is directly attached to my oVirt
>> node. I really appreciate your help.
>>
>>
>>
>>
>>
>>
>>
>> Regards,
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
> --
> Niyazi Elvan
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt with ZFS+ Gluster

2018-03-19 Thread Karli Sjöberg
On Sun, 2018-03-18 at 14:01 +0200, Tal Bar-Or wrote:
> Hello,
> 
> I started to do some new modest system planning, and the system will be
> mounted on top of 3~4 Dell R720s, each with 2x E5-2640 v2, 128GB
> memory, 12x SAS 10k 1.2TB and 3x SSDs.
> My plan is to use ZFS on top of GlusterFS, and my question, since
> I didn't see any doc on it:
> Has this kind of deployment been done in the past, and is it recommended?
> Anyway, if yes, is there any doc on how to?
> Thanks 
> 
> 
> -- 
> Tal Bar-or
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

There isn't any specific documentation about using ZFS underneath
Gluster together with oVirt, but there's nothing wrong IMO with using
ZFS with Gluster. E.g. 45 Drives are using it and posting really funny
videos about it:

https://www.youtube.com/watch?v=A0wV4k58RIs

Are you planning this as a standalone Gluster cluster or do you want to
use it hyperconverged?

/K

signature.asc
Description: This is a digitally signed message part
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt with ZFS+ Gluster

2018-03-19 Thread Tal Bar-Or
Hello,

I started to do some new modest system planning, and the system will be mounted
on top of 3~4 Dell R720s, each with 2x E5-2640 v2, 128GB memory, 12x SAS
10k 1.2TB and 3x SSDs.
My plan is to use ZFS on top of GlusterFS, and my question, since I
didn't see any doc on it:
Has this kind of deployment been done in the past, and is it recommended?
Anyway, if yes, is there any doc on how to?
Thanks


-- 
Tal Bar-or
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] CD drive not showing

2018-03-19 Thread Junaid Jadoon
Hi,
Cd drive is not showing in windows 7 VM.

Please help me out???
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to disable the QoS settings (go back to 'Unlimited' state) in the vNIC profile by using oVirt REST API?

2018-03-19 Thread Shao-Da Huang
Hi,

I have a vNIC profile with a QoS object:


  vnic_test
  
  
  
disabled
  
  false



Now I try to update this object using the PUT method and set the 'pass_through'
mode to 'enabled', but I always get the error message
"Cannot edit VM network interface profile. 'Port Mirroring' and 'Qos' are
not supported on passthrough profiles."
no matter whether I send the request body like:


  vnic_test
  
  
enabled
  
  false


OR


  vnic_test
  
  
enabled
  


Could anyone tell me how to disable the related QoS settings (namely go
back to 'Unlimited' state) in a vNIC profile by using REST API?
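For reference, the general shape of the call being attempted is below (engine host, credentials and profile ID are placeholders; whether clearing the QoS needs an empty element or a different payload is exactly the open question here, so this only shows the transport, not a confirmed body):

curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -X PUT \
  -d '<vnic_profile><name>vnic_test</name></vnic_profile>' \
  https://ENGINE_FQDN/ovirt-engine/api/vnicprofiles/PROFILE_ID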
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Query about running ovirt-4.2.1 engine support 3.x nodes ?

2018-03-19 Thread Yedidyah Bar David
On Mon, Mar 19, 2018 at 5:19 AM, Joseph Kelly <
joseph.ke...@tradingscreen.com> wrote:

> Sorry to ask again, but I can see from the link below that nodes and
> engines should work between minor number
>
> upgrades.
>

Indeed.


> But is ovirt 4.2.x backward compatible with, say, 3.6 nodes. Does anyone
> know ? Is this documented  anywhere ?
>

You can search the release notes pages of 4.2.z releases for '3.6' to find
relevant bugs.

Just _using_ such hosts should work.

Adding a 3.6 host to a 4.2 engine will likely break.

It's definitely not intended to be used for a long time - you are encouraged
to upgrade your hosts too, soon after the engine.

If you plan a very long transition period, I suggest to create a list of
operations you might want/need to do, and test everything in a test
environment.


>
> [ovirt-users] compatibility relationship between datacenter, ovirt and
> cluster
> https://www.mail-archive.com/users@ovirt.org/msg17092.html
>
> Thanks,
> Joe.
>
> --
> *From:* Joseph Kelly
> *Sent:* Wednesday, March 14, 2018 5:32 PM
> *To:* users@ovirt.org
> *Subject:* Query about running ovirt-4.2.1 engine support 3.x nodes ?
>
>
> Hello All,
>
>
> I have two hopefully easy questions regarding an ovirt-4.2.1 engine
> support and 3.x nodes ?
>
>
> 1) Does an ovirt-4.2.x engine support 3.x nodes ? As This page states:
>
>
> "The cluster compatibility is set according to the version of the least
> capable host operating system in the cluster."
>
>
> https://www.ovirt.org/documentation/upgrade-guide/chap-Post-Upgrade_Tasks/
>
>
> Which seems to indicate that you can run say a 4.2.1 engine with lower
> version nodes, but is that correct ?
>
>
> 2) And can you just upgrade the nodes directly from 3.x to 4.2.x as per
> these steps ?
>
>
> 1. Move the node to maintenance
> 2. Add 4.2.x repos
> 3. yum update
> 4. reboot
> 5. Activate (exit maintenance)
>

This should work. You can also use the admin web ui for updates, which
might be better, didn't check recently. See also e.g.:

https://bugzilla.redhat.com/show_bug.cgi?id=1344020
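The host-side part of steps 2-4 above is roughly the following (release package URL assumed to be the standard oVirt 4.2 one; oVirt Node / node-ng hosts are updated differently, via the image update):

yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
yum update -y
reboot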

Best regards,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] change CD not working

2018-03-19 Thread Junaid Jadoon
Detached and re-attached the ISO domain, still no luck.

The CD drive is not showing in the Windows VM.

On Fri, Mar 16, 2018 at 7:50 PM, Alex Crow  wrote:

> On 15/03/18 18:55, Junaid Jadoon wrote:
>
>
>
>>   Ovirt engine and node version are 4.2.
>>
>> "Error while executing action Change CD: Failed to perform "Change CD" 
>> operation, CD might be still in use by the VM.
>> Please try to manually detach the CD from withing the VM:
>> 1. Log in to the VM
>> 2 For Linux VMs, un-mount the CD using umount command;
>> For Windows VMs, right click on the CD drive and click 'Eject';"
>>
>> Initially it was working fine; suddenly it is giving the above error.
>>
>> Logs are attached.
>>
>> please help me out
>>
>> Regards,
>>
>> Junaid
>>
>> Detach and re-attach of the ISO domain should resolve this. It worked for
> me.
>
> Alex
>
> --
> This message is intended only for the addressee and may contain
> confidential information. Unless you are that person, you may not
> disclose its contents or use it in any way and are requested to delete
> the message along with any attachments and notify us immediately.
> This email is not intended to, nor should it be taken to, constitute advice.
> The information provided is correct to our knowledge & belief and must not
> be used as a substitute for obtaining tax, regulatory, investment, legal or
> any other appropriate advice.
>
> "Transact" is operated by Integrated Financial Arrangements Ltd.
> 29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608 5300.
> (Registered office: as above; Registered in England and Wales under
> number: 3727592). Authorised and regulated by the Financial Conduct
> Authority (entered on the Financial Services Register; no. 190856).
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users