[ovirt-users] VM with multiple vdisks can't migrate

2018-02-13 Thread fsoyer

Hi all,
Yesterday I discovered a problem when migrating VMs with more than one vdisk.
On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs 
needed for a test from a template with a 20G vdisk. To these VMs I added a 100G 
vdisk (for these tests I didn't want to waste time extending the existing 
vdisks... but I lost time in the end...). The VMs with the 2 vdisks worked well.
Then I saw some updates waiting on the host and tried to put it into maintenance... 
but it got stuck on those two VMs. They were marked "migrating" but were no longer 
accessible. Other (small) VMs with only 1 vdisk were migrated without problems at 
the same time.
I saw that a kvm process for the (big) VMs was launched on the source AND the 
destination host, but after tens of minutes the migration and the VMs were 
still frozen. I tried to cancel the migration for these VMs: it failed. The only 
way to stop it was to power off the VMs: the kvm processes died on the 2 hosts 
and the GUI reported a failed migration.
Just in case, I deleted the second vdisk on one of these VMs: it then migrated 
without error, and with no access problems.
I extended the first vdisk of the second VM, then deleted the second vdisk: 
it now migrates without problems!

So after another test with a VM with 2 vdisks, I can say that this is what 
blocked the migration process :(

In engine.log, for a VM with 1 vdisk migrating well, we see:
2018-02-12 16:46:29,705+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
(default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to 
object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', 
sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
Running command: MigrateVmToServerCommand internal: false. Entities affected :  
ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with 
role type USER
2018-02-12 16:46:30,261+01 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', 
hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', 
dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', 
migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', 
autoConverge='true', migrateCompressed='false', consoleAddress='null', 
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', 
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, 
params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, 
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3, 
action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, 
action={name=abort, params=[]}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, 
MigrateVDSCommandParameters:{runAsync='true', 
hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', 
dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', 
migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', 
autoConverge='true', migrateCompressed='false', consoleAddress='null', 
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', 
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, 
params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, 
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3, 
action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, 
action={name=abort, params=[]}}]]'}), log id: 775cd381
2018-02-12 16:46:30,277+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
FINISH, MigrateBrokerVDSCommand, log id: 775cd381
2018-02-12 16:46:30,285+01 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 
2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 
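
To compare the log of a migration that worked with the log of one that got stuck, 
all engine.log lines for a given correlation ID can be pulled out with a few lines 
of Python - a minimal sketch, assuming the default log location on the engine host 
(the ID below is the one from the excerpt above):

# Collect every engine.log line belonging to one correlation ID, so the log of a
# working migration can be diffed against the log of a stuck one.
import sys

LOG_FILE = '/var/log/ovirt-engine/engine.log'  # default location on the engine host

# Correlation ID to inspect; defaults to the one from the excerpt above.
correlation_id = sys.argv[1] if len(sys.argv) > 1 else '2f712024-5982-46a8-82c8-fd8293da5725'

with open(LOG_FILE) as log:
    for line in log:
        if correlation_id in line:
            print(line, end='')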

Re: [ovirt-users] [Qemu-block] qcow2 images corruption

2018-02-13 Thread John Snow


On 02/13/2018 04:41 AM, Kevin Wolf wrote:
> On 07.02.2018 at 18:06, Nicolas Ecarnot wrote:
>> TL; DR : qcow2 images keep getting corrupted. Any workaround?
> 
> Not without knowing the cause.
> 
> The first thing to make sure is that the image isn't touched by a second
> process while QEMU is running a VM. The classic one is using 'qemu-img
> snapshot' on the image of a running VM, which is instant corruption (and
> newer QEMU versions have locking in place to prevent this), but we have
> seen more absurd cases of things outside QEMU tampering with the image
> when we were investigating previous corruption reports.
> 
> This covers the majority of all reports, we haven't had a real
> corruption caused by a QEMU bug in ages.
> 
>> After having found (https://access.redhat.com/solutions/1173623) the right
>> logical volume hosting the qcow2 image, I can run qemu-img check on it.
>> - On 80% of my VMs, I find no errors.
>> - On 15% of them, I find Leaked cluster errors that I can correct using
>> "qemu-img check -r all"
>> - On 5% of them, I find Leaked clusters errors and further fatal errors,
>> which can not be corrected with qemu-img.
>> In rare cases, qemu-img can correct them, but destroys large parts of the
>> image (becomes unusable), and on other cases it can not correct them at all.
> 
> It would be good if you could make the 'qemu-img check' output available
> somewhere.
> 
> It would be even better if we could have a look at the respective image.
> I seem to remember that John (CCed) had a few scripts to analyse
> corrupted qcow2 images, maybe we would be able to see something there.
> 

Hi! I did write a pretty simplistic tool for trying to tell the shape of
a corruption at a glance. It seems to work pretty similarly to the other
tool you already found, but it won't hurt anything to run it:

https://github.com/jnsnow/qcheck

(Actually, that other tool looks like it has an awful lot of options.
I'll have to check it out.)

It can print a really upsetting amount of data (especially for very
corrupt images), but in the default case, the simple setting should do
the trick just fine.

You could always put the output from this tool in a pastebin too; it
might help me visualize the problem a bit more -- I find seeing the
exact offsets and locations of all the various tables and structures
to be pretty helpful.

You can also always use the "deluge" option and compress it if you want,
just don't let it print to your terminal:

jsnow@probe (dev) ~/s/qcheck> ./qcheck -xd
/home/bos/jsnow/src/qemu/bin/git/install_test_f26.qcow2 > deluge.log;
and ls -sh deluge.log
4.3M deluge.log

but it compresses down very well:

jsnow@probe (dev) ~/s/qcheck> 7z a -t7z -m0=ppmd deluge.ppmd.7z deluge.log
jsnow@probe (dev) ~/s/qcheck> ls -s deluge.ppmd.7z
316 deluge.ppmd.7z

So I suppose if you want to send along:
(1) The basic output without any flags, in a pastebin
(2) The zipped deluge output, just in case

and I will try my hand at guessing what went wrong.


(Also, maybe my tool will totally choke for your image, who knows. It
hasn't received an overwhelming amount of testing apart from when I go
to use it personally and inevitably wind up displeased with how it
handles certain situations, so ...)

>> What I read similar to my case is :
>> - usage of qcow2
>> - heavy disk I/O
>> - using the virtio-blk driver
>>
>> In the proxmox thread, they tend to say that using virtio-scsi is the
>> solution. Having asked this question to oVirt experts
>> (https://lists.ovirt.org/pipermail/users/2018-February/086753.html) but it's
>> not clear the driver is to blame.
> 
> This seems very unlikely. The corruption you're seeing is in the qcow2
> metadata, not only in the guest data. If anything, virtio-scsi exercises
> more qcow2 code paths than virtio-blk, so any potential bug that affects
> virtio-blk should also affect virtio-scsi, but not the other way around.
> 
>> I agree with the answer Yaniv Kaul gave to me, saying I have to properly
>> report the issue, so I'm longing to know which peculiar information I can
>> give you now.
> 
> To be honest, debugging corruption after the fact is pretty hard. We'd
> need the 'qemu-img check' output and ideally the image to do anything,
> but I can't promise that anything would come out of this.
> 
> Best would be a reproducer, or at least some operation that you can link
> to the appearance of the corruption. Then we could take a more targeted
> look at the respective code.
> 
>> As you can imagine, all this setup is in production, and for most of the
>> VMs, I can not "play" with them. Moreover, we launched a campaign of nightly
>> stopping every VM, qemu-img check them one by one, then boot.
>> So it might take some time before I find another corrupted image.
>> (which I'll preciously store for debug)
>>
>> Other informations : We very rarely do snapshots, but I'm close to imagine
>> that automated migrations of VMs could trigger similar behaviors on qcow2
>> images.
> 
> To my 

[ovirt-users] Unable to connect to the graphic server

2018-02-13 Thread Alex Bartonek
I've built and rebuilt about 4 oVirt servers.  Consider myself pretty good at 
this.  LOL.
So I am setting up an oVirt server for a friend on his R710.  CentOS 7, oVirt 
4.2.  /etc/hosts has the correct IP and FQDN set up.

When I build a VM and try to open a console session via SPICE, I am unable to 
connect to the graphic server.  I'm connecting from a Windows 10 box, using 
virt-manager to connect.

I've googled and I just can't seem to find any resolution to this.  Now, I did 
build the server on my home network, but the subnet it's on is the same: 
internal 192.168.1.xxx.  The web interface is accessible also.

Any hints as to what else I can check?

Thanks!

Sent with [ProtonMail](https://protonmail.com) Secure Email.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted engine install fails on useless DHCP lookup

2018-02-13 Thread Jamie Lawrence
Hello,

I'm seeing the hosted engine install fail on an Ansible playbook step. Log 
below. I tried looking at the file specified for retry, below 
(/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry); it 
contains the word 'localhost'. 

The log below didn't contain anything I could see that was actionable; given 
that it was an Ansible error, I hunted down the config and enabled logging. On 
this run the error was different - the installer log was the same, but the 
reported error (from the installer) changed. 

The first time, the installer said:

[ INFO  ] TASK [Wait for the host to become non operational]
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, 
"attempts": 150, "changed": false}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing 
ansible-playbook
[ INFO  ] Stage: Clean up


Second:

[ INFO  ] TASK [Get local vm ip]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, 
"cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:11:e7:bd | awk '{ 
print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.093840", "end": "2018-02-13 
16:53:08.658556", "rc": 0, "start": "2018-02-13 16:53:08.564716", "stderr": "", 
"stderr_lines": [], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing 
ansible-playbook
[ INFO  ] Stage: Clean up



Ansible log below; as with that second snippet, it appears that it was trying 
to parse out an IP address from virsh's list of DHCP leases, couldn't, and died. 
Which makes sense: I gave it a static IP, and unless I'm missing something, 
setup should not have been doing that. I verified that the answer file has the 
IP:

OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.181.26.150/24

Anyone see what is wrong here?

-j


hosted-engine --deploy log:

2018-02-13 16:20:32,138-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 TASK [Force host-deploy in offline mode]
2018-02-13 16:20:33,041-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 changed: [localhost]
2018-02-13 16:20:33,342-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 TASK [include_tasks]
2018-02-13 16:20:33,443-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 ok: [localhost]
2018-02-13 16:20:33,744-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 TASK [Obtain SSO token using 
username/password credentials]
2018-02-13 16:20:35,248-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 ok: [localhost]
2018-02-13 16:20:35,550-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 TASK [Add host]
2018-02-13 16:20:37,053-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 changed: [localhost]
2018-02-13 16:20:37,355-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:100 TASK [Wait for the host to become non 
operational]
2018-02-13 16:27:48,895-0800 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 
{u'_ansible_parsed': True, u'_ansible_no_log': False, u'changed': False, 
u'attempts': 150, u'invocation': {u'module_args': {u'pattern': 
u'name=ovirt-1.squaretrade.com', u'fetch_nested': False, u'nested_attributes': 
[]}}, u'ansible_facts': {u'ovirt_hosts': []}}
2018-02-13 16:27:48,995-0800 ERROR 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 
fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, 
"attempts": 150, "changed": false}
2018-02-13 16:27:49,297-0800 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 
PLAY RECAP [localhost] : ok: 42 changed: 17 unreachable: 0 skipped: 2 failed: 1
2018-02-13 16:27:49,397-0800 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 
PLAY RECAP [ovirt-engine-1.squaretrade.com] : ok: 15 changed: 8 unreachable: 0 
skipped: 4 failed: 0
2018-02-13 16:27:49,498-0800 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 
ansible-playbook rc: 2
2018-02-13 16:27:49,498-0800 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 
ansible-playbook stdout:
2018-02-13 16:27:49,499-0800 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189  to retry, 
use: --limit 
@/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry

2018-02-13 16:27:49,499-0800 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 
ansible-playbook stderr:
2018-02-13 16:27:49,500-0800 DEBUG otopi.context context._executeMethod:143 
method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in 
_executeMethod
method['method']()
  File 

Re: [ovirt-users] Ovirt backups lead to unresponsive VM

2018-02-13 Thread Alex K
Thank you Nir for the below.

I am putting some comments inline in blue.


On Tue, Feb 13, 2018 at 7:33 PM, Nir Soffer  wrote:

> On Wed, Jan 24, 2018 at 3:19 PM Alex K  wrote:
>
>> Hi all,
>>
>> I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup on
>> top glusterfs.
>> On some VMs (especially one Windows server 2016 64bit with 500 GB of
>> disk). Guest agents are installed at VMs. i almost always observe that
>> during the backup of the VM the VM is rendered unresponsive (dashboard
>> shows a question mark at the VM status and VM does not respond to ping or
>> to anything).
>>
>> For scheduled backups I use:
>>
>> https://github.com/wefixit-AT/oVirtBackup
>>
>> The script does the following:
>>
>> 1. snapshot VM (this is done ok without any failure)
>>
>
> This is a very cheap operation
>
>
>> 2. Clone snapshot (this steps renders the VM unresponsive)
>>
>
> This copies 500g of data. In the gluster case, it copies 1500g of data, since in
> glusterfs the client
> is doing the replication.
>
> Maybe your network or gluster server is too slow? Can you describe the
> network topology?
>
> Please attach also the volume info for the gluster volume, maybe it is not
> configured in the
> best way?
>

The network is 1Gbit. The hosts (3 hosts) are decent ones and new hardware,
with each host having 32GB RAM, 16 CPU cores and 2 TB of storage in RAID10.
The hosted VMs (7 VMs) exhibit high performance. The VMs are Windows 2016
and Windows 10.
The network topology is: two networks defined in ovirt: ovirtmgmt is for
the management and access network, and "storage" is a separate network, where
each server is connected with two network cables to a managed switch with
mode 6 load balancing. This storage network is used for gluster traffic.
The volume configuration is attached.

> 3. Export Clone
>>
>
> This copies 500g to the export domain. If the export domain is on glusterfs
> as well, you
> now copy another 1500g of data.
>
>
The export domain is a Synology NAS with an NFS share.  If the cloning succeeds, then
the export completes OK.

> 4. Delete clone
>>
>> 5. Delete snapshot
>>
>
> It is not clear why you need to clone the VM before you export it; you can
> save half of
> the data copies.
>
Because I cannot export the VM while it is running. It does not provide
such an option.

>
> If you are on 4.2, you can back up the VM *while the VM is running* by:
> - Take a snapshot
> - Get the VM OVF from the engine API
> - Download the VM disks using ovirt-imageio and store the snapshots in
>   your backup storage
> - Delete the snapshot
>
> In this flow, you would copy 500g.
>
I was not aware of this option. Checking quickly on the site, it seems that
it is still only half implemented? Is there any script that I may use to test
this? I am interested in having these backups scheduled.


> Daniel, please correct me if I'm wrong regarding doing this online.
>
> Regardless, a vm should not become non-responsive while cloning. Please
> file a bug
> for this and attach engine, vdsm, and glusterfs logs.
>
>
Nir
>
> Do you have any similar experience? Any suggestions to address this?
>>
>> I have never seen such issue with hosted Linux VMs.
>>
>> The cluster has enough storage to accommodate the clone.
>>
>>
>> Thanx,
>>
>> Alex
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
Volume Name: vms
Type: Replicate
Volume ID: 00fee7f3-76e6-42b2-8f66-606b91df4a97
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster2:/gluster/vms/brick
Brick2: gluster0:/gluster/vms/brick
Brick3: gluster1:/gluster/vms/brick
Options Reconfigured:
features.shard-block-size: 512MB
server.allow-insecure: on
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: on
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: off
nfs.disable: on
nfs.export-volumes: on
cluster.granular-entry-heal: enable
performance.cache-size: 1GB
server.event-threads: 4
client.event-threads: 4
[root@v0 setel]# 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt backups lead to unresponsive VM

2018-02-13 Thread Nir Soffer
On Wed, Jan 24, 2018 at 3:19 PM Alex K  wrote:

> Hi all,
>
> I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup on
> top glusterfs.
> On some VMs (especially one Windows server 2016 64bit with 500 GB of
> disk). Guest agents are installed at VMs. i almost always observe that
> during the backup of the VM the VM is rendered unresponsive (dashboard
> shows a question mark at the VM status and VM does not respond to ping or
> to anything).
>
> For scheduled backups I use:
>
> https://github.com/wefixit-AT/oVirtBackup
>
> The script does the following:
>
> 1. snapshot VM (this is done ok without any failure)
>

This is a very cheap operation


> 2. Clone snapshot (this steps renders the VM unresponsive)
>

This copies 500g of data. In the gluster case, it copies 1500g of data, since in
glusterfs the client
is doing the replication.

Maybe your network or gluster server is too slow? Can you describe the
network topology?

Please attach also the volume info for the gluster volume, maybe it is not
configured in the
best way?


> 3. Export Clone
>

This copies 500g to the export domain. If the export domain is on glusterfs
as well, you
now copy another 1500g of data.


> 4. Delete clone
>
> 5. Delete snapshot
>

It is not clear why you need to clone the VM before you export it; you can
save half of
the data copies.

If you are on 4.2, you can back up the VM *while the VM is running* by:
- Take a snapshot
- Get the VM OVF from the engine API
- Download the VM disks using ovirt-imageio and store the snapshots in your
  backup storage
- Delete the snapshot

In this flow, you would copy 500g.
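
A very rough sketch of that flow with ovirt-engine-sdk-python 4.2, loosely 
following the SDK's download_disk_snapshots.py example. The engine URL, 
credentials, CA file, storage domain and VM name are placeholders, and the OVF 
step, error handling and proxy certificate handling are left out, so treat it 
as an outline to check against the SDK examples rather than a finished backup 
script:

import time

import requests
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
system_service = connection.system_service()

# 1. Find the VM and take a snapshot without memory state (cheap).
vms_service = system_service.vms_service()
vm = vms_service.list(search='name=myvm')[0]
snaps_service = vms_service.vm_service(vm.id).snapshots_service()
snap = snaps_service.add(
    types.Snapshot(description='backup', persist_memorystate=False))
snap_service = snaps_service.snapshot_service(snap.id)
while snap_service.get().snapshot_status != types.SnapshotStatus.OK:
    time.sleep(5)

# 2. Locate the disk snapshots belonging to this snapshot on the data domain.
sds_service = system_service.storage_domains_service()
sd = sds_service.list(search='name=mydata')[0]        # placeholder domain name
sd_service = sds_service.storage_domain_service(sd.id)
disk_snaps = [ds for ds in sd_service.disk_snapshots_service().list()
              if ds.snapshot is not None and ds.snapshot.id == snap.id]

# 3. Download each disk snapshot through the engine's image I/O proxy.
transfers_service = system_service.image_transfers_service()
for ds in disk_snaps:
    transfer = transfers_service.add(types.ImageTransfer(
        snapshot=types.DiskSnapshot(id=ds.id),
        direction=types.ImageTransferDirection.DOWNLOAD,
    ))
    transfer_service = transfers_service.image_transfer_service(transfer.id)
    while transfer.phase == types.ImageTransferPhase.INITIALIZING:
        time.sleep(1)
        transfer = transfer_service.get()

    # The proxy URL can be read with any HTTP client; 'ca.pem' may need to be
    # the image proxy's CA rather than the engine's, depending on the setup.
    with requests.get(transfer.proxy_url, verify='ca.pem', stream=True) as resp:
        with open('%s-%s.qcow2' % (vm.name, ds.id), 'wb') as out:
            for chunk in resp.iter_content(chunk_size=1024 * 1024):
                out.write(chunk)
    transfer_service.finalize()

# 4. Once the copies are safe, delete the snapshot.
snap_service.remove()
connection.close()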

Daniel, please correct me if I'm wrong regarding doing this online.

Regardless, a vm should not become non-responsive while cloning. Please
file a bug
for this and attach engine, vdsm, and glusterfs logs.

Nir

Do you have any similar experience? Any suggestions to address this?
>
> I have never seen such issue with hosted Linux VMs.
>
> The cluster has enough storage to accommodate the clone.
>
>
> Thanx,
>
> Alex
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Qemu-block] qcow2 images corruption

2018-02-13 Thread Nicolas Ecarnot

On 13/02/2018 at 16:26, Nicolas Ecarnot wrote:
>> It would be good if you could make the 'qemu-img check' output available
>> somewhere.
>

I found this :
https://github.com/ShijunDeng/qcow2-dump

and the transcript (beautiful colors when viewed with "more") is attached :


--
Nicolas ECARNOT
Script started on Tue 13 Feb 2018 17:31:05 CET
root@serv-hv-adm13:/home# /root/qcow2-dump -m check serv-term-adm4-corr.qcow2.img

File: serv-term-adm4-corr.qcow2.img


magic: 0x514649fb
version: 2
backing_file_offset: 0x0
backing_file_size: 0
fs_type: xfs
virtual_size: 64424509440 / 61440M / 60G
disk_size: 36507222016 / 34816M / 34G
seek_end: 36507222016 [0x88000] / 34816M / 34G
cluster_bits: 16
cluster_size: 65536
crypt_method: 0
csize_shift: 54
csize_mask: 255
cluster_offset_mask: 0x3f
l1_table_offset: 0x76a46
l1_size: 120
l1_vm_state_index: 120
l2_size: 8192
refcount_order: 4
refcount_bits: 16
refcount_block_bits: 15
refcount_block_size: 32768
refcount_table_offset: 0x1
refcount_table_clusters: 1
snapshots_offset: 0x0
nb_snapshots: 0
incompatible_features: 
compatible_features: 
autoclear_features: 



Active Snapshot:

L1 Table:   [offset: 0x76a46, len: 120]

Result:
L1 Table:       unaligned: 0, invalid: 0, unused: 53, used: 67
L2 Table:       unaligned: 0, invalid: 0, unused: 20304, used: 528560



Refcount Table:

Refcount Table: [offset: 0x1, len: 8192]

Result:
Refcount Table: unaligned: 0, invalid: 0, unused: 8175, used: 17
Refcount:       error: 4342, leak: 0, unused: 28426, used: 524288



COPIED OFLAG:


Result:
L1 Table ERROR OFLAG_COPIED: 1
L2 Table ERROR OFLAG_COPIED: 4323
Active L2 COPIED: 528560 [34639708160 / 33035M / 32G]



Active Cluster:


Result:
Active Cluster: reuse: 17



Summary:
preallocation:  off
Active Cluster: reuse: 17
Refcount Table: unaligned: 0, invalid: 0, unused: 8175, used: 17
Refcount:       error: 4342, leak: 0, rebuild: 4325, unused: 28426, used: 524288
L1 Table:       unaligned: 0, invalid: 0, unused: 53, used: 67
oflag copied: 1
L2 Table:       unaligned: 0, invalid: 0, unused: 20304, used: 528560
oflag copied: 4323


### qcow2 image has refcount errors!   (=_=#)###
###and qcow2 image has copied errors!  (o_0)?###
###  Sadly: refcount error cause active cluster reused! Orz  ###
### Please backup this image and contact the author! ###



root@serv-hv-adm13:/home# exit

Script ended on Tue 13 Feb 2018 17:31:13 CET
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Qemu-block] qcow2 images corruption

2018-02-13 Thread Nicolas Ecarnot

Hello Kevin,

On 13/02/2018 at 10:41, Kevin Wolf wrote:

On 07.02.2018 at 18:06, Nicolas Ecarnot wrote:

TL; DR : qcow2 images keep getting corrupted. Any workaround?


Not without knowing the cause.


Actually, my main concern is mostly about finding the cause rather than 
correcting my corrupted VMs.


Another way to say it : I prefer to help oVirt than help myself.


The first thing to make sure is that the image isn't touched by a second
process while QEMU is running a VM.


Indeed, I read some BZ about this issue : they were raised by a user who 
ran some qemu-img commands on a "mounted" image, thus leading to some 
corruption.
In my case, I'm not playing with this, and the corrupted VMs were only 
touched by classical oVirt actions.



The classic one is using 'qemu-img
snapshot' on the image of a running VM, which is instant corruption (and
newer QEMU versions have locking in place to prevent this), but we have
seen more absurd cases of things outside QEMU tampering with the image
when we were investigating previous corruption reports.

This covers the majority of all reports, we haven't had a real
corruption caused by a QEMU bug in ages.


May I ask in which QEMU version this kind of locking was added?
As I wrote, our oVirt setup is 3.6, so not recent.




After having found (https://access.redhat.com/solutions/1173623) the right
logical volume hosting the qcow2 image, I can run qemu-img check on it.
- On 80% of my VMs, I find no errors.
- On 15% of them, I find Leaked cluster errors that I can correct using
"qemu-img check -r all"
- On 5% of them, I find Leaked clusters errors and further fatal errors,
which can not be corrected with qemu-img.
In rare cases, qemu-img can correct them, but destroys large parts of the
image (becomes unusable), and on other cases it can not correct them at all.


It would be good if you could make the 'qemu-img check' output available
somewhere.


See attachment.



It would be even better if we could have a look at the respective image.
I seem to remember that John (CCed) had a few scripts to analyse
corrupted qcow2 images, maybe we would be able to see something there.


I just exported it like this :
qemu-img convert /dev/the_correct_path /home/blablah.qcow2.img

The resulting file is 32G and I need an idea to transfer this img to you.




What I read similar to my case is :
- usage of qcow2
- heavy disk I/O
- using the virtio-blk driver

In the proxmox thread, they tend to say that using virtio-scsi is the
solution. Having asked this question to oVirt experts
(https://lists.ovirt.org/pipermail/users/2018-February/086753.html) but it's
not clear the driver is to blame.


This seems very unlikely. The corruption you're seeing is in the qcow2
metadata, not only in the guest data.


Are you saying:
- the corruption is in the metadata and in the guest data
OR
- the corruption is only in the metadata
?


If anything, virtio-scsi exercises
more qcow2 code paths than virtio-blk, so any potential bug that affects
virtio-blk should also affect virtio-scsi, but not the other way around.


I get that.




I agree with the answer Yaniv Kaul gave to me, saying I have to properly
report the issue, so I'm longing to know which peculiar information I can
give you now.


To be honest, debugging corruption after the fact is pretty hard. We'd
need the 'qemu-img check' output


Done.


and ideally the image to do anything,


I remember some Red Hat people once gave me temporary access to put a 
heavy file on some dedicated server. Is that still possible?



but I can't promise that anything would come out of this.

Best would be a reproducer, or at least some operation that you can link
to the appearance of the corruption. Then we could take a more targeted
look at the respective code.


Sure.
Alas, I find no obvious pattern leading to corruption:
On the guest side, it appeared with Windows 2003, 2008, 2012, and Linux 
CentOS 6 and 7. It appeared with virtio-blk; I changed some VMs to use 
virtio-scsi, but it's too soon to see corruption appear in that case.
As I said, I use snapshots VERY rarely, and our versions are too old, 
so we do them the cold way only (VM shutdown). So very safely.
The "weirdest" thing we do is migrate VMs: you see how conservative 
we are!



As you can imagine, all this setup is in production, and for most of the
VMs, I can not "play" with them. Moreover, we launched a campaign of nightly
stopping every VM, qemu-img check them one by one, then boot.
So it might take some time before I find another corrupted image.
(which I'll preciously store for debug)

Other informations : We very rarely do snapshots, but I'm close to imagine
that automated migrations of VMs could trigger similar behaviors on qcow2
images.


To my knowledge, oVirt only uses external snapshots and creates them
with QMP. This should be perfectly safe because from the perspective of
the qcow2 image being snapshotted, it just means that it gets no new
write 

Re: [ovirt-users] Network configuration validation error

2018-02-13 Thread Michael Burman
Thanks for the input,

It's weird that you see this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1528906 on 4.2.1.6 because it
was already tested and verified on 4.2.1.1.
I will check this again.

On Tue, Feb 13, 2018 at 3:09 PM,  wrote:

>
> I did not see I had to enable another repo to get this update, so I was
> sure I had the latest version available !
> After adding it, things went a lot better and I was able to update the
> engine and all the nodes flawlessly to version 4.2.1.6-1.el7.centos
> Thanks a lot for your help !
>
> The "no default route error" has disappeared indeed.
>
> But I still couldn't validate network setup modifications on one node as I
> still had the following error in the GUI :
>
>- must match "^b((25[0-5]|2[0-4]d|[01]dd|d?d)_){3}(25[0-5]|2[0-4]d|[01]
>dd|d?d)"
>- Attribute: ipConfiguration.iPv4Addresses[0].gateway
>
> So I tried a dummy thing : I put a value in the gateway field for the NIC
> which doesn't need one (NFS), was able to validate. Then I edited it again,
> removed the value and was able to validate again !
>
> Regards
>
>
> On 12-Feb-2018 10:42:30 +0100, mbur...@redhat.com wrote:
>
> "no default route" bug was fixed only on 4.2.1
> Your current version doesn't have the fix
>
> On Mon, Feb 12, 2018 at 11:09 AM,  wrote:
>
>>
>>
>>
>>
>> On 12-Feb-2018 08:06:43 +0100, jbe...@redhat.com wrote:
>>
>> > This option relevant only for the upgrade from 3.6 to 4.0(engine had
>> > different OS major versions), it all other cases the upgrade flow very
>> > similar to upgrade flow of standard engine environment.
>> >
>> >
>> > 1. Put hosted-engine environment to GlobalMaintenance(you can do it via
>> > UI)
>> > 2. Update engine packages(# yum update -y)
>> > 3. Run engine-setup
>> > 4. Disable GlobalMaintenance
>> >
>>
>>
>> So I followed these steps connected in the engine VM and didn't get any
>> error message. But the version showed in the GUI is
>> still 4.2.0.2-1.el7.centos. Yum had no newer packages to install. And I
>> still have the "no default route" and network validation problems.
>> Regards
>>
>> > Could someone explain me at least what "Cluster PROD is at version 4.2
>> which
>> > is not supported by this upgrade flow. Please fix it before upgrading."
>> > means ? As far as I know 4.2 is the most recent branch available, isn't
>> it ?
>>
>> I have no idea where did you get
>>
>> "Cluster PROD is at version 4.2 which is not supported by this upgrade
>> flow. Please fix it before upgrading."
>>
>> Please do not cut output and provide exact one.
>>
>> IIUC you should do 'yum update ovirt*setup*' and then 'engine-setup'
>> and only after it would finish successfully you would do 'yum -y update'.
>> Maybe that's your problem?
>>
>> Jiri
>>
>> --
>> FreeMail powered by mail.fr
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
>
> Michael Burman
>
> Senior Quality engineer - rhv network - redhat israel
>
> Red Hat
>
> 
>
> mbur...@redhat.com    M: 0545355725 IM: mburman
>
>
>
> --
> FreeMail powered by mail.fr
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

Michael Burman

Senior Quality engineer - rhv network - redhat israel

Red Hat



mbur...@redhat.com    M: 0545355725 IM: mburman

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network and disk inactive after 4.2.1 upgrade

2018-02-13 Thread Chris Adams
I upgraded my dev cluster from 4.2.0 to 4.2.1 yesterday, and I noticed
that all my VMs show the network interfaces unplugged and disks inactive
(despite the VMs being up and running just fine).  This includes the
hosted engine.

I had not rebooted VMs after upgrading, so I tried powering one off and
on; it would not start until I manually activated the disk.

I haven't seen a problem like this before (although it usually means
that I did something wrong :) ) - what should I look at?
-- 
Chris Adams 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CentOS 7 Hyperconverged oVirt 4.2 with Self-Hosted-Engine with glusterfs with 2 Hypervisors and 1 glusterfs-Arbiter-only

2018-02-13 Thread Philipp Richter
Hi,

> The recommended way to install this would be by using one of the
> "full" nodes and deploying hosted engine via cockpit there. The
> gdeploy plugin in cockpit should allow you to configure the arbiter
> node.
> 
> The documentation for deploying RHHI (hyper converged RH product) is
> here:
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.1/html-single/deploying_red_hat_hyperconverged_infrastructure/index#deploy

Thanks for the documentation pointer about RHHI.
I was able to successfully set up all three nodes. I had to edit the final 
gdeploy file, as the installer reserves 20GB per arbiter volume and I don't 
have that much space available for this POC.

The problem now is that I don't see the third node, e.g. in the Storage / 
Volumes / Bricks view, and I get warning messages every few seconds in 
/var/log/ovirt-engine/engine.log like:

2018-02-13 15:40:26,188+01 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler3) [5a8c68e2] Could not add brick 
'ovirtpoc03-storage:/gluster_bricks/engine/engine' to volume 
'2e7a0ac3-3a74-40ba-81ff-d45b2b35aace' - server uuid 
'0a100f2f-a9ee-4711-b997-b674ee61f539' not found in cluster 
'cab4ba5c-10ba-11e8-aed5-00163e6a7af9'
2018-02-13 15:40:26,193+01 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler3) [5a8c68e2] Could not add brick 
'ovirtpoc03-storage:/gluster_bricks/vmstore/vmstore' to volume 
'5a356223-8774-4944-9a95-3962a3c657e4' - server uuid 
'0a100f2f-a9ee-4711-b997-b674ee61f539' not found in cluster 
'cab4ba5c-10ba-11e8-aed5-00163e6a7af9'

Of course I cannot add the third node as a normal oVirt host, as it is slow, has 
only a minimal amount of RAM, and its CPU (AMD) is different from that of the 
two "real" hypervisors (Intel).

Is there a way to add the third node only for gluster management, not as a 
hypervisor? Or is there any other method to at least quieten the log?

thanks,
-- 

: Philipp Richter
: LINFORGE | Peace of mind for your IT
:
: T: +43 1 890 79 99
: E: philipp.rich...@linforge.com
: https://www.xing.com/profile/Philipp_Richter15
: https://www.linkedin.com/in/philipp-richter
:
: LINFORGE Technologies GmbH
: Brehmstraße 10
: 1110 Wien
: Österreich
:
: Firmenbuchnummer: FN 216034y
: USt.- Nummer : ATU53054901
: Gerichtsstand: Wien
:
: LINFORGE® is a registered trademark of LINFORGE, Austria.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Import Domain and snapshot issue ... please help !!!

2018-02-13 Thread Maor Lipchuk
On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk  wrote:

>
> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>> see the attach files please ... thanks for your attention !!!
>>
>
>
> Seems like the engine logs does not contain the entire process, can you
> please share older logs since the import operation?
>

And VDSM logs as well from your host


>
>
>> Best Regards
>> Enrico
>>
>>
>> On 13/02/2018 at 14:09, Maor Lipchuk wrote:
>>
>>
>>
>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti <
>> enrico.becche...@pg.infn.it> wrote:
>>
>>>  Dear All,
>>> I have been using ovirt for a long time with three hypervisors and an
>>> external engine running in a centos vm .
>>>
>>> This three hypervisors have HBAs and access to fiber channel storage.
>>> Until recently I used version 3.5, then I reinstalled everything from
>>> scratch and now I have 4.2.
>>>
>>> Before formatting everything, I detach the storage data domani (FC) with
>>> the virtual machines and reimported it to the new 4.2 and all went well. In
>>> this domain there were virtual machines with and without snapshots.
>>>
>>> Now I have two problems. The first is that if I try to delete a snapshot
>>> the process is not end successful and remains hanging and the second
>>> problem is that
>>> in one case I lost the virtual machine !!!
>>>
>>
>>
>> Not sure that I fully understand the scneario.'
>> How was the virtual machine got lost if you only tried to delete a
>> snapshot?
>>
>>
>>>
>>> So I need your help to kill the three running zombie tasks because with
>>> taskcleaner.sh I can't do anything and then I need to know how I can delete
>>> the old snapshots
>>> made with the 3.5 without losing other data or without having new
>>> processes that terminate correctly.
>>>
>>> If you want some log files please let me know.
>>>
>>
>>
>> Hi Enrico,
>>
>> Can you please attach the engine and VDSM logs
>>
>>
>>>
>>> Thank you so much.
>>> Best Regards
>>> Enrico
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>> --
>> ___
>>
>> Enrico BecchettiServizio di Calcolo e Reti
>>
>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
>> Via Pascoli,c/o Dipartimento di Fisica  06123 Perugia (ITALY)
>> Phone: +39 075 5852777  Mail: 
>> Enrico.Becchettipg.infn.it
>> __
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Import Domain and snapshot issue ... please help !!!

2018-02-13 Thread Maor Lipchuk
On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:

> see the attach files please ... thanks for your attention !!!
>


Seems like the engine logs do not contain the entire process; can you
please share older logs, from the import operation onwards?


> Best Regards
> Enrico
>
>
> On 13/02/2018 at 14:09, Maor Lipchuk wrote:
>
>
>
> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>>  Dear All,
>> I have been using ovirt for a long time with three hypervisors and an
>> external engine running in a centos vm .
>>
>> This three hypervisors have HBAs and access to fiber channel storage.
>> Until recently I used version 3.5, then I reinstalled everything from
>> scratch and now I have 4.2.
>>
>> Before formatting everything, I detach the storage data domani (FC) with
>> the virtual machines and reimported it to the new 4.2 and all went well. In
>> this domain there were virtual machines with and without snapshots.
>>
>> Now I have two problems. The first is that if I try to delete a snapshot
>> the process is not end successful and remains hanging and the second
>> problem is that
>> in one case I lost the virtual machine !!!
>>
>
>
> Not sure that I fully understand the scneario.'
> How was the virtual machine got lost if you only tried to delete a
> snapshot?
>
>
>>
>> So I need your help to kill the three running zombie tasks because with
>> taskcleaner.sh I can't do anything and then I need to know how I can delete
>> the old snapshots
>> made with the 3.5 without losing other data or without having new
>> processes that terminate correctly.
>>
>> If you want some log files please let me know.
>>
>
>
> Hi Enrico,
>
> Can you please attach the engine and VDSM logs
>
>
>>
>> Thank you so much.
>> Best Regards
>> Enrico
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> --
> ___
>
> Enrico BecchettiServizio di Calcolo e Reti
>
> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
> Via Pascoli,c/o Dipartimento di Fisica  06123 Perugia (ITALY)
> Phone: +39 075 5852777  Mail: 
> Enrico.Becchettipg.infn.it
> __
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Simone Tiraboschi
On Tue, Feb 13, 2018 at 12:28 PM, Simone Tiraboschi 
wrote:

>
>
> On Tue, Feb 13, 2018 at 12:26 PM, Stefano Danzi  wrote:
>
>> Strange thing.
>>
>> after "vdsm-client Host getCapabilities" command, cluster cpu type become
>> "Intel Sandybridge Family". Same thing for all VMs.
>>
>
> Can you please share engine.log ?
>

OK, I found a specific patch for that issue:
https://gerrit.ovirt.org/#/c/86913/
but the patch didn't land
in ovirt-engine-dbscripts-4.2.1.6-1.el7.centos.noarch, so every 4.2.0 ->
4.2.1 upgrade will result in that issue if the cluster CPU family is not one of:
  Intel Nehalem Family-IBRS
  Intel Nehalem-IBRS Family
  Intel Westmere-IBRS Family
  Intel SandyBridge-IBRS Family
  Intel Haswell-noTSX-IBRS Family
  Intel Haswell-IBRS Family
  Intel Broadwell-noTSX-IBRS Family
  Intel Broadwell-IBRS Family
  Intel Skylake Family
  Intel Skylake-IBRS Family
as in your case.

Let's see if we can have a quick respin.


>
>
>> Now I can run VMs.
>>
>> On 13/02/2018 at 11:28, Simone Tiraboschi wrote:
>>
>> Ciao Stefano,
>> we have to properly investigate this: thanks for the report.
>>
>> Can you please attach from your host the output of
>> - grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf
>> - vdsm-client Host getCapabilities
>>
>> Can you please attach also engine-setup logs from your 4.2.0 to 4.2.1
>> upgrade?
>>
>>
>>
>> On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi  wrote:
>>
>>> Hello!
>>>
>>> In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any
>>> VM.
>>> Hosted engine starts regularly.
>>>
>>> I have a sigle host with Hosted Engine.
>>>
>>> Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
>>>
>>> When I start any VM I get this error: "The CPU type of the cluster is
>>> unknown. Its possible to change the cluster cpu or set a different one per
>>> VM."
>>>
>>> All VMs have " Guest CPU Type: N/D"
>>>
>>> Cluster now has CPU Type "Intel Conroe Family" (I don't remember cpu
>>> type before the upgrade), my CPU should be Ivy Bridge but it isn't in the
>>> dropdown list.
>>>
>>> If I try to select a similar cpu (SandyBridge IBRS) I get an error. I
>>> can't chage cluster cpu type when I have running hosts with a lower CPU
>>> type.
>>> I can't put host in maintenance because  hosted engine is running on it.
>>>
>>> How I can solve?
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>> --
>>
>> Stefano Danzi
>> Responsabile ICT
>>
>> HAWAI ITALIA S.r.l.
>> Via Forte Garofolo, 16
>> 37057 S. Giovanni Lupatoto Verona Italia
>>
>> P. IVA 01680700232
>>
>> tel. +39/045/8266400
>> fax +39/045/8266401
>> Web www.hawai.it
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Import Domain and snapshot issue ... please help !!!

2018-02-13 Thread Maor Lipchuk
On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:

>  Dear All,
> I have been using ovirt for a long time with three hypervisors and an
> external engine running in a centos vm .
>
> This three hypervisors have HBAs and access to fiber channel storage.
> Until recently I used version 3.5, then I reinstalled everything from
> scratch and now I have 4.2.
>
> Before formatting everything, I detach the storage data domani (FC) with
> the virtual machines and reimported it to the new 4.2 and all went well. In
> this domain there were virtual machines with and without snapshots.
>
> Now I have two problems. The first is that if I try to delete a snapshot
> the process is not end successful and remains hanging and the second
> problem is that
> in one case I lost the virtual machine !!!
>


Not sure that I fully understand the scenario.
How did the virtual machine get lost if you only tried to delete a snapshot?


>
> So I need your help to kill the three running zombie tasks because with
> taskcleaner.sh I can't do anything and then I need to know how I can delete
> the old snapshots
> made with the 3.5 without losing other data or without having new
> processes that terminate correctly.
>
> If you want some log files please let me know.
>


Hi Enrico,

Can you please attach the engine and VDSM logs


>
> Thank you so much.
> Best Regards
> Enrico
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network configuration validation error

2018-02-13 Thread spfma . tech
I did not see I had to enable another repo to get this update, so I was sure I 
had the latest version available!
After adding it, things went a lot better and I was able to update the engine 
and all the nodes flawlessly to version 4.2.1.6-1.el7.centos. Thanks a lot for 
your help!

The "no default route" error has indeed disappeared.

But I still couldn't validate network setup modifications on one node, as I 
still had the following error in the GUI:

* must match 
"^b((25[0-5]|2[0-4]d|[01]dd|d?d)_){3}(25[0-5]|2[0-4]d|[01]dd|d?d)"
* Attribute: ipConfiguration.iPv4Addresses[0].gateway

So I tried a dummy thing: I put a value in the gateway field for the NIC 
which doesn't need one (NFS) and was able to validate. Then I edited it again, 
removed the value, and was able to validate again!

Regards
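
For what it's worth, the error above looks like the engine's IPv4 pattern check 
applied to the gateway field; the pattern quoted in the GUI lost its backslashes 
somewhere in the mail rendering, but it is essentially a dotted-quad check, as 
in this small reconstruction (not the exact regex the engine uses):

# Illustration only: a dotted-quad IPv4 check in the same spirit as the pattern
# quoted in the validation error above. My own reconstruction, not the engine's regex.
import re

IPV4 = re.compile(
    r'^((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\.){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)$')

for gateway in ('10.100.1.1', ''):
    verdict = 'matches' if IPV4.match(gateway) else 'does not match'
    print('%r %s the IPv4 pattern' % (gateway, verdict))

# An empty gateway string does not match, which is consistent with the validation
# only passing once some value had been entered in the gateway field.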

On 12-Feb-2018 10:42:30 +0100, mbur...@redhat.com wrote:
  The "no default route" bug was fixed only in 4.2.1. Your current version doesn't 
have the fix.
 On Mon, Feb 12, 2018 at 11:09 AM,  wrote:

On 12-Feb-2018 08:06:43 +0100, jbe...@redhat.com wrote: 
> This option relevant only for the upgrade from 3.6 to 4.0(engine had
 > different OS major versions), it all other cases the upgrade flow very
 > similar to upgrade flow of standard engine environment.
 > 
 > 
 > 1. Put hosted-engine environment to GlobalMaintenance(you can do it via
 > UI)
 > 2. Update engine packages(# yum update -y)
 > 3. Run engine-setup
 > 4. Disable GlobalMaintenance
 >   So I followed these steps connected in the engine VM and didn't get any 
 > error message. But the version showed in the GUI is  still 
 > 4.2.0.2-1.el7.centos. Yum had no newer packages to install. And I still have 
 > the "no default route" and network validation problems. Regards   
 > Could someone explain me at least what "Cluster PROD is at version 4.2 which
 > is not supported by this upgrade flow. Please fix it before upgrading."
 > means ? As far as I know 4.2 is the most recent branch available, isn't it ?

 I have no idea where did you get

 "Cluster PROD is at version 4.2 which is not supported by this upgrade flow. 
Please fix it before upgrading."

 Please do not cut output and provide exact one.

 IIUC you should do 'yum update ovirt*setup*' and then 'engine-setup'
 and only after it would finish successfully you would do 'yum -y update'.
 Maybe that's your problem?

 Jiri 

-
FreeMail powered by mail.fr  
___
 Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

-- 

Michael Burman 

Senior Quality engineer - rhv network - redhat israel 

Red Hat 

 mbur...@redhat.com  M: 0545355725 IM: mburman 

-
FreeMail powered by mail.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Import Domain and snapshot issue ... please help !!!

2018-02-13 Thread Enrico Becchetti

 Dear All,
I have been using oVirt for a long time, with three hypervisors and an 
external engine running in a CentOS VM.


These three hypervisors have HBAs and access to Fibre Channel storage. 
Until recently I used version 3.5, then I reinstalled everything from 
scratch and now I have 4.2.


Before formatting everything, I detached the storage data domain (FC) with 
the virtual machines and reimported it into the new 4.2, and all went well. 
In this domain there were virtual machines with and without snapshots.

Now I have two problems. The first is that if I try to delete a snapshot, 
the process does not end successfully and remains hanging; the second 
problem is that in one case I lost the virtual machine !!!

So I need your help to kill the three running zombie tasks, because with 
taskcleaner.sh I can't do anything, and then I need to know how I can 
delete the old snapshots made with 3.5 without losing other data and 
without ending up with new processes that do not terminate correctly.


If you want some log files please let me know.

Thank you so much.
Best Regards
Enrico




smime.p7s
Description: Firma crittografica S/MIME
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Simone Tiraboschi
On Tue, Feb 13, 2018 at 12:26 PM, Stefano Danzi  wrote:

> Strange thing.
>
> after "vdsm-client Host getCapabilities" command, cluster cpu type become
> "Intel Sandybridge Family". Same thing for all VMs.
>

Can you please share engine.log ?


> Now I can run VMs.
>
> On 13/02/2018 at 11:28, Simone Tiraboschi wrote:
>
> Ciao Stefano,
> we have to properly investigate this: thanks for the report.
>
> Can you please attach from your host the output of
> - grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf
> - vdsm-client Host getCapabilities
>
> Can you please attach also engine-setup logs from your 4.2.0 to 4.2.1
> upgrade?
>
>
>
> On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi  wrote:
>
>> Hello!
>>
>> In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM.
>> Hosted engine starts regularly.
>>
>> I have a sigle host with Hosted Engine.
>>
>> Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
>>
>> When I start any VM I get this error: "The CPU type of the cluster is
>> unknown. Its possible to change the cluster cpu or set a different one per
>> VM."
>>
>> All VMs have " Guest CPU Type: N/D"
>>
>> Cluster now has CPU Type "Intel Conroe Family" (I don't remember cpu type
>> before the upgrade), my CPU should be Ivy Bridge but it isn't in the
>> dropdown list.
>>
>> If I try to select a similar cpu (SandyBridge IBRS) I get an error. I
>> can't chage cluster cpu type when I have running hosts with a lower CPU
>> type.
>> I can't put host in maintenance because  hosted engine is running on it.
>>
>> How I can solve?
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> --
>
> Stefano Danzi
> Responsabile ICT
>
> HAWAI ITALIA S.r.l.
> Via Forte Garofolo, 16
> 37057 S. Giovanni Lupatoto Verona Italia
>
> P. IVA 01680700232
>
> tel. +39/045/8266400
> fax +39/045/8266401
> Web www.hawai.it
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Stefano Danzi

Strange thing.

after "vdsm-client Host getCapabilities" command, cluster cpu type 
become "Intel Sandybridge Family". Same thing for all VMs.

Now I can run VMs.

On 13/02/2018 at 11:28, Simone Tiraboschi wrote:

Ciao Stefano,
we have to properly investigate this: thanks for the report.

Can you please attach from your host the output of
- grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf
- vdsm-client Host getCapabilities

Can you please attach also engine-setup logs from your 4.2.0 to 4.2.1 
upgrade?




On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi > wrote:


Hello!

In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start
any VM.
Hosted engine starts regularly.

I have a single host with Hosted Engine.

Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz

When I start any VM I get this error: "The CPU type of the cluster
is unknown. Its possible to change the cluster cpu or set a
different one per VM."

All VMs have " Guest CPU Type: N/D"

Cluster now has CPU Type "Intel Conroe Family" (I don't remember
cpu type before the upgrade), my CPU should be Ivy Bridge but it
isn't in the dropdown list.

If I try to select a similar CPU (SandyBridge IBRS) I get an
error. I can't change the cluster CPU type when I have running hosts
with a lower CPU type.
I can't put the host in maintenance because the hosted engine is running
on it.

How I can solve?

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





--

Stefano Danzi
Responsabile ICT

HAWAI ITALIA S.r.l.
Via Forte Garofolo, 16
37057 S. Giovanni Lupatoto Verona Italia

P. IVA 01680700232

tel. +39/045/8266400
fax +39/045/8266401
Web www.hawai.it

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-announce] [ANN] oVirt 4.2.1 Release is now available

2018-02-13 Thread Alex K
Thanx!



On Tue, Feb 13, 2018 at 12:54 PM, Sandro Bonazzola 
wrote:

>
>
> 2018-02-13 11:42 GMT+01:00 Alex K :
>
>> Hi all,
>>
>> Is this version considered production ready?
>>
>
> Yes, 4.2.1 is considered production ready
>
>
>
>
>>
>> Thanx,
>> Alex
>>
>>
>> On Mon, Feb 12, 2018 at 7:14 PM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> 2018-02-12 17:43 GMT+01:00 Gianluca Cecchi :
>>>
 On Mon, Feb 12, 2018 at 4:22 PM, Lev Veyde  wrote:

> The oVirt Project is pleased to announce the availability of the oVirt
> 4.2.1 Release, as of February 12th, 2018
>
> This update is a release of the first in a series of stabilization
> updates to the 4.2
> series.
>
> This release is available now for:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
>
> This release supports Hypervisor Hosts running:
> * Red Hat Enterprise Linux 7.4 or later
> * CentOS Linux (or similar) 7.4 or later
> * oVirt Node 4.2
>
>
 Hello,
 could you confirm that for plain CentOS 7.4 hosts there was no changes
 between rc3 and final 4.2.1?

>>>
>>> We had 3 RC after RC3, last one was RC6 https://lists.ovirt.org/pi
>>> permail/announce/2018-February/000387.html
>>>
>>>
>>>
 I just updated an environment that was in RC3 and while the engine has
 been updated, the host says no updates:

  [root@ov42 ~]# yum update
 Loaded plugins: fastestmirror, langpacks, product-id,
 search-disabled-repos
 Loading mirror speeds from cached hostfile
  * base: artfiles.org
  * extras: ba.mirror.garr.it
  * ovirt-4.2: ftp.nluug.nl
  * ovirt-4.2-epel: epel.besthosting.ua
  * updates: ba.mirror.garr.it
 No packages marked for update

>>>
>>> I think mirrors are still syncing, but resources.ovirt.org is updated.
>>> You can switch from mirrorlist to baseurl in your yum config file if you
>>> don't want to wait for the mirror to finish the sync.
>>>
>>>
>>>
 [root@ov42 ~]#

 The mirrors seem to be the same ones the engine used some minutes before,
 so they should be ok...

 engine packages passed from ovirt-engine-4.2.1.4-1.el7.centos.noarch
 to ovirt-engine-4.2.1.6-1.el7.centos.noarch

 Base oVirt related packages on host are currently of this type, since
 4.2.1rc3:

 libvirt-daemon-3.2.0-14.el7_4.7.x86_64
 ovirt-host-4.2.1-1.el7.centos.x86_64
 ovirt-vmconsole-1.0.4-1.el7.noarch
 qemu-kvm-ev-2.9.0-16.el7_4.13.1.x86_64
 sanlock-3.5.0-1.el7.x86_64
 vdsm-4.20.17-1.el7.centos.x86_64
 virt-v2v-1.36.3-6.el7_4.3.x86_64


>>> yes, most of the changes in the last 3 rcs were related to cockpit-ovirt
>>> / ovirt-node / hosted engine
>>>
>>> $ cat ovirt-4.2.1_rc4.conf ovirt-4.2.1_rc5.conf ovirt-4.2.1_rc6.conf
>>> ovirt-4.2.1.conf
>>>
>>> # ovirt-engine-4.2.1.4
>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artif
>>> acts-el7-x86_64/6483/
>>>
>>> # cockpit-ovirt-0.11.7-0.1
>>> http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-artifac
>>> ts-el7-x86_64/84/
>>>
>>> # ovirt-release42-4.2.1_rc4
>>> http://jenkins.ovirt.org/job/ovirt-release_master_build-arti
>>> facts-el7-x86_64/649/
>>> # otopi-1.7.7
>>> http://jenkins.ovirt.org/job/otopi_4.2_build-artifacts-el7-x86_64/2/
>>>
>>> # ovirt-host-deploy-1.7.2
>>> http://jenkins.ovirt.org/job/ovirt-host-deploy_4.2_build-art
>>> ifacts-el7-x86_64/4/
>>>
>>> # ovirt-hosted-engine-setup-2.2.9
>>> http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_4.2_b
>>> uild-artifacts-el7-x86_64/5/
>>>
>>> # cockpit-ovirt-0.11.11-0.1
>>> http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-artifac
>>> ts-el7-x86_64/96/
>>>
>>> # ovirt-release42-4.2.1_rc5
>>> http://jenkins.ovirt.org/job/ovirt-release_master_build-arti
>>> facts-el7-x86_64/651/
>>>
>>> # ovirt-engine-appliance-4.2-20180202.1
>>> http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_b
>>> uild-artifacts-el7-x86_64/62/
>>>
>>> # ovirt-node-ng-4.2.0-0.20180205.0
>>> http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.2-pre_bui
>>> ld-artifacts-el7-x86_64/212/
>>>
>>> # ovirt-engine-4.2.1.5
>>> http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-artifact
>>> s-el7-x86_64/3/
>>>
>>> # ovirt-engine-4.2.1.6
>>> http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-artifact
>>> s-el7-x86_64/12/
>>>
>>> # ovirt-release42-4.2.1_rc6
>>> http://jenkins.ovirt.org/job/ovirt-release_4.2_build-artifac
>>> ts-el7-x86_64/164/
>>>
>>> # ovirt-release42-4.2.1
>>> http://jenkins.ovirt.org/job/ovirt-release_4.2_build-artifac
>>> ts-el7-x86_64/173/
>>>
>>>
>>>
 Thanks,
 Gianluca

 ___
 Announce mailing list
 annou...@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/announce


>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, 

Re: [ovirt-users] [ovirt-announce] [ANN] oVirt 4.2.1 Release is now available

2018-02-13 Thread Sandro Bonazzola
2018-02-13 11:42 GMT+01:00 Alex K :

> Hi all,
>
> Is this version considered production ready?
>

Yes, 4.2.1 is considered production ready




>
> Thanx,
> Alex
>
>
> On Mon, Feb 12, 2018 at 7:14 PM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> 2018-02-12 17:43 GMT+01:00 Gianluca Cecchi :
>>
>>> On Mon, Feb 12, 2018 at 4:22 PM, Lev Veyde  wrote:
>>>
 The oVirt Project is pleased to announce the availability of the oVirt
 4.2.1 Release, as of February 12th, 2018

 This update is a release of the first in a series of stabilization
 updates to the 4.2
 series.

 This release is available now for:
 * Red Hat Enterprise Linux 7.4 or later
 * CentOS Linux (or similar) 7.4 or later

 This release supports Hypervisor Hosts running:
 * Red Hat Enterprise Linux 7.4 or later
 * CentOS Linux (or similar) 7.4 or later
 * oVirt Node 4.2


>>> Hello,
>>> could you confirm that for plain CentOS 7.4 hosts there were no changes
>>> between rc3 and final 4.2.1?
>>>
>>
>> We had 3 RCs after RC3; the last one was RC6: https://lists.ovirt.org/pi
>> permail/announce/2018-February/000387.html
>>
>>
>>
>>> I just updated an environment that was in RC3 and while the engine has
>>> been updated, the host says no updates:
>>>
>>>  [root@ov42 ~]# yum update
>>> Loaded plugins: fastestmirror, langpacks, product-id,
>>> search-disabled-repos
>>> Loading mirror speeds from cached hostfile
>>>  * base: artfiles.org
>>>  * extras: ba.mirror.garr.it
>>>  * ovirt-4.2: ftp.nluug.nl
>>>  * ovirt-4.2-epel: epel.besthosting.ua
>>>  * updates: ba.mirror.garr.it
>>> No packages marked for update
>>>
>>
>> I think mirrors are still syncing, but resources.ovirt.org is updated.
>> You can switch from mirrorlist to baseurl in your yum config file if you
>> don't want to wait for the mirror to finish the sync.
>>
>>
>>
>>> [root@ov42 ~]#
>>>
>>> The mirrors seem to be the same ones the engine used some minutes before,
>>> so they should be ok...
>>>
>>> engine packages passed from ovirt-engine-4.2.1.4-1.el7.centos.noarch
>>> to ovirt-engine-4.2.1.6-1.el7.centos.noarch
>>>
>>> Base oVirt related packages on host are currently of this type, since
>>> 4.2.1rc3:
>>>
>>> libvirt-daemon-3.2.0-14.el7_4.7.x86_64
>>> ovirt-host-4.2.1-1.el7.centos.x86_64
>>> ovirt-vmconsole-1.0.4-1.el7.noarch
>>> qemu-kvm-ev-2.9.0-16.el7_4.13.1.x86_64
>>> sanlock-3.5.0-1.el7.x86_64
>>> vdsm-4.20.17-1.el7.centos.x86_64
>>> virt-v2v-1.36.3-6.el7_4.3.x86_64
>>>
>>>
>> yes, most of the changes in the last 3 rcs were related to cockpit-ovirt
>> / ovirt-node / hosted engine
>>
>> $ cat ovirt-4.2.1_rc4.conf ovirt-4.2.1_rc5.conf ovirt-4.2.1_rc6.conf
>> ovirt-4.2.1.conf
>>
>> # ovirt-engine-4.2.1.4
>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artif
>> acts-el7-x86_64/6483/
>>
>> # cockpit-ovirt-0.11.7-0.1
>> http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-artifac
>> ts-el7-x86_64/84/
>>
>> # ovirt-release42-4.2.1_rc4
>> http://jenkins.ovirt.org/job/ovirt-release_master_build-arti
>> facts-el7-x86_64/649/
>> # otopi-1.7.7
>> http://jenkins.ovirt.org/job/otopi_4.2_build-artifacts-el7-x86_64/2/
>>
>> # ovirt-host-deploy-1.7.2
>> http://jenkins.ovirt.org/job/ovirt-host-deploy_4.2_build-art
>> ifacts-el7-x86_64/4/
>>
>> # ovirt-hosted-engine-setup-2.2.9
>> http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_4.2_b
>> uild-artifacts-el7-x86_64/5/
>>
>> # cockpit-ovirt-0.11.11-0.1
>> http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-artifac
>> ts-el7-x86_64/96/
>>
>> # ovirt-release42-4.2.1_rc5
>> http://jenkins.ovirt.org/job/ovirt-release_master_build-arti
>> facts-el7-x86_64/651/
>>
>> # ovirt-engine-appliance-4.2-20180202.1
>> http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_b
>> uild-artifacts-el7-x86_64/62/
>>
>> # ovirt-node-ng-4.2.0-0.20180205.0
>> http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.2-pre_bui
>> ld-artifacts-el7-x86_64/212/
>>
>> # ovirt-engine-4.2.1.5
>> http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-artifact
>> s-el7-x86_64/3/
>>
>> # ovirt-engine-4.2.1.6
>> http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-artifact
>> s-el7-x86_64/12/
>>
>> # ovirt-release42-4.2.1_rc6
>> http://jenkins.ovirt.org/job/ovirt-release_4.2_build-artifac
>> ts-el7-x86_64/164/
>>
>> # ovirt-release42-4.2.1
>> http://jenkins.ovirt.org/job/ovirt-release_4.2_build-artifac
>> ts-el7-x86_64/173/
>>
>>
>>
>>> Thanks,
>>> Gianluca
>>>
>>> ___
>>> Announce mailing list
>>> annou...@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/announce
>>>
>>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>
>> Red Hat EMEA 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>>
>> ___
>> Users mailing list
>> 

Re: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf

2018-02-13 Thread Simone Tiraboschi
On Tue, Feb 13, 2018 at 7:43 AM, Reznikov Alexei 
wrote:

> On 10.02.2018 00:48, reznikov...@soskol.com wrote:
>
> Simone Tiraboschi wrote on 2018-02-09 15:17:
>
> It shouldn't happen.
> I suspect that something went wrong creating the configuration volume
> on the shared storage at the end of the deployment.
>
> Alexei, can both of you attach your hosted-engine-setup logs?
> Can you please check what happens on
>   hosted-engine --get-shared-config gateway
>
> Thanks
>
>
> Simone, my oVirt cluster was upgraded from 3.4... and my logs are too old.
>
> I'm confused by the execution of the hosted-engine --get-shared-config
> gateway ...
> I get the output "gateway: 10.245.183.1, type: he_conf", but my current
> hosted-engine.conf gets overwritten by a different hosted-engine.conf.
>
> old file:
>
> fqdn = eng.lan
> vm_disk_id = e9d7a377-e109-4b28-9a43-7a8c8b603749
> vmid = ccdd675a-a58b-495a-9502-3e6a4b7e5228
> storage = ssd.lan:/ovirt
> service_start_time = 0
> host_id = 3
> console = vnc
> domainType = nfs3
> sdUUID = 8905c9ac-d892-478d-8346-63b8fa1c5763
> connectionUUID = ce84071b-86a2-4e82-b4d9-06abf23dfbc4
> ca_cert =/etc/pki/vdsm/libvirt-spice/ca-cert.pem
> ca_subject = "C = EN, L = Test, O = Test, CN = Test"
> vdsm_use_ssl = true
> gateway = 10.245.183.1
> bridge = ovirtmgmt
> metadata_volume_UUID =
> metadata_image_UUID =
> lockspace_volume_UUID =
> lockspace_image_UUID =
>
> The following are used only for iSCSI storage
> iqn =
> portal =
> user =
> password =
> port =
>
> conf_volume_UUID = a20d9700-1b9a-41d8-bb4b-f2b7c168104f
> conf_image_UUID = b5f353f5-9357-4aad-b1a3-751d411e6278
> conf = /var/run/ovirt-hosted-engine-ha/vm.conf
> vm_disk_vol_id = cd12a59e-7d84-4b4e-98c7-4c68e83ecd7b
> spUUID = ----
>
> new, rewritten file:
>
> fqdn = eng.lan
> vm_disk_id = e9d7a377-e109-4b28-9a43-7a8c8b603749
> vmid = ccdd675a-a58b-495a-9502-3e6a4b7e5228
> storage = ssd.lan:/ovirt
> conf = /etc/ovirt-hosted-engine/vm.conf
> service_start_time = 0
> host_id = 3
> console = vnc
> domainType = nfs3
> spUUID = 036f83d7-39f7-48fd-a73a-3c9ffb3dbe6a
> sdUUID = 8905c9ac-d892-478d-8346-63b8fa1c5763
> connectionUUID = ce84071b-86a2-4e82-b4d9-06abf23dfbc4
> ca_cert =/etc/pki/vdsm/libvirt-spice/ca-cert.pem
> ca_subject = "C = EN, L = Test, O = Test, CN = Test"
> vdsm_use_ssl = true
> gateway = 10.245.183.1
> bridge = ovirtmgmt
> metadata_volume_UUID =
> metadata_image_UUID =
> lockspace_volume_UUID =
> lockspace_image_UUID =
>
> The following are used only for iSCSI storage
> iqn =
> portal =
> user =
> password =
> port =
>
> And this is on all hosts in the cluster!
> It seems to me that these are some remnants of versions 3.4, 3.5 ...
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
> BUMP
>
> I resolved the error "KeyError: 'Configuration value not found:
> file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=gateway'".
>
> This error was caused by "VDSGenericException: VDSErrorException:
> received downloaded data size is wrong (requested 20480, received 10240)";
> the solution is here: https://access.redhat.com/solutions/3106231
>
> But in my case there is still a problem with the inappropriate parameters
> in hosted-engine.conf ... I think I should use "hosted-engine
> --set-shared-config" to change the values on the shared storage. Is this
> right?
>

Yes, unfortunately you are absolutely right on that: there is a bug there.
As a side effect, hosted-engine --set-shared-config and hosted-engine
--get-shared-config always refresh the local copy of the hosted-engine
configuration files with the copy on the shared storage, so you will
always end up with host_id=1 in /etc/ovirt-hosted-engine/hosted-engine.conf,
which can lead to SPM conflicts.
I'd suggest manually fixing the host_id parameter in
/etc/ovirt-hosted-engine/hosted-engine.conf back to its original value (double
check against the engine DB with 'sudo -u postgres psql engine -c "SELECT
vds_spm_id, vds.vds_name FROM vds"' on the engine VM) to avoid that.
https://bugzilla.redhat.com/1543988
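
A minimal sketch of that check and fix; the id value 3 is only an example, use whatever the query reports for the affected host, and adjust the sed pattern if your file uses spaces around '=':

  # on the engine VM: see which spm/host id the engine expects for each host
  sudo -u postgres psql engine -c "SELECT vds_spm_id, vds.vds_name FROM vds"
  # on the affected host: put the reported value back
  sed -i 's/^host_id *=.*/host_id=3/' /etc/ovirt-hosted-engine/hosted-engine.conf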


> Guru help to solve this.
>
> Regards,
>
> Alex.
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-announce] [ANN] oVirt 4.2.1 Release is now available

2018-02-13 Thread Alex K
Hi all,

Is this version considered production ready?

Thanx,
Alex


On Mon, Feb 12, 2018 at 7:14 PM, Sandro Bonazzola 
wrote:

>
>
> 2018-02-12 17:43 GMT+01:00 Gianluca Cecchi :
>
>> On Mon, Feb 12, 2018 at 4:22 PM, Lev Veyde  wrote:
>>
>>> The oVirt Project is pleased to announce the availability of the oVirt 4.
>>> 2.1 Release, as of February 12th, 2018
>>>
>>> This update is a release of the first in a series of stabilization
>>> updates to the 4.2
>>> series.
>>>
>>> This release is available now for:
>>> * Red Hat Enterprise Linux 7.4 or later
>>> * CentOS Linux (or similar) 7.4 or later
>>>
>>> This release supports Hypervisor Hosts running:
>>> * Red Hat Enterprise Linux 7.4 or later
>>> * CentOS Linux (or similar) 7.4 or later
>>> * oVirt Node 4.2
>>>
>>>
>> Hello,
>> could you confirm that for plain CentOS 7.4 hosts there were no changes
>> between rc3 and final 4.2.1?
>>
>
> We had 3 RCs after RC3; the last one was RC6: https://lists.ovirt.org/
> pipermail/announce/2018-February/000387.html
>
>
>
>> I just updated an environment that was in RC3 and while the engine has
>> been updated, the host says no updates:
>>
>>  [root@ov42 ~]# yum update
>> Loaded plugins: fastestmirror, langpacks, product-id,
>> search-disabled-repos
>> Loading mirror speeds from cached hostfile
>>  * base: artfiles.org
>>  * extras: ba.mirror.garr.it
>>  * ovirt-4.2: ftp.nluug.nl
>>  * ovirt-4.2-epel: epel.besthosting.ua
>>  * updates: ba.mirror.garr.it
>> No packages marked for update
>>
>
> I think mirrors are still syncing, but resources.ovirt.org is updated.
> You can switch from mirrorlist to baseurl in your yum config file if you
> don't want to wait for the mirror to finish the sync.
>
>
>
>> [root@ov42 ~]#
>>
>> The mirrors seem to be the same ones the engine used some minutes before,
>> so they should be ok...
>>
>> engine packages passed from ovirt-engine-4.2.1.4-1.el7.centos.noarch
>> to ovirt-engine-4.2.1.6-1.el7.centos.noarch
>>
>> Base oVirt related packages on host are currently of this type, since
>> 4.2.1rc3:
>>
>> libvirt-daemon-3.2.0-14.el7_4.7.x86_64
>> ovirt-host-4.2.1-1.el7.centos.x86_64
>> ovirt-vmconsole-1.0.4-1.el7.noarch
>> qemu-kvm-ev-2.9.0-16.el7_4.13.1.x86_64
>> sanlock-3.5.0-1.el7.x86_64
>> vdsm-4.20.17-1.el7.centos.x86_64
>> virt-v2v-1.36.3-6.el7_4.3.x86_64
>>
>>
> yes, most of the changes in the last 3 rcs were related to cockpit-ovirt /
> ovirt-node / hosted engine
>
> $ cat ovirt-4.2.1_rc4.conf ovirt-4.2.1_rc5.conf ovirt-4.2.1_rc6.conf
> ovirt-4.2.1.conf
>
> # ovirt-engine-4.2.1.4
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-
> artifacts-el7-x86_64/6483/
>
> # cockpit-ovirt-0.11.7-0.1
> http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-
> artifacts-el7-x86_64/84/
>
> # ovirt-release42-4.2.1_rc4
> http://jenkins.ovirt.org/job/ovirt-release_master_build-
> artifacts-el7-x86_64/649/
> # otopi-1.7.7
> http://jenkins.ovirt.org/job/otopi_4.2_build-artifacts-el7-x86_64/2/
>
> # ovirt-host-deploy-1.7.2
> http://jenkins.ovirt.org/job/ovirt-host-deploy_4.2_build-
> artifacts-el7-x86_64/4/
>
> # ovirt-hosted-engine-setup-2.2.9
> http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_4.2_
> build-artifacts-el7-x86_64/5/
>
> # cockpit-ovirt-0.11.11-0.1
> http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-
> artifacts-el7-x86_64/96/
>
> # ovirt-release42-4.2.1_rc5
> http://jenkins.ovirt.org/job/ovirt-release_master_build-
> artifacts-el7-x86_64/651/
>
> # ovirt-engine-appliance-4.2-20180202.1
> http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_
> build-artifacts-el7-x86_64/62/
>
> # ovirt-node-ng-4.2.0-0.20180205.0
> http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.2-pre_
> build-artifacts-el7-x86_64/212/
>
> # ovirt-engine-4.2.1.5
> http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-
> artifacts-el7-x86_64/3/
>
> # ovirt-engine-4.2.1.6
> http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-
> artifacts-el7-x86_64/12/
>
> # ovirt-release42-4.2.1_rc6
> http://jenkins.ovirt.org/job/ovirt-release_4.2_build-
> artifacts-el7-x86_64/164/
>
> # ovirt-release42-4.2.1
> http://jenkins.ovirt.org/job/ovirt-release_4.2_build-
> artifacts-el7-x86_64/173/
>
>
>
>> Thanks,
>> Gianluca
>>
>> ___
>> Announce mailing list
>> annou...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/announce
>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] leftover of disk moving operation

2018-02-13 Thread Gianluca Cecchi
On Wed, Jan 31, 2018 at 5:01 PM, Elad Ben Aharon 
wrote:

> Just delete the image directory 
> (remove_me_8eb435f3-e8c1-4042-8180-e9f342b2e449)
> located under  /rhev/data-center/%spuuid%/%sduuid%/images/
>
> As for the LV, please try the following:
>
> dmsetup remove /dev/mapper/%device_name% --> device name could be fetched
> by 'dmsetup table'
>

Hello,
for that oVirt environment I finished moving the disks from source to
target, so I could power off the whole test infra, and at node reboot I didn't
have the problem again (also because I force-removed the source storage
domain), so I could not investigate further.

But I have "sort of" reproduced the problem inside another FC SAN storage
based environment.
The problem happened with a VM having 4 disks: one boot disk of 50Gb and
other 3 disks of 100Gb, 200Gb, 200Gb.
The VM has been powered off and the 3 "big" disks deletion (tried both
deactivating and not the disk before removal) originated for all the same
error as in my oVirt environment above during move:

command HSMGetAllTasksStatusesVDS failed: Cannot remove Logical Volume: (['
Cannot remove Logical Volume:

So I think the problem is related to the SAN itself, perhaps when working with
relatively "big" disks.
Another suspect is a problem with LVM filtering at the hypervisor level, because all 3
disks had a PV/VG/LV structure inside, created on the whole virtual disk at
the VM level.
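
A minimal sketch of what host-side filtering could look like in /etc/lvm/lvm.conf, assuming the hypervisor's own PV sits on /dev/sda2 (the device path is a placeholder, not my actual layout):

  # accept only the host's own PV, reject everything else so that guest
  # LVs living inside oVirt disk images are never scanned by the hypervisor
  filter = [ "a|^/dev/sda2$|", "r|.*|" ]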

As this new environment is RHEV with RHV-H hosts (layer
rhvh-4.1-0.20171002.0+1),
I opened case #02034032, if anyone is interested.

The big problem is that the disk has been removed on the VM side, but on the
storage domain side the space has not been released, so if you have to
create other "big" disks, you could run out of space because of this.
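
For anyone hitting the same leftover, the manual cleanup suggested in the quoted reply above would look roughly like this (the device name is a placeholder):

  # list the device-mapper entries and spot the stale one
  dmsetup table
  # remove the stale mapping left behind after the image was deleted
  dmsetup remove /dev/mapper/<vg_name>-<lv_name>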

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Simone Tiraboschi
Ciao Stefano,
we have to properly investigate this: thanks for the report.

Can you please attach from your host the output of
- grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf
- vdsm-client Host getCapabilities

Can you please attach also engine-setup logs from your 4.2.0 to 4.2.1
upgrade?



On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi  wrote:

> Hello!
>
> In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM.
> Hosted engine starts regularly.
>
> I have a single host with Hosted Engine.
>
> Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
>
> When I start any VM I get this error: "The CPU type of the cluster is
> unknown. Its possible to change the cluster cpu or set a different one per
> VM."
>
> All VMs have " Guest CPU Type: N/D"
>
> The cluster now has CPU Type "Intel Conroe Family" (I don't remember the cpu type
> before the upgrade); my CPU should be Ivy Bridge but it isn't in the
> dropdown list.
>
> If I try to select a similar cpu (SandyBridge IBRS) I get an error. I
> can't change the cluster cpu type when I have running hosts with a lower CPU
> type.
> I can't put the host in maintenance because the hosted engine is running on it.
>
> How can I solve this?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Stefano Danzi

Hello!

In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM.
Hosted engine starts regularly.

I have a single host with Hosted Engine.

Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz

When I start any VM I get this error: "The CPU type of the cluster is 
unknown. Its possible to change the cluster cpu or set a different one 
per VM."


All VMs have " Guest CPU Type: N/D"

The cluster now has CPU Type "Intel Conroe Family" (I don't remember the cpu 
type before the upgrade); my CPU should be Ivy Bridge but it isn't in 
the dropdown list.


If I try to select a similar cpu (SandyBridge IBRS) I get an error. I 
can't change the cluster cpu type when I have running hosts with a lower CPU 
type.

I can't put the host in maintenance because the hosted engine is running on it.

How can I solve this?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Qemu-block] qcow2 images corruption

2018-02-13 Thread Kevin Wolf
Am 07.02.2018 um 18:06 hat Nicolas Ecarnot geschrieben:
> TL; DR : qcow2 images keep getting corrupted. Any workaround?

Not without knowing the cause.

The first thing to make sure is that the image isn't touched by a second
process while QEMU is running a VM. The classic one is using 'qemu-img
snapshot' on the image of a running VM, which is instant corruption (and
newer QEMU versions have locking in place to prevent this), but we have
seen more absurd cases of things outside QEMU tampering with the image
when we were investigating previous corruption reports.

This covers the majority of all reports; we haven't had a real
corruption caused by a QEMU bug in ages.

> After having found (https://access.redhat.com/solutions/1173623) the right
> logical volume hosting the qcow2 image, I can run qemu-img check on it.
> - On 80% of my VMs, I find no errors.
> - On 15% of them, I find Leaked cluster errors that I can correct using
> "qemu-img check -r all"
> - On 5% of them, I find Leaked cluster errors and further fatal errors,
> which cannot be corrected with qemu-img.
> In rare cases, qemu-img can correct them, but it destroys large parts of the
> image (it becomes unusable), and in other cases it cannot correct them at all.

It would be good if you could make the 'qemu-img check' output available
somewhere.
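
For example, something along these lines would be enough (the path is a placeholder; note that '-r leaks' only repairs leaked clusters and leaves real corruption untouched):

  qemu-img check /dev/<vg>/<lv> > qemu-img-check.txt 2>&1
  qemu-img check -r leaks /dev/<vg>/<lv>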

It would be even better if we could have a look at the respective image.
I seem to remember that John (CCed) had a few scripts to analyse
corrupted qcow2 images, maybe we would be able to see something there.

> What I read similar to my case is :
> - usage of qcow2
> - heavy disk I/O
> - using the virtio-blk driver
> 
> In the proxmox thread, they tend to say that using virtio-scsi is the
> solution. Having asked this question to oVirt experts
> (https://lists.ovirt.org/pipermail/users/2018-February/086753.html) but it's
> not clear the driver is to blame.

This seems very unlikely. The corruption you're seeing is in the qcow2
metadata, not only in the guest data. If anything, virtio-scsi exercises
more qcow2 code paths than virtio-blk, so any potential bug that affects
virtio-blk should also affect virtio-scsi, but not the other way around.

> I agree with the answer Yaniv Kaul gave to me, saying I have to properly
> report the issue, so I'm longing to know which peculiar information I can
> give you now.

To be honest, debugging corruption after the fact is pretty hard. We'd
need the 'qemu-img check' output and ideally the image to do anything,
but I can't promise that anything would come out of this.

Best would be a reproducer, or at least some operation that you can link
to the appearance of the corruption. Then we could take a more targeted
look at the respective code.

> As you can imagine, all this setup is in production, and for most of the
> VMs, I can not "play" with them. Moreover, we launched a campaign of nightly
> stopping every VM, qemu-img check them one by one, then boot.
> So it might take some time before I find another corrupted image.
> (which I'll preciously store for debug)
> 
> Other informations : We very rarely do snapshots, but I'm close to imagine
> that automated migrations of VMs could trigger similar behaviors on qcow2
> images.

To my knowledge, oVirt only uses external snapshots and creates them
with QMP. This should be perfectly safe because from the perspective of
the qcow2 image being snapshotted, it just means that it gets no new
write requests.
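
For illustration, such an external snapshot request over QMP looks roughly like this (device name and overlay path are placeholders):

  { "execute": "blockdev-snapshot-sync",
    "arguments": { "device": "drive-virtio-disk0",
                   "snapshot-file": "/path/to/overlay.qcow2",
                   "format": "qcow2" } }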

Migration is something more involved, and if you could relate the
problem to migration, that would certainly be something to look into. In
that case, it would be important to know more about the setup, e.g. is
it migration with shared or non-shared storage?

> Last point about the versions we use : yes that's old, yes we're planning to
> upgrade, but we don't know when.

That would be helpful, too. Nothing is more frustrating than debugging a
bug in an old version only to find that it's already fixed in the
current version (well, except maybe debugging and finding nothing).

Kevin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Defining custom network filter or editing existing

2018-02-13 Thread Michael Burman
Thanks Dominik,

Just spoke with Dan and we decided to open an RFE to add an option to set a
custom nwfilter in the engine UI. See
https://bugzilla.redhat.com/show_bug.cgi?id=1544666

Cheers)

On Tue, Feb 13, 2018 at 10:24 AM, Dominik Holler  wrote:

> On Mon, 12 Feb 2018 16:09:59 -0800
> Tim Thompson  wrote:
>
> > All,
> >
> > I was wondering if someone can point me in the direction of the
> > documentation related to defining custom network filters (nwfilter)
> > in 4.2. I found the docs on assigning a network filter to a vNIC
> > profile, but I cannot find any mention of how you can create your
> > own. Normally you'd use 'virsh nwfilter-define', but that is locked
> > out since vdsm manages everything. I need to expand clean-traffic's
> > scope to include ipv6, since it doesn't handle ipv6 at all by
> > default, it seems.
> >
>
> Custom network filters are not supported.
> If you still want to use custom network filters, you would have to:
> - add custom network properties at the oVirt-engine level,
> - add a hook like vdsm_hooks/noipspoof/noipspoof.py which modifies
>   libvirt's domain XML to activate the custom network filter and
> - be responsible yourself for deploying the custom network filter
>   definition to all nodes
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 

Michael Burman

Senior Quality engineer - rhv network - redhat israel

Red Hat



mbur...@redhat.com   M: 0545355725   IM: mburman

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Defining custom network filter or editing existing

2018-02-13 Thread Dominik Holler
On Mon, 12 Feb 2018 16:09:59 -0800
Tim Thompson  wrote:

> All,
> 
> I was wondering if someone can point me in the direction of the 
> documentation related to defining custom network filters (nwfilter)
> in 4.2. I found the docs on assigning a network filter to a vNIC
> profile, but I cannot find any mention of how you can create your
> own. Normally you'd use 'virsh nwfilter-define', but that is locked
> out since vdsm manages everything. I need to expand clean-traffic's
> scope to include ipv6, since it doesn't handle ipv6 at all by
> default, it seems.
> 

Custom network filters are not supported.
If you still want to use custom network filters, you would have to:
- add custom network properties at the oVirt-engine level,
- add a hook like vdsm_hooks/noipspoof/noipspoof.py which modifies
  libvirt's domain XML to activate the custom network filter and
- be responsible yourself for deploying the custom network filter
  definition to all nodes
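
A minimal sketch of such a hook, assuming a custom property named 'custom_nwfilter' (the property name and install path under /usr/libexec/vdsm/hooks/before_vm_start/ are assumptions; the hooking module is the standard vdsm one):

  #!/usr/bin/python
  # before_vm_start hook: rewrite any existing filterref element on the
  # VM's interfaces to point at the filter named in the custom property.
  import os
  import hooking

  if 'custom_nwfilter' in os.environ:
      wanted = os.environ['custom_nwfilter']
      domxml = hooking.read_domxml()
      for iface in domxml.getElementsByTagName('interface'):
          for ref in iface.getElementsByTagName('filterref'):
              ref.setAttribute('filter', wanted)
      hooking.write_domxml(domxml)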
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4.2 aaa LDAP setup issue

2018-02-13 Thread Ondra Machacek

Hello,

On 02/09/2018 08:17 PM, Jamie Lawrence wrote:

Hello,

I'm bringing up a new 4.2 cluster and would like to use LDAP auth. Our LDAP 
servers are fine and function normally for a number of other services, but I 
can't get this working.

Our LDAP setup requires startTLS and a login. That last bit seems to be where 
the trouble is. After ovirt-engine-extension-aaa-ldap-setup asks for the cert 
and I pass it the path to the same cert used via nslcd/PAM for logging in to 
the host, it replies:

[ INFO  ] Connecting to LDAP using 'ldap://x.squaretrade.com:389'
[ INFO  ] Executing startTLS
[WARNING] Cannot connect using 'ldap://x.squaretrade.com:389': {'info': 
'authentication required', 'desc': 'Server is unwilling to perform'}
[ ERROR ] Cannot connect using any of available options

"Unwilling to perform" makes me think -aaa-ldap-setup is trying something the 
backend doesn't support, but I'm having trouble guessing what that could be since the 
tool hasn't gathered sufficient information to connect yet - it asks for a DN/pass later 
in the script. And the log isn't much more forthcoming.

I double-checked the cert with openssl; it is a valid, PEM-encoded cert.

Before I head in to the code, has anyone seen this?


It looks like you have disallowed anonymous bind on your LDAP server.
We try to establish an anonymous bind to test the connection.
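
You can verify that from a shell with a plain OpenLDAP client, e.g. (server name taken from your log, base DN left empty on purpose):

  ldapsearch -x -ZZ -H ldap://x.squaretrade.com:389 -s base -b '' '(objectClass=*)' namingContexts

If that also fails with "unwilling to perform", that would confirm anonymous bind is disabled.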

I would recommend trying a manual configuration; the documentation
is here:


https://github.com/oVirt/ovirt-engine-extension-aaa-ldap/blob/master/README#L17

Then in your /etc/ovirt-engine/aaa/profile1.properties add following
line:

pool.default.auth.type = simple

Then test the configuration using ovirt-engine-extensions-tool.
If it's OK just restart ovirt-engine and all should be fine.
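
As a rough sketch (key names follow the README linked above; server, bind DN and password are placeholders), profile1.properties could end up looking like:

  include = <openldap.properties>
  vars.server = x.squaretrade.com
  vars.user = uid=ovirt-search,ou=services,dc=squaretrade,dc=com
  vars.password = changeme
  pool.default.serverset.single.server = ${global:vars.server}
  pool.default.ssl.startTLS = true
  pool.default.auth.type = simple
  pool.default.auth.simple.bindDN = ${global:vars.user}
  pool.default.auth.simple.password = ${global:vars.password}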



Thanks,

-j

- - - - snip - - - -

Relevant log details:

2018-02-08 15:15:08,625-0800 DEBUG 
otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._getURLs:281 
URLs: ['ldap://x.squaretrade.com:389']
2018-02-08 15:15:08,626-0800 INFO 
otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common 
common._connectLDAP:391 Connecting to LDAP using 'ldap://x.squaretrade.com:389'
2018-02-08 15:15:08,627-0800 INFO 
otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common 
common._connectLDAP:442 Executing startTLS
2018-02-08 15:15:08,640-0800 DEBUG 
otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common 
common._connectLDAP:445 Perform search
2018-02-08 15:15:08,641-0800 DEBUG 
otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common 
common._connectLDAP:459 Exception
Traceback (most recent call last):
   File 
"/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py",
 line 451, in _connectLDAP
 timeout=60,
   File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 555, in 
search_st
 return 
self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout)
   File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 546, in 
search_ext_s
 return self.result(msgid,all=1,timeout=timeout)[1]
   File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 458, in 
result
 resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
   File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 462, in 
result2
 resp_type, resp_data, resp_msgid, resp_ctrls = 
self.result3(msgid,all,timeout)
   File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 469, in 
result3
 resp_ctrl_classes=resp_ctrl_classes
   File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 476, in 
result4
 ldap_result = 
self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
   File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 99, in 
_ldap_call
 result = func(*args,**kwargs)
UNWILLING_TO_PERFORM: {'info': 'authentication required', 'desc': 'Server is 
unwilling to perform'}
2018-02-08 15:15:08,642-0800 WARNING 
otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common 
common._connectLDAP:463 Cannot connect using 'ldap://x.squaretrade.com:389': 
{'info': 'authentication required', 'desc': 'Server is unwilling to perform'}
2018-02-08 15:15:08,643-0800 ERROR 
otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common 
common._customization_late:787 Cannot connect using any of available options
2018-02-08 15:15:08,644-0800 DEBUG 
otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common 
common._customization_late:788 Exception
Traceback (most recent call last):
   File 
"/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py",
 line 782, in _customization_late
 insecure=insecure,
   File 
"/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py",
 line 468, in _connectLDAP
 _('Cannot connect using any of available options')
SoftRuntimeError: Cannot connect using any of