Re: [ovirt-users] Memory for Engine Machine

2017-02-20 Thread Roy Golan
On Tue, Feb 21, 2017 at 12:12 AM Simone Tiraboschi 
wrote:

> On Mon, Feb 20, 2017 at 9:39 PM, Roy Golan  wrote:
>
>
>
> On Feb 20, 2017 9:35 PM, "Doug Ingham"  wrote:
>
> 16GB is just the recommended amount of memory. The more items your Engine
> has to manage, the more memory it will consume, so whilst it might not be
> using that amount of memory at the moment, it will do as you expand your
> cluster.
>
>
> It will never really need this amount of memory. There is no reason to
> configure the heap for 16gb.
>
>
>
> On 20 February 2017 at 16:22, FERNANDO FREDIANI  > wrote:
>
> Hello folks
>
> I have a dedicated Engine machine running with 4GB of memory. It has been
> working fine without any apparent issues.
>
> If I check the system memory usage it rarely goes over 1.5GB.
>
> But when I upgrade oVirt Engine it complains with the following message:
> "[WARNING] Less than 16384MB of memory is available".
>
>
> Please open a bug with all the details, we should address that.
>
>
> Currently we tune the Java heap size to 1/4 of the system memory.
> On the hosted-engine-setup side we propose assigning the engine VM 16GB (so
> Java heap = 4GB) as the recommended value and 4GB as the minimum suggested
> value.
>
>
So for a machine with 64GB we would set a 16GB heap, which doesn't make sense
(for a non-hosted engine, but in the future this may change). We should refine
the calculation to cap the heap at 4GB.
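As a rough sketch, the calculation proposed above amounts to something like the
following (illustrative shell arithmetic only, not the actual engine-setup
code; the 4GB cap is the value suggested in this thread):

```shell
# illustrative only: 1/4 of system RAM, capped at 4 GiB as proposed above
mem_mib=65536                      # e.g. a 64 GiB machine
heap_mib=$(( mem_mib / 4 ))        # 1/4-of-RAM rule -> 16384 MiB
cap_mib=4096                       # proposed cap
if [ "$heap_mib" -gt "$cap_mib" ]; then
    heap_mib=$cap_mib
fi
echo "heap = ${heap_mib}M"         # -> heap = 4096M
```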

Also, what is the purpose of the warning that less than 16GB of RAM is
available?


>
>
> Why is all that required if the real usage doesn't show that need? Or am
> I missing anything?
>
> Fernando Frediani
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> --
> Doug
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moving disk failed.. remained locked

2017-02-20 Thread Gianluca Cecchi
On Mon, Feb 20, 2017 at 10:51 PM, Gianluca Cecchi  wrote:

> On Mon, Feb 20, 2017 at 8:46 PM, Fred Rolland  wrote:
>
>> Can you please send the whole logs ? (Engine, vdsm and sanlock)
>>
>>
> vdsm.log.1.xz:
> https://drive.google.com/file/d/0BwoPbcrMv8mvWTViWEUtNjRtLTg/
> view?usp=sharing
>
> sanlock.log
> https://drive.google.com/file/d/0BwoPbcrMv8mvcVM4YzZ4aUZLYVU/
> view?usp=sharing
>
> engine.log (gzip format):
> https://drive.google.com/file/d/0BwoPbcrMv8mvdW80RlFIYkpzenc/
> view?usp=sharing
>
> Thanks,
> Gianluca
>
>
I didn't mention that the size of the disk is 430GB and the target storage
domain is 1TB, almost empty (950GB free).
I received a message about problems from the storage where the disk currently
resides, so I'm trying to move it in order to put the original storage domain
under maintenance and investigate.
The errors seem to be about creating the destination volume, not the source...
thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] actual size greater than virtual size?

2017-02-20 Thread Nir Soffer
On Tue, Feb 21, 2017 at 12:17 AM, Gianluca Cecchi  wrote:

>
>
> On Mon, Feb 20, 2017 at 11:08 PM, Nir Soffer  wrote:
>
>> On Tue, Feb 14, 2017 at 12:07 AM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> Hello,
>>> I'm on 4.1 with 2 FC SAN storage domains and testing live migration of
>>> disk.
>>>
>>> I started with a CentOS 7.3 VM with a thin provisioned disk of 30Gb.
>>> It seems that I can see actual disk size in VM--> Snapshots--> Active VM
>>> line and the right sub pane.
>>> If I remember correctly before the live storage migration the actual
>>> size was 16Gb.
>>> But after the migration it shows actual size greater than virtual
>>> (33Gb)??
>>>
>>
>> Hi Gianluca,
>>
>> This is possible for thin disks.
>>
>> With thin disk we have some overhead for qcow2 metadata, which can be
>> up to 10% in the worst case. So when doing operations on thin disk, we
>> may enlarge it to the virtual size * 1.1, which is 33G for 30G disk.
>>
>> Can you share the output of these commands?
>>
>> (If the vm is running, you should skip the lvchange commands)
>>
>> lvchange -ay 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906
>> -0970ad09b660he
>>
>> qemu-img check /dev/5ed04196-87f1-480e-9fee-9
>> dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660he
>>
>> lvchange -an 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906
>> -0970ad09b660he
>>
>> Nir
>>
>>
>>>
>>>
> Yes, the VM is active.
> At the moment the disk name seems to be missing the final "he" letters
>

Seems to be an issue between my keyboard and chair :-)


> [root@ovmsrv07 ~]# qemu-img check /dev/5ed04196-87f1-480e-9fee-
> 9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660
> No errors were found on the image.
> 41734/491520 = 8.49% allocated, 3.86% fragmented, 0.00% compressed clusters
> Image end offset: 2736128000
>

So you have a 2.5G image in a 33G LV?

If this is an internal volume, you can reduce it to the next multiple
of 128m: 2688m.

If this is the active volume, you want to leave empty space at the end.
Since you are using a 4G extent size, you can reduce it to 6784m.

To reduce the lv:

1. Move the storage domain to maintenance

2. Check the image end offset again using qemu-img check:

lvchange -ay vg-name/lv-name
qemu-img check /dev/vg-name/lv-name
lvchange -an vg-name/lv-name

3. Use lvreduce (update the size if needed):

lvreduce -L 2688m vg-name/lv-name

4. Activate the storage domain
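The arithmetic behind the two sizes can be sketched like this (assuming the
128m rounding granularity and the 4G extent size mentioned in this thread;
this is an illustration, not vdsm code):

```shell
# round the qemu-img "Image end offset" up to the next 128 MiB multiple
offset=2736128000                        # bytes, from qemu-img check above
chunk=$(( 128 * 1024 * 1024 ))           # 128 MiB rounding granularity
internal_mib=$(( (offset + chunk - 1) / chunk * 128 ))
echo "internal volume: ${internal_mib}m"     # -> 2688m

# for the active volume, leave one free extent (4 GiB here) at the end
extent_mib=4096
active_mib=$(( internal_mib + extent_mib ))
echo "active volume: ${active_mib}m"         # -> 6784m
```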

We are now working on integrating this into flows
like cold and live merge.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] actual size greater than virtual size?

2017-02-20 Thread Gianluca Cecchi
On Mon, Feb 20, 2017 at 11:08 PM, Nir Soffer  wrote:

> On Tue, Feb 14, 2017 at 12:07 AM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> I'm on 4.1 with 2 FC SAN storage domains and testing live migration of
>> disk.
>>
>> I started with a CentOS 7.3 VM with a thin provisioned disk of 30Gb.
>> It seems that I can see actual disk size in VM--> Snapshots--> Active VM
>> line and the right sub pane.
>> If I remember correctly before the live storage migration the actual size
>> was 16Gb.
>> But after the migration it shows actual size greater than virtual (33Gb)??
>>
>
> Hi Gianluca,
>
> This is possible for thin disks.
>
> With thin disk we have some overhead for qcow2 metadata, which can be
> up to 10% in the worst case. So when doing operations on thin disk, we
> may enlarge it to the virtual size * 1.1, which is 33G for 30G disk.
>
> Can you share the output of these commands?
>
> (If the vm is running, you should skip the lvchange commands)
>
> lvchange -ay 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-
> b906-0970ad09b660he
>
> qemu-img check /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-
> 485f-b906-0970ad09b660he
>
> lvchange -an 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-
> b906-0970ad09b660he
>
> Nir
>
>
>>
>>
Yes, the VM is active.
At the moment the disk name seems to be missing the final "he" letters

[root@ovmsrv07 ~]# qemu-img check
/dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-0970ad09b660

No errors were found on the image.
41734/491520 = 8.49% allocated, 3.86% fragmented, 0.00% compressed clusters
Image end offset: 2736128000
[root@ovmsrv07 ~]#
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Memory for Engine Machine

2017-02-20 Thread Simone Tiraboschi
On Mon, Feb 20, 2017 at 9:39 PM, Roy Golan  wrote:

>
>
> On Feb 20, 2017 9:35 PM, "Doug Ingham"  wrote:
>
> 16GB is just the recommended amount of memory. The more items your Engine
> has to manage, the more memory it will consume, so whilst it might not be
> using that amount of memory at the moment, it will do as you expand your
> cluster.
>
>
> It will never really need this amount of memory. There is no reason to
> configure the heap for 16gb.
>
>
>
> On 20 February 2017 at 16:22, FERNANDO FREDIANI  > wrote:
>
>> Hello folks
>>
>> I have a dedicated Engine machine running with 4GB of memory. It has been
>> working fine without any apparent issues.
>>
>> If I check the system memory usage it rarely goes over 1.5GB.
>>
>> But when I upgrade oVirt Engine it complains with the following message:
>> "[WARNING] Less than 16384MB of memory is available".
>>
>
> Please open a bug with all the details, we should address that.
>

Currently we tune the Java heap size to 1/4 of the system memory.
On the hosted-engine-setup side we propose assigning the engine VM 16GB (so
Java heap = 4GB) as the recommended value and 4GB as the minimum suggested
value.



>
>> Why is all that required if the real usage doesn't show that need? Or am
>> I missing anything?
>>
>> Fernando Frediani
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
> Doug
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] actual size greater than virtual size?

2017-02-20 Thread Nir Soffer
On Tue, Feb 14, 2017 at 12:07 AM, Gianluca Cecchi  wrote:

> Hello,
> I'm on 4.1 with 2 FC SAN storage domains and testing live migration of
> disk.
>
> I started with a CentOS 7.3 VM with a thin provisioned disk of 30Gb.
> It seems that I can see the actual disk size in the VM --> Snapshots -->
> Active VM line and in the right sub-pane.
> If I remember correctly before the live storage migration the actual size
> was 16Gb.
> But after the migration it shows actual size greater than virtual (33Gb)??
>

Hi Gianluca,

This is possible for thin disks.

With thin disk we have some overhead for qcow2 metadata, which can be
up to 10% in the worst case. So when doing operations on thin disk, we
may enlarge it to the virtual size * 1.1, which is 33G for 30G disk.
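As a rough illustration of that worst case (the 10% is an upper bound for the
qcow2 metadata overhead, not an exact formula):

```shell
# worst-case LV size for a thin (qcow2) disk: virtual size + up to ~10% metadata
virtual_gib=30
worst_gib=$(( virtual_gib * 110 / 100 ))   # 30 * 1.1 = 33
echo "worst case: ${worst_gib}G"           # -> worst case: 33G
```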

Can you share the output of these commands?

(If the vm is running, you should skip the lvchange commands)

lvchange -ay 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-
0970ad09b660he

qemu-img check /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/
9af3574d-dc83-485f-b906-0970ad09b660he

lvchange -an 5ed04196-87f1-480e-9fee-9dd450a3b53b/9af3574d-dc83-485f-b906-
0970ad09b660he

Nir


> There are no snapshots on the VM right now.
> Is this true? Could it depend on temporary snapshots made by the
> system for storage migration? But in that case, does it mean that a live
> merge generates a bigger image at the end..?
>
> The command I saw during conversion was:
>
> vdsm 17197  2707  3 22:39 ?00:00:01 /usr/bin/qemu-img convert
> -p -t none -T none -f qcow2 /rhev/data-center/mnt/blockSD/
> 922b5269-ab56-4c4d-838f-49d33427e2ab/images/6af3dfe5-
> 6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660 -O qcow2
> -o compat=1.1 /rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-
> 9dd450a3b53b/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/
> 9af3574d-dc83-485f-b906-0970ad09b660he
>
> See the current situation here:
> https://drive.google.com/file/d/0BwoPbcrMv8mvMnhib2NCQlJfRVE/
> view?usp=sharing
>
> thanks for clarification,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] actual size greater than virtual size?

2017-02-20 Thread Gianluca Cecchi
On Mon, Feb 20, 2017 at 8:39 PM, Adam Litke  wrote:

> Ok, thanks for clarifying that.  So it looks like the temporary snapshot
> was already merged/deleted.  As part of that operation, vdsm calculates how
> large the original volume would have to be (LV size) in order to guarantee
> that the base volume has enough space for the merge to complete.  It looks
> like in this case we decided that value was the virtual size + 10% for
> qcow2 metadata.  If your base volume had only about 16G allocated prior to
> all of this, then it seems we were way too conservative.  Did you remove the
> snapshot quickly after the move completed?  One thing that could cause this
> is if you wrote about 12-15 GB of data into that disk after moving but
> before deleting the temporary snapshot.
>
> Do you happen to have the vdsm logs from the host running the VM and the
> host that was acting SPM (if different) from during the time you
> experienced this issue?
>
>
I created some snapshots to test backups, each followed by a live merge after
a couple of minutes, without using the VM much in the meantime.
I also tested storage migration.
I don't have the logs, but I can try to reproduce similar operations and then
send new log files if the anomaly persists...
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moving disk failed.. remained locked

2017-02-20 Thread Gianluca Cecchi
On Mon, Feb 20, 2017 at 8:46 PM, Fred Rolland  wrote:

> Can you please send the whole logs ? (Engine, vdsm and sanlock)
>
>
vdsm.log.1.xz:
https://drive.google.com/file/d/0BwoPbcrMv8mvWTViWEUtNjRtLTg/view?usp=sharing

sanlock.log
https://drive.google.com/file/d/0BwoPbcrMv8mvcVM4YzZ4aUZLYVU/view?usp=sharing

engine.log (gzip format):
https://drive.google.com/file/d/0BwoPbcrMv8mvdW80RlFIYkpzenc/view?usp=sharing

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm issues between engine and host

2017-02-20 Thread cmc
>
> VDSM should not be running on the engine.

Not sure why it was enabled in systemd...I have disabled it now. AFAIK
it wasn't before.

> I decided to try to restart vdsmd on the host and that did allow the
> state of the VMs to be discovered, and the engine listed the host as
> up again. However, there are still errors with vdsmd on both the host
> and the engine, and the engine cannot start vdsmd. I guess it is able
> to monitor the hosts in a limited way as it says they are both up.
> There are communication errors between one of the hosts and the
> engine: the host is refusing connections by the look of it
>
>
> Is iptables / firewalld set up correctly?
> Y.

Ports 54321 and 22 are open from the engine to the host. Is there an easy way
to check that the config is valid? It certainly used to work ok.

Thanks,

C
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Troubles after resizing the iscsi storage.

2017-02-20 Thread Nir Soffer
On Mon, Feb 20, 2017 at 11:54 AM, Arman Khalatyan  wrote:

> Hi,
> I have a 1TB iSCSI storage domain with a virtual machine with several disks.
> After shutting down the VM, I detached the storage domain and removed it
> from the cluster and from oVirt entirely.
> 1) Then on the target I resized the exported volume to 13TB.
> 2) On the host I resized with:
> iscsiadm -m node -l
> pvresize /dev/mapper/.
> iscsiadm -m node -u
>
> After importing it back into oVirt via the GUI, I can see the VM in the
> import field but I cannot see the separate disks.
> Importing fails at the "Executing" stage.
> Is there a way to fix it?
> On the host, with "lvs" I can see the disks are there.
>

Hi Arman,

You don't need to remove the storage from oVirt to resize a LUN, and you don't
need to resize the PV; we can do this for you.

To resize a LUN:

1. Resize the LUN on the storage server
2. Open the "Manage Domain" dialog
3. Select the LUN and click the "resize" button in the table
(we should detect the new size and display a button suggesting to add 11T)
4. Click "OK"

The system will make sure all hosts see the new size of the LUN, and then
it will resize the PV for you, and update engine database with the new LUN
size.
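For reference, the manual host-side sequence hinted at earlier in the thread
could look roughly like this; the commands are only printed here, not
executed, and the multipath device name is hypothetical:

```shell
# print (don't run) a manual LUN-resize sequence on a host;
# the multipath WWID below is made up for illustration
dev=36000000000000000000000000000cafe
plan=$(cat <<EOF
iscsiadm -m session -R            # rescan sessions so the kernel sees the new LUN size
multipathd resize map $dev        # propagate the new size to the multipath map
pvresize /dev/mapper/$dev         # grow the PV (the GUI flow does this for you)
EOF
)
echo "$plan"
```

The GUI flow above is preferred because oVirt performs these steps on every
host and updates the engine database for you.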

Now, in your case, we don't have any information about the issue, but if
you restart vdsm on the SPM host your issue may be resolved.

If not, please attach engine and vdsm logs.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VMs HA with cinder volumes

2017-02-20 Thread Nir Soffer
On Mon, Feb 20, 2017 at 12:54 PM, Matteo Dacrema  wrote:

> Hi all,
>
> I’ve a setup that uses Ceph through ovirt for VM storage.
> I tested out VM HA and all works well when IPMI is reachable.
> When it’s not reachable because of network or power failure VM states
> switch to unknown and it’s not restarted to other nodes.
>
> Is it a specific setup that I’ve missed?
>

Hi Matteo,

When power management is not available, we cannot fail over to another host.

In 4.1 you can use a VM lease for your HA VM, enabling failover without
power management.

Note that the lease cannot be on a Ceph storage domain; you will have to add
another type of storage for the leases, since we don't support leases on Ceph
yet.

Please check the vm-leases feature page for more info:
https://www.ovirt.org/develop/release-management/features/storage/vm-leases/

Nir


>
> Thank you
> Regards
> Matteo
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm issues between engine and host

2017-02-20 Thread Yaniv Kaul
On Feb 20, 2017 10:48 PM, "cmc"  wrote:

Hi,

Due to networking and DNS issues, our engine went offline (it is a
physical machine currently; we will be converting it to a VM in the
future when time allows). When service was restored, I noticed that
all the VMs were listed as being in an unknown state on one host. The
VMs were fine, but the engine could not ascertain their status as the
host itself was in an unknown state. vdsm was reporting errors and was
not running on the engine (or at least was in status 'failed' in
systemd). I tried starting vdsmd on the engine but it would not start.


VDSM should not be running on the engine.

I decided to try to restart vdsmd on the host and that did allow the
state of the VMs to be discovered, and the engine listed the host as
up again. However, there are still errors with vdsmd on both the host
and the engine, and the engine cannot start vdsmd. I guess it is able
to monitor the hosts in a limited way as it says they are both up.
There are communication errors between one of the hosts and the
engine: the host is refusing connections by the look of it


Is iptables / firewalld set up correctly?
Y.


from the engine log:

2017-02-20 18:41:51,226Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler2) [f8aa18b3-97b9-48e2-a681-cf3aaed330a5]
Command 'GetCapabilitiesVDSCommand(HostName = k
vm-ldn-01, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='e050c27f-8709-404c-b03e-59c0167a824b',
vds='Host[kvm-ldn-01,e050c27f-8709-404c-b03e-59c0167a824b]'})'
execution failed: java.net.ConnectExce
ption: Connection refused
2017-02-20 18:41:51,226Z ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(DefaultQuartzScheduler2) [f8aa18b3-97b9-48e2-a681-cf3aaed330a5]
Failure to refresh host 'kvm-ldn-01' runtime info: java.n
et.ConnectException: Connection refused
2017-02-20 18:41:52,772Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(DefaultQuartzScheduler6) [f8aa18b3-97b9-48e2-a681-cf3aaed330a5]
Command 'GetAllVmStatsVDSCommand(HostName = kvm-ldn-01,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='e050c27f-8709-404c-b03e-59c0167a824b'})' execution failed:
VDSGenericException: VDSNetworkException: Connection reset by peer
2017-02-20 18:41:54,256Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler7) [f8aa18b3-97b9-48e2-a681-cf3aaed330a5]
Command 'GetCapabilitiesVDSCommand(HostName = kvm-ldn-01,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='e050c27f-8709-404c-b03e-59c0167a824b',
vds='Host[kvm-ldn-01,e050c27f-8709-404c-b03e-59c0167a824b]'})'
execution failed: java.net.ConnectException: Connection refused

from the vdsm.log on the host:


Feb 20 18:44:20 kvm-ldn-01 vdsm[42308]: vdsm vds.dispatcher ERROR SSL
error receiving from : unexpected eof
Feb 20 18:44:24 kvm-ldn-01 vdsm[42308]: vdsm jsonrpc.JsonRpcServer
ERROR Internal server error
Traceback (most recent call last):
  File
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 547, in
_handle_request...

Any ideas what might be going on here?

Thanks,

Cam

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Memory for Engine Machine

2017-02-20 Thread Roy Golan
On Feb 20, 2017 9:35 PM, "Doug Ingham"  wrote:

16GB is just the recommended amount of memory. The more items your Engine
has to manage, the more memory it will consume, so whilst it might not be
using that amount of memory at the moment, it will do as you expand your
cluster.


It will never really need this amount of memory. There is no reason to
configure the heap to 16GB.



On 20 February 2017 at 16:22, FERNANDO FREDIANI 
wrote:

> Hello folks
>
> I have a dedicated Engine machine running with 4GB of memory. It has been
> working fine without any apparent issues.
>
> If I check the system memory usage it rarely goes over 1.5GB.
>
> But when I upgrade oVirt Engine it complains with the following message:
> "[WARNING] Less than 16384MB of memory is available".
>

Please open a bug with all the details, we should address that.


> Why is all that required if the real usage doesn't show that need? Or am
> I missing anything?
>
> Fernando Frediani
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Doug

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to export VM

2017-02-20 Thread Pat Riehecky
The disks dropped off when I was doing A LOT of disk IO on the VM -
untarring a 2TB archive. I'd taken a snapshot of the VM about 30 minutes
earlier.


The storage shows no faults at this time.  The switch fabric was online 
for all 4 paths.


Pat

On 02/20/2017 02:08 PM, Adam Litke wrote:
There definitely seems to be a problem with your storage domain 
81f19871...  The host is unable to join that domain's sanlock 
lockspace.  Also, it seems that some metadata for the disk with id 
e17ebd7c... was corrupted or lost in translation somehow.  Can you 
provide more details about what happened when "the disk images got 
'unregistered' from oVirt"? Were you performing any particular 
operations (such as moving disks, snapshot create/delete, etc)?  Was 
there a problem with the storage at that time?


On Mon, Feb 20, 2017 at 9:51 AM, Pat Riehecky wrote:


Hi Adam,

Thanks for looking!  The storage is fibre attached and I've
verified with the SAN folks nothing went wonky during this window
on their side.

Here is what I've got from vdsm.log during the window (and a bit
surrounding it for context):

libvirtEventLoop::WARNING::2017-02-16
08:35:17,435::utils::140::root::(rmFile) File:

/var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-99ec-55bb34c12e5f.com.redhat.rhevm.vdsm
already removed
libvirtEventLoop::WARNING::2017-02-16
08:35:17,435::utils::140::root::(rmFile) File:

/var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-99ec-55bb34c12e5f.org.qemu.guest_agent.0
already removed
periodic/2::WARNING::2017-02-16
08:35:18,144::periodic::295::virt.vm::(__call__)
vmId=`ba806b93-b6fe-4873-99ec-55bb34c12e5f`::could not run on
ba806b93-b6fe-4873-99ec-55bb34c12e5f: domain not connected
periodic/3::WARNING::2017-02-16
08:35:18,305::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run  on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
Thread-23021::ERROR::2017-02-16
09:28:33,096::task::866::Storage.TaskManager.Task::(_setError)
Task=`ecab8086-261f-44b9-8123-eefb9bbf5b05`::Unexpected error
Thread-23021::ERROR::2017-02-16
09:28:33,097::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of pool:
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
Thread-23783::ERROR::2017-02-16
10:13:32,876::task::866::Storage.TaskManager.Task::(_setError)
Task=`ff628204-6e41-4e5e-b83a-dad6ec94d0d3`::Unexpected error
Thread-23783::ERROR::2017-02-16
10:13:32,877::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of pool:
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
Thread-24542::ERROR::2017-02-16
10:58:32,578::task::866::Storage.TaskManager.Task::(_setError)
Task=`f5111200-e980-46bb-bbc3-898ae312d556`::Unexpected error
Thread-24542::ERROR::2017-02-16
10:58:32,579::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of pool:
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
jsonrpc.Executor/4::ERROR::2017-02-16
11:28:24,049::sdc::139::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain 13127103-3f59-418a-90f1-5b1ade8526b1
jsonrpc.Executor/4::ERROR::2017-02-16
11:28:24,049::sdc::156::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 13127103-3f59-418a-90f1-5b1ade8526b1
jsonrpc.Executor/4::ERROR::2017-02-16
11:28:24,305::sdc::145::Storage.StorageDomainCache::(_findDomain)
domain 13127103-3f59-418a-90f1-5b1ade8526b1 not found
6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16
11:29:19,402::image::205::Storage.Image::(getChain) There is no
leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16
11:29:19,403::task::866::Storage.TaskManager.Task::(_setError)
Task=`6e31bf97-458c-4a30-9df5-14f475db3339`::Unexpected error
79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16
11:29:20,649::image::205::Storage.Image::(getChain) There is no
leaf in the image b4c4b53e-3813-4959-a145-16f1dfcf1838
79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16
11:29:20,650::task::866::Storage.TaskManager.Task::(_setError)
Task=`79ed31a2-5ac7-4304-ab4d-d05f72694860`::Unexpected error
jsonrpc.Executor/5::ERROR::2017-02-16
11:30:17,063::image::205::Storage.Image::(getChain) There is no
leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
jsonrpc.Executor/5::ERROR::2017-02-16
11:30:17,064::task::866::Storage.TaskManager.Task::(_setError)
Task=`62f20e22-e850-44c8-8943-faa4ce71e973`::Unexpected error
jsonrpc.Executor/5::ERROR::2017-02-16
11:30:17,065::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Image is not a legal chain:

Re: [ovirt-users] Unable to export VM

2017-02-20 Thread Adam Litke
There definitely seems to be a problem with your storage domain
81f19871...  The host is unable to join that domain's sanlock lockspace.
Also, it seems that some metadata for the disk with id e17ebd7c... was
corrupted or lost in translation somehow.  Can you provide more details
about what happened when "the disk images got 'unregistered' from oVirt"?
Were you performing any particular operations (such as moving disks,
snapshot create/delete, etc)?  Was there a problem with the storage at that
time?

On Mon, Feb 20, 2017 at 9:51 AM, Pat Riehecky  wrote:

> Hi Adam,
>
> Thanks for looking!  The storage is fibre attached and I've verified with
> the SAN folks nothing went wonky during this window on their side.
>
> Here is what I've got from vdsm.log during the window (and a bit
> surrounding it for context):
>
> libvirtEventLoop::WARNING::2017-02-16 08:35:17,435::utils::140::root::(rmFile)
> File: /var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-
> 99ec-55bb34c12e5f.com.redhat.rhevm.vdsm already removed
> libvirtEventLoop::WARNING::2017-02-16 08:35:17,435::utils::140::root::(rmFile)
> File: /var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-
> 99ec-55bb34c12e5f.org.qemu.guest_agent.0 already removed
> periodic/2::WARNING::2017-02-16 
> 08:35:18,144::periodic::295::virt.vm::(__call__)
> vmId=`ba806b93-b6fe-4873-99ec-55bb34c12e5f`::could not run on
> ba806b93-b6fe-4873-99ec-55bb34c12e5f: domain not connected
> periodic/3::WARNING::2017-02-16 08:35:18,305::periodic::261::
> virt.periodic.VmDispatcher::(__call__) could not run  'virt.periodic.DriveWatermarkMonitor'> on ['ba806b93-b6fe-4873-99ec-
> 55bb34c12e5f']
> Thread-23021::ERROR::2017-02-16 09:28:33,096::task::866::
> Storage.TaskManager.Task::(_setError) 
> Task=`ecab8086-261f-44b9-8123-eefb9bbf5b05`::Unexpected
> error
> Thread-23021::ERROR::2017-02-16 
> 09:28:33,097::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Storage domain is member of pool:
> 'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
> Thread-23783::ERROR::2017-02-16 10:13:32,876::task::866::
> Storage.TaskManager.Task::(_setError) 
> Task=`ff628204-6e41-4e5e-b83a-dad6ec94d0d3`::Unexpected
> error
> Thread-23783::ERROR::2017-02-16 
> 10:13:32,877::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Storage domain is member of pool:
> 'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
> Thread-24542::ERROR::2017-02-16 10:58:32,578::task::866::
> Storage.TaskManager.Task::(_setError) 
> Task=`f5111200-e980-46bb-bbc3-898ae312d556`::Unexpected
> error
> Thread-24542::ERROR::2017-02-16 
> 10:58:32,579::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Storage domain is member of pool:
> 'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
> jsonrpc.Executor/4::ERROR::2017-02-16 11:28:24,049::sdc::139::
> Storage.StorageDomainCache::(_findDomain) looking for unfetched domain
> 13127103-3f59-418a-90f1-5b1ade8526b1
> jsonrpc.Executor/4::ERROR::2017-02-16 11:28:24,049::sdc::156::
> Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain
> 13127103-3f59-418a-90f1-5b1ade8526b1
> jsonrpc.Executor/4::ERROR::2017-02-16 11:28:24,305::sdc::145::
> Storage.StorageDomainCache::(_findDomain) domain 
> 13127103-3f59-418a-90f1-5b1ade8526b1
> not found
> 6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16
> 11:29:19,402::image::205::Storage.Image::(getChain) There is no leaf in
> the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
> 6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16
> 11:29:19,403::task::866::Storage.TaskManager.Task::(_setError)
> Task=`6e31bf97-458c-4a30-9df5-14f475db3339`::Unexpected error
> 79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16
> 11:29:20,649::image::205::Storage.Image::(getChain) There is no leaf in
> the image b4c4b53e-3813-4959-a145-16f1dfcf1838
> 79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16
> 11:29:20,650::task::866::Storage.TaskManager.Task::(_setError)
> Task=`79ed31a2-5ac7-4304-ab4d-d05f72694860`::Unexpected error
> jsonrpc.Executor/5::ERROR::2017-02-16 
> 11:30:17,063::image::205::Storage.Image::(getChain)
> There is no leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
> jsonrpc.Executor/5::ERROR::2017-02-16 11:30:17,064::task::866::
> Storage.TaskManager.Task::(_setError) 
> Task=`62f20e22-e850-44c8-8943-faa4ce71e973`::Unexpected
> error
> jsonrpc.Executor/5::ERROR::2017-02-16 
> 11:30:17,065::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Image is not a legal chain:
> ('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
> jsonrpc.Executor/4::ERROR::2017-02-16 
> 11:33:18,487::image::205::Storage.Image::(getChain)
> There is no leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
> jsonrpc.Executor/4::ERROR::2017-02-16 11:33:18,488::task::866::
> Storage.TaskManager.Task::(_setError) 
> Task=`e4d893f2-7be6-4f84-9ac6-58b5a5d1364e`::Unexpected
> error
> 

Re: [ovirt-users] moving disk failed.. remained locked

2017-02-20 Thread Fred Rolland
Can you please send the whole logs ? (Engine, vdsm and sanlock)

On Mon, Feb 20, 2017 at 4:49 PM, Gianluca Cecchi 
wrote:

> Hello,
> I'm trying to move a disk from one storage domain A to another B in oVirt
> 4.1.
> The corresponding VM is powered on in the meantime
>
> When executing the action, there was already in place a disk move from
> storage domain C to A (this move was for a disk of a powered-off VM and
> later completed OK)
> I got this in events of webadmin gui for the failed move A -> B:
>
> Feb 20, 2017 2:42:00 PM Failed to complete snapshot 'Auto-generated for
> Live Storage Migration' creation for VM 'dbatest6'.
> Feb 20, 2017 2:40:51 PM VDSM ovmsrv06 command HSMGetAllTasksStatusesVDS
> failed: Error creating a new volume
> Feb 20, 2017 2:40:51 PM Snapshot 'Auto-generated for Live Storage
> Migration' creation for VM 'dbatest6' was initiated by admin@internal-authz.
>
>
> And in relevant vdsm.log of referred host ovmsrv06
>
> 2017-02-20 14:41:44,899 ERROR (tasks/8) [storage.Volume] Unexpected error
> (volume:1087)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/volume.py", line 1081, in create
> cls.newVolumeLease(metaId, sdUUID, volUUID)
>   File "/usr/share/vdsm/storage/volume.py", line 1361, in newVolumeLease
> return cls.manifestClass.newVolumeLease(metaId, sdUUID, volUUID)
>   File "/usr/share/vdsm/storage/blockVolume.py", line 310, in
> newVolumeLease
> sanlock.init_resource(sdUUID, volUUID, [(leasePath, leaseOffset)])
> SanlockException: (-202, 'Sanlock resource init failure', 'Sanlock
> exception')
> 2017-02-20 14:41:44,900 ERROR (tasks/8) [storage.TaskManager.Task]
> (Task='d694b892-b078-4d86-a035-427ee4fb3b13') Unexpected error (task:870)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 877, in _run
> return fn(*args, **kargs)
>   File "/usr/share/vdsm/storage/task.py", line 333, in run
> return self.cmd(*self.argslist, **self.argsdict)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
> 79, in wrapper
> return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 1929, in createVolume
> initialSize=initialSize)
>   File "/usr/share/vdsm/storage/sd.py", line 762, in createVolume
> initialSize=initialSize)
>   File "/usr/share/vdsm/storage/volume.py", line 1089, in create
> (volUUID, e))
> VolumeCreationError: Error creating a new volume: (u"Volume creation
> d0d938bd-1479-49cb-93fb-85b6a32d6cb4 failed: (-202, 'Sanlock resource
> init failure', 'Sanlock exception')",)
> 2017-02-20 14:41:44,941 INFO  (tasks/8) [storage.Volume] Metadata rollback
> for sdUUID=900b1853-e192-4661-a0f9-7c7c396f6f49 offs=8 (blockVolume:448)
>
>
> Was the error generated due to the other migration still in progress?
> Is there a limit of concurrent migrations from/to a particular storage
> domain?
>
> Now I would like to retry, but I see that the disk is in state locked with
> hourglass.
> The autogenerated snapshot of the failed action was apparently removed
> with success as I don't see it.
>
> How can I proceed to move the disk?
>
> Thanks in advance,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Unmanaged VMs in Task List since oVirt 4

2017-02-20 Thread Christophe TREFOIS
Dear all,

Since our upgrade to oVirt 4, we notice the following task being executed every 
15-20 seconds.

"Adding unmanaged VMs running on Host bio2_engine to Cluster Default". The 
status is successful, but I’m not sure what it is or which VMs the engine is 
talking about.

Any ideas?

Thanks,
Christophe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] actual size greater than virtual size?

2017-02-20 Thread Adam Litke
Ok, thanks for clarifying that.  So it looks like the temporary snapshot
was already merged/deleted.  As part of that operation, vdsm calculates how
large the original volume would have to be (LV size) in order to guarantee
that the base volume has enough space for the merge to complete.  It looks
like in this case we decided that value was the virtual size + 10% for
qcow2 metadata.  If your base volume was only about 16G allocated prior to
all of this, then it seems we were way too conservative.  Did you remove the
snapshot quickly after the move completed?  One thing that could cause this
is if you wrote about 12-15 GB of data into that disk after moving but
before deleting the temporary snapshot.
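The sizing rule described above can be sketched as follows. This is only an illustration of the "virtual size + 10% for qcow2 metadata" rule, not the actual vdsm code, and the function name is made up:

```python
def conservative_base_lv_size(virtual_size_bytes, qcow2_overhead=0.10):
    """Worst-case LV size that guarantees a snapshot merge can complete:
    assume the base may end up fully allocated, plus ~10% headroom for
    qcow2 metadata. Illustrative only -- not the exact vdsm calculation."""
    return int(virtual_size_bytes * (1 + qcow2_overhead))

GiB = 1024 ** 3
# A 30G virtual disk would be extended to ~33G before the merge,
# even if only ~16G was actually allocated beforehand.
print(conservative_base_lv_size(30 * GiB) // GiB)  # prints 33
```

Under this rule the extension depends only on the virtual size, which would explain why a mostly empty 30G disk still grows well past its 16G allocation.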

Do you happen to have the vdsm logs from the host running the VM and from
the host that was acting as SPM (if different), covering the time when you
experienced this issue?

On Mon, Feb 20, 2017 at 10:02 AM, Gianluca Cecchi  wrote:

> On Mon, Feb 20, 2017 at 3:38 PM, Adam Litke  wrote:
>
>> Hi Gianluca.  This is most likely caused by the temporary snapshot we
>> create when performing live storage migration.  Please check your VM for
>> snapshots and delete the temporary one.
>>
>>
> Hi Adam,
> as you see from my attachment, there is one line for snapshots, the
> current one, and no others.
> So the VM is without snapshots.
> See bigger frame here:
> https://drive.google.com/file/d/0BwoPbcrMv8mvbDZaVFpvd1Ayd0k/
> view?usp=sharing
>
> Gianluca
>
>


-- 
Adam Litke
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Memory for Engine Machine

2017-02-20 Thread Doug Ingham
16GB is just the recommended amount of memory. The more items your Engine
has to manage, the more memory it will consume, so whilst it might not be
using that amount of memory at the moment, it will do as you expand your
cluster.
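Elsewhere in this thread the setup heuristic is described as a Java heap of one quarter of the system memory, with 16GB recommended (so a 4GB heap) and 4GB as the suggested minimum. A rough sketch of that heuristic; the floor and cap values here are assumptions for illustration, not engine-setup's actual defaults:

```python
def suggested_engine_heap_mb(system_mb, floor_mb=1024, cap_mb=4096):
    # Quarter-of-RAM rule, clamped: never below floor_mb, and capped at
    # cap_mb since the engine rarely benefits from a larger heap.
    return max(floor_mb, min(system_mb // 4, cap_mb))

print(suggested_engine_heap_mb(16384))  # 4096 -> the recommended 16GB VM
print(suggested_engine_heap_mb(4096))   # 1024 -> the 4GB minimum setup
```

The cap reflects the point made later in the thread that even a 64GB machine gains nothing from a heap above roughly 4GB.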

On 20 February 2017 at 16:22, FERNANDO FREDIANI 
wrote:

> Hello folks
>
> I have a dedicated Engine machine running with 4GB of memory. It has been
> working fine without any apparent issues.
>
> If I check the system memory usage it rarely goes over 1.5GB.
>
> But when I upgrade oVirt Engine it complains with the following message:
> "[WARNING] Less than 16384MB of memory is available".
>
> Why is all that required if the real usage doesn't show that need? Or am
> I missing anything?
>
> Fernando Frediani
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Doug
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Memory for Engine Machine

2017-02-20 Thread FERNANDO FREDIANI

Hello folks

I have a dedicated Engine machine running with 4GB of memory. It has 
been working fine without any apparent issues.


If I check the system memory usage it rarely goes over 1.5GB.

But when I upgrade oVirt Engine it complains with the following message: 
"[WARNING] Less than 16384MB of memory is available".


Why is all that required if the real usage doesn't show that need? Or 
am I missing anything?


Fernando Frediani

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.1.1 Second Test compose

2017-02-20 Thread Jeff Burns
Nik,

The link was OK, but the text, as you noted, was missing the hyphen. I
corrected it and submitted the following update:
https://github.com/oVirt/ovirt-site/pull/811

Cheers,
Jeff
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VM in oVirt 4 sometimes does not have proper CPU guest count

2017-02-20 Thread Christophe TREFOIS
Dear all,

Today we upgraded the engine to 4.0.6 and a host to CentOS 7.3, with all
packages updated.

Most VMs start up with the correct number of guest CPUs, except 2 or 3 that
only spawn with 1 guest CPU.
We set, for instance, a CPU profile of 1:4:1, but only 1 vCPU is available to
the VM.

The VM itself is running Trusty, but other Trusty VMs seem fine.

Do you have any idea what could be the issue and how to perhaps fix this?
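For reference, the guest CPU count oVirt exposes is the product of the topology fields (sockets × cores per socket × threads per core), so a 1:4:1 profile should yield 4 vCPUs. A trivial sketch, assuming 1:4:1 means sockets:cores:threads:

```python
def guest_cpu_count(sockets, cores_per_socket, threads_per_core):
    # Total vCPUs seen by the guest is the product of the topology fields.
    return sockets * cores_per_socket * threads_per_core

print(guest_cpu_count(1, 4, 1))  # prints 4 -- seeing only 1 in the guest
                                 # suggests the topology was lost on startup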

Kind regards,
Christophe

--

Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine
6, avenue du Swing
L-4367 Belvaux
T: +352 46 66 44 6124
F: +352 46 66 44 6949
http://www.uni.lu/lcsb




This message is confidential and may contain privileged information.
It is intended for the named recipient only.
If you receive it in error please notify me and permanently delete the original 
message and any copies.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to set up host networks

2017-02-20 Thread Michael Watters
Thanks.  I managed to get it working after changing the bonding options
in the /etc/sysconfig/network-scripts/ifcfg-bond0 file.
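For reference, the parse error in the original post comes from `downdelay-0` (a hyphen where `=` belongs). A plausible corrected fragment might look like the following; the device name and the `miimon=100` interval are illustrative choices, not values taken from this setup:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0  (sketch, not a full config)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100 updelay=0 downdelay=0"
```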


On 02/20/2017 11:31 AM, Yaniv Kaul wrote:
>
>
> On Mon, Feb 20, 2017 at 6:00 PM, Michael Watters  > wrote:
>
> I am building a new oVirt host running oVirt 4.0; however, I am
> receiving
>
>
> I suggest trying 4.1.
>  
>
> UI errors when I attempt to define the host networks in the
> engine.  The
> engine logs show errors as follows.
>
> 2017-02-20 10:53:44,478 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand]
> (default task-358) [653da31a] Exception:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to
> HostSetupNetworksVDS,
> error = Error parsing bonding options: 'miimon=1 updelay=0 downdelay-0
> mode=802.3ad', code = 25
>
> UI logs also show errors as follows.
>
> 2017-02-20 10:58:49,092 ERROR
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-239) [] Permutation name:
> 36FFE9E683BD2C616FFB067DABA3A81E
> 2017-02-20 10:58:49,092 ERROR
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-239) [] Uncaught exception:
> com.google.gwt.event.shared.UmbrellaException: Exception caught:
> (TypeError)
>  __gwt$exception: : Cannot read property 'k' of undefined
> at Unknown.Ev(webadmin-0.js@25721)
>
>
> Without webadmin-debuginfo, we can't decipher the stack. Please
> install it, restart the Engine service and try again to reproduce.
> Ensure that you have the exact version of debuginfo that matches the
> webadmin package.
> The exception should appear on the engine's ui.log file.
>
> TIA,
> Y.
>  
>
> at Unknown.Mv(webadmin-0.js@41)
> at Unknown.d8(webadmin-0.js@19)
> at Unknown.g8(webadmin-0.js@19)
> at Unknown.q7(webadmin-0.js@117)
> at Unknown.oq(webadmin-0.js@26)
> at Unknown.yq(webadmin-0.js@24441)
> at Unknown.$2(webadmin-0.js@149)
> at Unknown.qq(webadmin-0.js@112)
> at Unknown.qaf(webadmin-0.js@1781)
> at Unknown.$$e(webadmin-0.js@85)
> at Unknown.a1e(webadmin-0.js@46)
> at Unknown.Sx(webadmin-0.js@29)
> at Unknown.Wx(webadmin-0.js@57)
> at Unknown.eval(webadmin-0.js@54)
> at Unknown.PC(webadmin-0.js@20)
> at Unknown.W9e(webadmin-0.js@98)
> at Unknown.fnf(webadmin-0.js@56)
> at Unknown.jnf(webadmin-0.js@7413)
> at Unknown.qaf(webadmin-0.js@1399)
> at Unknown.$$e(webadmin-0.js@85)
> at Unknown.Z$e(webadmin-0.js@60)
> at Unknown.$0e(webadmin-0.js@52)
> at Unknown.Sx(webadmin-0.js@29)
> at Unknown.Wx(webadmin-0.js@57)
> at Unknown.eval(webadmin-0.js@54)
> Caused by: com.google.gwt.core.client.JavaScriptException: (TypeError)
>  __gwt$exception: : Cannot read property 'k' of undefined
> at Unknown.K5p(webadmin-151.js@499431)
> at Unknown.K6p(webadmin-151.js@43)
> at Unknown.v6p(webadmin-151.js@120)
> at Unknown.P6p(webadmin-151.js@500852)
> at Unknown.b6r(webadmin-0.js@189)
> at Unknown.EFo(webadmin-0.js@71)
> at Unknown.UFo(webadmin-0.js@23730)
> at Unknown.m3p(webadmin-151.js@870)
> at Unknown.rbq(webadmin-151.js@17)
>
> Is there a way to resolve this?  Is it possible to define host
> networks manually without using the engine?
>
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users
> 
>
>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VMs HA with cinder volumes

2017-02-20 Thread Matteo Dacrema
Hi all,

I have a setup that uses Ceph through oVirt for VM storage.
I tested VM HA, and all works well when IPMI is reachable.
When it’s not reachable because of a network or power failure, the VM state
switches to Unknown and the VM is not restarted on other nodes.

Is there a specific setup step that I’ve missed?

Thank you
Regards
Matteo

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to set up host networks

2017-02-20 Thread Yaniv Kaul
On Mon, Feb 20, 2017 at 6:00 PM, Michael Watters 
wrote:

> I am building a new oVirt host running oVirt 4.0; however, I am receiving
>

I suggest trying 4.1.


> UI errors when I attempt to define the host networks in the engine.  The
> engine logs show errors as follows.
>
> 2017-02-20 10:53:44,478 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand]
> (default task-358) [653da31a] Exception:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to HostSetupNetworksVDS,
> error = Error parsing bonding options: 'miimon=1 updelay=0 downdelay-0
> mode=802.3ad', code = 25
>
> UI logs also show errors as follows.
>
> 2017-02-20 10:58:49,092 ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-239) [] Permutation name: 36FFE9E683BD2C616FFB067DABA3A81E
> 2017-02-20 10:58:49,092 ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-239) [] Uncaught exception: 
> com.google.gwt.event.shared.UmbrellaException:
> Exception caught: (TypeError)
>  __gwt$exception: : Cannot read property 'k' of undefined
> at Unknown.Ev(webadmin-0.js@25721)
>

Without webadmin-debuginfo, we can't decipher the stack. Please install it,
restart the Engine service and try again to reproduce. Ensure that you have
the exact version of debuginfo that matches the webadmin package.
The exception should appear on the engine's ui.log file.

TIA,
Y.


> at Unknown.Mv(webadmin-0.js@41)
> at Unknown.d8(webadmin-0.js@19)
> at Unknown.g8(webadmin-0.js@19)
> at Unknown.q7(webadmin-0.js@117)
> at Unknown.oq(webadmin-0.js@26)
> at Unknown.yq(webadmin-0.js@24441)
> at Unknown.$2(webadmin-0.js@149)
> at Unknown.qq(webadmin-0.js@112)
> at Unknown.qaf(webadmin-0.js@1781)
> at Unknown.$$e(webadmin-0.js@85)
> at Unknown.a1e(webadmin-0.js@46)
> at Unknown.Sx(webadmin-0.js@29)
> at Unknown.Wx(webadmin-0.js@57)
> at Unknown.eval(webadmin-0.js@54)
> at Unknown.PC(webadmin-0.js@20)
> at Unknown.W9e(webadmin-0.js@98)
> at Unknown.fnf(webadmin-0.js@56)
> at Unknown.jnf(webadmin-0.js@7413)
> at Unknown.qaf(webadmin-0.js@1399)
> at Unknown.$$e(webadmin-0.js@85)
> at Unknown.Z$e(webadmin-0.js@60)
> at Unknown.$0e(webadmin-0.js@52)
> at Unknown.Sx(webadmin-0.js@29)
> at Unknown.Wx(webadmin-0.js@57)
> at Unknown.eval(webadmin-0.js@54)
> Caused by: com.google.gwt.core.client.JavaScriptException: (TypeError)
>  __gwt$exception: : Cannot read property 'k' of undefined
> at Unknown.K5p(webadmin-151.js@499431)
> at Unknown.K6p(webadmin-151.js@43)
> at Unknown.v6p(webadmin-151.js@120)
> at Unknown.P6p(webadmin-151.js@500852)
> at Unknown.b6r(webadmin-0.js@189)
> at Unknown.EFo(webadmin-0.js@71)
> at Unknown.UFo(webadmin-0.js@23730)
> at Unknown.m3p(webadmin-151.js@870)
> at Unknown.rbq(webadmin-151.js@17)
>
> Is there a way to resolve this?  Is it possible to define host networks
> manually without using the engine?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Unable to set up host networks

2017-02-20 Thread Michael Watters
I am building a new oVirt host running oVirt 4.0; however, I am receiving
UI errors when I attempt to define the host networks in the engine.  The
engine logs show errors as follows.

2017-02-20 10:53:44,478 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand]
(default task-358) [653da31a] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to HostSetupNetworksVDS,
error = Error parsing bonding options: 'miimon=1 updelay=0 downdelay-0
mode=802.3ad', code = 25

UI logs also show errors as follows.

2017-02-20 10:58:49,092 ERROR 
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default 
task-239) [] Permutation name: 36FFE9E683BD2C616FFB067DABA3A81E
2017-02-20 10:58:49,092 ERROR 
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default 
task-239) [] Uncaught exception: com.google.gwt.event.shared.UmbrellaException: 
Exception caught: (TypeError)
 __gwt$exception: : Cannot read property 'k' of undefined
at Unknown.Ev(webadmin-0.js@25721)
at Unknown.Mv(webadmin-0.js@41)
at Unknown.d8(webadmin-0.js@19)
at Unknown.g8(webadmin-0.js@19)
at Unknown.q7(webadmin-0.js@117)
at Unknown.oq(webadmin-0.js@26)
at Unknown.yq(webadmin-0.js@24441)
at Unknown.$2(webadmin-0.js@149)
at Unknown.qq(webadmin-0.js@112)
at Unknown.qaf(webadmin-0.js@1781)
at Unknown.$$e(webadmin-0.js@85)
at Unknown.a1e(webadmin-0.js@46)
at Unknown.Sx(webadmin-0.js@29)
at Unknown.Wx(webadmin-0.js@57)
at Unknown.eval(webadmin-0.js@54)
at Unknown.PC(webadmin-0.js@20)
at Unknown.W9e(webadmin-0.js@98)
at Unknown.fnf(webadmin-0.js@56)
at Unknown.jnf(webadmin-0.js@7413)
at Unknown.qaf(webadmin-0.js@1399)
at Unknown.$$e(webadmin-0.js@85)
at Unknown.Z$e(webadmin-0.js@60)
at Unknown.$0e(webadmin-0.js@52)
at Unknown.Sx(webadmin-0.js@29)
at Unknown.Wx(webadmin-0.js@57)
at Unknown.eval(webadmin-0.js@54)
Caused by: com.google.gwt.core.client.JavaScriptException: (TypeError)
 __gwt$exception: : Cannot read property 'k' of undefined
at Unknown.K5p(webadmin-151.js@499431)
at Unknown.K6p(webadmin-151.js@43)
at Unknown.v6p(webadmin-151.js@120)
at Unknown.P6p(webadmin-151.js@500852)
at Unknown.b6r(webadmin-0.js@189)
at Unknown.EFo(webadmin-0.js@71)
at Unknown.UFo(webadmin-0.js@23730)
at Unknown.m3p(webadmin-151.js@870)
at Unknown.rbq(webadmin-151.js@17)

Is there a way to resolve this?  Is it possible to define host networks 
manually without using the engine?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ova conversion errors: where to search?

2017-02-20 Thread Gianluca Cecchi
On Sat, Feb 18, 2017 at 12:42 AM, Tomáš Golembiovský 
wrote:

>
> >
> > where to find more details regarding reasons for failure? Other files on
> > engine or should I see at host side?
>
> There are two places you should look. One is vdsm.log on the host
> performing the import. And if you have oVirt 4.1 another place to look
> at is /var/log/vdsm/import where the import logs are stored.
>
>
Thanks.
BTW: the permissions of the ova were ok.

I see this text near the end of the file generated into
/var/log/vdsm/import

libguestfs: trace: v2v: vfs_type = "ext4"
[   8.6] Checking for sufficient free disk space in the guest
[   8.6] Estimating space required on target for each disk
mpstats:
mountpoint statvfs /dev/sda2 / (ext4):
  bsize=4096 blocks=3650143 bfree=3324165 bavail=3132985
estimate_target_size: fs_total_size = 14950985728 [13.9G]
estimate_target_size: source_total_size = 19327352832 [18.0G]
estimate_target_size: ratio = 0.774
estimate_target_size: fs_free = 13615779840 [12.7G]
estimate_target_size: scaled_saving = 10532706254 [9.8G]
estimate_target_size: sda: 8794646578 [8.2G]
[   8.6] Converting Ubuntu precise (12.04.5 LTS) to run on KVM
virt-v2v: error: virt-v2v is unable to convert this guest type
(linux/ubuntu)
rm -rf '/var/tmp/ova.vUHL91'
rm -rf '/var/tmp/null.LZo6Xr'
libguestfs: trace: v2v: close
libguestfs: closing guestfs handle 0x1b79170 (state 2)
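As an aside, the estimate_target_size figures above are internally consistent. The apparent calculation can be reconstructed from the logged values like this; it is an inference from the log, not the actual virt-v2v source:

```python
# Values taken verbatim from the log above (all in bytes).
fs_total_size = 14950985728      # total size of the guest filesystem (13.9G)
source_total_size = 19327352832  # virtual size of the source disk (18.0G)
fs_free = 13615779840            # free space inside the guest fs (12.7G)

ratio = fs_total_size / source_total_size     # ~0.774, as logged
scaled_saving = fs_free * ratio               # ~9.8G expected to stay unused
estimate = source_total_size - scaled_saving  # ~8.2G target estimate for sda

print(round(ratio, 3), int(estimate))
```

In other words, the tool scales the in-guest free space by the filesystem/disk ratio and subtracts that saving from the virtual size to budget the target disk.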

Could virt-v2v fall back to a sort of generic Linux conversion in this case? I
don't think there is anything special in Ubuntu... or at least let the admin
fix things with some rescue operation after the VM has been created.


>
> > In case I have to import a windows VM ova, which iso should I attach near
> > the "Attach VirtIO-Drivers" checkbox? Any "official" iso? I
> > downloaded oVirt-toolsSetup-4.1-3.fc24.iso but I don't know if it is the
> > right iso for this.
>
> Install virtio-win package somewhere. In it, inside
> /usr/share/virtio-win directory, you will find virtio-win-*.iso file.
>
>
Is the virtio-win package still current? If so, from where should I download
it?
I read here:
https://www.ovirt.org/develop/release-management/features/engine/windows-guest-tools/
and installed the ovirt-guest-tools-iso package, which gives:

[g.cecchi@ovmsrv05 ~]$ ll /usr/share/ovirt-guest-tools-iso/
total 215316
-rw-r--r--. 1 root root 220481536 Jan 26 16:13
oVirt-toolsSetup_4.1-3.fc24.iso
lrwxrwxrwx. 1 root root 31 Feb 20 16:13 ovirt-tools-setup.iso ->
oVirt-toolsSetup_4.1-3.fc24.iso
[g.cecchi@ovmsrv05 ~]$

and mounting as a loop device the iso I have inside it:

-r--r--r--.  1 root root  290 Jan 12 17:59 default.ini
-r--r--r--.  1 root root  667 Jan 12 17:59 default-logger.ini
-r--r--r--.  1 root root 1006 Jan 12 17:59 ovirt-guest-agent.ini
-r-xr-xr-x.  1 root root 11659498 Jan 12 17:59 OVirtGuestService.exe
-r--r--r--.  1 root root 30836592 Jan 26 16:13 ovirt-guest-tools-setup.exe
-r--r--r--.  1 root root  4216840 Aug  2  2016 vcredist_x86.exe
dr-xr-xr-x.  2 root root 2048 Jan 26 16:12 vdagent_x64
dr-xr-xr-x.  2 root root 2048 Jan 26 16:12 vdagent_x86
dr-xr-xr-x. 15 root root 4096 Jan 26 16:12 virtio

[g.cecchi@ovmsrv05 ~]$ ll /isomount/virtio/vioscsi/
total 18
dr-xr-xr-x. 3 root root 2048 Jan 16 14:45 2k12
dr-xr-xr-x. 3 root root 2048 Jan 16 14:45 2k12R2
dr-xr-xr-x. 3 root root 2048 Jan 16 14:45 2k16
dr-xr-xr-x. 4 root root 2048 Jan 16 14:45 2k8
dr-xr-xr-x. 3 root root 2048 Jan 16 14:45 2k8R2
dr-xr-xr-x. 4 root root 2048 Jan 16 14:45 w10
dr-xr-xr-x. 4 root root 2048 Jan 16 14:45 w7
dr-xr-xr-x. 4 root root 2048 Jan 16 14:45 w8
dr-xr-xr-x. 4 root root 2048 Jan 16 14:45 w8.1
[g.cecchi@ovmsrv05 ~]$

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to export VM

2017-02-20 Thread Pat Riehecky

Hi Adam,

Thanks for looking!  The storage is fibre-attached, and I've verified 
with the SAN folks that nothing went wonky during this window on their side.


Here is what I've got from vdsm.log during the window (and a bit 
surrounding it for context):


libvirtEventLoop::WARNING::2017-02-16 
08:35:17,435::utils::140::root::(rmFile) File: 
/var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-99ec-55bb34c12e5f.com.redhat.rhevm.vdsm 
already removed
libvirtEventLoop::WARNING::2017-02-16 
08:35:17,435::utils::140::root::(rmFile) File: 
/var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-99ec-55bb34c12e5f.org.qemu.guest_agent.0 
already removed
periodic/2::WARNING::2017-02-16 
08:35:18,144::periodic::295::virt.vm::(__call__) 
vmId=`ba806b93-b6fe-4873-99ec-55bb34c12e5f`::could not run on 
ba806b93-b6fe-4873-99ec-55bb34c12e5f: domain not connected
periodic/3::WARNING::2017-02-16 
08:35:18,305::periodic::261::virt.periodic.VmDispatcher::(__call__) 
could not run  on 
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
Thread-23021::ERROR::2017-02-16 
09:28:33,096::task::866::Storage.TaskManager.Task::(_setError) 
Task=`ecab8086-261f-44b9-8123-eefb9bbf5b05`::Unexpected error
Thread-23021::ERROR::2017-02-16 
09:28:33,097::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
{'message': "Storage domain is member of pool: 
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
Thread-23783::ERROR::2017-02-16 
10:13:32,876::task::866::Storage.TaskManager.Task::(_setError) 
Task=`ff628204-6e41-4e5e-b83a-dad6ec94d0d3`::Unexpected error
Thread-23783::ERROR::2017-02-16 
10:13:32,877::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
{'message': "Storage domain is member of pool: 
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
Thread-24542::ERROR::2017-02-16 
10:58:32,578::task::866::Storage.TaskManager.Task::(_setError) 
Task=`f5111200-e980-46bb-bbc3-898ae312d556`::Unexpected error
Thread-24542::ERROR::2017-02-16 
10:58:32,579::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
{'message': "Storage domain is member of pool: 
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
jsonrpc.Executor/4::ERROR::2017-02-16 
11:28:24,049::sdc::139::Storage.StorageDomainCache::(_findDomain) 
looking for unfetched domain 13127103-3f59-418a-90f1-5b1ade8526b1
jsonrpc.Executor/4::ERROR::2017-02-16 
11:28:24,049::sdc::156::Storage.StorageDomainCache::(_findUnfetchedDomain) 
looking for domain 13127103-3f59-418a-90f1-5b1ade8526b1
jsonrpc.Executor/4::ERROR::2017-02-16 
11:28:24,305::sdc::145::Storage.StorageDomainCache::(_findDomain) domain 
13127103-3f59-418a-90f1-5b1ade8526b1 not found
6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16 
11:29:19,402::image::205::Storage.Image::(getChain) There is no leaf in 
the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16 
11:29:19,403::task::866::Storage.TaskManager.Task::(_setError) 
Task=`6e31bf97-458c-4a30-9df5-14f475db3339`::Unexpected error
79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16 
11:29:20,649::image::205::Storage.Image::(getChain) There is no leaf in 
the image b4c4b53e-3813-4959-a145-16f1dfcf1838
79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16 
11:29:20,650::task::866::Storage.TaskManager.Task::(_setError) 
Task=`79ed31a2-5ac7-4304-ab4d-d05f72694860`::Unexpected error
jsonrpc.Executor/5::ERROR::2017-02-16 
11:30:17,063::image::205::Storage.Image::(getChain) There is no leaf in 
the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
jsonrpc.Executor/5::ERROR::2017-02-16 
11:30:17,064::task::866::Storage.TaskManager.Task::(_setError) 
Task=`62f20e22-e850-44c8-8943-faa4ce71e973`::Unexpected error
jsonrpc.Executor/5::ERROR::2017-02-16 
11:30:17,065::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
{'message': "Image is not a legal chain: 
('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
jsonrpc.Executor/4::ERROR::2017-02-16 
11:33:18,487::image::205::Storage.Image::(getChain) There is no leaf in 
the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
jsonrpc.Executor/4::ERROR::2017-02-16 
11:33:18,488::task::866::Storage.TaskManager.Task::(_setError) 
Task=`e4d893f2-7be6-4f84-9ac6-58b5a5d1364e`::Unexpected error
jsonrpc.Executor/4::ERROR::2017-02-16 
11:33:18,489::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
{'message': "Image is not a legal chain: 
('e17ebd7c-0763-42b2-b344-5ad7f9cf448e',)", 'code': 262}}
3132106a-ce35-4b12-9a72-812e415eff7f::ERROR::2017-02-16 
11:34:47,595::image::205::Storage.Image::(getChain) There is no leaf in 
the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
3132106a-ce35-4b12-9a72-812e415eff7f::ERROR::2017-02-16 
11:34:47,596::task::866::Storage.TaskManager.Task::(_setError) 
Task=`3132106a-ce35-4b12-9a72-812e415eff7f`::Unexpected error
112fb772-a497-4788-829f-190d6d008d95::ERROR::2017-02-16 
11:34:48,517::image::205::Storage.Image::(getChain) There is no leaf in 
the image b4c4b53e-3813-4959-a145-16f1dfcf1838
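Pulling a window like this out of vdsm.log by hand is tedious. Here is a small sketch of the kind of filter that produces it; the timestamp layout matches the log lines above, and the function name is made up:

```python
import re
from datetime import datetime

# Matches the level and timestamp in vdsm log lines, e.g.
# "jsonrpc.Executor/4::ERROR::2017-02-16 11:28:24,049::sdc::139::..."
LINE_RE = re.compile(r'::(ERROR|WARNING)::(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})')

def errors_in_window(lines, start, end):
    hits = []
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(2), '%Y-%m-%d %H:%M:%S')
        if m.group(1) == 'ERROR' and start <= ts <= end:
            hits.append(line)
    return hits

sample = [
    "Thread-23021::ERROR::2017-02-16 09:28:33,096::task::866::...",
    "periodic/2::WARNING::2017-02-16 08:35:18,144::periodic::295::...",
]
w = errors_in_window(sample,
                     datetime(2017, 2, 16, 9, 0),
                     datetime(2017, 2, 16, 12, 0))
print(len(w))  # prints 1
```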

[ovirt-users] moving disk failed.. remained locked

2017-02-20 Thread Gianluca Cecchi
Hello,
I'm trying to move a disk from one storage domain A to another B in oVirt
4.1.
The corresponding VM is powered on in the meantime.

When executing the action, there was already in place a disk move from
storage domain C to A (this move was for a disk of a powered-off VM and
later completed OK)
I got this in events of webadmin gui for the failed move A -> B:

Feb 20, 2017 2:42:00 PM Failed to complete snapshot 'Auto-generated for
Live Storage Migration' creation for VM 'dbatest6'.
Feb 20, 2017 2:40:51 PM VDSM ovmsrv06 command HSMGetAllTasksStatusesVDS
failed: Error creating a new volume
Feb 20, 2017 2:40:51 PM Snapshot 'Auto-generated for Live Storage
Migration' creation for VM 'dbatest6' was initiated by admin@internal-authz.


And in relevant vdsm.log of referred host ovmsrv06

2017-02-20 14:41:44,899 ERROR (tasks/8) [storage.Volume] Unexpected error
(volume:1087)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/volume.py", line 1081, in create
cls.newVolumeLease(metaId, sdUUID, volUUID)
  File "/usr/share/vdsm/storage/volume.py", line 1361, in newVolumeLease
return cls.manifestClass.newVolumeLease(metaId, sdUUID, volUUID)
  File "/usr/share/vdsm/storage/blockVolume.py", line 310, in newVolumeLease
sanlock.init_resource(sdUUID, volUUID, [(leasePath, leaseOffset)])
SanlockException: (-202, 'Sanlock resource init failure', 'Sanlock
exception')
2017-02-20 14:41:44,900 ERROR (tasks/8) [storage.TaskManager.Task]
(Task='d694b892-b078-4d86-a035-427ee4fb3b13') Unexpected error (task:870)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 877, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 333, in run
return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
79, in wrapper
return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1929, in createVolume
initialSize=initialSize)
  File "/usr/share/vdsm/storage/sd.py", line 762, in createVolume
initialSize=initialSize)
  File "/usr/share/vdsm/storage/volume.py", line 1089, in create
(volUUID, e))
VolumeCreationError: Error creating a new volume: (u"Volume creation
d0d938bd-1479-49cb-93fb-85b6a32d6cb4 failed: (-202, 'Sanlock resource init
failure', 'Sanlock exception')",)
2017-02-20 14:41:44,941 INFO  (tasks/8) [storage.Volume] Metadata rollback
for sdUUID=900b1853-e192-4661-a0f9-7c7c396f6f49 offs=8 (blockVolume:448)


Was the error generated due to the other migration still in progress?
Is there a limit of concurrent migrations from/to a particular storage
domain?

Now I would like to retry, but I see that the disk is in state locked with
hourglass.
The auto-generated snapshot of the failed action was apparently removed
successfully, as I no longer see it.

How can I proceed to move the disk?

Thanks in advance,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to export VM

2017-02-20 Thread Adam Litke
Hi Pat.  I'd like to help you investigate this issue further.  Could you
send a snippet of the vdsm.log on slam-vmnode-03 that covers the time
period during this failure?  Engine is reporting that vdsm has likely
thrown an exception while acquiring locks associated with the VM disk you
are exporting.

On Thu, Feb 16, 2017 at 12:40 PM, Pat Riehecky  wrote:

> Any attempts to export my VM error out.  Last night the disk images got
> 'unregistered' from oVirt and I had to rescan the storage domain to find
> them again.  Now I'm just trying to get a backup of the VM.
>
> The snapshots of the old disks are still listed, but I don't know if
> the LVM slices still exist or if that is even what is wrong.
>
> steps I followed ->
> Halt VM
> Click Export
> leave things unchecked and click OK
>
> oVirt version:
> ovirt-engine-4.0.3-1.el7.centos.noarch
> ovirt-engine-backend-4.0.3-1.el7.centos.noarch
> ovirt-engine-cli-3.6.9.2-1.el7.noarch
> ovirt-engine-dashboard-1.0.3-1.el7.centos.noarch
> ovirt-engine-dbscripts-4.0.3-1.el7.centos.noarch
> ovirt-engine-dwh-4.0.2-1.el7.centos.noarch
> ovirt-engine-dwh-setup-4.0.2-1.el7.centos.noarch
> ovirt-engine-extension-aaa-jdbc-1.1.0-1.el7.noarch
> ovirt-engine-extension-aaa-ldap-1.2.1-1.el7.noarch
> ovirt-engine-extension-aaa-ldap-setup-1.2.1-1.el7.noarch
> ovirt-engine-extensions-api-impl-4.0.3-1.el7.centos.noarch
> ovirt-engine-lib-4.0.3-1.el7.centos.noarch
> ovirt-engine-restapi-4.0.3-1.el7.centos.noarch
> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
> ovirt-engine-setup-4.0.3-1.el7.centos.noarch
> ovirt-engine-setup-base-4.0.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-4.0.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-4.0.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.3-1.el7.centos.noarch
> ovirt-engine-setup-plugin-websocket-proxy-4.0.3-1.el7.centos.noarch
> ovirt-engine-tools-4.0.3-1.el7.centos.noarch
> ovirt-engine-tools-backup-4.0.3-1.el7.centos.noarch
> ovirt-engine-userportal-4.0.3-1.el7.centos.noarch
> ovirt-engine-vmconsole-proxy-helper-4.0.3-1.el7.centos.noarch
> ovirt-engine-webadmin-portal-4.0.3-1.el7.centos.noarch
> ovirt-engine-websocket-proxy-4.0.3-1.el7.centos.noarch
> ovirt-engine-wildfly-10.0.0-1.el7.x86_64
> ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
> ovirt-guest-agent-common-1.0.12-4.el7.noarch
> ovirt-host-deploy-1.5.1-1.el7.centos.noarch
> ovirt-host-deploy-java-1.5.1-1.el7.centos.noarch
> ovirt-imageio-common-0.3.0-1.el7.noarch
> ovirt-imageio-proxy-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch
> ovirt-imageio-proxy-setup-0.3.0-0.201606191345.git9f3d6d4.el
> 7.centos.noarch
> ovirt-image-uploader-4.0.0-1.el7.centos.noarch
> ovirt-iso-uploader-4.0.0-1.el7.centos.noarch
> ovirt-setup-lib-1.0.2-1.el7.centos.noarch
> ovirt-vmconsole-1.0.4-1.el7.centos.noarch
> ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch
>
>
>
>
> log snippet:
> 2017-02-16 11:34:44,959 INFO [org.ovirt.engine.core.vdsbrok
> er.irsbroker.GetVmsInfoVDSCommand] (default task-28) [] START,
> GetVmsInfoVDSCommand( GetVmsInfoVDSCommandParameters:{runAsync='true',
> storagePoolId='0001-0001-0001-0001-01a5',
> ignoreFailoverLimit='false', 
> storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
> vmIdList='null'}), log id: 3c406c84
> 2017-02-16 11:34:45,967 INFO [org.ovirt.engine.core.vdsbrok
> er.irsbroker.GetVmsInfoVDSCommand] (default task-28) [] FINISH,
> GetVmsInfoVDSCommand, log id: 3c406c84
> 2017-02-16 11:34:46,178 INFO 
> [org.ovirt.engine.core.bll.exportimport.ExportVmCommand]
> (default task-24) [50b27eef] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[ba806b93-b6fe-4873-99ec-55bb34c12e5f= ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2017-02-16 11:34:46,221 INFO [org.ovirt.engine.core.vdsbrok
> er.irsbroker.GetVmsInfoVDSCommand] (default task-24) [50b27eef] START,
> GetVmsInfoVDSCommand( GetVmsInfoVDSCommandParameters:{runAsync='true',
> storagePoolId='0001-0001-0001-0001-01a5',
> ignoreFailoverLimit='false', 
> storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
> vmIdList='null'}), log id: 61bfd908
> 2017-02-16 11:34:47,227 INFO [org.ovirt.engine.core.vdsbrok
> er.irsbroker.GetVmsInfoVDSCommand] (default task-24) [50b27eef] FINISH,
> GetVmsInfoVDSCommand, log id: 61bfd908
> 2017-02-16 11:34:47,242 INFO [org.ovirt.engine.core.vdsbrok
> er.irsbroker.GetVmsInfoVDSCommand] (default task-24) [50b27eef] START,
> GetVmsInfoVDSCommand( GetVmsInfoVDSCommandParameters:{runAsync='true',
> storagePoolId='0001-0001-0001-0001-01a5',
> ignoreFailoverLimit='false', 
> storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
> vmIdList='null'}), log id: 7cd19381
> 2017-02-16 11:34:47,276 INFO [org.ovirt.engine.core.vdsbrok
> er.irsbroker.GetVmsInfoVDSCommand] (default task-24) [50b27eef] FINISH,
> GetVmsInfoVDSCommand, log id: 7cd19381
> 2017-02-16 11:34:47,294 INFO 
> 

Re: [ovirt-users] actual size greater than virtual size?

2017-02-20 Thread Adam Litke
Hi Gianluca.  This is most likely caused by the temporary snapshot we
create when performing live storage migration.  Please check your VM for
snapshots and delete the temporary one.
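One way to see the allocation from the host is `qemu-img info --output=json` on the volume. Below is a sketch of interpreting that output; the JSON is a made-up sample mirroring a 30 GiB disk that allocates 33 GiB (qcow2 metadata plus not-yet-merged snapshot data can push allocation past the virtual size on block storage).

```shell
# Hypothetical sample of what `qemu-img info --output=json <volume>`
# reports; the byte values are invented for illustration.
cat > /tmp/info.json <<'EOF'
{
  "virtual-size": 32212254720,
  "actual-size": 35433480192,
  "format": "qcow2"
}
EOF

# Extract the two sizes and print them in GiB
virtual=$(sed -n 's/.*"virtual-size": \([0-9]*\).*/\1/p' /tmp/info.json)
actual=$(sed -n 's/.*"actual-size": \([0-9]*\).*/\1/p' /tmp/info.json)
gib=1073741824   # bytes per GiB
echo "virtual: $(( virtual / gib )) GiB, actual: $(( actual / gib )) GiB"
```

When actual exceeds virtual like this, checking for a leftover live-migration snapshot is a reasonable first step.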

On Mon, Feb 13, 2017 at 5:07 PM, Gianluca Cecchi 
wrote:

> Hello,
> I'm on 4.1 with 2 FC SAN storage domains and testing live migration of
> disk.
>
> I started with a CentOS 7.3 VM with a thin provisioned disk of 30Gb.
> It seems that I can see actual disk size in VM--> Snapshots--> Active VM
> line and the right sub pane.
> If I remember correctly before the live storage migration the actual size
> was 16Gb.
> But after the migration it shows actual size greater than virtual (33Gb)??
> There are no snapshots on the VM right now.
> Is this true? If so, could it depend on temporary snapshots created by the
> system for storage migration? But in that case, does it mean that a live
> merge generates a bigger image at the end?
>
> The command I saw during conversion was:
>
> vdsm 17197  2707  3 22:39 ?00:00:01 /usr/bin/qemu-img convert
> -p -t none -T none -f qcow2 /rhev/data-center/mnt/blockSD/
> 922b5269-ab56-4c4d-838f-49d33427e2ab/images/6af3dfe5-
> 6da7-48e3-9eb0-1e9596aca9d3/9af3574d-dc83-485f-b906-0970ad09b660 -O qcow2
> -o compat=1.1 /rhev/data-center/mnt/blockSD/5ed04196-87f1-480e-9fee-
> 9dd450a3b53b/images/6af3dfe5-6da7-48e3-9eb0-1e9596aca9d3/
> 9af3574d-dc83-485f-b906-0970ad09b660
>
> See the current situation here:
> https://drive.google.com/file/d/0BwoPbcrMv8mvMnhib2NCQlJfRVE/
> view?usp=sharing
>
> thanks for clarification,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Adam Litke
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] I need help! Can not run ovirt-engine

2017-02-20 Thread Денис Мишанин
Thank you for your advice. Yesterday, after 5 hours, the ovirt-engine VM
started. I moved all the virtual machines to the old Xen server, except for
one with a large volume. Later, once I can move that one as well, I will try
to understand what happened to the iSCSI storage.

On Mon, 20 Feb 2017 at 9:44, Yedidyah Bar David wrote:

> On Sun, Feb 19, 2017 at 12:42 PM, Денис Мишанин 
> wrote:
> > Hello.
> > After restarting the iSCSI storage I cannot run ovirt-engine in
> > production; I am looking for help.
>
> You have this in vdsm.log:
>
> Thread-640::DEBUG::2017-02-19
> 13:40:35,756::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
> --cpu-list 0-23 /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices
> { preferred_names = [
> "^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> '\''a|/dev/mapper/3600605b00a20bf201e29c6bf236a6651|'\'',
> '\''r|.*|'\'' ] }  global {
>  locking_type=1  prioritise_write_locks=1  wait_for_locks=1
> use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } '
> --noheadings --units b --nosuffix --separator '|' --ignoreskippe
> dcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> c7c466c5-983d-40b2-8f8c-d569755281f5 (cwd None)
> Thread-640::DEBUG::2017-02-19
> 13:40:35,784::lvm::288::Storage.Misc.excCmd::(cmd) FAILED:  = '
> WARNING: Not using lvmetad because config setting use_lvmetad=0.\n
> WARNING: To avoid cor
> ruption, rescan devices to make changes visible (pvscan --cache).\n
> Volume group "c7c466c5-983d-40b2-8f8c-d569755281f5" not found\n
> Cannot process volume group c7c466c5-983d-40b2-8f8c-d56
> 9755281f5\n';  = 5
> Thread-640::WARNING::2017-02-19
> 13:40:35,785::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 []
> ['  WARNING: Not using lvmetad because config setting use_lvmetad=0.',
> '  WARNING: To
> avoid corruption, rescan devices to make changes visible (pvscan
> --cache).', '  Volume group "c7c466c5-983d-40b2-8f8c-d569755281f5" not
> found', '  Cannot process volume group c7c466c5-983d-
> 40b2-8f8c-d569755281f5']
>
> Not sure what might have caused this.
>
> Is your iSCSI storage ok after the reboot?
>
> You might try to reboot the client (host) or further debug it on
> kernel/iSCSI level. Check e.g. 'lsblk', iscsiadm, etc.
>
> Best,
> --
> Didi
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VMs HA with ceph and cinder

2017-02-20 Thread Matteo Dacrema
Hi all,

I have a setup that uses Ceph through oVirt for VM storage.
I tested VM HA and everything works well when IPMI is reachable.
When it’s not reachable because of a network or power failure, the VM state
switches to Unknown and the VM is not restarted on other nodes.

Is there a specific setting that I’ve missed?

Thank you
Regards
Matteo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Troubles after resizing the iscsi storage.

2017-02-20 Thread Arman Khalatyan
Hi,
I have a 1TB iSCSI storage domain holding a virtual machine with several
disks. After shutting down the VM I detached the storage domain and removed
it from the cluster and from oVirt entirely.
1) On the target, I resized the exported volume to 13TB.
2) On the host, I ran:
iscsiadm -m node -l
pvresize /dev/mapper/.
iscsiadm -m node -u
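For reference, the resize sequence above can be sketched as a dry-run script. TARGET and DEV are hypothetical placeholders (the thread truncates the real device path); note that a session rescan (e.g. `iscsiadm -m node -R`) is usually needed after the target-side resize so the host kernel sees the new LUN size before `pvresize`.

```shell
# Dry-run sketch: each step is printed, not executed. Replace the
# placeholder names and drop the echo loop to actually run the steps.
TARGET="iqn.2017-02.example:storage"   # hypothetical target IQN
DEV="/dev/mapper/EXAMPLE-WWID"         # hypothetical multipath device

steps="iscsiadm -m node -T $TARGET -l
iscsiadm -m node -T $TARGET -R
pvresize $DEV
iscsiadm -m node -T $TARGET -u"

printf '%s\n' "$steps" | while IFS= read -r step; do
  echo "would run: $step"
done
```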

After importing it back into oVirt via the GUI, I can see the VM in the
import list, but I cannot see its separate disks.
The import fails in the "Executing" stage.
Is there a way to fix it?
On the host, "lvs" shows the disks are still there.
thanks,
Arman.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [SOLVED] Re: VMs are not running/booting on one host

2017-02-20 Thread Peter Hudec
oVirt 4.1 ran with the same issue.

So the problem was not oVirt related.

I tried the ELREPO kernel and that solved my issue. Kernel
4.9.11-1.el7.elrepo.x86_64 seems to be working without issues.

Conclusion: we triggered some bug in the 3.10.0 kernel, but I was unable to
find out which one. We have 3 identical hypervisors, but the problem occurred
only on the third one.

Peter

On 16/02/2017 15:35, Peter Hudec wrote:
> memtest ran without issues.
> I'm thinking about installing the latest oVirt on this host and trying to
> run the VM. It may solve the issue if it's oVirt-related and not HW/KVM.
> 
> latest stable version is 4.1.
> 
>   Peter
> 
> On 15/02/2017 21:46, Peter Hudec wrote:
>> On 15/02/2017 21:20, Nir Soffer wrote:
>>> On Wed, Feb 15, 2017 at 10:05 PM, Peter Hudec  wrote:
 Hi,

 so the problem is a little bit different: when I wait for a long time, the
 VM boots ;(
>>>
>>> Is this an issue only with old vms imported from the old setup, or
>>> also with new vms?
>> I do not have new VMs, so with the old ones. But I did not import them
>> from the old setup.
>> I did the host OS upgrade per our docs: creating a new cluster, upgrading
>> the host, and migrating the VMs. There was no outage until now.
>>
>> I tried to install new VM, but the installer hangs on that host.
>>
>>
>>>

 But ... /see the log/. I'm investigating the reason.
 The difference between dipovirt0{1,2} and dipovirt03 is the installation
 time. The first 2 were migrated last week, the last one yesterday. There
 are some newer packages, but nothing related to KVM.

 [  292.429622] INFO: rcu_sched self-detected stall on CPU { 0}  (t=72280
 jiffies g=393 c=392 q=35)
 [  292.430294] sending NMI to all CPUs:
 [  292.430305] NMI backtrace for cpu 0
 [  292.430309] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.16.0-4-amd64
 #1 Debian 3.16.39-1
 [  292.430311] Hardware name: oVirt oVirt Node, BIOS 0.5.1 01/01/2011
 [  292.430313] task: 8181a460 ti: 8180 task.ti:
 8180
 [  292.430315] RIP: 0010:[]  []
 native_write_msr_safe+0x6/0x10
 [  292.430323] RSP: 0018:88001fc03e08  EFLAGS: 0046
 [  292.430325] RAX: 0400 RBX:  RCX:
 0830
 [  292.430326] RDX:  RSI: 0400 RDI:
 0830
 [  292.430327] RBP: 818e2a80 R08: 818e2a80 R09:
 01e8
 [  292.430329] R10:  R11: 88001fc03b96 R12:
 
 [  292.430330] R13: a0ea R14: 0002 R15:
 0008
 [  292.430335] FS:  () GS:88001fc0()
 knlGS:
 [  292.430337] CS:  0010 DS:  ES:  CR0: 8005003b
 [  292.430339] CR2: 01801000 CR3: 1c6de000 CR4:
 06f0
 [  292.430343] Stack:
 [  292.430344]  8104b30d 0002 0082
 88001fc0d6a0
 [  292.430347]  81853800  818e2fe0
 0023
 [  292.430349]  81853800 81047d63 88001fc0d6a0
 810c73fa
 [  292.430352] Call Trace:
 [  292.430354]  

 [  292.430360]  [] ? __x2apic_send_IPI_mask+0xad/0xe0
 [  292.430365]  [] ?
 arch_trigger_all_cpu_backtrace+0xc3/0x140
 [  292.430369]  [] ? rcu_check_callbacks+0x42a/0x670
 [  292.430373]  [] ? account_process_tick+0xde/0x180
 [  292.430376]  [] ? tick_sched_handle.isra.16+0x60/0x60
 [  292.430381]  [] ? update_process_times+0x40/0x70
 [  292.430404]  [] ? tick_sched_handle.isra.16+0x20/0x60
 [  292.430407]  [] ? tick_sched_timer+0x3c/0x60
 [  292.430410]  [] ? __run_hrtimer+0x67/0x210
 [  292.430412]  [] ? hrtimer_interrupt+0xe9/0x220
 [  292.430416]  [] ? smp_apic_timer_interrupt+0x3b/0x50
 [  292.430420]  [] ? apic_timer_interrupt+0x6d/0x80
 [  292.430422]  

 [  292.430425]  [] ? sched_clock_local+0x15/0x80
 [  292.430428]  [] ? mwait_idle+0xa0/0xa0
 [  292.430431]  [] ? native_safe_halt+0x2/0x10
 [  292.430434]  [] ? default_idle+0x19/0xd0
 [  292.430437]  [] ? cpu_startup_entry+0x374/0x470
 [  292.430440]  [] ? start_kernel+0x497/0x4a2
 [  292.430442]  [] ? set_init_arg+0x4e/0x4e
 [  292.430445]  [] ? early_idt_handler_array+0x120/0x120
 [  292.430447]  [] ? x86_64_start_kernel+0x14d/0x15c
 [  292.430448] Code: c2 48 89 d0 c3 89 f9 0f 32 31 c9 48 c1 e2 20 89 c0
 89 0e 48 09 c2 48 89 d0 c3 66 66 2e 0f 1f 84 00 00 00 00 00 89 f0 89 f9
 0f 30 <31> c0 c3 0f 1f 80 00 00 00 00 89 f9 0f 33 48 c1 e2 20 89 c0 48
 [  292.430579] Clocksource tsc unstable (delta = -289118137838 ns)


 On 15/02/2017 20:39, Peter Hudec wrote:
> Hi,
>
> I did already, but did not find anything suspicious; see the attached logs and the
> spice