[ovirt-users] Re: error failed to execute stage 'misc configuration': failed to start service 'openvswitch'

2019-03-08 Thread Dafna Ron
The infra list is for infra issues only.
Forwarding to the users list, which should be the correct address for
your question.

On Thu, Mar 7, 2019 at 9:05 PM  wrote:

> I'm just starting out in the world of oVirt, so I would appreciate the help.
> I'm trying to install ovirt-engine and it gives me the following error:
>
> error failed to execute stage 'misc configuration': failed to start
> service 'openvswitch'
>
> Could someone guide me to find the solution?
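A minimal troubleshooting sketch for this kind of failure, assuming a systemd-based host with the standard openvswitch package (the exact unit name can vary between versions):

# Check why the service refuses to start and read its recent journal entries
systemctl status openvswitch
journalctl -u openvswitch --no-pager -n 50

# Try starting it by hand; once it stays up, re-run engine-setup
systemctl start openvswitch
systemctl enable openvswitch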
> ___
> Infra mailing list -- in...@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/in...@ovirt.org/message/CTIAQUY2ZXFPIZSD3YV6OISQ3BQ77NKB/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HPWGI3ZJXMHOU2LZSQIWWAHJJYU54US7/


[ovirt-users] Re: How to add vnc console for ovirt-engine VM

2019-01-07 Thread Dafna Ron
Hi Xu,

please direct questions to the oVirt users list rather than directly to a specific
person, as others may be able to assist you.
Adding the users list, and Galit from Lago, to help.

Thanks,
Dafna



On Mon, Dec 24, 2018 at 8:49 AM Tian Xu  wrote:

> Hi Dron,
>
> I'm trying to run oVirt system tests on my CentOS 7.5 host, but my test fails because
> the oVirt engine VM lost its network connection. I want to look at what happens in
> my oVirt engine VM, but the VM has no VNC or SPICE console when created. Is
> there any way I can add a VNC console when creating the engine VM with Lago?
>
> Thanks,
> Xu
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSYB7RWWQZ3RR7OTMMVW5HB2Y5PQTTXZ/


[ovirt-users] Re: Ovirt host addition failed

2018-12-14 Thread Dafna Ron
Correct email for this would be users@ovirt.org (added)

I also suggest that you attach the logs from the engine and the host
machines


On Thu, Dec 13, 2018 at 9:30 PM Shoeb Chowdhury  wrote:

> I've installed oVirt Node 4.2.0 on one physical server and oVirt Engine
> 4.2.7.5-1.el7 on another physical server. When I wanted to add the host from the
> oVirt engine Administration Portal I got the following errors:
> 1. When I use authentication with the root password - Error while executing
> action: Cannot add Host. Connecting to host via SSH has failed, verify that
> the host is reachable (IP address, routable address etc.) You may refer to
> the engine.log file for further details.
> 2. When I use authentication with an SSH public key - Error while executing
> action: Cannot add Host. SSH authentication failed, verify authentication
> parameters are correct (Username/Password, public-key etc.) You may refer
> to the engine.log file for further details.
>
> Note that IP connectivity from the Engine to the Host is OK. I manually checked
> SSH login from the engine to the node IP.
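A minimal sketch of how one might verify the SSH path the engine uses, run from the engine machine; the engine key path below is an assumption and may differ between releases:

# Verbose SSH as root, the same way host deployment connects (port 22 by default)
ssh -v root@<node-ip>

# If public-key authentication is used, the engine's deployment key is usually
# /etc/pki/ovirt-engine/keys/engine_id_rsa (assumption - adjust to your install)
ssh -v -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@<node-ip>

# On the node, confirm sshd allows root logins and that port 22 is reachable
grep -i permitrootlogin /etc/ssh/sshd_config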
> ___
> Infra mailing list -- in...@ovirt.org
> To unsubscribe send an email to infra-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/in...@ovirt.org/message/TWRNNWFILPP7LUFN6TDWSHEBHVHIOV52/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A26G7YHOWCZ5PTVZQIXJLR5OWRMDDZFJ/


Re: [ovirt-users] MoM is failing!!!

2017-10-16 Thread Dafna Ron
Hi,

Can you please tell us what issue you are actually facing?
:) It would be easier to debug an issue rather than an error message that
can be caused by several things.

Also, can you provide the engine and the vdsm logs?

thank you,
Dafna


On 10/16/2017 02:30 PM, Erekle Magradze wrote:
>
> It was a typo in the failure message,
>
> that's what I was getting:
>
> *VDSM hostname command GetStatsVDS failed: Connection reset by peer*
>
>
> On 10/16/2017 03:21 PM, Erekle Magradze wrote:
>>
>> Hi,
>>
>> It's getting clearer now; indeed, the momd service is disabled
>>
>> ● momd.service - Memory Overcommitment Manager Daemon
>>Loaded: loaded (/usr/lib/systemd/system/momd.service; static;
>> vendor preset: disabled)
>>Active: inactive (dead)
>>
>> mom-vdsm is enabled and running.
>>
>> ● mom-vdsm.service - MOM instance configured for VDSM purposes
>>Loaded: loaded (/usr/lib/systemd/system/mom-vdsm.service; enabled;
>> vendor preset: enabled)
>>Active: active (running) since Mon 2017-10-16 15:14:35 CEST; 1min
>> 3s ago
>>  Main PID: 27638 (python)
>>CGroup: /system.slice/mom-vdsm.service
>>└─27638 python /usr/sbin/momd -c /etc/vdsm/mom.conf
>>
>> The reason why I came up with digging into MOM problems is the
>> following failure:
>>
>>
>> *VDSM hostname command GetStatsVDS failed: Connection reset by
>> peer*
>>
>> which is causing fencing of the node where the failure is happening.
>> What could be the reason for the GetStatsVDS failure?
>>
>> Best Regards
>> Erekle
>>
>>
>> On 10/16/2017 03:11 PM, Martin Sivak wrote:
>>> Hi,
>>>
>>> how do you start MOM? MOM is supposed to talk to vdsm, we do not talk
>>> to libvirt directly. The line you posted comes from vdsm and vdsm is
>>> telling you it can't talk to MOM.
>>>
>>> Which MOM service is enabled? Because there are two momd and mom-vdsm,
>>> the second one is the one that should be enabled.
>>>
>>> Best regards
>>>
>>> Martin Sivak
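A minimal sketch, following the note above, for checking which MOM unit is active and switching to mom-vdsm if needed (assuming a systemd-based host; momd is a static unit and is not meant to be enabled directly):

# mom-vdsm is the instance vdsm talks to; momd itself should stay inactive
systemctl status momd mom-vdsm
systemctl enable --now mom-vdsm

# vdsm should stop logging "MOM not available" shortly after mom-vdsm is up
journalctl -u vdsmd --no-pager -n 20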
>>>
>>>
>>> On Mon, Oct 16, 2017 at 3:04 PM, Erekle Magradze
>>>  wrote:
 Hi Martin,

 Thanks for the answer. Unfortunately this warning message persists; does it
 mean that MOM cannot communicate with libvirt? How critical is it?

 Best

 Erekle



 On 10/16/2017 03:03 PM, Martin Sivak wrote:
> Hi,
>
> it is just a warning, there is nothing you have to solve unless it
> does not resolve itself within a minute or so. If it happens only once
> or twice after vdsm or mom restart then you are fine.
>
> Best regards
>
> --
> Martin Sivak
> SLA / oVirt
>
> On Mon, Oct 16, 2017 at 2:44 PM, Erekle Magradze
>  wrote:
>> Hi,
>>
>> after running
>>
>> systemctl status vdsm, I can see that it's running, with these messages at
>> the end:
>>
>> Oct 16 14:26:52 hostname vdsmd[2392]: vdsm throttled WARN MOM not
>> available.
>> Oct 16 14:26:52 hostname vdsmd[2392]: vdsm throttled WARN MOM not
>> available,
>> KSM stats will be missing.
>> Oct 16 14:26:57 hostname vdsmd[2392]: vdsm root WARN ping was deprecated
>> in
>> favor of ping2 and confirmConnectivity
>>
>> How critical is it? And how do I resolve that warning?
>>
>> I am using libvirt
>>
>> Cheers
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> -- 
> Recogizer Group GmbH
>
> Dr.rer.nat. Erekle Magradze
> Lead Big Data Engineering & DevOps
> Rheinwerkallee 2, 53227 Bonn
> Tel: +49 228 29974555
>
> E-Mail erekle.magra...@recogizer.de
> Web: www.recogizer.com
>  
> Recogizer auf LinkedIn https://www.linkedin.com/company-beta/10039182/
> Folgen Sie uns auf Twitter https://twitter.com/recogizer
>  
> -
> Recogizer Group GmbH
> Geschäftsführer: Oliver Habisch, Carsten Kreutze
> Handelsregister: Amtsgericht Bonn HRB 20724
> Sitz der Gesellschaft: Bonn; USt-ID-Nr.: DE294195993
>  
> Diese E-Mail enthält vertrauliche und/oder rechtlich geschützte Informationen.
> Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtümlich 
> erhalten haben,
> informieren Sie bitte sofort den Absender und löschen Sie diese Mail.
> Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail und der 
> darin enthaltenen Informationen ist nicht gestattet.
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host fencing issues

2017-10-13 Thread Dafna Ron
This suggests that libvirt is down.
Can you please check the libvirtd service status and get its log?


Thanks,
Dafna
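A minimal sketch for collecting what is asked for above, assuming default locations on CentOS 7:

# Service state and recent failures
systemctl status libvirtd
journalctl -u libvirtd --no-pager -n 100

# libvirtd's own log file, if file logging is configured in /etc/libvirt/libvirtd.conf
less /var/log/libvirt/libvirtd.log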

On 10/13/2017 04:48 PM, Fernando Fuentes wrote:
> Team,
>
> I went to the log and captured the messages from when the host did the
> update all the way down to the failure.
>
> https://pastebin.com/AwP1gh5g
>
>
> I hope that helps narrow down the issue.
> Ideas, thoughts, and comments are welcome!
>
> Regards,
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
>
>
>
> On Fri, Oct 13, 2017, at 10:03 AM, Fernando Fuentes wrote:
>> Thanks for your reply.
>>
>> As requested I got this from the messages log:
>>
>> https://pastebin.com/t0HRhvT9
>>
>> This one is from the host engine:
>>
>> https://pastebin.com/8vji6MGs
>>
>> And this one is from the host vdsm:
>>
>> https://pastebin.com/GgnqRvTE
>>
>> The funny part is that right when I did the update it seems that vdsm
>> died, and there is no log after the update.
>>
>> In the messages log you can see the errors and my attempts to restart it
>> manually, but it dies.
>>
>> Any ideas?
>>
>>
>> --
>> Fernando Fuentes
>> ffuen...@txweather.org
>> http://www.txweather.org
>>
>>
>>
>> On Fri, Oct 13, 2017, at 01:42 AM, Tomas Jelinek wrote:
>>> can you please provide some logs to the issue?
>>> /var/log/ovirt-engine/engine.log from engine machine and
>>> /var/log/vdsm/vdsm.log from the affected host would be great start.
>>>
>>> thank you
>>>
>>> On Fri, Oct 13, 2017 at 2:47 AM, Fernando Fuentes
>>> > wrote:
>>>
>>> Hello Team,
>>>
>>> I updated one of the hosts in my cluster, and after it finished and I
>>> tried to
>>> activate the host, it quickly claimed that the host was
>>> unresponsive and
>>> fenced it... Now every time I try to activate the host it
>>> claims it is unresponsive and proceeds to fence it... This was not
>>> happening before the update.
>>> The host is reachable with no problems or issues...
>>>
>>> Any ideas?
>>>
>>> Centos 7.4 x86_64 host.
>>> Attached is the vdsm log
>>>
>>> engine is oVirt Engine Version: 4.0.2.6-1.el7.centos
>>>
>>> Regards,
>>>
>>> --
>>> Fernando Fuentes
>>> ffuen...@txweather.org 
>>> http://www.txweather.org
>>> ___
>>> Users mailing list
>>> Users@ovirt.org 
>>> http://lists.ovirt.org/mailman/listinfo/users
>>> 
>>>
>>
>> _
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host install common error

2017-06-08 Thread Dafna Ron

On 06/08/2017 04:52 PM, FERNANDO FREDIANI wrote:
> Hello folks.
>
> One of the most (if not the most) annoying problems of oVirt is the
> well-known message "... installation failed. Command returned failure code
> 1 during SSH session ..." which happens quite often in several situations.
>
> Scrubbing installation logs it seems that most stuff goes well, but
> then it stops in a message saying: "ERROR otopi.context
> context._executeMethod:151 Failed to execute stage 'Setup validation':
> Cannot locate vdsm package, possible cause is incorrect channels" -
> followed by another message: "DEBUG otopi.context
> context.dumpEnvironment:770 ENV BASE/exceptionInfo=list:'[(<type
> 'exceptions.RuntimeError'>, RuntimeError('Cannot locate vdsm package,
> possible cause is incorrect channels',), <traceback object at 0x3131ef0>)]'"
>
> I am not sure why it would complain about the repositories, as this is
> a Minimal CentOS 7 Install and the oVirt repository is added by
> oVirt-Engine itself so I assumed it added the most appropriate to its
> own version.
> I even tried to copy over the same repositories used on the Hosts that
> are installed and working fine but that message shows up again on the
> Install retries.
>
I am not sure what you mean by a Minimal CentOS 7 install; do you mean
it's a hosted-engine install?
Can you please check whether you can see the package by running yum search vdsm?
Also, looking at the complete log would be good.
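A minimal sketch of the repository check suggested above, run on the host being added (assuming CentOS 7 with the oVirt release package installed):

# Is the vdsm package visible from the enabled repositories?
yum search vdsm
yum info vdsm

# Which oVirt repositories are actually enabled?
yum repolist enabled | grep -i ovirt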
> Does anyone have any other hints where to look ?
>
> For reference my engine version running is: 4.1.1.6-1.el7.centos.
>
> Fernando
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem with VM Export

2015-06-02 Thread Dafna Ron
-4a9c-8fd9-3a914e6f2bc3`ReqID=`6f861566-b1c9-45c7-9181-452b0bc014d0`::Tried
to cancel a processed request
0a3c909d-0737-492e-a47c-bc0ab5e1a603::ERROR::2015-06-02
20:36:38,523::task::866::Storage.TaskManager.Task::(_setError)
Task=`0a3c909d-0737-492e-a47c-bc0ab5e1a603`::Unexpected error
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 873, in _run
     return fn(*args, **kargs)
   File "/usr/share/vdsm/storage/task.py", line 334, in run
     return self.cmd(*self.argslist, **self.argsdict)
   File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
     return method(self, *args, **kwargs)
   File "/usr/share/vdsm/storage/sp.py", line 1549, in moveImage
     imgUUID, srcLock),
   File "/usr/share/vdsm/storage/resourceManager.py", line 523, in
acquireResource
     raise se.ResourceAcqusitionFailed()
ResourceAcqusitionFailed: Could not acquire resource. Probably resource
factory threw an exception.: ()


-- snip --

I thought it might be caused by SELinux in enforcing mode, but changing SELinux
to permissive didn't really change anything.

node versions are:

vdsm-cli-4.16.14-0.el7.noarch
vdsm-python-4.16.14-0.el7.noarch
vdsm-python-zombiereaper-4.16.14-0.el7.noarch
vdsm-jsonrpc-4.16.14-0.el7.noarch
vdsm-yajsonrpc-4.16.14-0.el7.noarch
vdsm-xmlrpc-4.16.14-0.el7.noarch
vdsm-4.16.14-0.el7.x86_64

libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64


sanlock-3.2.2-2.el7.x86_64
sanlock-lib-3.2.2-2.el7.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
sanlock-python-3.2.2-2.el7.x86_64



CentOS Linux release 7.1.1503 (Core)

3.10.0-229.4.2.el7.x86_64

#

This happens on every node in the cluster. I tried the vdsm rpm from
3.5.3-pre, which didn't change anything; the problem still exists.

So far the export domain can be accessed fine, and already existing
templates / exported VMs on the NFS share can be deleted. Permissions are
set correctly, uid/gid 36, nfsvers 3.


anyone got a hint for me ?


Cheers

Juergen
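A minimal sketch for double-checking the export domain from one of the nodes, assuming an NFS export domain mounted under /rhev/data-center/mnt; paths are illustrative:

# Confirm the export domain is mounted with the expected NFS options
mount | grep -i nfs

# Ownership on the share should be vdsm:kvm (36:36)
ls -ldn /rhev/data-center/mnt/<server>:_<export_path>

# Check the SPM host's vdsm log around the failed moveImage task
grep -i "ResourceAcqusitionFailed" /var/log/vdsm/vdsm.log | tail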


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM paused unexpectedly

2014-09-09 Thread Dafna Ron

On 09/09/2014 03:21 PM, Frank Wall wrote:

On Tue, Sep 09, 2014 at 03:09:02PM +0100, Dafna Ron wrote:

qemu would pause a VM while extending the VM's disk, and this would
result in INFO messages about the VM's pause.

looks like this is what you are seeing.

For the record, I'm using thin provisioned disks here.
Do you mean an internal qemu task which is triggered to
extend a thin provisioned disk to the required size?

yes


This process shouldn't permanently pause the VM, right?

no



Or do you mean something else?


Regards
- Frank
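A minimal sketch for confirming these short pause/resume cycles in the logs, assuming default vdsm and libvirt log locations; the exact message wording varies between versions:

# Look for extension requests and the matching pause/resume events around them
grep -iE "extend|pause|resume" /var/log/vdsm/vdsm.log | tail -n 50

# The per-VM qemu log records abnormal pauses, if any
less /var/log/libvirt/qemu/<vm-name>.log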



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Reply: Error after changing IP of Node (FQDN is still the same)

2014-08-01 Thread Dafna Ron
The host IP should also be in the certificates created when we install
it, and in the DB.

Re-installing the host would be easier and quicker.
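A minimal sketch for checking the usual suspects before re-installing, assuming the FQDN really is unchanged everywhere; host names are illustrative:

# Name resolution must agree on the engine and on every host
getent hosts <host-fqdn>

# "Cannot acquire host id" usually points at sanlock; check its state on the host
service sanlock status      # or: systemctl status sanlock
sanlock client status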

On 08/01/2014 02:47 PM, xiec.f...@cn.fujitsu.com wrote:


Hi ml

 If you change your node's host IP, maybe you must re-add it to
RHEV-M? (Does somebody have another way, without re-adding?)


*From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On behalf
of* ml ml

*Sent:* August 1, 2014 21:24
*To:* users@ovirt.org
*Subject:* [ovirt-users] Error after changing IP of Node (FQDN is still
the same)


Hello List,

On my oVirt engine I am getting:

ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] 
(DefaultQuartzScheduler_Worker-73) [412fc539] Start SPM Task failed - 
result: cleanSuccess, message: VDSGenericException: VDSErrorException: 
Failed to HSMGetTaskStatusVDS, error = Cannot acquire host id, code = 661


The FQDN is still the same. I just changed the ips in /etc/hosts

Any idea?

Thanks,
Mario



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Missing Storage domain

2014-07-17 Thread Dafna Ron
Even if the other hosts can see the domain, it doesn't mean that there
is no problem from that particular host.
If you checked everything and you are positive that the host can see and
connect to the domain, please restart vdsm to make sure there is no cache
issue.


Dafna
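A minimal sketch of the restart-and-recheck suggested above, run on the problematic host; the vdsClient queries are assumptions for vdsm of that era and are named differently on newer releases:

# Restart vdsm so it drops any cached view of the storage domains
service vdsmd restart       # systemctl restart vdsmd on systemd hosts

# Ask vdsm directly which domains it can see and whether the missing one is listed
vdsClient -s 0 getStorageDomainsList
vdsClient -s 0 getStorageDomainInfo b7663d70-e658-41fa-b9f0-8da83c9eddce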

On 07/16/2014 07:21 PM, Maurice James wrote:
What do I do when a host in a cluster can't find a storage domain that
it thinks doesn't exist? The storage domain is in the DB and is
online, because one of the other hosts is working just fine. I pulled
this out of vdsm.log. I even tried rebooting.



Thread-30::ERROR::2014-07-16 
14:19:10,522::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) 
Error while collecting domain b7663d70-e658-41fa-b9f0-8da83c9eddce 
monitoring information

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 204, in
_monitorDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
('b7663d70-e658-41fa-b9f0-8da83c9eddce',)




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Missing Storage domain

2014-07-17 Thread Dafna Ron

x10 :)
thanks for letting us know it was resolved

have a nice one.
Dafna

On 07/17/2014 02:20 PM, Maurice James wrote:

I ended up putting the problematic storage domain into maintenance mode. That
allowed the other hosts to come online. I then rebooted the storage domain
host. That seemed to clear up the problem.




- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com, users users@ovirt.org
Sent: Thursday, July 17, 2014 2:53:46 AM
Subject: Re: [ovirt-users] Missing Storage domain

even if the other hosts can see the domain, it doesn't mean that there
is no problem from that particular host.
if you checked everything and you are positive that the host can see and
connect to the domain please restart vdsm to see that there is no cache
issue.

Dafna

On 07/16/2014 07:21 PM, Maurice James wrote:

What do I do when a host in a cluster cant find a storage domain that
it thinks doesnt exist? The storage domain is in the db and is
online because one of the other hosts is working just fine. I pulled
this out of the vdsm.log. I even tried rebooting


Thread-30::ERROR::2014-07-16
14:19:10,522::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain)
Error while collecting domain b7663d70-e658-41fa-b9f0-8da83c9eddce
monitoring information
Traceback (most recent call last):
   File /usr/share/vdsm/storage/domainMonitor.py, line 204, in
_monitorDomain
 self.domain = sdCache.produce(self.sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 98, in produce
 domain.getRealDomain()
   File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
 return self._cache._realProduce(self._sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
 domain = self._findDomain(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
('b7663d70-e658-41fa-b9f0-8da83c9eddce',)



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] admin@internal login failure

2014-07-14 Thread Dafna Ron

the error you are looking for is in the vm log:

kvm: unhandled exit 8021
kvm_run returned -22

this seems to have been fixed in the past (if you do a google search you 
can find related bugs).

can you please make sure you are using the latest packages?

Thanks,
Dafna

On 07/11/2014 07:04 PM, Darcy Hodgson wrote:

Hey Everyone,

I am no longer able to log in with the internal admin account. I'm not
sure if this is related to me doing updates, or re-attaching the
engine to my ipa server.

For both web page login and the oVirt shell I get the following
messages in the engine.log file.

2014-07-11 11:01:16,685 WARN
[org.ovirt.engine.core.bll.LoginUserCommand] (ajp--127.0.0.1-8702-8)
CanDoAction of action LoginUser failed.
Reasons:USER_NOT_AUTHORIZED_TO_PERFORM_ACTION
2014-07-11 11:01:16,687 INFO
[org.ovirt.engine.api.restapi.security.auth.LoginValidator]
(ajp--127.0.0.1-8702-8) Login failure, user: admin domain: internal
reason: [USER_NOT_AUTHORIZED_TO_PERFORM_ACTION]

I tried to do a engine-cleanup/engine-setup but that didn't help. I
don't want to lose all the data as this environment has been running
for a while now.

I then tried to remove the ipa domain and log in with the internal
admin user without success. Followed by re-adding my domain and adding
permissions to my ipa admin user which also did not work.

If anyone knows how to correct this that would be very helpful.

Thanks,

-Darcy
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] admin@internal login failure

2014-07-14 Thread Dafna Ron

Sorry :) answered in the wrong thread...
However, there was a bug posted on that as well.
Adding Meital, since I think she might remember which bug it was.

On 07/14/2014 10:13 AM, Dafna Ron wrote:

the error you are looking for is in the vm log:

kvm: unhandled exit 8021
kvm_run returned -22

this seems to have been fixed in the past (if you do a google search 
you can find related bugs).

can you please make sure you are using the latest packages?

Thanks,
Dafna

On 07/11/2014 07:04 PM, Darcy Hodgson wrote:

Hey Everyone,

I am no longer able to log in with the internal admin account. I'm not
sure if this is related to me doing updates, or re-attaching the
engine to my ipa server.

For both webpage login and the ovirt shell I get the following
messaged in the enging.log file.

2014-07-11 11:01:16,685 WARN
[org.ovirt.engine.core.bll.LoginUserCommand] (ajp--127.0.0.1-8702-8)
CanDoAction of action LoginUser failed.
Reasons:USER_NOT_AUTHORIZED_TO_PERFORM_ACTION
2014-07-11 11:01:16,687 INFO
[org.ovirt.engine.api.restapi.security.auth.LoginValidator]
(ajp--127.0.0.1-8702-8) Login failure, user: admin domain: internal
reason: [USER_NOT_AUTHORIZED_TO_PERFORM_ACTION]

I tried to do a engine-cleanup/engine-setup but that didn't help. I
don't want to lose all the data as this environment has been running
for a while now.

I then tried to remove the ipa domain and log in with the internal
admin user without success. Followed by re-adding my domain and adding
permissions to my ipa admin user which also did not work.

If anyone knows how to correct this that would be very helpful.

Thanks,

-Darcy
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem Windows guests start in pause

2014-07-14 Thread Dafna Ron

kvm: unhandled exit 8021
kvm_run returned -22

this seems to have been fixed in the past (if you do a google search you 
can find related bugs).

can you please make sure you are using the latest packages?

Thanks,
Dafna



On 07/11/2014 05:45 PM, lucas castro wrote:

The logs collected from the engine, vdsm and the VM's qemu.

If I deactivate the disk and attach an install CD image,
the VM starts normally.
On Thu, Jul 10, 2014 at 5:42 AM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


can you please attach the vm's qemu logs and full vdsm + engine logs?


On 07/09/2014 06:50 PM, lucas castro wrote:

I have a Fedora host in the Default cluster.
I've created another cluster with a CentOS host to migrate the
environment,
and all the Linux guests work perfectly, but the Windows guests
start paused on the other cluster.

the vdsm log from the CentOS host.

http://pastebin.com/AzUB2Fqy

-- 
contatos:

Celular: ( 99 ) 9143-5954 tel:%28%2099%20%29%209143-5954 - Vivo
skype: lucasd3castro
msn: lucascastrobor...@hotmail.com
mailto:lucascastrobor...@hotmail.com
mailto:lucascastrobor...@hotmail.com
mailto:lucascastrobor...@hotmail.com


___
Users mailing list
Users@ovirt.org mailto:Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



-- 
Dafna Ron





--
contatos:
Celular: ( 99 ) 9143-5954 - Vivo
skype: lucasd3castro
msn: lucascastrobor...@hotmail.com mailto:lucascastrobor...@hotmail.com



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem Windows guests start in pause

2014-07-14 Thread Dafna Ron

Hi Lucas,
Please send mails to the list next time.
Can you please run rpm -qa | grep qemu?

also, can you try a different windows image?

Thanks,
Dafna




On 07/14/2014 02:03 PM, lucas castro wrote:

On the host where I've tried to run the VM, I use CentOS 6.5,
and I checked; there are no updates for qemu, libvirt or related packages.



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem Windows guests start in pause

2014-07-14 Thread Dafna Ron

adding some people.
what windows version are you using? do you have all windows drivers 
installed?


Thanks,
Dafna



On 07/14/2014 03:00 PM, lucas castro wrote:

I just replied to all, and because you sent it to me,
the reply went to you and the list.
rpm -qa | grep qemu
qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6_5.10.x86_64
qemu-img-0.12.1.2-2.415.el6_5.10.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem Windows guests start in pause

2014-07-10 Thread Dafna Ron

can you please attach the vm's qemu logs and full vdsm + engine logs?

On 07/09/2014 06:50 PM, lucas castro wrote:

I have a Fedora host in the Default cluster.
I've created another cluster with a CentOS host to migrate the environment,
and all the Linux guests work perfectly, but the Windows guests start
paused on the other cluster.


the vdsm log from the CentOS host.

http://pastebin.com/AzUB2Fqy

--
contatos:
Celular: ( 99 ) 9143-5954 - Vivo
skype: lucasd3castro
msn: lucascastrobor...@hotmail.com mailto:lucascastrobor...@hotmail.com


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ISO_DOMAIN issue

2014-06-27 Thread Dafna Ron

did you restart ovirt-engine service?



On 06/25/2014 05:54 AM, Koen Vanoppen wrote:

Ok, thanx.
Now when I try to remove the connection I get this:

[oVirt shell (connec...@vega.brusselsairport.aero 
mailto:connec...@vega.brusselsairport.aero)]# remove 
storageconnection 91fca941-d9f3-496c-908f-92d31bce6a64
  == ERROR 
===

  status: 404
  reason: Not Found
  detail: Entity not found: null




2014-06-24 15:53 GMT+02:00 Dafna Ron d...@redhat.com 
mailto:d...@redhat.com:


the destroy should clean the db only and any cleanup on the
storage/hosts side should be done manually by the user.
cleaning the iso domain from the vms would be a nice addition if
not done today - can you please open a bug on this?

Please check if your hosts have old mount to the iso side and
umount it.
restart of vdsm service on the hosts and engine service should
clean any leftovers after that.
if not, please file a bug since old connection should be clean
from the db.

Dafna



On 06/24/2014 01:53 PM, Sven Kieske wrote:

well as far as I know you should put any domain
first into maintenance, then detach from all DCs
and then remove it.

by force destroying you get what you now have:
old connections which are dead and log spam.

So I assume it would be safe to delete the connection
to this storage domain, but ymmv.

Am 24.06.2014 14 tel:24.06.2014%2014:45, schrieb Koen Vanoppen:

By destroying it in ovirt management interface...



-- 
Dafna Ron






--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ISO_DOMAIN issue

2014-06-24 Thread Dafna Ron
The destroy should clean the DB only, and any cleanup on the
storage/hosts side should be done manually by the user.
Cleaning the ISO domain from the VMs would be a nice addition if it is not
done today - can you please open a bug on this?


Please check if your hosts have an old mount to the ISO domain and umount it.
Restarting the vdsm service on the hosts and the engine service should clean any
leftovers after that.

If not, please file a bug, since the old connection should be cleaned from the DB.

Dafna
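A minimal sketch of the cleanup described above, assuming an NFS ISO domain; the mount path is illustrative:

# On each host: find and unmount any stale mount left over from the destroyed ISO domain
mount | grep -i iso
umount /rhev/data-center/mnt/<server>:_<iso_path>

# Then restart vdsm on the hosts and the engine service on the engine machine
service vdsmd restart
service ovirt-engine restart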


On 06/24/2014 01:53 PM, Sven Kieske wrote:

well as far as I know you should put any domain
first into maintenance, then detach from all DCs
and then remove it.

by force destroying you get what you now have:
old connections which are dead and log spam.

So I assume it would be safe to delete the connection
to this storage domain, but ymmv.

Am 24.06.2014 14:45, schrieb Koen Vanoppen:

By destroying it in ovirt management interface...



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM migration timeout

2014-06-20 Thread Dafna Ron
You might have a problem and the migration got stuck; increasing the
timeout will not solve anything.

please attach both src and dst vdsm, libvirt and vm qemu logs + engine log

Thanks,
Dafna
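For reference, the 128-second figure comes from vdsm's per-GiB migration time limit: for example, a guest with 2 GiB of RAM at the default of 64 seconds per GiB gives exactly 128 seconds. A minimal sketch of where that knob lives, with the caveat above that raising it only hides a stalled migration; the option name is assumed from vdsm defaults of that era:

# /etc/vdsm/vdsm.conf on the source host
[vars]
# seconds allowed per GiB of guest RAM before a migration is aborted (default 64)
migration_max_time_per_gib_mem = 64

# restart vdsm after changing it
service vdsmd restart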


On 06/20/2014 01:00 PM, Alexandr Krivulya wrote:

Hi!
How can I adjust migration timeout? I see this error in my vdsm.log when
I try to migrate one of my VM's:

The migration took 130 seconds which is exceeding the configured maximum
time for migrations of 128 seconds. The migration will be aborted.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM migration timeout

2014-06-20 Thread Dafna Ron
', 'function': '0x0'}, 'reqsize': '0', 'index': 0, 'iface': 
'virtio', 'apparentsize': '32212254720', 'specParams': {}, 'imageID': 
'35499e90-dc09-4caa-9537-afdde82223ca', 'readonly': 'False', 'shared': 
'false', 'truesize': '9280925696', 'type': 'disk', 'domainID': 
'83c4e59a-d810-4965-b7c0-ac2839b709f8', 'volumeInfo': {'domainID': 
'83c4e59a-d810-4965-b7c0-ac2839b709f8', 'volType': 'path', 
'leaseOffset': 0, 'path': 
'/rhev/data-center/mnt/glusterSD/127.0.0.1:VM__Storage/83c4e59a-d810-4965-b7c0-ac2839b709f8/images/35499e90-dc09-4caa-9537-afdde82223ca/e9abc7ad-57dc-4636-a78c-149893a121a2', 
'volumeID': 'e9abc7ad-57dc-4636-a78c-149893a121a2', 'leasePath': 
'/rhev/data-center/mnt/glusterSD/127.0.0.1:VM__Storage/83c4e59a-d810-4965-b7c0-ac2839b709f8/images/35499e90-dc09-4caa-9537-afdde82223ca/e9abc7ad-57dc-4636-a78c-149893a121a2.lease', 
'imageID': '35499e90-dc09-4caa-9537-afdde82223ca'}, 'format': 'raw', 
'deviceId': '35499e90-dc09-4caa-9537-afdde82223ca', 'poolID': 
'5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': '/rhev/


Dafna






On 06/20/2014 02:10 PM, Alexandr Krivulya wrote:

Attached, thank you

20.06.2014 15:15, Dafna Ron wrote:

you might have a problem and the migration got stuck - increasing
timeout will not solve anything
please attach both src and dst vdsm, libvirt and vm qemu logs + engine
log

Thanks,
Dafna


On 06/20/2014 01:00 PM, Alexandr Krivulya wrote:

Hi!
How can I adjust migration timeout? I see this error in my vdsm.log when
I try to migrate one of my VM's:

The migration took 130 seconds which is exceeding the configured maximum
time for migrations of 128 seconds. The migration will be aborted.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM migration timeout

2014-06-20 Thread Dafna Ron

great!
this is an old bug that should have been fixed so I think you are using 
older versions of vdsm/libvirt and qemu.



On 06/20/2014 02:45 PM, Alexandr Krivulya wrote:

Thanks, detaching CD solves this problem.

20.06.2014 16:34, Dafna Ron wrote:

the VM's qemu log shows an error on dst:

qemu: warning: error while loading state section id 3
load of migration failed

but I think that the issue is that the VM has a CD attached which no
longer exists or is not available to the VM.

Can you please try to detach any attached disk, activate the ISO
domain if it's down, and try migrating again.

here is the error from the vdsm log:

Thread-153::DEBUG::2014-06-20
14:49:06,162::task::974::TaskManager.Task::(_decref)
Task=`b37186f5-7959-495b-b9e3-816c2d3418ac`::ref 0 aborting False
libvirtEventLoop::DEBUG::2014-06-20
14:49:06,367::vm::4846::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`87c108fa-1ade-47a4-be66-f0416752eec4`::event Stopped detail 5
opaque None
libvirtEventLoop::INFO::2014-06-20
14:49:06,368::vm::2169::vm.Vm::(_onQemuDeath)
vmId=`87c108fa-1ade-47a4-be66-f0416752eec4`::underlying process
disconnected
libvirtEventLoop::INFO::2014-06-20
14:49:06,368::vm::4326::vm.Vm::(releaseVm)
vmId=`87c108fa-1ade-47a4-be66-f0416752eec4`::Release VM resources
Thread-65::DEBUG::2014-06-20
14:49:06,396::libvirtconnection::108::libvirtconnection::(wrapper)
Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not
found: no domain with matching uuid '87c108fa-1ade-47a4-be66-f0416752e
ec4'
libvirtEventLoop::WARNING::2014-06-20
14:49:06,394::clientIF::365::vds::(teardownVolumePath) Drive is not a
vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2
VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:bound met
hod Drive._checkIoTuneCategories of vm.Drive object at
0x7ff3cc072cd0 _customize:bound method Drive._customize of
vm.Drive object at 0x7ff3cc072cd0 _deviceXML:disk device=cdrom
type=file
   driver name=qemu type=raw/
   source startupPolicy=optional/
   target bus=ide dev=hdc/
   readonly/
   serial/
   alias name=ide0-1-0/
   address bus=1 controller=0 target=0 type=drive unit=0/
 /disk _makeName:bound method Drive._makeName of vm.Drive
object at 0x7ff3cc072cd0 _setExtSharedState:bound method
Drive._setExtSharedState of vm.Drive object at 0x7ff3cc072cd0
_validateIoTuneParams:bound method Drive._val
idateIoTuneParams of vm.Drive object at 0x7ff3cc072cd0
address:{'bus': '1', 'controller': '0', 'type': 'drive', 'target':
'0', 'unit': '0'} alias:ide0-1-0 apparentsize:0 blockDev:False
cache:none conf:{'guestFQDN': '', 'acpiEnable': 'true',
'emulatedMachine': 'rhel6.4.0', 'afterMigrationStatus': '',
'tabletEnable': 'true', 'pid': '0', 'memGuaranteedSize': 1024,
'spiceSslCipherSuite': 'DEFAULT', 'displaySecurePort': '-1',
'timeOffset': '10801', 'cpuType': 'Penryn', 'custom': {}, 'pauseCode':
'NOERR', 'migrationDest': 'libvirt', 'smp': '2', 'vmType': 'kvm',
'memSize': 2048, 'smpCoresPerSocket': '1', 'vmName': 'z-store.lis.ua',
'nice': '0', 'username': 'Unknown', 'clientIp': '', 'vmId':
'87c108fa-1ade-47a4-be66-f0416752eec4', 'displayIp': '0',
'displayPort': '-1', 'smartcardEnable': 'false',
'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard',
'nicModel': 'rtl8139,pv', 'keyboardLayout': 'en-us', 'kvmEnable':
'true', 'transparentHugePages': 'true', 'devices': [{'device': 'unix',
'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0',
'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device':
'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus':
'0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}},
{'device': 'usb', 'alias': 'usb0', 'type': 'controller', 'address':
{'slot': '0x01', 'bus': '0x00', 'domain': '0x', 'type': 'pci',
'function': '0x2'}}, {'device': 'ide', 'alias': 'ide0', 'type':
'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain':
'0x', 'type': 'pci', 'function': '0x1'}}, {'device':
'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller',
'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x', 'type':
'pci', 'function': '0x0'}}, {'specParams': {'vram': '32768', 'heads':
'1'}, 'alias': 'video0', 'deviceId':
'568266ff-9e6c-4ac2-9dff-4ac298db00ca', 'address': {'slot': '0x02',
'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'},
'device': 'cirrus', 'type': 'video'}, {'nicModel': 'pv', 'macAddr':
'00:1a:4a:51:89:a6', 'linkActive': True, 'network': 'ovirtmgmt',
'specParams': {}, 'filter': 'vdsm-no-mac-spoofing', 'alias': 'net0',
'deviceId': '395be56a-2e10-405d-baed-dec5c5186a83', 'address':
{'slot': '0x03', 'bus': '0x00', 'domain': '0x', 'type': 'pci',
'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name':
'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias':
'ide0-1-0', 'specParams': {'path': ''}, 'readonly': 'True',
'deviceId': 'a65fa707-1cc3-4960-b6e3-6aa7ca124e48', 'address': {'bus':
'1', 'controller': '0

Re: [ovirt-users] ISO_DOMAIN can't be attached

2014-06-11 Thread Dafna Ron
:48:24,515 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] 
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Failed in 
AttachStorageDomainVDS method
2014-06-11 15:48:24,547 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(org.ovirt.thread.pool-6-thread-28) [7b977e41] 
IrsBroker::Failed::AttachStorageDomainVDS due to: IRSErrorException: 
IRSGenericException: IRSErrorException: Failed to 
AttachStorageDomainVDS, error = Storage domain does not exist: 
('0f6485ab-0301-4989-a59a-56efcd447ba0',), code = 358
2014-06-11 15:48:24,555 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] 
(org.ovirt.thread.pool-6-thread-28) [7b977e41] FINISH, 
AttachStorageDomainVDSCommand, log id: a2cca1b
2014-06-11 15:48:24,556 ERROR 
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Command 
org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand 
throw Vdc Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException: 
IRSGenericException: IRSErrorException: Failed to 
AttachStorageDomainVDS, error = Storage domain does not exist: 
('0f6485ab-0301-4989-a59a-56efcd447ba0',), code = 358 (Failed with 
error StorageDomainDoesNotExist and code 358)
2014-06-11 15:48:24,559 INFO 
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Command 
[id=bff69195-7d00-4e18-bf5b-705be8d7210f]: Compensating NEW_ENTITY_ID 
of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; 
snapshot: storagePoolId = 0002-0002-0002-0002-011b, 
storageId = 0f6485ab-0301-4989-a59a-56efcd447ba0.
2014-06-11 15:48:24,580 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Correlation ID: 
7b977e41, Job ID: fafb4906-a9dd-41fb-b17f-833d831e953e, Call Stack: 
null, Custom Event ID: -1, Message: Failed to attach Storage Domain 
ISO_DOMAIN to Data Center Default. (User: admin)


Regards
Peter Haraldson


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] IO errors when adding new disk on iSCSI storage

2014-05-16 Thread Dafna Ron

Adding Federico, since I think he can probably add more info here.

When we use thin provisioning we have to extend the disk during
writing (it's set to do that every 2GB, I think).

During the extend the VM pauses and resumes.

However, this action should not be noticeable to the VM user.

Does the VM pause and resume, or does it pause completely? Is it noticeable to
the VM user?


Thanks,
Dafna
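The watermark behaviour described above is driven by vdsm's volume extension settings; a minimal sketch of where they live, with names assumed from the VOLWM_* values that show up in vdsm logs of that era:

# /etc/vdsm/vdsm.conf
[irs]
# extend when free space in the thin LV drops below this percentage
volume_utilization_percent = 50
# grow the LV by this many MB on each extension
volume_utilization_chunk_mb = 1024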


On 05/16/2014 01:40 PM, John Taylor wrote:

Hi Morten,
My understanding of thin disks on a block domain is that vdsm traps
ENOSPC on the thin lv and uses the mailbox to get the SPM to extend
it.   See a presentation by Nir
http://www.ovirt.org/File:Storage-mailbox.odp
I thought I saw somewhere there were some changes/bugs around that for
allowing mixed data centers (both block and file domains). ...just
looked now and this bz looks relevant
https://bugzilla.redhat.com/show_bug.cgi?id=1083476


So maybe you could check the engine logs and spm logs for that flow
(sorry I can't tell you any specifics about what you should look for )

-John

On Fri, May 16, 2014 at 1:54 AM, Morten A. Middelthon mor...@flipp.net wrote:

Hi,

I just re-ran the test with adding a preallocated disk, and the problem did
_not_ appear. I tried a few times to write large files with dd, but the vm
continued to run without problems

with regards,


--
Morten A. Middelthon
Email: mor...@flipp.net
Phone: +47 907 83 708
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Re: [ovirt-users] change disk size using a thin provision based template

2014-05-09 Thread Dafna Ron

I would assume that this is to avoid data corruption.
However it sounds like a good feature request (allow disk resize when 
creating a new vm from a template).

Can you please open it?

https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

Thanks!

Dafna


Hello,

I am making some tests, and this time I want to reduce the disk size
before creating a new VM (in the New VM window).


test:
1) click on New VM
2) select a template (centos6.5, 6GB, 500GB HD, 2 cores)
3) resource allocation: select preallocated disk
4) change the disk size: it is not possible!


Why can I not change the disk size?

I know it is based on a template that was already defined (500GB), but
this template in fact has only 4GB of actual size with the thin
provisioning allocation policy.


thanks

tamer


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration failing

2014-04-29 Thread Dafna Ron
Actually, the best way to debug this would be to look at both the destination
and source vdsm logs.

Is this happening on all VMs or just one of them?
Was this VM launched from an ISO? Is that ISO still available?
Are there any snapshots?
What are the vdsm, libvirt and qemu versions?

Thanks.

Dafna
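A minimal sketch of where to collect the logs requested in this thread, assuming default paths on both the source and destination hosts:

# vdsm log
/var/log/vdsm/vdsm.log

# libvirt daemon log (present if file logging is enabled in libvirtd.conf)
/var/log/libvirt/libvirtd.log

# per-VM qemu log, named after the VM
/var/log/libvirt/qemu/<vm-name>.log

# package versions asked about above
rpm -qa | grep -E "vdsm|libvirt|qemu-kvm"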


On 04/29/2014 02:24 PM, Steve Dainard wrote:

Thanks, logs attached:

libvirtd.log.4/central-syslog.log covers the first event (17:12 
timestamp)

libvirtd.log.3/owncloud.log covers the second event (01:22 timestamp)


Steve



On Tue, Apr 29, 2014 at 4:48 AM, Francesco Romani from...@redhat.com 
mailto:from...@redhat.com wrote:


- Original Message -
 From: Steve Dainard sdain...@miovision.com
mailto:sdain...@miovision.com
 To: users users@ovirt.org mailto:users@ovirt.org
 Sent: Tuesday, April 29, 2014 4:32:08 AM
 Subject: Re: [ovirt-users] Live migration failing

 Another error on migration.

Hi, in both cases the core issue is

libvirtError: Unable to read from monitor: Connection reset by peer

can you share the libvirtd and qemu logs?

Hopefully we can find some more information on those logs.

Bests,

--
Francesco Romani
Red Hat Engineering Virtualization R&D
Phone: 8261328
IRC: fromani




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration failing

2014-04-29 Thread Dafna Ron
-syslog, 
Source: ovirt002, Destination: ovirt001).
2014-Apr-28, 13:12 Migration started (VM: central-syslog, Source: 
ovirt002, Destination: ovirt001, User: admin).



Thanks,
Steve



On Tue, Apr 29, 2014 at 9:51 AM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


Actually, the best way to debug this would be to look at both des
and src vdsm logs.
is this happening on all vm's or just one of them?
was this vm launched from an iso? is that iso still available?
are there any snapshots?
what are the vdsm, libvirt and qemu versions?

Thanks.

Dafna



On 04/29/2014 02:24 PM, Steve Dainard wrote:

Thanks, logs attached:

libvirtd.log.4/central-syslog.log covers the first event
(17:12 timestamp)
libvirtd.log.3/owncloud.log covers the second event (01:22
timestamp)


Steve



On Tue, Apr 29, 2014 at 4:48 AM, Francesco Romani
from...@redhat.com mailto:from...@redhat.com
mailto:from...@redhat.com mailto:from...@redhat.com wrote:

- Original Message -
 From: Steve Dainard sdain...@miovision.com
mailto:sdain...@miovision.com
mailto:sdain...@miovision.com
mailto:sdain...@miovision.com
 To: users users@ovirt.org mailto:users@ovirt.org
mailto:users@ovirt.org mailto:users@ovirt.org
 Sent: Tuesday, April 29, 2014 4:32:08 AM
 Subject: Re: [ovirt-users] Live migration failing

 Another error on migration.

Hi, in both cases the core issue is

ibvirtError: Unable to read from monitor: Connection reset
by peer

can you share the libvirtd and qemu logs?

Hopefully we can find some more information on those logs.

Bests,

--
Francesco Romani
Red Hat Engineering Virtualization R&D
Phone: 8261328
IRC: fromani




___
Users mailing list
Users@ovirt.org mailto:Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



-- 
Dafna Ron






--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-23 Thread Dafna Ron
-d70e3969e5ea
-rw-rw. 1 vdsm kvm 1048576 Apr 11 22:00 
5210eec2-a0eb-462e-95d5-7cf27db312f5.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 11 22:00 
dcce0903-0f24-434b-9d1c-d70e3969e5ea.meta
-rw-rw. 1 vdsm kvm 1048576 Apr 11 12:34 
dcce0903-0f24-434b-9d1c-d70e3969e5ea.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 11 12:34 
d3a1c505-8f6a-4c2b-97b7-764cd5baea47.meta
-rw-rw. 1 vdsm kvm   20824 Apr 11 12:33 
d3a1c505-8f6a-4c2b-97b7-764cd5baea47
-rw-rw. 1 vdsm kvm14614528 Apr 10 16:12 
638c2164-2edc-4294-ac99-c51963140940
-rw-rw. 1 vdsm kvm 1048576 Apr 10 16:12 
d3a1c505-8f6a-4c2b-97b7-764cd5baea47.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 16:12 
638c2164-2edc-4294-ac99-c51963140940.meta
-rw-rw. 1 vdsm kvm12779520 Apr 10 16:06 
f8f1f164-c0d9-4716-9ab3-9131179a79bd
-rw-rw. 1 vdsm kvm 1048576 Apr 10 16:05 
638c2164-2edc-4294-ac99-c51963140940.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 16:05 
f8f1f164-c0d9-4716-9ab3-9131179a79bd.meta
-rw-rw. 1 vdsm kvm92995584 Apr 10 16:00 
f9b14795-a26c-4edb-ae34-22361531a0a1
-rw-rw. 1 vdsm kvm 1048576 Apr 10 16:00 
f8f1f164-c0d9-4716-9ab3-9131179a79bd.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 16:00 
f9b14795-a26c-4edb-ae34-22361531a0a1.meta
-rw-rw. 1 vdsm kvm30015488 Apr 10 14:57 
39cbf947-f084-4e75-8d6b-b3e5c32b82d6
-rw-rw. 1 vdsm kvm 1048576 Apr 10 14:57 
f9b14795-a26c-4edb-ae34-22361531a0a1.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 14:57 
39cbf947-f084-4e75-8d6b-b3e5c32b82d6.meta
-rw-rw. 1 vdsm kvm19267584 Apr 10 14:34 
3ece1489-9bff-4223-ab97-e45135106222
-rw-rw. 1 vdsm kvm 1048576 Apr 10 14:34 
39cbf947-f084-4e75-8d6b-b3e5c32b82d6.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 14:34 
3ece1489-9bff-4223-ab97-e45135106222.meta
-rw-rw. 1 vdsm kvm22413312 Apr 10 14:29 
dcee2e8a-8803-44e2-80e8-82c882af83ef
-rw-rw. 1 vdsm kvm 1048576 Apr 10 14:28 
3ece1489-9bff-4223-ab97-e45135106222.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 14:28 
dcee2e8a-8803-44e2-80e8-82c882af83ef.meta
-rw-rw. 1 vdsm kvm54460416 Apr 10 14:26 
57066786-613a-46ff-b2f9-06d84678975b
-rw-rw. 1 vdsm kvm 1048576 Apr 10 14:26 
dcee2e8a-8803-44e2-80e8-82c882af83ef.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 14:26 
57066786-613a-46ff-b2f9-06d84678975b.meta
-rw-rw. 1 vdsm kvm15728640 Apr 10 13:31 
121ae509-d2b2-4df2-a56f-dfdba4b8d21c
-rw-rw. 1 vdsm kvm 1048576 Apr 10 13:30 
57066786-613a-46ff-b2f9-06d84678975b.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 13:30 
121ae509-d2b2-4df2-a56f-dfdba4b8d21c.meta
-rw-rw. 1 vdsm kvm 5767168 Apr 10 13:18 
1d95a9d2-e4ba-4bcc-ba71-5d493a838dcc
-rw-rw. 1 vdsm kvm 1048576 Apr 10 13:17 
121ae509-d2b2-4df2-a56f-dfdba4b8d21c.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 13:17 
1d95a9d2-e4ba-4bcc-ba71-5d493a838dcc.meta
-rw-rw. 1 vdsm kvm 5373952 Apr 10 13:13 
3ce8936a-38f5-43a9-a4e0-820094fbeb04
-rw-rw. 1 vdsm kvm 1048576 Apr 10 13:13 
1d95a9d2-e4ba-4bcc-ba71-5d493a838dcc.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 13:12 
3ce8936a-38f5-43a9-a4e0-820094fbeb04.meta
-rw-rw. 1 vdsm kvm  3815243776 Apr 10 13:11 
7211d323-c398-4c1c-8524-a1047f9d5ec9
-rw-rw. 1 vdsm kvm 1048576 Apr 10 13:11 
3ce8936a-38f5-43a9-a4e0-820094fbeb04.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 13:11 
7211d323-c398-4c1c-8524-a1047f9d5ec9.meta
-rw-r--r--. 1 vdsm kvm 272 Mar 19 10:35 
af94adc4-fad4-42f5-a004-689670311d66.meta
-rw-rw. 1 vdsm kvm 21474836480 Mar 19 10:22 
af94adc4-fad4-42f5-a004-689670311d66
-rw-rw. 1 vdsm kvm 1048576 Mar 19 09:39 
7211d323-c398-4c1c-8524-a1047f9d5ec9.lease
-rw-rw. 1 vdsm kvm 1048576 Mar 19 09:39 
af94adc4-fad4-42f5-a004-689670311d66.lease


It's just very odd that I can snapshot any other VM except this one.

I just cloned a new VM from the last snapshot on this VM and it 
created without issue. I was also able to snapshot the new VM without 
a problem.


*Steve
*


On Tue, Apr 22, 2014 at 12:51 PM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


it's the same error:

c1d7c4e-392b-4a62-9836-3add1360a46d::DEBUG::2014-04-22
12:13:44,340::volume::1058::Storage.Misc.excCmd::(createVolume)
FAILED: err =

'/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/4
66d9ae9-e46a-46f8-9f4b-964d8af0675b/0b2d15e5-bf4f-4eaf-90e2-f1bd51a3a936:
error while creating qcow2: No such file or directory\n'; rc = 1


were these 23 snapshots created anyway each time the snapshot creation
failed, or are these older snapshots which you actually created before
the failure?

at this point my main theory is that somewhere along the line you had
some sort of failure in your storage, and from that time on each
snapshot you create will fail.
if the snapshots were created during the failure, can you please
delete the snapshots you do not need and try again
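A minimal sketch for inspecting the image chain behind the failing snapshot on a file-based domain; the path components are placeholders, and --backing-chain needs qemu-img 1.5 or newer:

# Walk the volumes of the affected image; a broken link in the chain shows up
# as a backing file that no longer exists in the image directory
cd /rhev/data-center/<pool-uuid>/<domain-uuid>/images/<image-uuid>/
for vol in $(ls | grep -vE '\.(meta|lease)$'); do qemu-img info "$vol"; done

# On qemu-img 1.5+ the whole chain can be printed in one go:
# qemu-img info --backing-chain <top-volume-uuid>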

Re: [ovirt-users] difference between thin/depentend and clone/dependent vm virtual machine

2014-04-23 Thread Dafna Ron

what do you mean that host1 is engine + vdsm, are you using hosted engine?



On 04/23/2014 01:59 PM, Tamer Lima wrote:


hello,
thanks for reply

my storage is NFS v3, defined on host 01. My DATA-DOMAIN and
ISO-DOMAIN are hosted on host 01;
my SPM is located on host 03, I don't remember why. I tried to migrate the
SPM to host 01 but it is not possible. All creation of virtual machines
starts on server 01 (







On Wed, Apr 23, 2014 at 5:19 AM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


I think that you are mixing up a lot of different things and to be
honest I am not sure what configuration you have and what exactly
you are trying to do.
so lets try to simplify it?
what type of storage are you working on?
which host is the spm?



On 04/22/2014 07:36 PM, Tamer Lima wrote:

hello,

I am in trouble

I have 3 servers dedicated to test OVIRT:
01- engine + vdsm (8 cpus, 32GB ram , 2TB HD)
02 - vdsm (8 cpus, 32GB ram , 2TB HD)
03 - vdsm (8 cpus, 32GB ram , 2TB HD)

I want to create cloned virtual machines but in my
configuration I can only save virtual machines on server 01;
my configuration refers a DATA DOMAIN on server 01

All my virtual machines are : 2 cpu , 6 GB ram , 500gb HD and
were created like CLONE

My server 01 is the data domain and all new virtual machine is
created, via NFS, on server 01 , who has 2TB maximum capacity
( the same size of partition /sda3 = 2TB)

how can I save each virtual machine on a desired vdsm server ?

What I want is :
server 01 - engine + vdsm : 03 virtual machines running and
hosted physically on this host
server 02 - vdsm : 04 virtual machines running and hosted
physically on this host
server 03 - vdsm : 04 virtual machines running and hosted
physically on this host

but I have this :
server 01 - engine + vdsm : 03 virtual machines running and
hosted physically on this host
server 02 - vdsm : 01 virtual machine running on this server
BUT hosted physically on server 01
server 03 - vdsm : none, because my DATA DOMAIN IS FULL (2TB)

How to solve this problem ?
Is it possible to create one DATA DOMAIN for each VDSM host? I
think this is the solution, but I do not know how to point VMs
to be saved on a specific data domain.

thanks




On Fri, Apr 18, 2014 at 4:48 AM, Michal Skrivanek
michal.skriva...@redhat.com
mailto:michal.skriva...@redhat.com
mailto:michal.skriva...@redhat.com
mailto:michal.skriva...@redhat.com wrote:


On Apr 17, 2014, at 16:43 , Tamer Lima
tamer.amer...@gmail.com mailto:tamer.amer...@gmail.com
mailto:tamer.amer...@gmail.com
mailto:tamer.amer...@gmail.com wrote:

 hi, thanks for reply

 I am investigating what is and how thin virtualization
works

 Do you know if HADOOP is indicated to work under thin
environment ?
 On Hadoop I will put large workloads and this thin
virtualization utilizes more resources than exists (shareable
environment)
 that is,
 if I have a real physical necessity of 500gb for each hadoop
host and my Thin Virtualization has 2TB on NFS, I can have
only 4
virtual machines (500GB each), or less.

 For this case I believe clone virtual machine is the right
choice. But in my environment it takes 1h30m to build one
cloned
virtual machine.

if you plan to overcommit then go with thin. The drawback
is that
if you hit the physical limit the VMs will of course run
out of
space...
if you plan to allocate 500GB each, consume all of it, and
never plan
to grow, then go with the clone… yes, it's going to take
time to
write all that stuff. With thin you need to do the same
amount
of writes, but gradually, over time, as you allocate it.
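(To make the difference concrete, a rough illustration at the plain-file
level - this is only a sketch of the allocation behaviour with example
file names, not oVirt's actual volume layout or commands:)

  # thin: a sparse qcow2 that takes almost no space until the guest writes to it
  qemu-img create -f qcow2 thin-disk.qcow2 500G
  # clone/preallocated: all of the space is written up front, so creation is slow
  dd if=/dev/zero of=preallocated-disk.raw bs=1M count=512000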

hope it helps

Thanks,
michal




 Am I correct ?





 On Thu, Apr 17, 2014 at 7:33 AM, Michal Skrivanek
michal.skriva...@redhat.com wrote:

 On Apr 16, 2014, at 16:41 , Tamer Lima
tamer.amer...@gmail.com wrote

Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-23 Thread Dafna Ron
Queries run on the system all the time, but the only failure I 
encountered that was caused by them was with live snapshots on a 
loaded setup, where a vm running on the HSM would fail live snapshots 
because the system was loaded and the queries took a long time to come back.
However, since the error you have happens when the vm is down and only 
on that specific vm, I think it's most likely related to a failure 
somewhere in the chain.


Before committing or deleting any of the snapshots, is it possible for 
you to export the vm as is (with the snapshots) to an export domain? 
That way we know it's backed up before doing anything on the chain (and 
actually, this would be a much better way of backing up a vm than 
snapshots).


I don't really know what the problem in the chain is or when it 
happened, which is why I want to be cautious when continuing; deleting 
a snapshot would be better than committing it.


I can also suggest creating a new vm from any snapshot you think would 
be an important point in time for you - that way, even if there is a 
problem with the image, you have a new vm with this image.


So, to recap, let's try this:
1. restart vdsm and try to create a snapshot again (a minimal command 
sketch follows below)
2. export the vm to an export domain without collapsing the snapshots
3. delete or commit the snapshots - if any fail please attach the logs. 
Also, after each delete/commit you can try to create a 
new snapshot to see if the issue is solved
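(For step 1, a minimal sketch on an EL6 host - restarting VDSM should not
touch running VMs, but do one host at a time and wait for the host to come
back up in the webadmin:)

  service vdsmd restart
  # then retry the snapshot and watch the vdsm log for the failure
  tail -f /var/log/vdsm/vdsm.log | grep -i -e createvolume -e error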






On 04/23/2014 05:08 PM, Steve Dainard wrote:



*Steve *

On Wed, Apr 23, 2014 at 5:14 AM, Dafna Ron d...@redhat.com wrote:


Steve,
I did not say that there is a limit. There is no limit and you can
take 1000 snapshots if you like; I simply said that I think it
would not be good practice to do so.

I'm not trying to be adversarial here, but this is contradictory; if 
there's 'no limit' but 'it's not good practice', and we assume that we 
want our virtual infrastructure to run smoothly, then effectively 
there is a limit - we just don't know what it is.


I also did not say that this is your current problem with the vm
so you are jumping to conclusions here.


I wasn't connecting the dots between the number of snapshots and the current 
issue; I have other VMs with the same number of snapshots without 
this problem. No conclusion jumping going on. I'm more interested in what 
the best practice is for VMs that accumulate snapshots over time.


There is a feature slated for 3.5 
http://www.ovirt.org/Features/Live_Merge which merges snapshots on a 
running VM, so I suppose in the long run I won't have a high snapshot 
count.


I simply explained how snapshots work, which is that they are
created in a chain; if there is a problem at a single point in
time, it would affect the rest of the snapshots below it.


Just for clarity, such a problem would affect the snapshots 'below it', 
meaning after the problematic snapshot? Example: Snapshots 1,2,3,4,5. #4 
has a consistency issue, so snaps 1,2,3 should be ok? I can try 
incrementally rolling back snapshots if this is the case (after the 
suggested vdsm restart).


Is there any way to do a consistency check? I can imagine scheduling a 
cronjob to run through a nightly check for consistency issues, then 
roll back to an earlier snapshot to circumvent the issue.
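(For what it's worth, a rough offline check can be done with qemu-img
directly against the file domain - a sketch only, assuming the VM is down,
the usual /rhev mount layout, and that the .meta/.lease files sitting next
to the volumes are skipped; the path components are placeholders:)

  cd /rhev/data-center/<pool-uuid>/<domain-uuid>/images/<image-group-uuid>
  for vol in $(ls | grep -v -e '\.meta$' -e '\.lease$'); do
      echo "== $vol =="
      qemu-img info "$vol"    # format and backing file of each volume
      qemu-img check "$vol"   # qcow2 consistency check; raw volumes just
                              # report that checks are unsupported
  done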


And that we query all images under the base image, so if you have a
lot of them it takes a long time for the results to come back.


That's good to know. Is this query done on new snapshot creation only? 
So over time, the more snapshots I have, the longer new snapshots will 
take to complete?



As for your vm, since you fail to create a snapshot on only that
vm, it means that there is a problem in the current vm and its chain.

I can see when comparing the uuids that the pool, domain, base
image and last snapshots all exist in the rhev link.

2014-04-22 12:13:41,083 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(pool-6-thread-49) [7ccaed5] -- createVolume parameters:
sdUUID=95b9d922-4df7-4d3b-9bca-467e2fd9d573
spUUID=9497ef2c-8368-4c92-8d61-7f318a90748f
imgGUID=466d9ae9-e46a-46f8-9f4b-964d8af0675b
size=21,474,836,480 bytes
volFormat=COW
volType=Sparse
volUUID=0b2d15e5-bf4f-4eaf-90e2-f1bd51a3a936
descr=
srcImgGUID=466d9ae9-e46a-46f8-9f4b-964d8af0675b
srcVolUUID=1a67de4b-aa1c-4436-baca-ca55726d54d7
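(One quick sanity check on the SPM, using the identifiers above - srcVolUUID
is the parent volume the new qcow2 will point to, so that file should exist
under the image directory:)

  ls -l /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7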



Let's see if it's possibly a cache issue - can you please restart
vdsm on the hosts?


I'll update when I have a chance to restart the services.

Thanks








On 04/22/2014 08:22 PM, Steve Dainard wrote:

All snapshots are from before failure.

That's a bit scary that there may be a 'too many snapshots'
issue

Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-22 Thread Dafna Ron

This is the actual problem:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::DEBUG::2014-04-22 
10:21:49,374::volume::1058::Storage.Misc.excCmd::(createVolume) FAILED: 
err = 
'/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/4
66d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb: error 
while creating qcow2: No such file or directory\n'; rc = 1


from that you see the actual failure:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22 
10:21:49,392::volume::286::Storage.Volume::(clone) Volume.clone: can't 
clone: 
/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d
9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7 to 
/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee1

4a0dc6fb
bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22 
10:21:49,392::volume::508::Storage.Volume::(create) Unexpected error

Traceback (most recent call last):
  File /usr/share/vdsm/storage/volume.py, line 466, in create
srcVolUUID, imgPath, volPath)
  File /usr/share/vdsm/storage/fileVolume.py, line 160, in _create
volParent.clone(imgPath, volUUID, volFormat, preallocate)
  File /usr/share/vdsm/storage/volume.py, line 287, in clone
raise se.CannotCloneVolume(self.volumePath, dst_path, str(e))
CannotCloneVolume: Cannot clone volume: 
'src=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7, 
dst=/rhev/data-cen
ter/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb: 
Error creating a new volume: ([Formatting 
\'/rhev/data-center/9497ef2c-8368-
4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb\', 
fmt=qcow2 size=21474836480 
backing_file=\'../466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa
1c-4436-baca-ca55726d54d7\' backing_fmt=\'qcow2\' encryption=off 
cluster_size=65536 ],)'
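(For reference, the 'Formatting ...' text above is the qemu-img call vdsm
issues; paraphrased as a standalone command it is roughly the following -
exact flags differ between qemu versions, so treat this as a sketch:)

  cd /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b
  qemu-img create -f qcow2 \
    -o backing_file=../466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7,backing_fmt=qcow2,cluster_size=65536 \
    87efa937-b31f-4bb1-aee1-0ee14a0dc6fb 21474836480
  # 'No such file or directory' at this step usually means the target directory
  # or the relative backing file (the parent snapshot volume) cannot be resolved.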



do you have any alert in the webadmin to restart the vm?

Dafna

On 04/22/2014 03:31 PM, Steve Dainard wrote:

Sorry for the confusion.

I attempted to take a live snapshot of a running VM. After that 
failed, I migrated the VM to another host, and attempted the live 
snapshot again without success, eliminating a single host as the cause 
of failure.


Ovirt is 3.3.4, storage domain is gluster 3.4.2.1, OS is CentOS 6.5.

Package versions:
libvirt-0.10.2-29.el6_5.5.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.5.x86_64
qemu-img-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.415.el6.nux.3.x86_64
vdsm-4.13.3-4.el6.x86_64
vdsm-gluster-4.13.3-4.el6.noarch


I made another live snapshot attempt at 10:21 EST today, full vdsm.log 
attached, and a truncated engine.log.


Thanks,

*Steve
*


On Tue, Apr 22, 2014 at 9:48 AM, Dafna Ron d...@redhat.com wrote:


Please explain the flow of what you are trying to do: are you
trying to live migrate the disk (from one storage to another), are
you trying to migrate the vm and, after the vm migration is finished,
take a live snapshot of the vm? Or are you trying to
take a live snapshot of the vm during a vm migration from host1 to
host2?

Please attach full vdsm logs from any host you are using (if you
are trying to migrate the vm from host1 to host2) + please attach
the engine log.

Also, what are the vdsm, libvirt and qemu versions, what ovirt
version are you using, and what storage are you using?

Thanks,

Dafna




On 04/22/2014 02:12 PM, Steve Dainard wrote:

I've attempted migrating the vm to another host and taking a
snapshot, but I get this error:

6efd33f4-984c-4513-b5e6-fffdca2e983b::ERROR::2014-04-22
01:09:37,296::volume::286::Storage.Volume::(clone)
Volume.clone: can't clone:

/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7
to

/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/b230596f-97bc-4532-ba57-5654fa9c6c51

A bit more of the vdsm log is attached.

Other vm's are snapshotting without issue.



Any help appreciated,

*Steve
*


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



-- 
Dafna Ron






--
Dafna Ron

Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-22 Thread Dafna Ron

are you able to take an offline snapshot? (while the vm is down)
how many snapshots do you have on this vm?

On 04/22/2014 04:19 PM, Steve Dainard wrote:
No alert in web ui, I restarted the VM yesterday just in case, no 
change. I also restored an earlier snapshot and tried to re-snapshot, 
same result.


*Steve
*


On Tue, Apr 22, 2014 at 10:57 AM, Dafna Ron d...@redhat.com wrote:


This is the actual problem:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::DEBUG::2014-04-22
10:21:49,374::volume::1058::Storage.Misc.excCmd::(createVolume)
FAILED: err =

'/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/4
66d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb:
error while creating qcow2: No such file or directory\n'; rc = 1

from that you see the actual failure:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22
10:21:49,392::volume::286::Storage.Volume::(clone) Volume.clone:
can't clone:

/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d
9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7
to

/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee1
4a0dc6fb
bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22
10:21:49,392::volume::508::Storage.Volume::(create) Unexpected error
Traceback (most recent call last):
  File /usr/share/vdsm/storage/volume.py, line 466, in create
srcVolUUID, imgPath, volPath)
  File /usr/share/vdsm/storage/fileVolume.py, line 160, in _create
volParent.clone(imgPath, volUUID, volFormat, preallocate)
  File /usr/share/vdsm/storage/volume.py, line 287, in clone
raise se.CannotCloneVolume(self.volumePath, dst_path, str(e))
CannotCloneVolume: Cannot clone volume:

'src=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7,
dst=/rhev/data-cen

ter/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb:
Error creating a new volume: ([Formatting
\'/rhev/data-center/9497ef2c-8368-

4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb\',
fmt=qcow2 size=21474836480
backing_file=\'../466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa
1c-4436-baca-ca55726d54d7\' backing_fmt=\'qcow2\' encryption=off
cluster_size=65536 ],)'


do you have any alert in the webadmin to restart the vm?

Dafna


On 04/22/2014 03:31 PM, Steve Dainard wrote:

Sorry for the confusion.

I attempted to take a live snapshot of a running VM. After
that failed, I migrated the VM to another host, and attempted
the live snapshot again without success, eliminating a single
host as the cause of failure.

Ovirt is 3.3.4, storage domain is gluster 3.4.2.1, OS is
CentOS 6.5.

Package versions:
libvirt-0.10.2-29.el6_5.5.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.5.x86_64
qemu-img-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.415.el6.nux.3.x86_64
vdsm-4.13.3-4.el6.x86_64
vdsm-gluster-4.13.3-4.el6.noarch


I made another live snapshot attempt at 10:21 EST today, full
vdsm.log attached, and a truncated engine.log.

Thanks,

*Steve
*



On Tue, Apr 22, 2014 at 9:48 AM, Dafna Ron d...@redhat.com wrote:

please explain the flow of what you are trying to do, are you
trying to live migrate the disk (from one storage to
another), are
you trying to migrate the vm and after vm migration is
finished
you try to take a live snapshot of the vm? or are you
trying to
take a live snapshot of the vm during a vm migration from
host1 to
host2?

Please attach full vdsm logs from any host you are using
(if you
are trying to migrate the vm from host1 to host2) + please
attach
engine log.

Also, what is the vdsm, libvirt and qemu versions, what ovirt
version are you using and what is the storage you are using?

Thanks,

Dafna




On 04/22/2014 02:12 PM, Steve Dainard wrote:

I've attempted migrating the vm to another host and
taking a
snapshot, but I get

Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-22 Thread Dafna Ron

it's the same error:

c1d7c4e-392b-4a62-9836-3add1360a46d::DEBUG::2014-04-22 
12:13:44,340::volume::1058::Storage.Misc.excCmd::(createVolume) FAILED: 
err = 
'/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/4
66d9ae9-e46a-46f8-9f4b-964d8af0675b/0b2d15e5-bf4f-4eaf-90e2-f1bd51a3a936: error 
while creating qcow2: No such file or directory\n'; rc = 1



Were these 23 snapshots created each time we failed to create a 
snapshot, or are these older snapshots which you actually created before 
the failure?


At this point my main theory is that somewhere along the line you had 
some sort of failure in your storage, and from that point on each snapshot 
you create will fail.
If the snapshots were created during the failure, can you please delete 
the snapshots you do not need and try again?


There should not be a limit on how many snapshots you can have, since 
it's only a link changing the image the vm should boot from.
Having said that, it's not ideal to have that many snapshots and it can 
probably lead to unexpected results, so I would not recommend having that 
many snapshots on a single vm :)


For example, my second theory would be that, because we have so many 
snapshots, we have some sort of race where part of the createVolume 
command expects a result from a query run before the create itself, 
and because there are so many snapshots there is no such file on the 
volume because it's too far up the list.


can you also run: ls -l 
/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b


Let's see what images are listed under that vm.
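(The chain can also be read straight off the images - a sketch, assuming a
qemu-img new enough to support --backing-chain; on the older 0.12 builds run
plain 'qemu-img info' on each volume and follow the 'backing file:' lines by
hand:)

  qemu-img info --backing-chain /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7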

By the way, you know that your export domain is getting 
StorageDomainDoesNotExist errors in the vdsm log? Is that domain in an up 
state? Can you try to deactivate the export domain?
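(A quick way to confirm that on the host is a plain grep:)

  grep -c StorageDomainDoesNotExist /var/log/vdsm/vdsm.log         # how often it appears
  grep StorageDomainDoesNotExist /var/log/vdsm/vdsm.log | tail -5  # latest occurrences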


Thanks,

Dafna




On 04/22/2014 05:20 PM, Steve Dainard wrote:

Ominous..

23 snapshots. Is there an upper limit?

Offline snapshot fails as well. Both logs attached again (snapshot 
attempted at 12:13 EST).


*Steve *

On Tue, Apr 22, 2014 at 11:20 AM, Dafna Ron d...@redhat.com wrote:


are you able to take an offline snapshot? (while the vm is down)
how many snapshots do you have on this vm?


On 04/22/2014 04:19 PM, Steve Dainard wrote:

No alert in web ui, I restarted the VM yesterday just in case,
no change. I also restored an earlier snapshot and tried to
re-snapshot, same result.

*Steve
*



On Tue, Apr 22, 2014 at 10:57 AM, Dafna Ron d...@redhat.com wrote:

This is the actual problem:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::DEBUG::2014-04-22
   
10:21:49,374::volume::1058::Storage.Misc.excCmd::(createVolume)

FAILED: err =
   
'/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/4
   
66d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb:

error while creating qcow2: No such file or directory\n';
rc = 1

from that you see the actual failure:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22
10:21:49,392::volume::286::Storage.Volume::(clone)
Volume.clone:
can't clone:
   
/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d
   
9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7

to
   
/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee1

4a0dc6fb
bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22
10:21:49,392::volume::508::Storage.Volume::(create)
Unexpected error
Traceback (most recent call last):
  File /usr/share/vdsm/storage/volume.py, line 466, in
create
srcVolUUID, imgPath, volPath)
  File /usr/share/vdsm/storage/fileVolume.py, line 160,
in _create
volParent.clone(imgPath, volUUID, volFormat, preallocate)
  File /usr/share/vdsm/storage/volume.py, line 287, in clone
raise se.CannotCloneVolume(self.volumePath, dst_path,
str(e))
CannotCloneVolume: Cannot clone volume:
   
'src=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7,

dst=/rhev/data-cen
   
ter/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1

Re: [ovirt-users] Error creating Disks

2014-04-17 Thread Dafna Ron
I am not sure this question relates to this thread and perhaps it should 
be posted in a different one :) can you explain what you mean by that 
question?



On 04/17/2014 01:09 AM, Maurice James wrote:

Which version of Ovirt are you guys going to build the new RHEV from?

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: elad Ben Aharon ebena...@redhat.com, Liron Aravot 
lara...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 5:39:20 PM
Subject: Re: [ovirt-users] Error creating Disks

Ok.
so it's the qemu bug that Liron sent and I think there is some bug there
with the engine cache since we did not see the job failing in vdsm log.
hopefully there will be a qemu patch for centos soon...

Thanks Maurice!

Dafna


On 04/16/2014 07:14 PM, Maurice James wrote:

The offline disk migration works

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: elad Ben Aharon ebena...@redhat.com, Liron Aravot 
lara...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 11:57:20 AM
Subject: Re: [ovirt-users] Error creating Disks

Thanks Maurice.
so you are saying that there is some sort of caching.

can you migrate the disk off line?

Dafna


On 04/16/2014 04:06 PM, Maurice James wrote:

I ran tail -f /var/log/vdsm/vdsm.log |grep ERROR while attempting a live 
migration and nothing is coming up, but
tail -f /var/log/ovirt-engine/engine.log |grep ERROR returns:
2014-04-16 11:02:59,564 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-40) Failed in SnapshotVDS method
2014-04-16 11:02:59,568 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-40) Command SnapshotVDSCommand(HostName = 
vhost3, HostId = bc9c25e6-714e-4eac-8af0-860ac76fd195, 
vmId=ba49605b-fb7e-4a70-a380-6286d3903e50) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
SnapshotVDS, error = Snapshot failed, code = 48
2014-04-16 11:02:59,966 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-40) Command 
org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand throw Vdc Bll 
exception. With error message VdcBLLException: Auto-generated live snapshot for 
VM ba49605b-fb7e-4a70-a380-6286d3903e50 failed (Failed with error imageErr and 
code 13)
2014-04-16 11:02:59,970 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-40) Reverting task unknown, handler: 
org.ovirt.engine.core.bll.lsm.LiveSnapshotTaskHandler



- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com, elad Ben Aharon 
ebena...@redhat.com
Cc: Liron Aravot lara...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 11:00:47 AM
Subject: Re: [ovirt-users] Error creating Disks

Since LiveStorageMigration is a complicated and long task which divides
into 4 different jobs I asked for the log with the echo before the task
so we can easily follow the task from start to end.

however, I did grep for the task id in the logs you just attached and
there is nothing there.
I'm adding Elad to try and see if he can reproduce ERROR populated by
engine because of cache.

Dafna


On 04/16/2014 03:45 PM, Maurice James wrote:

I attached a few of the rotated logs, something might be in there

- Original Message -
From: Dafna Ron d...@redhat.com
To: Liron Aravot lara...@redhat.com
Cc: Maurice James mja...@media-node.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 10:41:59 AM
Subject: Re: [ovirt-users] Error creating Disks

Thanks Liron,

well, unless the first vdsm log was cut we were not seeing any errors in
the vdsm log (engine was reporting an issue and vdsm did not show any
errors at all), after the engine restart we can see an error in the vdsm
log.
Maurice, was is possible that the vdsm log was cut? if not, there might
be a second bug with engine cache.

The Error we are seeing in the vdsm log now does indeed look like the
live snapshot issue in qemu :)


Thanks,
Dafna



On 04/16/2014 03:23 PM, Liron Aravot wrote:

Hi Maurice, Dafna
The creation of the snapshot for each of the disks succeeds, but performing the 
live part of it fails - we can see the following error in the vdsm log.

Thread-::DEBUG::2014-04-16 09:40:39,071::vm::4007::vm.Vm::(snapshot) 
vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Snapshot faile
d using the quiesce flag, trying again without it (unsupported configuration: 
reuse is not supported with this QEMU binary)
Thread-::DEBUG::2014-04-16 
09:40:39,081::libvirtconnection::124::root::(wrapper) Unknown libvirterror: 
ecode: 67 edom: 10 level:
  2 message: unsupported configuration: reuse is not supported with this 
QEMU binary
Thread-::ERROR::2014-04-16 09:40:39,082::vm::4011::vm.Vm::(snapshot) 
vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Unable

Re: [ovirt-users] Error creating Disks

2014-04-17 Thread Dafna Ron

A valid and important discussion indeed.
Important enough, I think, that it should be raised in a separate thread, 
allowing others to discuss it (since this is an ongoing issue, some 
people may not be aware of a subject change).


Can you please send a new mail with a relevant headline raising this issue?

Thanks,
Dafna


On 04/17/2014 11:47 AM, Maurice James wrote:

Just curious. I have come across a few major problems since upgrading to 3.4.x (External 
Authentication BZ1081204, Quota assignment BZ1081014, Live migration qemu bug 
) to name a few. I imagine that all of the features need to be polished before inclusion 
into RHEL. I'm trying to get my company to adopt RHEL instead of V-Sphere but that is 
becoming a little tricky. Minus the cool new features of 3.4.x 3.3.4 was pretty stable 
IMO.

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com, Liron Aravot lara...@redhat.com, itamar 
Heim ih...@redhat.com
Cc: elad Ben Aharon ebena...@redhat.com, users@ovirt.org
Sent: Thursday, April 17, 2014 5:31:46 AM
Subject: Re: [ovirt-users] Error creating Disks

I am not sure this question relates to this thread and perhaps it should
be posted in a different one :) can you explain what you mean by that
question?


On 04/17/2014 01:09 AM, Maurice James wrote:

Which version of Ovirt are you guys going to build the new RHEV from?

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: elad Ben Aharon ebena...@redhat.com, Liron Aravot 
lara...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 5:39:20 PM
Subject: Re: [ovirt-users] Error creating Disks

Ok.
so it's the qemu bug that Liron sent and I think there is some bug there
with the engine cache since we did not see the job failing in vdsm log.
hopefully there will be a qemu patch for centos soon...

Thanks Maurice!

Dafna


On 04/16/2014 07:14 PM, Maurice James wrote:

The offline disk migration works

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: elad Ben Aharon ebena...@redhat.com, Liron Aravot 
lara...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 11:57:20 AM
Subject: Re: [ovirt-users] Error creating Disks

Thanks Maurice.
so you are saying that there is some sort of caching.

can you migrate the disk off line?

Dafna


On 04/16/2014 04:06 PM, Maurice James wrote:

I ran tail -f /var/log/vdsm/vdsm.log |grep ERROR while attempting a live 
migration and nothing is coming up, but
tail -f /var/log/ovirt-engine/engine.log |grep ERROR returns:
2014-04-16 11:02:59,564 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-40) Failed in SnapshotVDS method
2014-04-16 11:02:59,568 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-40) Command SnapshotVDSCommand(HostName = 
vhost3, HostId = bc9c25e6-714e-4eac-8af0-860ac76fd195, 
vmId=ba49605b-fb7e-4a70-a380-6286d3903e50) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
SnapshotVDS, error = Snapshot failed, code = 48
2014-04-16 11:02:59,966 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-40) Command 
org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand throw Vdc Bll 
exception. With error message VdcBLLException: Auto-generated live snapshot for 
VM ba49605b-fb7e-4a70-a380-6286d3903e50 failed (Failed with error imageErr and 
code 13)
2014-04-16 11:02:59,970 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-40) Reverting task unknown, handler: 
org.ovirt.engine.core.bll.lsm.LiveSnapshotTaskHandler



- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com, elad Ben Aharon 
ebena...@redhat.com
Cc: Liron Aravot lara...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 11:00:47 AM
Subject: Re: [ovirt-users] Error creating Disks

Since LiveStorageMigration is a complicated and long task which divides
into 4 different jobs I asked for the log with the echo before the task
so we can easily follow the task from start to end.

however, I did grep for the task id in the logs you just attached and
there is nothing there.
I'm adding Elad to try and see if he can reproduce ERROR populated by
engine because of cache.

Dafna


On 04/16/2014 03:45 PM, Maurice James wrote:

I attached a few of the rotated logs, something might be in there

- Original Message -
From: Dafna Ron d...@redhat.com
To: Liron Aravot lara...@redhat.com
Cc: Maurice James mja...@media-node.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 10:41:59 AM
Subject: Re: [ovirt-users] Error creating Disks

Thanks Liron,

well, unless the first vdsm log was cut we were not seeing any errors in
the vdsm log (engine was reporting an issue and vdsm did not show any
errors at all

Re: [ovirt-users] Snapshot removal

2014-04-17 Thread Dafna Ron

NFS is wipe_after_delete=true always,
so deleting a snapshot will merge the data into the upper-level image and 
zero out the freed data, which is why this is taking a long time.
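(At the image level the merge part is roughly what qemu-img commit does -
shown only as an illustration with a placeholder path; oVirt drives this
through vdsm, so do not run it by hand against an attached storage domain:)

  qemu-img commit /path/to/snapshot-volume   # folds the snapshot's data into its backing file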



On 04/17/2014 11:47 AM, Maurice James wrote:

Im using NFS

- Original Message -
From: Michal Skrivanek michal.skriva...@redhat.com
To: Maurice Moe James mja...@media-node.com
Cc: users@ovirt.org
Sent: Thursday, April 17, 2014 5:44:45 AM
Subject: Re: [ovirt-users] Snapshot removal


On Apr 15, 2014, at 04:09 , Maurice Moe James mja...@media-node.com wrote:


Is it it me or does it take a very long time to delete a snapshot?
Upwards of 30 minutes to delete a snapshot of a 7 GB drive

what kind of storage you're using? if it is block-based it zeroes out the data 
which takes time


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error creating Disks

2014-04-16 Thread Dafna Ron
 to SnapshotVDS, error = 
Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)
at 
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:116)
at 
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33)
at 
org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:1971)
at 
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand$2.runInTransaction(CreateAllSnapshotsFromVmCommand.java:354)
at 
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand$2.runInTransaction(CreateAllSnapshotsFromVmCommand.java:351)
at 
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:174)
at 
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:116)
at 
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand.performLiveSnapshot(CreateAllSnapshotsFromVmCommand.java:351)
at 
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand.endVmCommand(CreateAllSnapshotsFromVmCommand.java:273)
at 
org.ovirt.engine.core.bll.VmCommand.endSuccessfully(VmCommand.java:304)
at 
org.ovirt.engine.core.bll.CommandBase.internalEndSuccessfully(CommandBase.java:614)
at 
org.ovirt.engine.core.bll.CommandBase.endActionInTransactionScope(CommandBase.java:560)
at 
org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1886)
at 
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:151)
at 
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:118)
at 
org.ovirt.engine.core.bll.CommandBase.endAction(CommandBase.java:492)

at org.ovirt.engine.core.bll.Backend.endAction(Backend.java:446)
at sun.reflect.GeneratedMethodAccessor513.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)











On 04/15/2014 05:54 PM, Maurice James wrote:

Logs are attached.

Live Migration failed

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 10:46:29 AM
Subject: Re: [ovirt-users] Error creating Disks

Yeah, but is it the same error we saw before in vdsm? :)
Let's try to follow the action in both logs so we can debug it better;
can you do the following?

1. stop the vm and clean all snapshots.
2. add a marker to the engine and vdsm logs using echo (#echo '' > 
/var/log/ovirt-engine/engine.log and #echo '' > 
/var/log/vdsm/vdsm.log)
3. start the vm
4. try to live migrate the disk
5. attach the full engine and vdsm logs.

Thanks,

Dafna


On 04/15/2014 03:41 PM, Maurice James wrote:

It failed again. Its reported in the same location.

Snip of engine.log

2014-04-15 10:41:17,174 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-34) Failed in SnapshotVDS method
2014-04-15 10:41:17,178 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-34) Command SnapshotVDSCommand(HostName = 
vhost3, HostId = bc9c25e6-714e-4eac-8af0-860ac76fd195, 
vmId=ba49605b-fb7e-4a70-a380-6286d3903e50) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
SnapshotVDS, error = Snapshot failed, code = 48
2014-04-15 10:41:17,531 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-34) Command 
org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand throw Vdc Bll 
exception. With error message VdcBLLException: Auto-generated live snapshot for 
VM ba49605b-fb7e-4a70-a380-6286d3903e50 failed (Failed with error imageErr and 
code 13)
2014-04-15 10:41:17,534 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-34) Reverting task unknown, handler: 
org.ovirt.engine.core.bll.lsm.LiveSnapshotTaskHandler




- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 10:37:41 AM
Subject: Re: [ovirt-users] Error creating Disks

ok.
so the vm was down and you deleted all the snapshots.
in the webadmin, when you select the disk you want to live migrate, is
it the same storage reported in vdsm?

Thanks,

Dafna


On 04/15/2014 03:25 PM, Maurice James wrote:

I restarted from the VM. The VM now has 0 snapshots. All of the snapshots in 
the screen shots never completed, so I deleted them while the VM was powerd off 
from the ui

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 10:21:06 AM
Subject: Re: [ovirt-users

Re: [ovirt-users] HA question

2014-04-16 Thread Dafna Ron

You need to be more clear on your question, it seems very general :)

do you mean, if the engine is down will power management still work?
If so, then the answer is no, and with good reason... (lots of them, 
actually).
Starting from the simplest: if the host reboots and the engine is down, 
most users will not know how to restart the vms without the engine.
Even if you do know how to start the vms without the engine, once you 
restore the engine, there is a chance of a split brain.


Dafna

On 04/15/2014 07:23 PM, Maurice James wrote:

If I lose the host with the engine manager, will HA still function?


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ***UNCHECKED*** Re: HA question

2014-04-16 Thread Dafna Ron

The answer is no.
HA vms are re-run by the engine in case of a failure, so if the 
engine is down no actions will be taken on the vms.



On 04/16/2014 03:06 AM, Maurice James wrote:


3 Node system consists of 2 host nodes and 1 manager node. If the 
manager goes offline, will VMs marked as HA still be highly available 
if there is no manager?





*From: *适 兕 lijiangshe...@gmail.com
*To: *Maurice James mja...@media-node.com
*Cc: *users@ovirt.org
*Sent: *Tuesday, April 15, 2014 9:28:06 PM
*Subject: UNCHECKED*** Re: [ovirt-users] HA question

Hi:
   James,
   Please describe your environment.

  I guess, if you just have two hosts in your cluster and one of them 
fails, there is no HA.




2014-04-16 2:23 GMT+08:00 Maurice James mja...@media-node.com:


If I lose the host with the engine manager, will HA still function?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




--
独立之思想,自由之精神。
--陈寅恪



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error creating Disks

2014-04-16 Thread Dafna Ron

can you try to restart the engine?
This should clean the cache and some of the tables holding temporary 
task info.


Dafna



On 04/16/2014 02:37 PM, Maurice James wrote:

I ran vdsClient -s 0 getAllTasksInfo and nothing was returned. What should my 
next step be?

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 4:44:48 AM
Subject: Re: [ovirt-users] Error creating Disks

ok... Now I am starting to understand what's going on: if you look at
the vdsm log, the live snapshot succeeds and no other ERRORs are
reported. I think it's related to the task management on the engine side.

Can I ask you to run on the SPM: vdsClient -s 0 getAllTasksInfo
If you have tasks we would have to stop and clear them (vdsClient -s 0
stopTask <task id> ; vdsClient -s 0 clearTask <task id>);
after you clear the tasks you will have to restart the engine
(a consolidated sketch follows below).
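A rough version of that sequence, assuming the task IDs printed by
getAllTasksInfo are genuinely stale (do not stop tasks that are still
doing real work; <task-uuid> is a placeholder):

  # on the SPM host
  vdsClient -s 0 getAllTasksInfo
  vdsClient -s 0 stopTask <task-uuid>
  vdsClient -s 0 clearTask <task-uuid>
  # then, on the engine machine
  service ovirt-engine restart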

Also, what version of ovirt are you using? Is it 3.4? Because this was
supposed to be fixed...

here is the explanation from what I see in the logs:

you are sending a command to Live Migrate

2014-04-15 12:50:54,381 INFO
[org.ovirt.engine.core.bll.MoveDisksCommand] (ajp--127.0.0.1-8702-4)
[4c089392] Running command: MoveDisksCommand internal: false. Entities
affected :  ID: c24706d3-1872-4cd3-94a2-9c61ef032e29 Type: Disk
2014-04-15 12:50:54,520 INFO
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand]
(ajp--127.0.0.1-8702-4) [4c089392] Lock Acquired to object EngineLock
[exclusiveLocks= key: c24706d3-1872-4cd3-94a2-9c61ef032e29 value: DISK
, sharedLocks= key: ba49605b-fb7e-4a70-a380-6286d3903e50 value: VM
]

The first step is creating the snapshot:

2014-04-15 12:50:54,734 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-6-thread-47) Running command:
CreateAllSnapshotsFromVmCommand internal: true. Entities affected :  ID:
ba49605b-fb7e-4a70-a3
80-6286d3903e50 Type: VM
2014-04-15 12:50:54,748 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand]
(org.ovirt.thread.pool-6-thread-47) [23c23b4] Running command:
CreateSnapshotCommand internal: true. Entities affected :  ID:
----000
0 Type: Storage
2014-04-15 12:50:54,760 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-6-thread-47) [23c23b4] START,
CreateSnapshotVDSCommand( storagePoolId =
a106ab81-9d5f-49c1-aeaf-832a137b708c, ignor
eFailoverLimit = false, storageDomainId =
3406665e-4adc-4fd4-aa1e-037547b29adb, imageGroupId =
c24706d3-1872-4cd3-94a2-9c61ef032e29, imageSizeInBytes = 107374182400,
volumeFormat = COW, newImageId = 015f7d9d-ff75-4a3c-a634-a00e82e04803,
newImageDescription = , imageId = e8442348-e28e-4e78-abe8-4f2848b47661,
sourceImageGroupId = c24706d3-1872-4cd3-94a2-9c61ef032e29), log id: f3b7d8b


which actually succeeds in vdsm and reported as successful to the engine:

engine:

2014-04-15 12:51:07,291 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-42) SPMAsyncTask::PollTask: Polling task
db7f0f4d-f47d-472a-92bd-197b428fe417 (Parent Command LiveMigrateVmDisks,
Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned
status finished, result 'success'.

vdsm:

Thread-4670::DEBUG::2014-04-15
12:51:07,287::task::1185::TaskManager.Task::(prepare)
Task=`ea9d5846-656c-4a44-bca2-d5c091004078`::finished:
{'allTasksStatus': {'db7f0f4d-f47d-472a-92bd-197b428fe417': {'code': 0,
'message': '1 jobs comple
ted successfully', 'taskState': 'finished', 'taskResult': 'success',
'taskID': 'db7f0f4d-f47d-472a-92bd-197b428fe417'}}}


now engine has to end the task:

2014-04-15 12:51:07,301 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-42) CommandAsyncTask::endAction: Ending
action for 1 tasks (command ID: 7c67ca08-9969-4a6e-87ef-0b7379d7edb6):
calling endAction .
2014-04-15 12:51:07,303 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask]
(org.ovirt.thread.pool-6-thread-42) CommandAsyncTask::EndCommandAction
[within thread] context: Attempting to endAction LiveMigrateVmDisks,
executionIndex: 0
2014-04-15 12:51:07,315 INFO
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand]
(org.ovirt.thread.pool-6-thread-42) Ending command successfully:
org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand
2014-04-15 12:51:07,319 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-6-thread-42) Ending command successfully:
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand

and than

we see an exception:

2014-04-15 12:51:07,498 WARN
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-6-thread-42) Wasnt able to live snapshot due to
error: VdcBLLException: VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error =
Snapshot failed, code = 48 (Failed with error

Re: [ovirt-users] Error creating Disks

2014-04-16 Thread Dafna Ron

Thanks Liron,

Well, unless the first vdsm log was cut, we were not seeing any errors in 
the vdsm log (the engine was reporting an issue and vdsm did not show any 
errors at all); after the engine restart we can see an error in the vdsm 
log.
Maurice, is it possible that the vdsm log was cut? If not, there might 
be a second bug with the engine cache.


The Error we are seeing in the vdsm log now does indeed look like the 
live snapshot issue in qemu :)



Thanks,
Dafna



On 04/16/2014 03:23 PM, Liron Aravot wrote:

Hi Maurice, Dafna
The creation of the snapshot for each of the disks succeeds, but performing the 
live part of it fails - we can see the following error in the vdsm log.

Thread-::DEBUG::2014-04-16 09:40:39,071::vm::4007::vm.Vm::(snapshot) 
vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Snapshot faile
d using the quiesce flag, trying again without it (unsupported configuration: 
reuse is not supported with this QEMU binary)
Thread-::DEBUG::2014-04-16 
09:40:39,081::libvirtconnection::124::root::(wrapper) Unknown libvirterror: 
ecode: 67 edom: 10 level:
  2 message: unsupported configuration: reuse is not supported with this QEMU 
binary
Thread-::ERROR::2014-04-16 09:40:39,082::vm::4011::vm.Vm::(snapshot) 
vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Unable to take
  snapshot
Traceback (most recent call last):
   File /usr/share/vdsm/vm.py, line 4009, in snapshot
 self._dom.snapshotCreateXML(snapxml, snapFlags)
   File /usr/share/vdsm/vm.py, line 859, in f
 ret = attr(*args, **kwargs)
   File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, line 
92, in wrapper
 ret = f(*args, **kwargs)
   File /usr/lib64/python2.6/site-packages/libvirt.py, line 1636, in 
snapshotCreateXML
 if ret is None:raise libvirtError('virDomainSnapshotCreateXML() failed', 
dom=self)
libvirtError: unsupported configuration: reuse is not supported with this QEMU 
binary
Thread-::DEBUG::2014-04-16 
09:40:39,091::BindingXMLRPC::1074::vds::(wrapper) return vmSnapshot with 
{'status': {'message': 'Snap
shot failed', 'code': 48}}

there's open bug on possibly the same issue - but let's verify that it's the 
same
https://bugzilla.redhat.com/show_bug.cgi?id=1009100

what OS are you running? if it's not centos, please try to upgrade libvirt and 
try again.

if it's urgent - as i see that you already stopped your vm during those tries, 
as a temporary solution you can stop the vm, move the disk while it's stopped 
(which won't be live storage migration) and than start it.

- Original Message -

From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: users@ovirt.org
Sent: Wednesday, April 16, 2014 4:46:10 PM
Subject: Re: [ovirt-users] Error creating Disks

can you try to restart the engine?
This should clean the cache and some of the tables holding temporary
task info.

Dafna



On 04/16/2014 02:37 PM, Maurice James wrote:

I ran vdsClient -s 0 getAllTasksInfo and nothing was returned. What should
my next step be?

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 4:44:48 AM
Subject: Re: [ovirt-users] Error creating Disks

ok... Now I am starting to understand what's going on and if you look at
the vdsm log, the live snapshot succeeds and no other ERROR are
reported. I think it's related to the task management on engine side.

Can I ask you to run in the spm: vdsClient -s 0 getAllTasksInfo
If you have tasks we would have to stop and clear them (vdsClient -s 0
stopTask task ; vdsClient -s 0 clearTask task)
after you clear the tasks you will have to restart the engine

Also, what version of ovirt are you using? is it 3.4? because this was
suppose to be fixed...

here is the explanation from what I see in the logs:

you are sending a command to Live Migrate

2014-04-15 12:50:54,381 INFO
[org.ovirt.engine.core.bll.MoveDisksCommand] (ajp--127.0.0.1-8702-4)
[4c089392] Running command: MoveDisksCommand internal: false. Entities
affected :  ID: c24706d3-1872-4cd3-94a2-9c61ef032e29 Type: Disk
2014-04-15 12:50:54,520 INFO
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand]
(ajp--127.0.0.1-8702-4) [4c089392] Lock Acquired to object EngineLock
[exclusiveLocks= key: c24706d3-1872-4cd3-94a2-9c61ef032e29 value: DISK
, sharedLocks= key: ba49605b-fb7e-4a70-a380-6286d3903e50 value: VM
]

The first step is creating the snapshot:

2014-04-15 12:50:54,734 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-6-thread-47) Running command:
CreateAllSnapshotsFromVmCommand internal: true. Entities affected :  ID:
ba49605b-fb7e-4a70-a3
80-6286d3903e50 Type: VM
2014-04-15 12:50:54,748 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand]
(org.ovirt.thread.pool-6-thread-47) [23c23b4] Running command:
CreateSnapshotCommand internal: true. Entities affected :  ID:
----000
0 Type: Storage
2014

Re: [ovirt-users] Error creating Disks

2014-04-16 Thread Dafna Ron
Since LiveStorageMigration is a complicated and long task, which divides 
into 4 different jobs, I asked for the log with the echo marker before the 
task so we can easily follow the task from start to end.


However, I did grep for the task id in the logs you just attached and 
there is nothing there.
I'm adding Elad to try and see if he can reproduce an ERROR populated by 
the engine because of cache.


Dafna


On 04/16/2014 03:45 PM, Maurice James wrote:

I attached a few of the rotated logs, something might be in there

- Original Message -
From: Dafna Ron d...@redhat.com
To: Liron Aravot lara...@redhat.com
Cc: Maurice James mja...@media-node.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 10:41:59 AM
Subject: Re: [ovirt-users] Error creating Disks

Thanks Liron,

well, unless the first vdsm log was cut we were not seeing any errors in
the vdsm log (engine was reporting an issue and vdsm did not show any
errors at all), after the engine restart we can see an error in the vdsm
log.
Maurice, was is possible that the vdsm log was cut? if not, there might
be a second bug with engine cache.

The Error we are seeing in the vdsm log now does indeed look like the
live snapshot issue in qemu :)


Thanks,
Dafna



On 04/16/2014 03:23 PM, Liron Aravot wrote:

Hi Maurice, Dafna
The creation of the snapshot for each of the disks succeeds, but performing the 
live part of it fails - we can see the following error in the vdsm log.

Thread-::DEBUG::2014-04-16 09:40:39,071::vm::4007::vm.Vm::(snapshot) 
vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Snapshot faile
d using the quiesce flag, trying again without it (unsupported configuration: 
reuse is not supported with this QEMU binary)
Thread-::DEBUG::2014-04-16 
09:40:39,081::libvirtconnection::124::root::(wrapper) Unknown libvirterror: 
ecode: 67 edom: 10 level:
   2 message: unsupported configuration: reuse is not supported with this QEMU 
binary
Thread-::ERROR::2014-04-16 09:40:39,082::vm::4011::vm.Vm::(snapshot) 
vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Unable to take
   snapshot
Traceback (most recent call last):
File /usr/share/vdsm/vm.py, line 4009, in snapshot
  self._dom.snapshotCreateXML(snapxml, snapFlags)
File /usr/share/vdsm/vm.py, line 859, in f
  ret = attr(*args, **kwargs)
File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, line 
92, in wrapper
  ret = f(*args, **kwargs)
File /usr/lib64/python2.6/site-packages/libvirt.py, line 1636, in 
snapshotCreateXML
  if ret is None:raise libvirtError('virDomainSnapshotCreateXML() failed', 
dom=self)
libvirtError: unsupported configuration: reuse is not supported with this QEMU 
binary
Thread-::DEBUG::2014-04-16 
09:40:39,091::BindingXMLRPC::1074::vds::(wrapper) return vmSnapshot with 
{'status': {'message': 'Snap
shot failed', 'code': 48}}

there's open bug on possibly the same issue - but let's verify that it's the 
same
https://bugzilla.redhat.com/show_bug.cgi?id=1009100

what OS are you running? if it's not centos, please try to upgrade libvirt and 
try again.

if it's urgent - as i see that you already stopped your vm during those tries, 
as a temporary solution you can stop the vm, move the disk while it's stopped 
(which won't be live storage migration) and than start it.

- Original Message -

From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: users@ovirt.org
Sent: Wednesday, April 16, 2014 4:46:10 PM
Subject: Re: [ovirt-users] Error creating Disks

can you try to restart the engine?
This should clean the cache and some of the tables holding temporary
task info.

Dafna



On 04/16/2014 02:37 PM, Maurice James wrote:

I ran vdsClient -s 0 getAllTasksInfo and nothing was returned. What should
my next step be?

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 4:44:48 AM
Subject: Re: [ovirt-users] Error creating Disks

ok... Now I am starting to understand what's going on and if you look at
the vdsm log, the live snapshot succeeds and no other ERROR are
reported. I think it's related to the task management on engine side.

Can I ask you to run in the spm: vdsClient -s 0 getAllTasksInfo
If you have tasks we would have to stop and clear them (vdsClient -s 0
stopTask task ; vdsClient -s 0 clearTask task)
after you clear the tasks you will have to restart the engine

Also, what version of ovirt are you using? is it 3.4? because this was
suppose to be fixed...

here is the explanation from what I see in the logs:

you are sending a command to Live Migrate

2014-04-15 12:50:54,381 INFO
[org.ovirt.engine.core.bll.MoveDisksCommand] (ajp--127.0.0.1-8702-4)
[4c089392] Running command: MoveDisksCommand internal: false. Entities
affected :  ID: c24706d3-1872-4cd3-94a2-9c61ef032e29 Type: Disk
2014-04-15 12:50:54,520 INFO
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand]
(ajp

Re: [ovirt-users] Error creating Disks

2014-04-16 Thread Dafna Ron

Thanks Maurice.
So you are saying that there is some sort of caching.

Can you migrate the disk offline?

Dafna


On 04/16/2014 04:06 PM, Maurice James wrote:

I ran tail -f /var/log/vdsm/vdsm.log |grep ERROR while attempting a live 
migration and nothing is coming up, but
tail -f /var/log/ovirt-engine/engine.log |grep ERROR returns:
2014-04-16 11:02:59,564 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-40) Failed in SnapshotVDS method
2014-04-16 11:02:59,568 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-40) Command SnapshotVDSCommand(HostName = 
vhost3, HostId = bc9c25e6-714e-4eac-8af0-860ac76fd195, 
vmId=ba49605b-fb7e-4a70-a380-6286d3903e50) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
SnapshotVDS, error = Snapshot failed, code = 48
2014-04-16 11:02:59,966 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-40) Command 
org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand throw Vdc Bll 
exception. With error message VdcBLLException: Auto-generated live snapshot for 
VM ba49605b-fb7e-4a70-a380-6286d3903e50 failed (Failed with error imageErr and 
code 13)
2014-04-16 11:02:59,970 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-40) Reverting task unknown, handler: 
org.ovirt.engine.core.bll.lsm.LiveSnapshotTaskHandler



- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com, elad Ben Aharon 
ebena...@redhat.com
Cc: Liron Aravot lara...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 11:00:47 AM
Subject: Re: [ovirt-users] Error creating Disks

Since LiveStorageMigration is a complicated and long task which divides
into 4 different jobs I asked for the log with the echo before the task
so we can easily follow the task from start to end.

however, I did grep for the task id in the logs you just attached and
there is nothing there.
I'm adding Elad to try and see if he can reproduce ERROR populated by
engine because of cache.

Dafna


On 04/16/2014 03:45 PM, Maurice James wrote:

I attached a few of the rotated logs, something might be in there

- Original Message -
From: Dafna Ron d...@redhat.com
To: Liron Aravot lara...@redhat.com
Cc: Maurice James mja...@media-node.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 10:41:59 AM
Subject: Re: [ovirt-users] Error creating Disks

Thanks Liron,

well, unless the first vdsm log was cut we were not seeing any errors in
the vdsm log (engine was reporting an issue and vdsm did not show any
errors at all), after the engine restart we can see an error in the vdsm
log.
Maurice, was is possible that the vdsm log was cut? if not, there might
be a second bug with engine cache.

The Error we are seeing in the vdsm log now does indeed look like the
live snapshot issue in qemu :)


Thanks,
Dafna



On 04/16/2014 03:23 PM, Liron Aravot wrote:

Hi Maurice, Dafna
The creation of the snapshot for each of the disks succeeds, but performing the 
live part of it fails - we can see the following error in the vdsm log.

Thread-::DEBUG::2014-04-16 09:40:39,071::vm::4007::vm.Vm::(snapshot) 
vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Snapshot faile
d using the quiesce flag, trying again without it (unsupported configuration: 
reuse is not supported with this QEMU binary)
Thread-::DEBUG::2014-04-16 
09:40:39,081::libvirtconnection::124::root::(wrapper) Unknown libvirterror: 
ecode: 67 edom: 10 level:
2 message: unsupported configuration: reuse is not supported with this QEMU 
binary
Thread-::ERROR::2014-04-16 09:40:39,082::vm::4011::vm.Vm::(snapshot) 
vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Unable to take
snapshot
Traceback (most recent call last):
 File /usr/share/vdsm/vm.py, line 4009, in snapshot
   self._dom.snapshotCreateXML(snapxml, snapFlags)
 File /usr/share/vdsm/vm.py, line 859, in f
   ret = attr(*args, **kwargs)
 File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, line 
92, in wrapper
   ret = f(*args, **kwargs)
 File /usr/lib64/python2.6/site-packages/libvirt.py, line 1636, in 
snapshotCreateXML
   if ret is None:raise libvirtError('virDomainSnapshotCreateXML() failed', 
dom=self)
libvirtError: unsupported configuration: reuse is not supported with this QEMU 
binary
Thread-::DEBUG::2014-04-16 
09:40:39,091::BindingXMLRPC::1074::vds::(wrapper) return vmSnapshot with 
{'status': {'message': 'Snap
shot failed', 'code': 48}}

There's an open bug on possibly the same issue, but let's verify that it's the
same:
https://bugzilla.redhat.com/show_bug.cgi?id=1009100

What OS are you running? If it's not CentOS, please try to upgrade libvirt and
try again.
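
For reference, a rough way to check the relevant package versions on the host
before deciding whether an upgrade is needed (the package names below are the
usual EL6 ones and are only an assumption, so adjust them to your setup):

# on the hypervisor host
rpm -qa | egrep 'qemu-kvm|libvirt|vdsm'   # currently installed versions
virsh --version                           # libvirt client version
yum update libvirt qemu-kvm               # upgrade, if newer builds are available in your repos
                                          # (use qemu-kvm-rhev if that is the package you have installed)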

If it's urgent - as I see that you already stopped your vm during those tries,
as a temporary solution you can stop the vm

Re: [ovirt-users] Error creating Disks

2014-04-16 Thread Dafna Ron

OK, so it's the qemu bug that Liron sent, and I think there is also some bug
with the engine cache since we did not see the job failing in the vdsm log.

Hopefully there will be a qemu patch for CentOS soon...

Thanks Maurice!

Dafna


On 04/16/2014 07:14 PM, Maurice James wrote:

The offline disk migration works

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: elad Ben Aharon ebena...@redhat.com, Liron Aravot 
lara...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 11:57:20 AM
Subject: Re: [ovirt-users] Error creating Disks

Thanks Maurice.
So you are saying that there is some sort of caching.

Can you migrate the disk offline?

Dafna


On 04/16/2014 04:06 PM, Maurice James wrote:

I ran tail -f /var/log/vdsm/vdsm.log |grep ERROR while attempting a live 
migration and nothing is coming up, but
tail -f /var/log/ovirt-engine/engine.log |grep ERROR returns:
2014-04-16 11:02:59,564 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-40) Failed in SnapshotVDS method
2014-04-16 11:02:59,568 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-40) Command SnapshotVDSCommand(HostName = 
vhost3, HostId = bc9c25e6-714e-4eac-8af0-860ac76fd195, 
vmId=ba49605b-fb7e-4a70-a380-6286d3903e50) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
SnapshotVDS, error = Snapshot failed, code = 48
2014-04-16 11:02:59,966 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-40) Command 
org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand throw Vdc Bll 
exception. With error message VdcBLLException: Auto-generated live snapshot for 
VM ba49605b-fb7e-4a70-a380-6286d3903e50 failed (Failed with error imageErr and 
code 13)
2014-04-16 11:02:59,970 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-40) Reverting task unknown, handler: 
org.ovirt.engine.core.bll.lsm.LiveSnapshotTaskHandler



- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com, elad Ben Aharon 
ebena...@redhat.com
Cc: Liron Aravot lara...@redhat.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 11:00:47 AM
Subject: Re: [ovirt-users] Error creating Disks

Since LiveStorageMigration is a complicated and long task which divides
into 4 different jobs, I asked for the log with the echo marker before the task
so we can easily follow the task from start to end.

However, I did grep for the task id in the logs you just attached and
there is nothing there.
I'm adding Elad to see if he can reproduce the ERROR populated by the
engine because of its cache.

Dafna


On 04/16/2014 03:45 PM, Maurice James wrote:

I attached a few of the rotated logs, something might be in there

- Original Message -
From: Dafna Ron d...@redhat.com
To: Liron Aravot lara...@redhat.com
Cc: Maurice James mja...@media-node.com, users@ovirt.org
Sent: Wednesday, April 16, 2014 10:41:59 AM
Subject: Re: [ovirt-users] Error creating Disks

Thanks Liron,

Well, unless the first vdsm log was cut, we were not seeing any errors in
the vdsm log (the engine was reporting an issue and vdsm did not show any
errors at all); after the engine restart we can see an error in the vdsm
log.
Maurice, is it possible that the vdsm log was cut? If not, there might
be a second bug with the engine cache.

The error we are seeing in the vdsm log now does indeed look like the
live snapshot issue in qemu :)


Thanks,
Dafna



On 04/16/2014 03:23 PM, Liron Aravot wrote:

Hi Maurice, Dafna
The creation of the snapshot for each of the disks succeeds, but performing the 
live part of it fails - we can see the following error in the vdsm log.

Thread-::DEBUG::2014-04-16 09:40:39,071::vm::4007::vm.Vm::(snapshot) vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Snapshot failed using the quiesce flag, trying again without it (unsupported configuration: reuse is not supported with this QEMU binary)
Thread-::DEBUG::2014-04-16 09:40:39,081::libvirtconnection::124::root::(wrapper) Unknown libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported configuration: reuse is not supported with this QEMU binary
Thread-::ERROR::2014-04-16 09:40:39,082::vm::4011::vm.Vm::(snapshot) vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Unable to take snapshot
Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 4009, in snapshot
    self._dom.snapshotCreateXML(snapxml, snapFlags)
  File /usr/share/vdsm/vm.py, line 859, in f
    ret = attr(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, line 92, in wrapper
    ret = f(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/libvirt.py, line 1636, in snapshotCreateXML
    if ret is None:raise libvirtError('virDomainSnapshotCreateXML() failed', dom

Re: [ovirt-users] Error creating Disks

2014-04-15 Thread Dafna Ron

Did you have a failed Live Storage Migration on the vm?

Thanks,

Dafna


On 04/14/2014 04:04 PM, Maurice James wrote:

Snapshot creation seems to be failing as well

- Original Message -
From: Yair Zaslavsky yzasl...@redhat.com
To: Maurice James mja...@media-node.com, Federico Simoncelli 
fsimo...@redhat.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 11:00:55 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi Federico,
Can you please take a look?


- Original Message -

From: Maurice James mja...@media-node.com
To: Yair Zaslavsky yzasl...@redhat.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 5:44:44 PM
Subject: Re: [ovirt-users] Error creating Disks

Logs attached

- Original Message -
From: Yair Zaslavsky yzasl...@redhat.com
To: Maurice James mja...@media-node.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 10:33:03 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi,
IMHO not enough info is provided,
Can you please provide full engine.log and relevant vdsm.log?

Thanks,
Yair


- Original Message -

From: Maurice James mja...@media-node.com
To: users@ovirt.org
Sent: Monday, April 14, 2014 5:00:37 PM
Subject: [ovirt-users] Error creating Disks

oVirt Engine Version: 3.4.1-0.0.master.20140412010845.git43746c6.el6


While attempting to create a disk on an NFS storage domain, it fails with
the
following error in the engine.log




2014-04-14 09:58:12,127 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-72) Failed in HSMGetAllTasksStatusesVDS
method
2014-04-14 09:58:12,139 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-72) BaseAsyncTask::LogEndTaskFailure: Task
ee6ce682-bd76-467a-82d2-d227229cb9de (Parent Command AddDisk, Parameters
Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended
with
failure:
2014-04-14 09:58:12,159 ERROR [org.ovirt.engine.core.bll.AddDiskCommand]
(org.ovirt.thread.pool-6-thread-9) [483e53d6] Ending command with failure:
org.ovirt.engine.core.bll.AddDiskCommand
2014-04-14 09:58:12,212 ERROR
[org.ovirt.engine.core.bll.AddImageFromScratchCommand]
(org.ovirt.thread.pool-6-thread-9) [ab1e0be] Ending command with failure:
org.ovirt.engine.core.bll.AddImageFromScratchCommand


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error creating Disks

2014-04-15 Thread Dafna Ron
Also, before the task fails, there are ERRORs in the log which are 
reporting problems in the DC


2014-04-14 09:56:05,161 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] 
(DefaultQuartzScheduler_Worker-16) Command 
GetCapabilitiesVDSCommand(HostName = vhost3, HostId = 
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution 
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException: 
connect timed out
2014-04-14 09:56:10,226 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] 
(DefaultQuartzScheduler_Worker-49) Command 
GetCapabilitiesVDSCommand(HostName = vhost3, HostId = 
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution 
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException: 
connect timed out
2014-04-14 09:56:15,283 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] 
(DefaultQuartzScheduler_Worker-81) Command 
GetCapabilitiesVDSCommand(HostName = vhost3, HostId = 
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution 
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException: 
connect timed out
2014-04-14 09:56:20,342 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] 
(DefaultQuartzScheduler_Worker-58) Command 
GetCapabilitiesVDSCommand(HostName = vhost3, HostId = 
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution 
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException: 
connect timed out
2014-04-14 09:56:25,409 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] 
(DefaultQuartzScheduler_Worker-84) Command 
GetCapabilitiesVDSCommand(HostName = vhost3, HostId = 
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution 
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException: 
connect timed out
2014-04-14 09:56:30,542 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] 
(DefaultQuartzScheduler_Worker-91) START, 
GetHardwareInfoVDSCommand(HostName = vhost3, HostId = 
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]), log id: 30f312c



Then you stop the vm and the AddDisk fails.


Is there a chance you were having problems with your storage?

Dafna


On 04/14/2014 04:04 PM, Maurice James wrote:

Snapshot creation seems to be failing as well

- Original Message -
From: Yair Zaslavsky yzasl...@redhat.com
To: Maurice James mja...@media-node.com, Federico Simoncelli 
fsimo...@redhat.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 11:00:55 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi Federico,
Can you please take a look?


- Original Message -

From: Maurice James mja...@media-node.com
To: Yair Zaslavsky yzasl...@redhat.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 5:44:44 PM
Subject: Re: [ovirt-users] Error creating Disks

Logs attached

- Original Message -
From: Yair Zaslavsky yzasl...@redhat.com
To: Maurice James mja...@media-node.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 10:33:03 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi,
IMHO not enough info is provided,
Can you please provide full engine.log and relevant vdsm.log?

Thanks,
Yair


- Original Message -

From: Maurice James mja...@media-node.com
To: users@ovirt.org
Sent: Monday, April 14, 2014 5:00:37 PM
Subject: [ovirt-users] Error creating Disks

oVirt Engine Version: 3.4.1-0.0.master.20140412010845.git43746c6.el6


While attempting to create a disk on an NFS storage domain, it fails with
the
following error in the engine.log




2014-04-14 09:58:12,127 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-72) Failed in HSMGetAllTasksStatusesVDS
method
2014-04-14 09:58:12,139 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-72) BaseAsyncTask::LogEndTaskFailure: Task
ee6ce682-bd76-467a-82d2-d227229cb9de (Parent Command AddDisk, Parameters
Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended
with
failure:
2014-04-14 09:58:12,159 ERROR [org.ovirt.engine.core.bll.AddDiskCommand]
(org.ovirt.thread.pool-6-thread-9) [483e53d6] Ending command with failure:
org.ovirt.engine.core.bll.AddDiskCommand
2014-04-14 09:58:12,212 ERROR
[org.ovirt.engine.core.bll.AddImageFromScratchCommand]
(org.ovirt.thread.pool-6-thread-9) [ab1e0be] Ending command with failure:
org.ovirt.engine.core.bll.AddImageFromScratchCommand


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error creating Disks

2014-04-15 Thread Dafna Ron

Hi Maurice,

Are you getting any errors/alerts about restarting the vm in the event
log (in the webadmin)?


I believe that you have a mismatch in the directory link between the db
and vdsm, which was created when you tried to live migrate and failed the
first time.


we can see in the vdsm log that createVolume (which is part of the live 
migration) is failing with this error:


OSError: [Errno 2] No such file or directory: 
'/rhev/data-center/a106ab81-9d5f-49c1-aeaf-832a137b708c/b7663d70-e658-41fa-b9f0-8da83c9eddce/images/348effd9-c5db-44fc-a9e5-67391647096b'


So it seems that the engine is pointing to a link that does not exist,
and we need to find out which storage the disk actually exists on and
where the engine is pointing to.


You can look at the disk in the webadmin and see where it's supposed to
be, then look in /rhev and see where the disk link is in vdsm (if you
can see where the disk actually is physically on the storage, that
would be good too).
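
As a rough illustration of that check on the host (the UUIDs are simply the
ones from the error above, and the exact mount path depends on the storage
type, so treat this as a sketch rather than exact commands):

# does the data-center link for the pool exist and where does it point?
ls -l /rhev/data-center/a106ab81-9d5f-49c1-aeaf-832a137b708c/
# does the image directory actually exist under the storage domain?
ls -l /rhev/data-center/mnt/*/b7663d70-e658-41fa-b9f0-8da83c9eddce/images/ | grep 348effd9
# then compare the image UUID shown for the disk in the webadmin with what is
# really there on the storage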


Thanks,

Dafna


On 04/15/2014 01:16 PM, Maurice James wrote:

As far as the disk creation goes, I got that sorted out. It seems that my vdsm
versions were out of sync on my hosts. When I updated them the disk creation
began working, but I'm still having issues with live disk migrations.

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 5:31:48 AM
Subject: Re: [ovirt-users] Error creating Disks

Also, before the task fails, there are ERRORs in the log which are
reporting problems in the DC

2014-04-14 09:56:05,161 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-16) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:10,226 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-49) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:15,283 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-81) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:20,342 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-58) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:25,409 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-84) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:30,542 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-91) START,
GetHardwareInfoVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]), log id: 30f312c


than you stop the vm and fail the AddDisk.


is there a chance you were having problems with your storage?

Dafna


On 04/14/2014 04:04 PM, Maurice James wrote:

Snapshot creation seems to be failing as well

- Original Message -
From: Yair Zaslavsky yzasl...@redhat.com
To: Maurice James mja...@media-node.com, Federico Simoncelli 
fsimo...@redhat.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 11:00:55 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi Federico,
Can you please take a look?


- Original Message -

From: Maurice James mja...@media-node.com
To: Yair Zaslavsky yzasl...@redhat.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 5:44:44 PM
Subject: Re: [ovirt-users] Error creating Disks

Logs attached

- Original Message -
From: Yair Zaslavsky yzasl...@redhat.com
To: Maurice James mja...@media-node.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 10:33:03 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi,
IMHO not enough info is provided,
Can you please provide full engine.log and relevant vdsm.log?

Thanks,
Yair


- Original Message -

From: Maurice James mja...@media-node.com
To: users@ovirt.org
Sent: Monday, April 14, 2014 5:00:37 PM
Subject: [ovirt-users] Error creating Disks

oVirt Engine Version: 3.4.1-0.0.master.20140412010845.git43746c6.el6


While

Re: [ovirt-users] Error creating Disks

2014-04-15 Thread Dafna Ron

It seems that the issue is the links between the engine and vdsm.

Why don't you check where the disk is physically located and then try to
restart the vm?


Thanks,
Dafna



On 04/15/2014 02:57 PM, Maurice James wrote:

yes I did see a warning about restarting. It said:
Failed to create live snapshot '20140415 for VM 'TIEATS_Racktables'. VM restart 
is recommended

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 9:49:22 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi Maurice,

are you getting any errors/alert about restarting the vm in the event
log (in the webadmin).

I believe that you have a mismatch in the directory link between the db
and vdsm which was created when you tried to live migrate and failed the
fist time).

we can see in the vdsm log that createVolume (which is part of the live
migration) is failing with this error:

OSError: [Errno 2] No such file or directory:
'/rhev/data-center/a106ab81-9d5f-49c1-aeaf-832a137b708c/b7663d70-e658-41fa-b9f0-8da83c9eddce/images/348effd9-c5db-44fc-a9e5-67391647096b'

so it seems that the engine is pointing to a link that does not exist
and we need to find on what storage the disk actually exists on and
where the engine is pointing to.

you can look at the webadmin at the disk and see where it's suppose to
be and than look in /rhev and see where the disk link is in vdsm (if you
can see where the disk is actually is - physically in the storage, that
would be good too).

Thanks,

Dafna


On 04/15/2014 01:16 PM, Maurice James wrote:

As far as the disk creation , I got that sorted out. It seem that my vdsm 
versions were out of sync on my hosts. When I updated them the disk creation 
began working, but im still having issues with live disk migrations.

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 5:31:48 AM
Subject: Re: [ovirt-users] Error creating Disks

Also, before the task fails, there are ERRORs in the log which are
reporting problems in the DC

2014-04-14 09:56:05,161 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-16) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:10,226 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-49) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:15,283 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-81) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:20,342 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-58) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:25,409 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-84) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:30,542 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-91) START,
GetHardwareInfoVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]), log id: 30f312c


than you stop the vm and fail the AddDisk.


is there a chance you were having problems with your storage?

Dafna


On 04/14/2014 04:04 PM, Maurice James wrote:

Snapshot creation seems to be failing as well

- Original Message -
From: Yair Zaslavsky yzasl...@redhat.com
To: Maurice James mja...@media-node.com, Federico Simoncelli 
fsimo...@redhat.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 11:00:55 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi Federico,
Can you please take a look?


- Original Message -

From: Maurice James mja...@media-node.com
To: Yair Zaslavsky yzasl...@redhat.com
Cc: users@ovirt.org
Sent: Monday, April 14, 2014 5:44:44 PM
Subject: Re: [ovirt-users] Error creating Disks

Re: [ovirt-users] Error creating Disks

2014-04-15 Thread Dafna Ron

Did you restart the vm from inside the guest, or did you stop and start the vm?

How many snapshots does the vm have?

Thanks,
Dafna


On 04/15/2014 03:16 PM, Maurice James wrote:

I restarted the VM. Same issue. I was able to find the physical location on
disk. I attached two screenshots showing the location of the disks on the file
system.



- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 10:12:56 AM
Subject: Re: [ovirt-users] Error creating Disks

it seems that the issue is the links between the engine and the vdsm

why don't you check where the disk is physically located and than try to
restart the vm?

Thanks,
Dafna



On 04/15/2014 02:57 PM, Maurice James wrote:

yes I did see a warning about restarting. It said:
Failed to create live snapshot '20140415 for VM 'TIEATS_Racktables'. VM restart 
is recommended

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 9:49:22 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi Maurice,

are you getting any errors/alert about restarting the vm in the event
log (in the webadmin).

I believe that you have a mismatch in the directory link between the db
and vdsm which was created when you tried to live migrate and failed the
fist time).

we can see in the vdsm log that createVolume (which is part of the live
migration) is failing with this error:

OSError: [Errno 2] No such file or directory:
'/rhev/data-center/a106ab81-9d5f-49c1-aeaf-832a137b708c/b7663d70-e658-41fa-b9f0-8da83c9eddce/images/348effd9-c5db-44fc-a9e5-67391647096b'

so it seems that the engine is pointing to a link that does not exist
and we need to find on what storage the disk actually exists on and
where the engine is pointing to.

you can look at the webadmin at the disk and see where it's suppose to
be and than look in /rhev and see where the disk link is in vdsm (if you
can see where the disk is actually is - physically in the storage, that
would be good too).

Thanks,

Dafna


On 04/15/2014 01:16 PM, Maurice James wrote:

As far as the disk creation , I got that sorted out. It seem that my vdsm 
versions were out of sync on my hosts. When I updated them the disk creation 
began working, but im still having issues with live disk migrations.

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 5:31:48 AM
Subject: Re: [ovirt-users] Error creating Disks

Also, before the task fails, there are ERRORs in the log which are
reporting problems in the DC

2014-04-14 09:56:05,161 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-16) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:10,226 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-49) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:15,283 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-81) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:20,342 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-58) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:25,409 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-84) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:30,542 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-91) START,
GetHardwareInfoVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]), log id: 30f312c


than you stop the vm and fail the AddDisk.


is there a chance you were having problems with your storage?

Dafna


On 04/14/2014 04:04 PM, Maurice

Re: [ovirt-users] Error creating Disks

2014-04-15 Thread Dafna Ron

OK, so the vm was down and you deleted all the snapshots.
In the webadmin, when you select the disk you want to live migrate, is
it the same storage that is reported in vdsm?
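
One rough way to cross-check this from the host side (the UUID below is the
image UUID from the earlier error, taken only as an example; this assumes
file-based storage such as NFS):

# shows both the real path under the storage domain mount and the
# data-center link, if they exist
find /rhev/data-center -name '348effd9-c5db-44fc-a9e5-67391647096b' 2>/dev/null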


Thanks,

Dafna


On 04/15/2014 03:25 PM, Maurice James wrote:

I restarted from within the VM. The VM now has 0 snapshots. All of the snapshots
in the screenshots never completed, so I deleted them while the VM was powered
off, from the UI.

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 10:21:06 AM
Subject: Re: [ovirt-users] Error creating Disks

did you restart the vm from the vm internally or stop - start the vm?

how many snapshots does the vm have?

Thanks,
Dafna


On 04/15/2014 03:16 PM, Maurice James wrote:

I restarted the VM. Same issue. I was able to find the physical location on 
disk. I attached 2 screen shots showing the location of the disks on the file 
system.



- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 10:12:56 AM
Subject: Re: [ovirt-users] Error creating Disks

it seems that the issue is the links between the engine and the vdsm

why don't you check where the disk is physically located and than try to
restart the vm?

Thanks,
Dafna



On 04/15/2014 02:57 PM, Maurice James wrote:

yes I did see a warning about restarting. It said:
Failed to create live snapshot '20140415 for VM 'TIEATS_Racktables'. VM restart 
is recommended

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 9:49:22 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi Maurice,

are you getting any errors/alert about restarting the vm in the event
log (in the webadmin).

I believe that you have a mismatch in the directory link between the db
and vdsm which was created when you tried to live migrate and failed the
fist time).

we can see in the vdsm log that createVolume (which is part of the live
migration) is failing with this error:

OSError: [Errno 2] No such file or directory:
'/rhev/data-center/a106ab81-9d5f-49c1-aeaf-832a137b708c/b7663d70-e658-41fa-b9f0-8da83c9eddce/images/348effd9-c5db-44fc-a9e5-67391647096b'

so it seems that the engine is pointing to a link that does not exist
and we need to find on what storage the disk actually exists on and
where the engine is pointing to.

you can look at the webadmin at the disk and see where it's suppose to
be and than look in /rhev and see where the disk link is in vdsm (if you
can see where the disk is actually is - physically in the storage, that
would be good too).

Thanks,

Dafna


On 04/15/2014 01:16 PM, Maurice James wrote:

As far as the disk creation , I got that sorted out. It seem that my vdsm 
versions were out of sync on my hosts. When I updated them the disk creation 
began working, but im still having issues with live disk migrations.

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 5:31:48 AM
Subject: Re: [ovirt-users] Error creating Disks

Also, before the task fails, there are ERRORs in the log which are
reporting problems in the DC

2014-04-14 09:56:05,161 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-16) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:10,226 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-49) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:15,283 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-81) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:20,342 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-58) Command
GetCapabilitiesVDSCommand(HostName = vhost3, HostId =
bc9c25e6-714e-4eac-8af0-860ac76fd195, vds=Host[vhost3]) execution
failed. Exception: VDSNetworkException: java.net.SocketTimeoutException:
connect timed out
2014-04-14 09:56:25,409 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand

Re: [ovirt-users] Error creating Disks

2014-04-15 Thread Dafna Ron

Yeah, but is it the same error we saw before in vdsm? :)
Let's try to follow the action in both logs so we can debug it better; can you
do the following?


1. stop the vm and clean all snapshots.
2. add a marker line to the engine and vdsm logs using echo (e.g.
# echo 'marker' >> /var/log/ovirt-engine/engine.log and
# echo 'marker' >> /var/log/vdsm/vdsm.log; see the sketch after this list)

3. start the vm
4. try to live migrate the disk
5. attach the full engine and vdsm logs.
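
Putting steps 2-4 together, a rough sequence would look something like this
(the marker text is arbitrary; run the engine command on the engine machine
and the vdsm command on the host that runs the vm):

# on the engine machine
echo '==== LSM test start ====' >> /var/log/ovirt-engine/engine.log
# on the host running the vm
echo '==== LSM test start ====' >> /var/log/vdsm/vdsm.log
# now start the vm, trigger the live disk migration from the webadmin,
# and collect both logs from the marker onward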

Thanks,

Dafna


On 04/15/2014 03:41 PM, Maurice James wrote:

It failed again. It's reported in the same location.

Snip of engine.log

2014-04-15 10:41:17,174 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-34) Failed in SnapshotVDS method
2014-04-15 10:41:17,178 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-34) Command SnapshotVDSCommand(HostName = 
vhost3, HostId = bc9c25e6-714e-4eac-8af0-860ac76fd195, 
vmId=ba49605b-fb7e-4a70-a380-6286d3903e50) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
SnapshotVDS, error = Snapshot failed, code = 48
2014-04-15 10:41:17,531 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-34) Command 
org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand throw Vdc Bll 
exception. With error message VdcBLLException: Auto-generated live snapshot for 
VM ba49605b-fb7e-4a70-a380-6286d3903e50 failed (Failed with error imageErr and 
code 13)
2014-04-15 10:41:17,534 ERROR 
[org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-34) Reverting task unknown, handler: 
org.ovirt.engine.core.bll.lsm.LiveSnapshotTaskHandler




- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 10:37:41 AM
Subject: Re: [ovirt-users] Error creating Disks

ok.
so the vm was down and you deleted all the snapshots.
in the webadmin, when you select the disk you want to live migrate, is
it the same storage reported in vdsm?

Thanks,

Dafna


On 04/15/2014 03:25 PM, Maurice James wrote:

I restarted from the VM. The VM now has 0 snapshots. All of the snapshots in 
the screen shots never completed, so I deleted them while the VM was powerd off 
from the ui

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 10:21:06 AM
Subject: Re: [ovirt-users] Error creating Disks

did you restart the vm from the vm internally or stop - start the vm?

how many snapshots does the vm have?

Thanks,
Dafna


On 04/15/2014 03:16 PM, Maurice James wrote:

I restarted the VM. Same issue. I was able to find the physical location on 
disk. I attached 2 screen shots showing the location of the disks on the file 
system.



- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 10:12:56 AM
Subject: Re: [ovirt-users] Error creating Disks

it seems that the issue is the links between the engine and the vdsm

why don't you check where the disk is physically located and than try to
restart the vm?

Thanks,
Dafna



On 04/15/2014 02:57 PM, Maurice James wrote:

yes I did see a warning about restarting. It said:
Failed to create live snapshot '20140415 for VM 'TIEATS_Racktables'. VM restart 
is recommended

- Original Message -
From: Dafna Ron d...@redhat.com
To: Maurice James mja...@media-node.com
Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
Sent: Tuesday, April 15, 2014 9:49:22 AM
Subject: Re: [ovirt-users] Error creating Disks

Hi Maurice,

are you getting any errors/alert about restarting the vm in the event
log (in the webadmin).

I believe that you have a mismatch in the directory link between the db
and vdsm which was created when you tried to live migrate and failed the
fist time).

we can see in the vdsm log that createVolume (which is part of the live
migration) is failing with this error:

OSError: [Errno 2] No such file or directory:
'/rhev/data-center/a106ab81-9d5f-49c1-aeaf-832a137b708c/b7663d70-e658-41fa-b9f0-8da83c9eddce/images/348effd9-c5db-44fc-a9e5-67391647096b'

so it seems that the engine is pointing to a link that does not exist
and we need to find on what storage the disk actually exists on and
where the engine is pointing to.

you can look at the webadmin at the disk and see where it's suppose to
be and than look in /rhev and see where the disk link is in vdsm (if you
can see where the disk is actually is - physically in the storage, that
would be good too).

Thanks,

Dafna


On 04/15/2014 01:16 PM, Maurice James wrote:

As far as the disk creation , I got that sorted out. It seem that my vdsm 
versions were out of sync on my hosts. When I updated them

Re: [ovirt-users] compatibility relationship between datacenter, ovirt and cluster

2014-04-11 Thread Dafna Ron

You should look at the feature page.
If you do not fully upgrade the cluster/data center compatibility version, you
simply continue to work with 3.3 features.




On 04/10/2014 08:50 PM, Tamer Lima wrote:

Hi,


yesterday my ovirt was 3.3
my datacenter and cluster (compatibility version) was aligned with 
ovirt 3.3



today my ovirt is now 3.4.
and my datacenter and cluster (compatibility version) remains 3.3  
(with the option enabled to change for 3.4)


browsing the ovirt admin page I see 2 occurrences of ovirt version:
datacenter tab  = 3.3
cluster tab  =3.3



I would like to understand what all these versions mean, why the same
version appears for so many important things, and how my oVirt works/behaves
when different versions are used.


All my doubts together:
What does a datacenter at compatibility version 3.3 (or lower) mean when oVirt
is 3.4?

What does a cluster at compatibility version 3.3 mean when oVirt is 3.4?
What does changing the compatibility version for the datacenter mean?
What does changing the compatibility version for the cluster mean?



thanks






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Re-add a node

2014-04-11 Thread Dafna Ron

please add the host-deploy log from /var/log/ovirt-engine/host-deploy/

On 04/11/2014 12:11 AM, James James wrote:

Hi,
I don't know if the subject is explicit enough, but I have a problem and I
hope to find some help here.


I had two hosts in my cluster (node1 and node2). I had to reinstall
node1 due to some network problems. In the engine, node1 now appears
Not Responsive and I can't remove it from the engine UI.


Now node1 is back and I want to add it back to the cluster, but I can't.
I've got this error (vdsm.log):
BindingXMLRPC::ERROR::2014-04-11 
01:04:42,622::BindingXMLRPC::81::vds::(threaded_start) xml-rpc handler 
exception

Traceback (most recent call last):
  File /usr/share/vdsm/BindingXMLRPC.py, line 77, in threaded_start
self.server.handle_request()
  File /usr/lib64/python2.6/SocketServer.py, line 278, in handle_request
self._handle_request_noblock()
  File /usr/lib64/python2.6/SocketServer.py, line 288, in 
_handle_request_noblock

request, client_address = self.get_request()
  File /usr/lib64/python2.6/SocketServer.py, line 456, in get_request
return self.socket.accept()
  File 
/usr/lib64/python2.6/site-packages/vdsm/SecureXMLRPCServer.py, line 
136, in accept

raise SSL.SSLError(%s, client %s % (e, address[0]))
SSLError: sslv3 alert certificate unknown, client 192.168.1.100

192.168.1.100 is the engine's address

Can somebody help me  ?



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Re-add a node

2014-04-11 Thread Dafna Ron

can you put the host in maintenance?



On 04/11/2014 10:43 AM, James James wrote:
I can't delete the old node because it is in Non Responsive state. The
remove button is still blank.


In the engine.log I've got this log :
2014-04-11 11:40:45,911 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] 
(DefaultQuartzScheduler_Worker-88) Command 
GetCapabilitiesVDSCommand(HostName = node1, HostId = 
36fb6df3-c2c2-4133-86ac-fe50b99ee2e3, vds=Host[node1]) execution 
failed. Exception: VDSNetworkException: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to 
find valid certification path to requested target
2014-04-11 11:40:48,943 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] 
(DefaultQuartzScheduler_Worker-12) Command 
GetCapabilitiesVDSCommand(HostName = node1, HostId = 
36fb6df3-c2c2-4133-86ac-fe50b99ee2e3, vds=Host[node1]) execution 
failed. Exception: VDSNetworkException: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to 
find valid certification path to requested target






2014-04-11 11:06 GMT+02:00 Alon Bar-Lev alo...@redhat.com 
mailto:alo...@redhat.com:




- Original Message -
 From: James James jre...@gmail.com mailto:jre...@gmail.com
 To: d...@redhat.com mailto:d...@redhat.com
 Cc: users users@ovirt.org mailto:users@ovirt.org
 Sent: Friday, April 11, 2014 12:04:23 PM
 Subject: Re: [ovirt-users] Re-add a node

 The engine is the same but the node (node1) has been reinstalled ...

So you need to re-add it to the engine.
Delete the old node at engine side and use the node user interface
to add it to the engine.



 2014-04-11 10:48 GMT+02:00 James James  jre...@gmail.com
mailto:jre...@gmail.com  :



 the log contains information about the first node1 installation 

 http://pastebin.com/mZSb2wmD


 2014-04-11 10:12 GMT+02:00 Dafna Ron  d...@redhat.com
mailto:d...@redhat.com  :



 please add the host-deploy log from /var/log/ovirt-engine/host-
deploy/


 On 04/11/2014 12:11 AM, James James wrote:



 Hi,
 I don know i the subject is explicit enough but I have a problem
and I hope
 to find some help here.

 I had two hosts in my cluster (node1 and node2). I had to
reinstall node1 due
 to some networks problem. In the engine, node1 appears now Not
reponsive
 and I can't remove it from the engine ui.

 Now node1 is back and I want to add it in the cluster but I
can't. I've got
 this error (vdsm.log) :
 BindingXMLRPC::ERROR::2014-04- 11 01
tel:2014-04-%2011%2001:04:42,622::BindingXMLRPC::
 81::vds::(threaded_start) xml-rpc handler exception
 Traceback (most recent call last):
 File /usr/share/vdsm/ BindingXMLRPC.py, line 77, in threaded_start
 self.server.handle_request()
 File /usr/lib64/python2.6/ SocketServer.py, line 278, in
handle_request
 self._handle_request_noblock()
 File /usr/lib64/python2.6/ SocketServer.py, line 288, in
 _handle_request_noblock
 request, client_address = self.get_request()
 File /usr/lib64/python2.6/ SocketServer.py, line 456, in
get_request
 return self.socket.accept()
 File /usr/lib64/python2.6/site- packages/vdsm/
SecureXMLRPCServer.py, line
 136, in accept
 raise SSL.SSLError(%s, client %s % (e, address[0]))
 SSLError: sslv3 alert certificate unknown, client 192.168.1.100

 192.168.1.100 is the engine's address

 Can somebody help me ?



 __ _
 Users mailing list
 Users@ovirt.org mailto:Users@ovirt.org
 http://lists.ovirt.org/ mailman/listinfo/users


 --
 Dafna Ron



 ___
 Users mailing list
 Users@ovirt.org mailto:Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users






--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Re-add a node

2014-04-11 Thread Dafna Ron

'confirm host has been rebooted' should release the SPM :)



On 04/11/2014 01:23 PM, James James wrote:




2014-04-11 14:08 GMT+02:00 Dafna Ron d...@redhat.com 
mailto:d...@redhat.com:


James, Please answer the user's list as well as to me so that
other people can participate as well :)


Oups ... I will do that ..



did you try to press the confirm host has been rebooted  (right
click)


Yes, but same problem. node1 cannot be put into maintenance mode.

node1 is SPM. I will make node1 release the SPM resource.



On 04/11/2014 12:41 PM, James James wrote:




2014-04-11 13:16 GMT+02:00 Dafna Ron d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com:


what is the error message you get when you try to put the
host in
maintenance?

I have this message :


Error while executing action: Cannot switch Host to
Maintenance mode.
Host is Storage Pool Manager and is in Non Responsive state.
- If power management is configured, engine will try to fence
the host automatically.
- Otherwise, either bring the node back up, or release the SPM
resource.
To do so, verify that the node is really down by right
clicking on the host and confirm that the node was shutdown
manually.



are there any running vm's reported?


No there is no VM running on this host



Try to press the confirm host has been rebooted button
and than
see if you can put the host in maintenance.

If that fails, select the host, in the general tab you
will get
the re-install link.



I am running ovirt 3.4.0-1. I don't know where the re-install link is;
I can't see it.

try to re-install, when install fails the host should change
status to failed installation.




On 04/11/2014 12:09 PM, James James wrote:

No, I can't 


2014-04-11 12:12 GMT+02:00 Dafna Ron d...@redhat.com
mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com

mailto:d...@redhat.com mailto:d...@redhat.com:


can you put the host in maintenance?




On 04/11/2014 10:43 AM, James James wrote:

I can't delete the old node because it is in Non
Responsive
state. The remove button is stil blank .

In the engine.log I've got this log :
2014-04-11 11:40:45,911 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-88) Command
GetCapabilitiesVDSCommand(HostName = node1,
HostId =
36fb6df3-c2c2-4133-86ac-fe50b99ee2e3,
vds=Host[node1])
execution failed. Exception: VDSNetworkException:
 sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to
requested
target
2014-04-11 11:40:48,943 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-12) Command
GetCapabilitiesVDSCommand(HostName = node1,
HostId =
36fb6df3-c2c2-4133-86ac-fe50b99ee2e3,
vds=Host[node1])
execution failed. Exception: VDSNetworkException:
 sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to
requested
target





2014-04-11 11:06 GMT+02:00 Alon Bar-Lev
alo...@redhat.com mailto:alo...@redhat.com
mailto:alo...@redhat.com mailto:alo...@redhat.com
mailto:alo...@redhat.com
mailto:alo...@redhat.com mailto:alo...@redhat.com
mailto:alo...@redhat.com
mailto:alo...@redhat.com mailto:alo...@redhat.com
mailto:alo...@redhat.com mailto:alo...@redhat.com

mailto:alo...@redhat.com
mailto:alo...@redhat.com mailto:alo...@redhat.com
mailto:alo...@redhat.com:





- Original Message -
 From: James James jre...@gmail.com
mailto:jre...@gmail.com
mailto:jre...@gmail.com mailto:jre...@gmail.com
mailto:jre...@gmail.com
mailto:jre...@gmail.com mailto:jre...@gmail.com
mailto:jre...@gmail.com
mailto:jre...@gmail.com mailto:jre...@gmail.com
mailto:jre...@gmail.com mailto:jre

Re: [ovirt-users] Re-add a node

2014-04-11 Thread Dafna Ron
I think there is a chance it's related to issues we were seeing with the
engine cache.


James, would you mind testing that? Restarting the ovirt-engine process should
clear the cache.
Once you restart, log in to the webadmin and, if the host is still SPM and
non responsive, try 'confirm host has been rebooted' again.
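
For reference, restarting the engine service would look roughly like this
(assuming the standard ovirt-engine service; use whichever of the two forms
matches your init system):

# on the engine machine
service ovirt-engine restart
# or, on systemd-based machines
systemctl restart ovirt-engine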


If that still fails, can you please attach the engine log?

Thanks.
Dafna





On 04/11/2014 02:13 PM, Itamar Heim wrote:

On 04/11/2014 04:10 PM, James James wrote:

Nothing new.  node1 can't release SPM ... :(


allon/federico - thoughts? confirm host shutdown should release SPM 
for a non-responsive node





2014-04-11 14:49 GMT+02:00 Dafna Ron d...@redhat.com
mailto:d...@redhat.com:

confirm host has been rebooted should release the SPM :)



On 04/11/2014 01:23 PM, James James wrote:




2014-04-11 14:08 GMT+02:00 Dafna Ron d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com:


 James, Please answer the user's list as well as to me so 
that

 other people can participate as well :)


Oups ... I will do that ..



 did you try to press the confirm host has been rebooted
  (right
 click)


Yes but same problem. node1 cannot be in maintenance mode.

node1 is SPM . I will make node1 release the SPM ressource.



 On 04/11/2014 12:41 PM, James James wrote:




 2014-04-11 13:16 GMT+02:00 Dafna Ron d...@redhat.com
mailto:d...@redhat.com
 mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com

 mailto:d...@redhat.com mailto:d...@redhat.com:


 what is the error message you get when you try to
put the
 host in
 maintenance?

 I have this message :


 Error while executing action: Cannot switch Host to
 Maintenance mode.
 Host is Storage Pool Manager and is in Non Responsive
state.
 - If power management is configured, engine will try to
fence
 the host automatically.
 - Otherwise, either bring the node back up, or release
the SPM
 resource.
 To do so, verify that the node is really down by right
 clicking on the host and confirm that the node was 
shutdown

 manually.



 are there any running vm's reported?


 No there is no VM running on this host



 Try to press the confirm host has been rebooted
button
 and than
 see if you can put the host in maintenance.

 If that fails, select the host, in the general 
tab you

 will get
 the re-install link.



 I am running ovirt 3.4.0-1. I down't know where is the
 re-install link but I can't see it.

 try to re-install, when install fails the host
should change
 status to failed installation.




 On 04/11/2014 12:09 PM, James James wrote:

 No, I can't 


 2014-04-11 12:12 GMT+02:00 Dafna Ron
d...@redhat.com mailto:d...@redhat.com
 mailto:d...@redhat.com mailto:d...@redhat.com
 mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com
 mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com

 mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com:



 can you put the host in maintenance?




 On 04/11/2014 10:43 AM, James James wrote:

 I can't delete the old node because it
is in Non
 Responsive
 state. The remove button is stil 
blank .


 In the engine.log I've got this log :
 2014-04-11 11:40:45,911 ERROR

[org.ovirt.engine.core.__vdsbroker.vdsbroker.__GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler___Worker-88)
Command
GetCapabilitiesVDSCommand(__HostName =
node1,
 HostId =
36fb6df3-c2c2-4133-86ac-__fe50b99ee2e3,
 vds=Host[node1])
 execution failed. Exception:
VDSNetworkException:

sun.security.provider.__certpath.__SunCertPathBuilderException:
 unable to find valid certification 
path to

 requested
 target

Re: [ovirt-users] Re-add a node

2014-04-11 Thread Dafna Ron

can you attach the engine log and the vdsm log from the second host?

The SPM cannot be released because the master storage domain is not 
visible from the second host.


Dafna



On 04/11/2014 02:38 PM, James James wrote:

I tried to follow Dafna's advice.
I restarted my engine to clear the cache,

but now I am facing a new problem. node1 is the SPM and I have this error
message:

Manual fence did not revoke the selected SPM (node1) since the master storage
domain was not active or could not use another host for the fence operation.





2014-04-11 15:16 GMT+02:00 Dafna Ron d...@redhat.com 
mailto:d...@redhat.com:


I think there might be a chance it's related to issues we were
seeing with engine cache.

James, would you mind testing that? restart of ovirt-engine
process should clear the cache.
once you restart, log in to the webadmin and if the host is still
spm and none responsive, can you try confirm host reboot again?

If that still fails, can you please attach the engine log?

Thanks.
Dafna






On 04/11/2014 02:13 PM, Itamar Heim wrote:

On 04/11/2014 04:10 PM, James James wrote:

Nothing new.  node1 can't release SPM ... :(


allon/federico - thoughts? confirm host shutdown should
release SPM for a non-responsive node



2014-04-11 14:49 GMT+02:00 Dafna Ron d...@redhat.com
mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com:

confirm host has been rebooted should release the SPM :)



On 04/11/2014 01:23 PM, James James wrote:




2014-04-11 14:08 GMT+02:00 Dafna Ron
d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com:


 James, Please answer the user's list as well
as to me so that
 other people can participate as well :)


Oups ... I will do that ..



 did you try to press the confirm host has
been rebooted
  (right
 click)


Yes but same problem. node1 cannot be in
maintenance mode.

node1 is SPM . I will make node1 release the SPM
ressource.



 On 04/11/2014 12:41 PM, James James wrote:




 2014-04-11 13:16 GMT+02:00 Dafna Ron
d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
 mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com

 mailto:d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com:


 what is the error message you get
when you try to
put the
 host in
 maintenance?

 I have this message :


 Error while executing action: Cannot
switch Host to
 Maintenance mode.
 Host is Storage Pool Manager and is in
Non Responsive
state.
 - If power management is configured,
engine will try to
fence
 the host automatically.
 - Otherwise, either bring the node back
up, or release
the SPM
 resource.
 To do so, verify that the node is really
down by right
 clicking on the host and confirm that the
node was shutdown
 manually.



 are there any running vm's reported?


 No there is no VM running on this host



 Try to press the confirm host has
been rebooted
button
 and than
 see if you can put the host in
maintenance.

 If that fails, select the host, in
the general tab you
 will get
 the re-install link.



 I am running ovirt 3.4.0-1. I down't know
where is the
 re

Re: [ovirt-users] [Users] Resizing bootable disk erase OS

2014-04-10 Thread Dafna Ron

What type of storage are you using?
Did the vm have only 1 disk?
Was there anything you did before taking down the vm? (live migration,
live snapshots?)

Can you reproduce this, or did it happen on only 1 vm?
What storage are you using?

Please attach the engine, vdsm, libvirt and qemu logs from the event.

Thanks,

Dafna



On 04/10/2014 04:03 AM, Yusufi M R wrote:

I did extend the size of the disk from the oVirt Admin Portal after taking the
VM offline. This crashed my OS and it was unable to boot.

Regards,
Yusuf


From: Elad Ben Aharon ebena...@redhat.com
Sent: Wednesday, April 09, 2014 4:38 PM
To: d...@redhat.com; Yusufi M R
Cc: users@ovirt.org; Allon Mureinik; Aharon Canan; Gadi Ickowicz
Subject: Re: [Users] Resizing bootable disk erase OS

For extending the VM disk, as Dafna mentioned, you should use the ovirt-engine
webadmin (via 'edit' disk).
If you'd like to extend the filesystem on your OS disk inside the guest, there
is a nice guide for that. It's not an official guide AFAIK, but it works :)
http://myshell.co.uk/index.php/how-to-extend-a-root-lvm-partition-online/
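
As a very rough sketch of what such a guide does inside the guest (the device
and volume names here, /dev/vda3 and vg_root/lv_root, are placeholders, and
this assumes the root filesystem is ext4 on LVM; adjust to your own layout):

# inside the guest, after growing the virtual disk from the webadmin
fdisk /dev/vda                                # create a new partition (e.g. /dev/vda3, type 8e) in the new space
partprobe /dev/vda                            # re-read the partition table (or reboot)
pvcreate /dev/vda3                            # turn the new partition into a physical volume
vgextend vg_root /dev/vda3                    # add it to the root volume group
lvextend -l +100%FREE /dev/vg_root/lv_root    # grow the root logical volume
resize2fs /dev/vg_root/lv_root                # grow the ext4 filesystem online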

- Original Message -
From: Dafna Ron d...@redhat.com
To: users@ovirt.org, Allon Mureinik amure...@redhat.com, Aharon Canan aca...@redhat.com, Elad Ben 
Aharon ebena...@redhat.com, Gadi Ickowicz gicko...@redhat.com
Sent: Wednesday, April 9, 2014 1:29:12 PM
Subject: Re: [Users] Resizing bootable disk erase OS

the OS should not be erased if it's done from the webadmin.
Adding some people to this bug.

you are extending from the webadmin and not by manually sending lvresize
right?

Thanks,
Dafna


On 04/09/2014 11:21 AM, Yusufi M R wrote:

Hello Everyone,

I have Ovirt 3.3 setup and have VMs created using template. If I
resize (extend) the disk size, the OS is erased. How can I extend the
size of disk without affecting the OS ?

Regards,

Yusuf



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Dafna Ron



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration and snapshot problem

2014-04-10 Thread Dafna Ron
can you please attach engine, vdsm, libvirt, qemu and gluster logs from 
both the create snapshot and live migration actions.


Thanks,
Dafna

On 04/10/2014 05:59 PM, Demeter Tibor wrote:

Dear members,

We made a test plaform for testing ovirt 3.4 features. We got four amd 
x2 4400+ machine with 2 gigs of ram and build a gluster based cluster. 
I set up an amd-G2 based cluster.
I rebuild and installed the qemu-kvm-rhev package 
(http://ftp.redhat.com/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/qemu-kvm-rhev-0.12.1.2-2.415.el6_5.7.src.rpm) 
.
The data storage is a four brick distributed-replicated gluster 
storage. I got a virtual machine (centos 6.5) from the ovirt's 
openstack based template repository.


Everything was good, the vm can run on any host, but the live 
migration doesn't work. Also, the snapshot feature is lost from menu. 
I can make live snapshot, but it doesn't show on the panel.


So:

- I  lost the snapshots:)
- the live migration doesn't work.

Can anyone help me?

Thanks in advance.

Tibor


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Resizing bootable disk erase OS

2014-04-09 Thread Dafna Ron

the OS should not be erased if it's done from the webadmin.
Adding some people to this bug.

you are extending from the webadmin and not by manually sending lvresize 
right?


Thanks,
Dafna


On 04/09/2014 11:21 AM, Yusufi M R wrote:


Hello Everyone,

I have Ovirt 3.3 setup and have VMs created using template. If I 
resize (extend) the disk size, the OS is erased. How can I extend the 
size of disk without affecting the OS ?


Regards,

Yusuf



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Fail snapshot

2014-04-04 Thread Dafna Ron
/vdsm.log
Thread-4732::DEBUG::2014-04-04 
12:43:34,439::BindingXMLRPC::1067::vds::(wrapper) client [192.168.99.104]::call 
vmSnapshot with ('cb038ccf-6c6f-475c-872f-ea812ff795a1', [{'baseVolumeID': 
'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'domainID': 
'5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': 
'f5fc4fed-4acd-46e8-9980-90a9c3985840', 'imageID': 
'646df162-5c6d-44b1-bc47-b63c3fdab0e2'}], 
'5ae613a4-44e4-42cb-89fc-7b5d34c1f30f,0002-0002-0002-0002-0076,4fb31c32-8467-4d4a-b817-977643a462e3,ceb881f3-9a46-4ebc-b82e-c4c91035f807,2c06b4da-2743-4422-ba94-74da2c709188,02804da9-34f8-438f-9e8a-9689bc94790c')
 {}
Thread-4732::ERROR::2014-04-04 12:43:34,440::vm::3910::vm.Vm::(snapshot) 
vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: 
{'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 
'volumeID': 'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'imageID': 
'646df162-5c6d-44b1-bc47-b63c3fdab0e2'}
Thread-4732::DEBUG::2014-04-04 
12:43:34,440::BindingXMLRPC::1074::vds::(wrapper) return vmSnapshot with 
{'status': {'message': 'Snapshot failed', 'code': 48}}
Thread-299::DEBUG::2014-04-04 
12:43:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd 
iflag=direct 
if=/rhev/data-center/mnt/host01.ovirt.lan:_home_export/ff98d346-4515-4349-8437-fb2f5e9eaadf/dom_md/metadata
 bs=4096 count=1' (cwd None)


Thx;)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Snapshots

2014-03-05 Thread Dafna Ron

https://bugzilla.redhat.com/show_bug.cgi?id=1009100

On 03/05/2014 07:07 AM, Gadi Ickowicz wrote:

Hi,

Were you taking a live snapshot? That process is actually composed of 2 steps:
1) Taking the snapshot (creating a new volume that is part of the image)
2) Configuring the vm to use the new volume
A failure in step 2 would result in the new volume being created, but the VM 
still writing to the old volume, and that warning could be what you saw.

could you please attach the engine logs, and if possible, the vdsm logs for the 
SPM at the time you took the snapshot.

Thanks,
Gadi Ickowicz

- Original Message -
From: Maurice James midnightst...@msn.com
To: users@ovirt.org
Sent: Tuesday, March 4, 2014 10:45:39 PM
Subject: [Users] Snapshots

I attempted to create a snapshot and an alert came up saying that it failed, 
but when I look at the snapshots tab for that specific VM, it says that the 
status is OK. Which should I believe?

Ver 3.3.3-2.el6

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SPICE causes migration failure?

2014-03-04 Thread Dafna Ron

Thanks Ted,

Please send logs to the user's list since others may help if I am off line

Thanks,

Dafna


On 03/03/2014 11:48 PM, Ted Miller wrote:
Dafna, I will get the logs to you when I get a chance.  I have an 
intern to keep busy this week, and that gets higher priority than 
oVirt (unfortunately).  Ted Miller


On 3/3/2014 12:26 PM, Dafna Ron wrote:
I don't see a reason why open monitor will fail migration - at most, 
if there is a problem I would close the spice session on src and 
restarted it at the dst.
can you please attach vdsm/libvirt/qemu logs from both hosts and 
engine logs so that we can see the migration failure reason?


Thanks,
Dafna



On 03/03/2014 05:16 PM, Ted Miller wrote:
I just got my Data Center running again, and am proceeding with some 
setup  testing.


I created a VM (not doing anything useful)
I clicked on the Console and had a SPICE console up (viewed in Win7).
I had it printing the time on the screen once per second (while 
date;do sleep 1; done).

I tried to migrate the VM to another host and got in the GUI:

Migration started (VM: web1, Source: s1, Destination: s3, User: 
admin@internal).


Migration failed due to Error: Fatal error during migration (VM: 
web1, Source: s1, Destination: s3).


As I started the migration I happened to think I wonder how they 
handle the SPICE console, since I think that is a link from the host 
to my machine, letting me see the VM's screen.


After the failure, I tried shutting down the SPICE console, and 
found that the migration succeeded.  I again opened SPICE and had a 
migration fail.  Closed SPICE, migration failed.


I can understand how migrating SPICE is a problem, but, at least 
could we give the victim of this condition a meaningful error 
message?  I have seen a lot of questions about failed migrations 
(mostly due to attached CDs), but I have never seen this discussed. 
If I had not had that particular thought cross my brain at that 
particular time, I doubt that SPICE would have been where I went 
looking for a solution.


If this is the first time this issue has been raised, I am willing 
to file a bug.


Ted Miller
Elkhart, IN, USA

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users








--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SPICE causes migration failure?

2014-03-03 Thread Dafna Ron
I don't see a reason why open monitor will fail migration - at most, if 
there is a problem I would close the spice session on src and restarted 
it at the dst.
can you please attach vdsm/libvirt/qemu logs from both hosts and engine 
logs so that we can see the migration failure reason?


Thanks,
Dafna



On 03/03/2014 05:16 PM, Ted Miller wrote:
I just got my Data Center running again, and am proceeding with some 
setup  testing.


I created a VM (not doing anything useful)
I clicked on the Console and had a SPICE console up (viewed in Win7).
I had it printing the time on the screen once per second (while 
date;do sleep 1; done).

I tried to migrate the VM to another host and got in the GUI:

Migration started (VM: web1, Source: s1, Destination: s3, User: 
admin@internal).


Migration failed due to Error: Fatal error during migration (VM: web1, 
Source: s1, Destination: s3).


As I started the migration I happened to think I wonder how they 
handle the SPICE console, since I think that is a link from the host 
to my machine, letting me see the VM's screen.


After the failure, I tried shutting down the SPICE console, and found 
that the migration succeeded.  I again opened SPICE and had a 
migration fail.  Closed SPICE, migration failed.


I can understand how migrating SPICE is a problem, but, at least could 
we give the victim of this condition a meaningful error message?  I 
have seen a lot of questions about failed migrations (mostly due to 
attached CDs), but I have never seen this discussed. If I had not had 
that particular thought cross my brain at that particular time, I 
doubt that SPICE would have been where I went looking for a solution.


If this is the first time this issue has been raised, I am willing to 
file a bug.


Ted Miller
Elkhart, IN, USA

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] The purpose of Wipe on delete ?

2014-02-28 Thread Dafna Ron
1. you cannot use this option for nfs based storage since we zero the 
files anyway when we delete the disk (the only way to actually delete 
it in nfs).
2. configuration on the storage side is the administrator's decision... 
they can choose not to use this option and use a different method on 
the storage side.


Dafna


On 02/28/2014 08:11 AM, Sandro Bonazzola wrote:

Il 27/02/2014 22:16, Dafna Ron ha scritto:

wipe = writing zeros over the space allocated to that disk to make sure any data 
once written will be deleted permanently.

so it's a security vs. speed decision on using this option - since we are zeroing 
the disk to make sure any information once written will be overwritten,
a delete of a large disk can take a while.

I think this may be not really useful, zeroing files on modern file systems 
can't grant any kind of security improvement.
According to shred man page:

CAUTION: Note that shred relies on a very important assumption: that 
the file system overwrites data in place.  This is the traditional way to
do things, but many modern file system designs  do  not
satisfy this assumption.  The following are examples of file systems on 
which shred is not effective, or is not guaranteed to be effective in
all file system modes:

* log-structured or journaled file systems, such as those supplied with 
AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)

* file systems that write redundant data and carry on even if some 
writes fail, such as RAID-based file systems

* file systems that make snapshots, such as Network Appliance's NFS 
server

* file systems that cache in temporary locations, such as NFS version 3 
clients

* compressed file systems

In the case of ext3 file systems, the above disclaimer applies (and 
shred is thus of limited effectiveness) only in data=journal mode, which
journals file data in addition to just metadata.  In both
the data=ordered (default) and data=writeback modes, shred works as 
usual.  Ext3 journaling modes can be changed by adding the data=something
option to the mount options for a particular file system
in the /etc/fstab file, as documented in the mount man page (man mount).

In addition, file system backups and remote mirrors may contain copies 
of the file that cannot be removed, and that will allow a shredded file
to be recovered later.




Dafna




On 02/27/2014 04:14 PM, Richard Davis wrote:

Hi

What is the purpose of the Wipe on delete option for a VM disk ?
Why would you not want data wiped on delete if the alternative is to leave LV 
metadata and other data languishing on the SD ?


Thanks

Rich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users







--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] The purpose of Wipe on delete ?

2014-02-27 Thread Dafna Ron
wipe = writing zeros over the space allocated to that disk to make sure 
any data once written will be deleted permanently.


so it's a security vs. speed decision on using this option - since we 
are zeroing the disk to make sure any information once written will be 
overwritten, a delete of a large disk can take a while.
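To get a feel for the cost: on block storage the wipe is essentially equivalent to zeroing the whole LV, something like the dd below (illustration only - oVirt does this internally when the disk is removed, it is not a command you need to run yourself, and the device path is a made-up example):

    # writing zeros over an example 100GB logical volume; on spinning disks
    # this can easily take many minutes, which is the speed cost mentioned above
    dd if=/dev/zero of=/dev/vg_example/lv_to_wipe bs=1M oflag=direct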


Dafna




On 02/27/2014 04:14 PM, Richard Davis wrote:

Hi

What is the purpose of the Wipe on delete option for a VM disk ?
Why would you not want data wiped on delete if the alternative is to 
leave LV metadata and other data languishing on the SD ?



Thanks

Rich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] How do you move an host with local-storage into regular Data Center?

2014-02-26 Thread Dafna Ron

you did not remove the storage before moving the host.
If you select the force remove DC option it should clean all objects 
under that DC (just make sure you are selecting the one you want to 
remove ;))



On 02/26/2014 04:31 PM, Giorgio Bersano wrote:

Hi all,
I need your help to clean up my DataCenter.

I'm new to this wonderful product so I'm exploring to understand what's offered.
I'm talking about 3.4.0beta3 (Centos 6.5).

In the beginning it was a normal system with two hosts, an iSCSI
storage, and the engine installed as a regular KVM guest on another
host (external to the oVirt setup).
So far so good.

Then I selected one of the two hosts, put it in maintenance mode,
clicked on Configure Local Storage, accepted the defaults for Data
Center, Cluster and Storage, put in an appropriate path to local
storage...
Now I have another DC (hostname-Local), another Cluster
(hostname-Local) and another SD (hostname-Local). This host has been
migrated from the original Cluster and it is now in the
hostname-Local Cluster. Nothing to regret, I was expecting something
like that.

Obviously I have now a single host in my main DC and so I'm unable
to do migration of VMs and so on.

After some tests I decide to go back to the original situation.
I put again the host in maintenance mode, Edit, select the correct
DC and the host is now back in his place.

Now I try to remove the spurious DC to clean-up the situation but the
result is an error popup:
Error while executing action: Cannot remove Data Center. There is no
active Host in the Data Center.

OK, I move again the host in. But now when I select the DC the
Remove button is obviously greyed out.

Well, for the moment I'm happy to move again the host into the regular
DC to gain full functionality of my cluster but I would like to clean
my setup removing the other, useless, DC.
Does anyone know how to get out from this?
I'm probably missing something obvious but here I'm stuck.

TIA,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Disk Migration

2014-02-26 Thread Dafna Ron

I don't think that you can configure an interface for disk migration.
Disk migration is actually a copy of the information from the original disk to 
a new disk created on the new domain + a delete of the original disk once 
that is done.
it's not actually a migration and so I am not sure you can 
configure an interface for that.
adding Ofer - perhaps he has a solution, or it's possible and I am not 
aware of it.


Dafna


On 02/26/2014 05:24 PM, Maurice James wrote:
I have a specific interface set up for migrations. Why do disk 
migrations not use the interface that I have set for migrations? Is 
that by design? Shouldnt it use the interfaces that I have set aside 
for migrations? VM migrations work as they should but not disk migrations








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.5 planning

2014-02-25 Thread Dafna Ron

On 02/25/2014 10:23 AM, Sven Kieske wrote:

Can they be attached to multiple DCs at the same time?
I didn't try this out. Maybe it already works?
If it does, since which version?
So I can mount different/the same isos on different
DCs to multiple vms? That would be nice.


We were always able to share an ISO domain.
You can use the same iso's on different DC's, and multiple vm's can read 
from the same iso.






Am 25.02.2014 11:14, schrieb Itamar Heim:

ISO domains are shareable today (Across different engines as well?)



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] moving an unplugged/offline disk on a running vm creates snapshots

2014-02-21 Thread Dafna Ron

sounds like this issue, only libvirt is not blocking the command.

https://bugzilla.redhat.com/show_bug.cgi?id=957494

when you say unplugged disk, is it still attached to a vm? we are not 
creating snapshots to an unattached disk are we?


Thanks,

Dafna

On 02/21/2014 10:56 AM, Ernest Beinrohr wrote:
Imho, when the disk is offline, it should be movable directly without 
the need for snapshots.


this is on ovirt 3.3

btw: any chance of thin-preallocated procedure?

--
Ernest Beinrohr, AXON PRO
Ing http://www.beinrohr.sk/ing.php, RHCE 
http://www.beinrohr.sk/rhce.php, RHCVA 
http://www.beinrohr.sk/rhce.php, LPIC 
http://www.beinrohr.sk/lpic.php, VCA 
http://www.beinrohr.sk/vca.php, +421-2--6241-0360 
callto://+421-2--6241-0360, +421-903--482-603 
callto://+421-903--482-603

icq:28153343, gtalk: oer...@axonpro.sk, jabber:oer...@jabber.org

“For a successful technology, reality must take precedence over public 
relations, for Nature cannot be fooled.” Richard Feynman



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [BUG] wrong template shown for vm

2014-02-21 Thread Dafna Ron
in 3.3 this was the correct behavior if the vm is not linked to the 
template (so if it was a clone and not a thin copy it would show Blank 
for the template)


in 3.4 there is a new feature which should show the template name under the 
vm's General tab even if the vm was created as a clone.

so this should actually have been solved in 3.4

On 02/21/2014 04:28 PM, Sven Kieske wrote:

Hi,

can anyone reproduce this bug?

https://bugzilla.redhat.com/show_bug.cgi?id=1068679

when you create a vm from an imported template (from an export domain)
the template info for the vm shows that it is based on the blank template.

I can reproduce this on ovirt 3.3.3-2.el6

it would be cool if someone could try to reproduce this on 3.4 and
on fedora.

Thanks



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Host installation failed. SSH session closed during connection (ovirt-node-iso-3.0.3-1.1.fc19)

2014-02-20 Thread Dafna Ron
(CommandBase.java:1895)
[bll.jar:]
  at

org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:174)
[utils.jar:]
  at

org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:116)
[utils.jar:]
  at
org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1239)
[bll.jar:]
  at
org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:362)
[bll.jar:]
  at

org.ovirt.engine.core.bll.MultipleActionsRunner.executeValidatedCommand(MultipleActionsRunner.java:175)
[bll.jar:]
  at

org.ovirt.engine.core.bll.MultipleActionsRunner.RunCommands(MultipleActionsRunner.java:156)
[bll.jar:]
  at

org.ovirt.engine.core.bll.MultipleActionsRunner$1.run(MultipleActionsRunner.java:94)
[bll.jar:]
  at

org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:71)
[utils.jar:]
  at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
[rt.jar:1.7.0_45]
  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
[rt.jar:1.7.0_45]
  at

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[rt.jar:1.7.0_45]
  at

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[rt.jar:1.7.0_45]
  at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]

2014-02-19 06:29:53,332 INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(pool-6-thread-39) [565733] START, SetVdsStatusVDSCommand(HostName
= oVirtNodeBay3, HostId = dcc03f35-6603-4854-88e8-ca9674274032,
status=InstallFailed, nonOperationalReason=NONE), log id: bf2eaf
2014-02-19 06:29:53,337 INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(pool-6-thread-39) [565733] FINISH, SetVdsStatusVDSCommand, log
id: bf2eaf
2014-02-19 06:29:53,352 INFO
 [org.ovirt.engine.core.vdsbroker.VdsManager]
(DefaultQuartzScheduler_Worker-80) Initializing Host: oVirtNodeBay3
2014-02-19 06:29:53,354 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-6-thread-39) [565733] Correlation ID: 565733, Job ID:
167f2bd7-12ca-4890-a1de-ec9faff786c4, Call Stack: null, Custom
Event ID: -1, Message: Host oVirtNodeBay3 installation failed.
Unable to parse host key.



Regards,
Udaya Kiran


On Wednesday, 19 February 2014 1:55 PM, Nir Soffer
nsof...@redhat.com wrote:
- Original Message -
 From: Udaya Kiran P ukiran...@yahoo.in
mailto:ukiran...@yahoo.in
 To: Nir Soffer nsof...@redhat.com mailto:nsof...@redhat.com
 Cc: Meital Bourvine mbour...@redhat.com
mailto:mbour...@redhat.com, users users@ovirt.org
mailto:users@ovirt.org
 Sent: Wednesday, February 19, 2014 9:58:12 AM
 Subject: Re: [Users] Host installation failed. SSH session
closed duringconnection (ovirt-node-iso-3.0.3-1.1.fc19)

 I have tried approving the host after executing setenforce 0 on
the host.

 This time the error in the Management Events says - Installation
Failed.
 Unable to Parse the Host key.

You should fix your selinux issues, disabling selinux is not
recommended.

To understand installation error, attach the installation log -
you should
see the path to the log in the engine events log.


Nir







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Storage Performance Issue !

2014-02-19 Thread Dafna Ron

Actually, I meant: how is the network configured on the rhevm side?
are all vm's writing over the rhevm interface?
Itamar, should local storage be used this way in rhevm?

On 02/19/2014 03:56 AM, Vishvendra Singh Chauhan wrote:

Hi,

My network is 1 Gigabit, and I am using each node in a separate 
cluster/DC.


Memory and processor usage are very low. When I run 30 vms, processor 
usage is around 45% and memory usage is around 35%.



On Tue, Feb 18, 2014 at 2:14 PM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


what is the network configuration? are you using each host with
it's own cluster/DC as a local storage host?
how many hosts are we talking about? what is the memory/cpu
consumption on the hosts?



On 02/18/2014 01:18 AM, Vishvendra Singh Chauhan wrote:

Thanks Dafna,

I am Using Dell server Just as node, My manager is running on
other machine. On each node we have 40 vm's running. and all
most vm's has RHEL6 os. Now the problem about performance of
vm's, when client is writing the data in vm, they are
reporting, that they have  very slow speed to write the data.

I also feel slow performance at writing speed in the vm's, So
now please suggest, is any way to improve the writing speed in
virtual machine's ?.








On Mon, Feb 17, 2014 at 2:40 PM, Dafna Ron d...@redhat.com wrote:

please give more information.
what do you mean by storage performance?
how many vm's are you running?
are you using the Dell as just a host or is engine also
installed
on it?


On 02/17/2014 03:26 AM, Vishvendra Singh Chauhan wrote:

Hello Group,

Please help me out in storage issue in ovirt.


I am using Dell PowerEdge XD 720 servers, as node in
Ovirt.
Every node has 24TB storage space, so i am using all this
space as the local storage in that.. But still i am
facing the
problem in storgae performance. My guests os are very
slow to
write the data.


So please give me, some tips using them i can increase the
performance in storage.



-- /*Thanks and Regards.*/
/*Vishvendra Singh Chauhan*/


___
Users mailing list
Users@ovirt.org

http://lists.ovirt.org/mailman/listinfo/users



-- Dafna Ron




-- 
/*Thanks and Regards.*/

/*Vishvendra Singh Chauhan*/
/*(*//*RHC{SA,E,SS,VA}CC{NA,NP})*/
/*+91-9711460593
*/

http://linux-links.blogspot.in/
God First Work Hard Success is Sure...



-- 
Dafna Ron





--
/*Thanks and Regards.*/
/*Vishvendra Singh Chauhan*/
/*(*//*RHC{SA,E,SS,VA}CC{NA,NP})*/
/*+91-9711460593
*/
http://linux-links.blogspot.in/
God First Work Hard Success is Sure...



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Unable to Remove Template, if VM exists.

2014-02-19 Thread Dafna Ron
As long as the vm is linked to the template (which means you created the 
vm as a thin provision copy) you cannot delete the template.

what you can do to work around this is:
1. export the vm and template to an export domain
2. delete the vm and template from the source setup only
3. try to import the vm as clone

other way is:
1. create a template from the vm
2. create a new vm from the template (make sure to create a clone and 
not a thin copy this time :))
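If you prefer to script the first workaround instead of clicking through the webadmin, a rough curl sketch against the 3.x REST API would look like the following (the endpoints and especially the clone flag on import are assumptions - verify them against /api on your engine before relying on this; all names and UUIDs are placeholders):

    # 1. export the vm to the export domain
    curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
      -d '<action><storage_domain><name>export1</name></storage_domain></action>' \
      https://engine.example.com/api/vms/VM_UUID/export

    # 3. import it back from the export domain as a clone
    curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
      -d '<action><cluster><name>Default</name></cluster><storage_domain><name>data1</name></storage_domain><clone>true</clone></action>' \
      https://engine.example.com/api/storagedomains/EXPORT_SD_UUID/vms/VM_UUID/import

Step 2 (removing the original vm and template) is left to the webadmin on purpose, since deletes are easy to get wrong in a script.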


Dafna


On 02/19/2014 06:22 AM, Tejesh M wrote:

Hi All,

I'm not able to Delete Template, if the VM is exists which is deployed 
from that Template. How can i delete Template leaving the VM as is?


Thanks  Regards,
Tejesh


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Live migration of VM's occasionally fails

2014-02-19 Thread Dafna Ron

Thanks Steve!
I am still perplexed as to how one out of two identical vm's would fail 
migration... logic says that they should both fail.


On 02/19/2014 06:29 PM, Steve Dainard wrote:
I added another vlan on both hosts, and designated it a migration 
network. Still the same issue, failed to migrate one of the two VM's.


I then deleted a failed posix domain on another gluster volume with 
some heal tasks pending, with no hosts attached to it, and the VM's 
migrated successfully. Perhaps gluster isn't passing storage errors up 
properly for non-dependent volumes. Anyways this is solved for now, 
just wanted this here for posterity.


*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn 
https://www.linkedin.com/company/miovision-technologies  | Twitter 
https://twitter.com/miovision  | Facebook 
https://www.facebook.com/miovision*


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, 
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or 
confidential. If you are not the intended recipient, please delete the 
e-mail and any attachments and notify us immediately.



On Tue, Feb 18, 2014 at 5:00 PM, Steve Dainard sdain...@miovision.com 
mailto:sdain...@miovision.com wrote:


sanlock.log on the second host (ovirt002) doesn't have any entries
anywhere near that time of failure.

I see some heal-failed errors in gluster, but seeing as the
storage is exposed via NFS I'm surprised to think this might be
the issue. I'm working on fixing those files now, I'll update if I
make any progress.



*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog  | **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |
Twitter https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient, please delete
the e-mail and any attachments and notify us immediately.


On Mon, Feb 17, 2014 at 5:32 PM, Dafna Ron d...@redhat.com
mailto:d...@redhat.com wrote:

really interesting case :)  maybe gluster related?
Elad, can you please try to reproduce this?
gluster storage - at least two vm's server type created from
template as thin provision (it's clone copy).
after create run and migrate all vm's from one host to the
second host.
I think it would be a locking issue.

Steve, can you please also check the sanlock log in the second
host + look if there are any errors in the gluster logs (on
both hosts)?

Thanks,

Dafna



On 02/17/2014 06:52 PM, Steve Dainard wrote:

VM's are identical, same template, same cpu/mem/nic.
Server type, thin provisioned on NFS (backend is glusterfs
3.4).

Does monitor = spice console? I don't believe either of
them had a spice connection.

I don't see anything in the ovirt001 sanlock.log:

2014-02-14 11:16:05-0500 255246 [5111]: cmd_inq_lockspace
4,14

a52938f7-2cf4-4771-acb2-0c78d14999e5:1:/rhev/data-center/mnt/gluster-store-vip:_rep1/a52938f7-2cf4-4771-acb2-0c78d14999e5/dom_md/ids:0
flags 0
2014-02-14 11:16:05-0500 255246 [5111]: cmd_inq_lockspace
4,14 done 0
2014-02-14 11:16:15-0500 255256 [5110]: cmd_inq_lockspace
4,14

a52938f7-2cf4-4771-acb2-0c78d14999e5:1:/rhev/data-center/mnt/gluster-store-vip:_rep1/a52938f7-2cf4-4771-acb2-0c78d14999e5/dom_md/ids:0
flags 0
2014-02-14 11:16:15-0500 255256 [5110]: cmd_inq_lockspace
4,14 done 0
2014-02-14 11:16:25-0500 255266 [5111]: cmd_inq_lockspace
4,14

a52938f7-2cf4-4771-acb2-0c78d14999e5:1:/rhev/data-center/mnt/gluster-store-vip:_rep1/a52938f7-2cf4-4771-acb2-0c78d14999e5/dom_md/ids:0
flags 0
2014-02-14 11:16:25-0500 255266 [5111]: cmd_inq_lockspace
4,14 done 0
2014-02-14 11:16:36-0500 255276 [5110]: cmd_inq_lockspace
4,14

a52938f7-2cf4-4771-acb2-0c78d14999e5:1:/rhev/data-center/mnt/gluster-store-vip:_rep1/a52938f7-2cf4-4771-acb2-0c78d14999e5/dom_md/ids:0
flags 0
2014-02-14 11:16:36-0500 255276 [5110]: cmd_inq_lockspace
4,14 done 0
2014-02-14 11:16:46-0500 255286 [5111]: cmd_inq_lockspace
4,14

a52938f7-2cf4-4771-acb2

Re: [Users] Storage Performance Issue !

2014-02-18 Thread Dafna Ron
what is the network configuration? are you using each host with its own 
cluster/DC as a local storage host?
how many hosts are we talking about? what is the memory/cpu consumption 
on the hosts?



On 02/18/2014 01:18 AM, Vishvendra Singh Chauhan wrote:

Thanks Dafna,

I am using the Dell server just as a node; my manager is running on another 
machine. On each node we have 40 vm's running, and almost all of the vm's have 
a RHEL6 os. Now the problem is the performance of the vm's: when a client is 
writing data in a vm, they report very slow write speed.


I also see slow write performance in the vm's, so please suggest: is there 
any way to improve the write speed in the virtual machines?









On Mon, Feb 17, 2014 at 2:40 PM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


please give more information.
what do you mean by storage performance?
how many vm's are you running?
are you using the Dell as just a host or is engine also installed
on it?


On 02/17/2014 03:26 AM, Vishvendra Singh Chauhan wrote:

Hello Group,

Please help me out in storage issue in ovirt.


I am using Dell PowerEdge XD 720 servers, as node in Ovirt.
Every node has 24TB storage space, so i am using all this
space as the local storage in that.. But still i am facing the
problem in storgae performance. My guests os are very slow to
write the data.


So please give me, some tips using them i can increase the
performance in storage.



-- 
/*Thanks and Regards.*/

/*Vishvendra Singh Chauhan*/


___
Users mailing list
Users@ovirt.org mailto:Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



-- 
Dafna Ron





--
/*Thanks and Regards.*/
/*Vishvendra Singh Chauhan*/
/*(*//*RHC{SA,E,SS,VA}CC{NA,NP})*/
/*+91-9711460593
*/
http://linux-links.blogspot.in/
God First Work Hard Success is Sure...



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Storage Performance Issue !

2014-02-17 Thread Dafna Ron

please give more information.
what do you mean by storage performance?
how many vm's are you running?
are you using the Dell as just a host or is engine also installed on it?
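To put a number on the slow writes (and to separate guest-side from storage-side problems), a quick dd inside one of the guests is usually enough of a first measurement - a sketch only, the test file path is just an example, and oflag=direct bypasses the guest page cache so the result reflects the virtual disk rather than guest RAM:

    # rough sequential write throughput test inside the guest
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
    rm -f /tmp/ddtest

Running the same test on the host, directly against the local storage mount point, tells you whether the slowness is below or above the virtualization layer.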

On 02/17/2014 03:26 AM, Vishvendra Singh Chauhan wrote:

Hello Group,

Please help me out with a storage issue in ovirt.


I am using Dell PowerEdge XD 720 servers as nodes in Ovirt. Every node 
has 24TB of storage space, so I am using all of this space as local 
storage. But I am still facing a problem with storage performance: my 
guest OSes are very slow to write data.




So please give me some tips with which I can increase the storage 
performance.



--
/*Thanks and Regards.*/
/*Vishvendra Singh Chauhan*/


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Live migration of VM's occasionally fails

2014-02-17 Thread Dafna Ron
did you install these vm's from a cd? run them as run-once with a special 
monitor?
try to think if there is anything different in the configuration of 
these vm's compared to the other vm's that succeed in migrating.


On 02/17/2014 04:36 PM, Steve Dainard wrote:

Hi Dafna,

No snapshots of either of those VM's have been taken, and there are no 
updates for any of those packages on EL 6.5.


*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn 
https://www.linkedin.com/company/miovision-technologies  | Twitter 
https://twitter.com/miovision  | Facebook 
https://www.facebook.com/miovision*


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, 
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or 
confidential. If you are not the intended recipient, please delete the 
e-mail and any attachments and notify us immediately.



On Sun, Feb 16, 2014 at 7:05 AM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


does the vm that fails migration have a live snapshot?
if so how many snapshots does the vm have.
I think that there are newer packages of vdsm, libvirt and qemu -
can you try to update



On 02/16/2014 12:33 AM, Steve Dainard wrote:

Versions are the same:

[root@ovirt001 ~]# rpm -qa | egrep 'libvirt|vdsm|qemu' | sort
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
libvirt-0.10.2-29.el6_5.3.x86_64
libvirt-client-0.10.2-29.el6_5.3.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.3.x86_64
libvirt-python-0.10.2-29.el6_5.3.x86_64
qemu-img-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64
vdsm-4.13.3-3.el6.x86_64
vdsm-cli-4.13.3-3.el6.noarch
vdsm-gluster-4.13.3-3.el6.noarch
vdsm-python-4.13.3-3.el6.x86_64
vdsm-xmlrpc-4.13.3-3.el6.noarch

[root@ovirt002 ~]# rpm -qa | egrep 'libvirt|vdsm|qemu' | sort
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
libvirt-0.10.2-29.el6_5.3.x86_64
libvirt-client-0.10.2-29.el6_5.3.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.3.x86_64
libvirt-python-0.10.2-29.el6_5.3.x86_64
qemu-img-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64
vdsm-4.13.3-3.el6.x86_64
vdsm-cli-4.13.3-3.el6.noarch
vdsm-gluster-4.13.3-3.el6.noarch
vdsm-python-4.13.3-3.el6.x86_64
vdsm-xmlrpc-4.13.3-3.el6.noarch

Logs attached, thanks.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |
Twitter https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient, please
delete the e-mail and any attachments and notify us immediately.


On Sat, Feb 15, 2014 at 6:24 AM, Dafna Ron d...@redhat.com wrote:

the migration fails in libvirt:


Thread-153709::ERROR::2014-02-14
11:17:40,420::vm::337::vm.Vm::(run)
vmId=`08434c90-ffa3-4b63-aa8e-5613f7b0e0cd`::Failed to migrate
Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 323, in run
self._startUnderlyingMigration()
  File /usr/share/vdsm/vm.py, line 403, in
_startUnderlyingMigration
None, maxBandwidth)
  File /usr/share/vdsm/vm.py, line 841, in f
ret = attr(*args, **kwargs)
  File
   
/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py,

line 76, in wrapper
ret = f(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/libvirt.py,
line 1178,
in migrateToURI2
if ret == -1: raise libvirtError
('virDomainMigrateToURI2()
failed', dom=self)
libvirtError: Unable to read from monitor: Connection
reset by peer
Thread-54041::DEBUG::2014-02-14
11:17:41,752::task::579::TaskManager.Task::(_updateState)
Task=`094c412a-43dc-4c29-a601-d759486469a8`::moving from state
init - state preparing
Thread-54041::INFO::2014-02

Re: [Users] Live migration of VM's occasionally fails

2014-02-17 Thread Dafna Ron

mmm... that is very interesting...
both vm's are identical? are they server or desktop type? created as a 
thin copy or a clone? what storage type are you using? did you happen to 
have an open monitor on the vm that failed migration?
I wonder if it could be a sanlock lock on the source template, but I can only 
see this bug happening if the vm's are linked to the template.

can you look at the sanlock log and see if there are any warnings or errors?
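A quick way to do that on each host (a sketch - these are the default log locations on EL6, adjust if your logs live elsewhere):

    # last warnings/errors from sanlock
    grep -iE 'error|warn' /var/log/sanlock.log | tail -n 50
    # error-severity lines from the gluster client/brick logs
    grep ' E ' /var/log/glusterfs/*.log | tail -n 50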

All logs are in debug so I don't think we can get anything more from it 
but I am adding Meital and Omer to this mail to help debug this - 
perhaps they can think of something that can cause that from the trace.


This case is really interesting... sorry, probably not what you want to 
hear...  thanks for helping with this :)


Dafna


On 02/17/2014 05:08 PM, Steve Dainard wrote:
Failed live migration is wider spread than these two VM's, but they 
are a good example because they were both built from the same template 
and have no modifications after they were created. They were also 
migrated one after the other, with one successfully migrating and the 
other not.


Are there any increased logging levels that might help determine what 
the issue is?


Thanks,

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn 
https://www.linkedin.com/company/miovision-technologies  | Twitter 
https://twitter.com/miovision  | Facebook 
https://www.facebook.com/miovision*


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, 
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or 
confidential. If you are not the intended recipient, please delete the 
e-mail and any attachments and notify us immediately.



On Mon, Feb 17, 2014 at 11:47 AM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


did you install these vm's from a cd? run it as run-once with a
special monitor?
try to think if there is anything different in the configuration
of these vm's from the other vm's that succeed to migrate?


On 02/17/2014 04:36 PM, Steve Dainard wrote:

Hi Dafna,

No snapshots of either of those VM's have been taken, and
there are no updates for any of those packages on EL 6.5.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |
Twitter https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient, please
delete the e-mail and any attachments and notify us immediately.


On Sun, Feb 16, 2014 at 7:05 AM, Dafna Ron d...@redhat.com wrote:

does the vm that fails migration have a live snapshot?
if so how many snapshots does the vm have.
I think that there are newer packages of vdsm, libvirt and
qemu -
can you try to update



On 02/16/2014 12:33 AM, Steve Dainard wrote:

Versions are the same:

[root@ovirt001 ~]# rpm -qa | egrep 'libvirt|vdsm|qemu'
| sort
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
libvirt-0.10.2-29.el6_5.3.x86_64
libvirt-client-0.10.2-29.el6_5.3.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.3.x86_64
libvirt-python-0.10.2-29.el6_5.3.x86_64
qemu-img-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64
vdsm-4.13.3-3.el6.x86_64
vdsm-cli-4.13.3-3.el6.noarch
vdsm-gluster-4.13.3-3.el6.noarch
vdsm-python-4.13.3-3.el6.x86_64
vdsm-xmlrpc-4.13.3-3.el6.noarch

[root@ovirt002 ~]# rpm -qa | egrep 'libvirt|vdsm|qemu'
| sort
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
libvirt-0.10.2-29.el6_5.3.x86_64
libvirt-client-0.10.2-29.el6_5.3.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.3.x86_64
libvirt-python-0.10.2-29.el6_5.3.x86_64
qemu-img-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64
vdsm-4.13.3-3.el6.x86_64

Re: [Users] Live migration of VM's occasionally fails

2014-02-17 Thread Dafna Ron
/gluster-store-vip:_rep1/a52938f7-2cf4-4771-acb2-0c78d14999e5/dom_md/ids:0 
flags 0


ovirt002 sanlock.log has on entries during that time frame.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn 
https://www.linkedin.com/company/miovision-technologies  | Twitter 
https://twitter.com/miovision  | Facebook 
https://www.facebook.com/miovision*


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, 
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or 
confidential. If you are not the intended recipient, please delete the 
e-mail and any attachments and notify us immediately.



On Mon, Feb 17, 2014 at 12:59 PM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


mmm... that is very interesting...
both vm's are identical? are they server or desktops type? created
as thin copy or clone? what storage type are you using? did you
happen to have an open monitor on the vm that failed migration?
I wonder if it can be sanlock lock on the source template but I
can only see this bug happening if the vm's are linked to the template
can you look at the sanlock log and see if there are any warning
or errors?

All logs are in debug so I don't think we can get anything more
from it but I am adding Meital and Omer to this mail to help debug
this - perhaps they can think of something that can cause that
from the trace.

This case is really interesting... sorry, probably not what you
want to hear...  thanks for helping with this :)

Dafna



On 02/17/2014 05:08 PM, Steve Dainard wrote:

Failed live migration is wider spread than these two VM's, but
they are a good example because they were both built from the
same template and have no modifications after they were
created. They were also migrated one after the other, with one
successfully migrating and the other not.

Are there any increased logging levels that might help
determine what the issue is?

Thanks,

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |
Twitter https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient, please
delete the e-mail and any attachments and notify us immediately.


On Mon, Feb 17, 2014 at 11:47 AM, Dafna Ron d...@redhat.com wrote:

did you install these vm's from a cd? run it as run-once
with a
special monitor?
try to think if there is anything different in the
configuration
of these vm's from the other vm's that succeed to migrate?


On 02/17/2014 04:36 PM, Steve Dainard wrote:

Hi Dafna,

No snapshots of either of those VM's have been taken, and
there are no updates for any of those packages on EL 6.5.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn
   
https://www.linkedin.com/company/miovision-technologies  |

Twitter https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision*
   


Miovision Technologies Inc. | 148 Manitou Drive, Suite
101,
Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient,
please
delete the e-mail and any attachments and notify us
immediately.


On Sun, Feb 16, 2014 at 7:05 AM, Dafna Ron d...@redhat.com wrote:

does the vm that fails migration have a live snapshot?
if so how many snapshots does the vm have.
I think that there are newer packages of vdsm

Re: [Users] Live migration of VM's occasionally fails

2014-02-16 Thread Dafna Ron

does the vm that fails migration have a live snapshot?
if so, how many snapshots does the vm have?
I think that there are newer packages of vdsm, libvirt and qemu - can 
you try to update?
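On EL6 that update would be something along these lines (a sketch only - check which repos the qemu-kvm-rhev build comes from in your setup before updating):

    yum update 'vdsm*' 'libvirt*' 'qemu-kvm-rhev*'
    # then restart vdsmd on the host so the new bits are actually used
    service vdsmd restart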



On 02/16/2014 12:33 AM, Steve Dainard wrote:

Versions are the same:

[root@ovirt001 ~]# rpm -qa | egrep 'libvirt|vdsm|qemu' | sort
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
libvirt-0.10.2-29.el6_5.3.x86_64
libvirt-client-0.10.2-29.el6_5.3.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.3.x86_64
libvirt-python-0.10.2-29.el6_5.3.x86_64
qemu-img-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64
vdsm-4.13.3-3.el6.x86_64
vdsm-cli-4.13.3-3.el6.noarch
vdsm-gluster-4.13.3-3.el6.noarch
vdsm-python-4.13.3-3.el6.x86_64
vdsm-xmlrpc-4.13.3-3.el6.noarch

[root@ovirt002 ~]# rpm -qa | egrep 'libvirt|vdsm|qemu' | sort
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
libvirt-0.10.2-29.el6_5.3.x86_64
libvirt-client-0.10.2-29.el6_5.3.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.3.x86_64
libvirt-python-0.10.2-29.el6_5.3.x86_64
qemu-img-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64
vdsm-4.13.3-3.el6.x86_64
vdsm-cli-4.13.3-3.el6.noarch
vdsm-gluster-4.13.3-3.el6.noarch
vdsm-python-4.13.3-3.el6.x86_64
vdsm-xmlrpc-4.13.3-3.el6.noarch

Logs attached, thanks.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn 
https://www.linkedin.com/company/miovision-technologies  | Twitter 
https://twitter.com/miovision  | Facebook 
https://www.facebook.com/miovision*


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, 
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or 
confidential. If you are not the intended recipient, please delete the 
e-mail and any attachments and notify us immediately.



On Sat, Feb 15, 2014 at 6:24 AM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


the migration fails in libvirt:


Thread-153709::ERROR::2014-02-14
11:17:40,420::vm::337::vm.Vm::(run)
vmId=`08434c90-ffa3-4b63-aa8e-5613f7b0e0cd`::Failed to migrate
Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 323, in run
self._startUnderlyingMigration()
  File /usr/share/vdsm/vm.py, line 403, in _startUnderlyingMigration
None, maxBandwidth)
  File /usr/share/vdsm/vm.py, line 841, in f
ret = attr(*args, **kwargs)
  File
/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py,
line 76, in wrapper
ret = f(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/libvirt.py, line 1178,
in migrateToURI2
if ret == -1: raise libvirtError ('virDomainMigrateToURI2()
failed', dom=self)
libvirtError: Unable to read from monitor: Connection reset by peer
Thread-54041::DEBUG::2014-02-14
11:17:41,752::task::579::TaskManager.Task::(_updateState)
Task=`094c412a-43dc-4c29-a601-d759486469a8`::moving from state
init - state preparing
Thread-54041::INFO::2014-02-14
11:17:41,753::logUtils::44::dispatcher::(wrapper) Run and protect:
getVolumeSize(sdUUID='a52938f7-2cf4-4771-acb2-0c78d14999e5',
spUUID='fcb89071-6cdb-4972-94d1-c9324cebf814',
imgUUID='97c9108f-a506-415f-ad2
c-370d707cb130', volUUID='61f82f7f-18e4-4ea8-9db3-71ddd9d4e836',
options=None)

Do you have the same libvirt/vdsm/qemu on both your hosts?
Please attach the libvirt and vm logs from both hosts.

Thanks,
Dafna



On 02/14/2014 04:50 PM, Steve Dainard wrote:

Quick overview:
Ovirt 3.3.2 running on CentOS 6.5
Two hosts: ovirt001, ovirt002
Migrating two VM's: puppet-agent1, puppet-agent2 from ovirt002
to ovirt001.

The first VM puppet-agent1 migrates successfully. The second
VM puppet-agent2 fails with Migration failed due to Error:
Fatal error during migration (VM: puppet-agent2, Source:
ovirt002, Destination: ovirt001).

I've attached the logs if anyone can help me track down the issue.

Thanks,

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |
Twitter https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision*



Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient, please
delete the e-mail and any attachments and notify us

Re: [Users] Live migration of VM's occasionally fails

2014-02-15 Thread Dafna Ron

the migration fails in libvirt:


Thread-153709::ERROR::2014-02-14 11:17:40,420::vm::337::vm.Vm::(run) 
vmId=`08434c90-ffa3-4b63-aa8e-5613f7b0e0cd`::Failed to migrate

Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 323, in run
self._startUnderlyingMigration()
  File /usr/share/vdsm/vm.py, line 403, in _startUnderlyingMigration
None, maxBandwidth)
  File /usr/share/vdsm/vm.py, line 841, in f
ret = attr(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, 
line 76, in wrapper

ret = f(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/libvirt.py, line 1178, in 
migrateToURI2
if ret == -1: raise libvirtError ('virDomainMigrateToURI2() 
failed', dom=self)

libvirtError: Unable to read from monitor: Connection reset by peer
Thread-54041::DEBUG::2014-02-14 
11:17:41,752::task::579::TaskManager.Task::(_updateState) 
Task=`094c412a-43dc-4c29-a601-d759486469a8`::moving from state init - 
state preparing
Thread-54041::INFO::2014-02-14 
11:17:41,753::logUtils::44::dispatcher::(wrapper) Run and protect: 
getVolumeSize(sdUUID='a52938f7-2cf4-4771-acb2-0c78d14999e5', 
spUUID='fcb89071-6cdb-4972-94d1-c9324cebf814', 
imgUUID='97c9108f-a506-415f-ad2
c-370d707cb130', volUUID='61f82f7f-18e4-4ea8-9db3-71ddd9d4e836', 
options=None)


Do you have the same libvirt/vdsm/qemu on both your hosts?
Please attach the libvirt and vm logs from both hosts.

Thanks,
Dafna


On 02/14/2014 04:50 PM, Steve Dainard wrote:

Quick overview:
Ovirt 3.3.2 running on CentOS 6.5
Two hosts: ovirt001, ovirt002
Migrating two VM's: puppet-agent1, puppet-agent2 from ovirt002 to 
ovirt001.


The first VM puppet-agent1 migrates successfully. The second VM 
puppet-agent2 fails with Migration failed due to Error: Fatal error 
during migration (VM: puppet-agent2, Source: ovirt002, Destination: 
ovirt001).


I've attached the logs if anyone can help me track down the issue.

Thanks,

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/

*Blog http://miovision.com/blog | **LinkedIn 
https://www.linkedin.com/company/miovision-technologies  | Twitter 
https://twitter.com/miovision  | Facebook 
https://www.facebook.com/miovision*


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, 
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or 
confidential. If you are not the intended recipient, please delete the 
e-mail and any attachments and notify us immediately.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Stuck Migration

2014-02-13 Thread Dafna Ron

I think there might be a command to clear the cache.
Omer, is there a command to clear the engine cache aside from restart?


On 02/12/2014 11:32 PM, Maurice James wrote:

That worked. Was there any other way to do it without a restart?

-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Wednesday, February 12, 2014 10:37 AM
To: Maurice James
Cc: Meital Bourvine; users@ovirt.org
Subject: Re: [Users] Stuck Migration

If it's not listed on any of the hosts then it's a cache issue in the engine.
if you are reluctant to reboot the engine server, then restart postgres
and then the engine.
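On an EL6 engine machine that order is simply (a sketch, assuming a standard ovirt-engine install with the bundled postgresql service):

    service postgresql restart
    service ovirt-engine restart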

Dafna


On 02/12/2014 02:10 PM, Maurice James wrote:

Log is attached
--
--
Date: Wed, 12 Feb 2014 05:52:58 -0500
From: mbour...@redhat.com
To: midnightst...@msn.com
CC: users@ovirt.org
Subject: Re: [Users] Stuck Migration

Can you please attach engine and vdsm longs?


--
--

 *From: *Maurice James midnightst...@msn.com
 *To: *users@ovirt.org
 *Sent: *Wednesday, February 12, 2014 12:45:33 PM
 *Subject: *[Users] Stuck Migration

 I have a vm that is shut down and is somehow stuck in a migration
 state. Im running version 3.3.3-2 on this setup. The vm disk is
 activated and cannot be deactivated, and the vm is shut down and
 cannot be started. When I try to start the vm it tells me that it
 cannot be started because it is in the process of migrating


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Dafna Ron



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Disk Images

2014-02-13 Thread Dafna Ron
There was talk in the past about allowing the disk to be edited as its own 
entity, but I think that some of the disk params are vm dependent, so editing 
them for the disk only is a problem.

Ayal, do you remember why editing the disk on its own was not implemented?

On 02/13/2014 07:33 AM, andreas.ew...@cbc.de wrote:

Yes, I know! Why is there this restriction? I have the following scenario: I have a 
recover vm with one bootable disk attached. Now I want to repair some broken 
configs from another vm’s boot disk. First I unattach the device from the broken vm. If I 
forget to remove the bootable flag, then I have to attach the disk again to the broken 
vm. I remove the bootable flag and attach the disk to my „recover vm“.  This is quite a 
long way round.
This could be a feature request, right?

best regards
Andreas

Am 12.02.2014 um 16:37 schrieb Dafna Ron d...@redhat.com:


disks can only be edited when attached to a vm

On 02/12/2014 03:04 PM, andreas.ew...@cbc.de wrote:

Hi,

If I remove a disk image from a Virtual Machine, then I can’t edit the disk 
properties in the „Disks“ tab. (e.g. bootable flag)
It is only possible to change the flags on attached disks.
My test environments are engine versions 3.3.3 and 3.4 beta2

Best regards
Andreas


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Dafna Ron



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] issues with live snapshot

2014-02-13 Thread Dafna Ron
', 'imageID': 
'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, 'path': 
'/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c',
 'imgVolumesInfo': [{'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 
'volType': 'path', 'leaseOffset': 128974848, 'path': 
'/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c',
 'volumeID': 'c677d01e-dc50-486b-a532-f88a71666d2c', 'leasePath': 
'/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 
'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, {'domainID': 
'54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset': 
127926272, 'path': 
'/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/b9448428-b787-4286-b54e-aa54a8f8bb17',
 'volumeID': 'b9448428-b787-4286-b54e-aa54a8f8bb17', 'leasePath': 
'/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 
'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, {'domainID': 
'54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset': 
49056, 'path': 
'/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/bac74b7e-94f0-48d2-a5e5-8b2e846411e8',
 'volumeID': 'bac74b7e-94f0-48d2-a5e5-8b2e846411e8', 'leasePath': 
'/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}]}

Thread-338209::DEBUG::2014-02-13 
08:40:19,850::task::579::TaskManager.Task::(_updateState) 
Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::moving from state preparing - 
state finished
Thread-338209::DEBUG::2014-02-13 
08:40:19,850::resourceManager::939::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources 
{'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57':  ResourceRef 
'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57', isValid: 'True' obj: 'None'}
Thread-338209::DEBUG::2014-02-13 
08:40:19,851::resourceManager::976::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-338209::DEBUG::2014-02-13 
08:40:19,851::resourceManager::615::ResourceManager::(releaseResource) Trying 
to release resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57'
Thread-338209::DEBUG::2014-02-13 
08:40:19,851::resourceManager::634::ResourceManager::(releaseResource) Released 
resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' (0 active users)
Thread-338209::DEBUG::2014-02-13 
08:40:19,851::resourceManager::640::ResourceManager::(releaseResource) Resource 
'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' is free, finding out if anyone 
is waiting for it.
Thread-338209::DEBUG::2014-02-13 
08:40:19,851::resourceManager::648::ResourceManager::(releaseResource) No one 
is waiting for resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57', 
Clearing records.
Thread-338209::DEBUG::2014-02-13 
08:40:19,852::task::974::TaskManager.Task::(_decref) 
Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::ref 0 aborting False
Thread-338209::INFO::2014-02-13 
08:40:19,852::clientIF::353::vds::(prepareVolumePath) prepared volume path: 
/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c
Thread-338209::DEBUG::2014-02-13 08:40:19,852::vm::3743::vm.Vm::(snapshot) 
vmId=`31c185ce-cc2e-4246-bf46-fcd96cd30050`::domainsnapshot
Thread-338209::DEBUG::2014-02-13 
08:40:19,865::libvirtconnection::108::libvirtconnection::(wrapper) Unknown 
libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported configuration: 
reuse is not supported with this QEMU binary
Thread-338209::DEBUG::2014-02-13 08:40:19,865::vm::3764::vm.Vm::(snapshot) 
vmId=`31c185ce-cc2e-4246-bf46-fcd96cd30050`::Snapshot failed using the quiesce 
flag, trying again without it (unsupported configuration: reuse is not 
supported with this QEMU binary)
Thread-338209::DEBUG::2014-02-13 
08:40:19,869::libvirtconnection::108::libvirtconnection::(wrapper) Unknown 
libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported configuration: 
reuse is not supported with this QEMU binary
Thread-338209::ERROR::2014-02-13 08:40:19,869::vm::3768::vm.Vm::(snapshot) 
vmId=`31c185ce-cc2e-4246-bf46-fcd96cd30050`::Unable to take snapshot
Thread-338209::DEBUG::2014-02-13 
08:40:19,870::BindingXMLRPC::972::vds::(wrapper) return vmSnapshot with 
{'status': {'message': 'Snapshot failed', 'code': 48}}

What can I do to fix this?

best regards
Andreas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Disk error

2014-02-12 Thread Dafna Ron

Is the domain visible from the host the vm is running on?
Please run: vdsClient -s 0 getStorageDomainInfo 
21619c8e-99ea-4813-be02-d708971e5393


Did you install the vm from a CD?
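For reference, a hedged sketch of those checks run on the host (the domain and image
UUIDs are taken from the error above; the pool UUID is a placeholder that has to be
filled in for this setup):

    vdsClient -s 0 getStorageDomainInfo 21619c8e-99ea-4813-be02-d708971e5393
    # list the volumes vdsm can find for the image from the error
    vdsClient -s 0 getVolumesList 21619c8e-99ea-4813-be02-d708971e5393 POOL_UUID d09418b1-2854-40e3-b4be-b4b5062f51d9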


On 02/12/2014 02:06 AM, Maurice James wrote:


ImageDoesNotExistInSD: Image does not exist in domain: 
'image=d09418b1-2854-40e3-b4be-b4b5062f51d9, 
domain=21619c8e-99ea-4813-be02-d708971e5393'


Does anyone know how to fix this? I have a windows server using that 
disk and its having issues booting up because of it. Look like I will 
have to fix it by hand




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Stuck Migration

2014-02-12 Thread Dafna Ron

Please run on both hosts:
vdsClient -s 0 list table

Do you see the vm in the list on either host? What state is it in, if it 
exists?




On 02/12/2014 10:45 AM, Maurice James wrote:


I have a vm that is shut down and is somehow stuck in a migration 
state. I'm running version 3.3.3-2 on this setup. The vm disk is 
activated and cannot be deactivated, and the vm is shut down and 
cannot be started. When I try to start the vm it tells me that it 
cannot be started because it is in the process of migrating




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Stuck Migration

2014-02-12 Thread Dafna Ron

If it's not listed on any of the hosts, then it's a cache issue in the engine.
If you are reluctant to reboot the engine server, then restart 
postgres and then the engine.
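A minimal sketch of that, assuming an EL6 engine host with the default service names:

    # on the engine host
    service postgresql restart
    service ovirt-engine restart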


Dafna


On 02/12/2014 02:10 PM, Maurice James wrote:


Log is attached

Date: Wed, 12 Feb 2014 05:52:58 -0500
From: mbour...@redhat.com
To: midnightst...@msn.com
CC: users@ovirt.org
Subject: Re: [Users] Stuck Migration

Can you please attach engine and vdsm longs?




*From: *Maurice James midnightst...@msn.com
*To: *users@ovirt.org
*Sent: *Wednesday, February 12, 2014 12:45:33 PM
*Subject: *[Users] Stuck Migration

I have a vm that is shut down and is somehow stuck in a migration
state. I'm running version 3.3.3-2 on this setup. The vm disk is
activated and cannot be deactivated, and the vm is shut down and
cannot be started. When I try to start the vm it tells me that it
cannot be started because it is in the process of migrating


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Disk Images

2014-02-12 Thread Dafna Ron

disks can only be edited when attached to a vm

On 02/12/2014 03:04 PM, andreas.ew...@cbc.de wrote:

Hi,

If I remove a disk image from a Virtual Machine, then I can’t edit the disk 
properties in the „Disks“ tab. (e.g. bootable flag)
It is only possible to change the flags on attached disks.
My test environments are engine versions 3.3.3 and 3.4 beta2

Best regards
Andreas


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Unable to activate iSCSI domain after crash of host

2014-02-10 Thread Dafna Ron

On 02/10/2014 08:00 PM, Gianluca Cecchi wrote:

On Mon, Feb 10, 2014 at 10:56 AM, Alon Bar-Lev alo...@redhat.com wrote:


- Original Message -

From: Dafna Ron d...@redhat.com
To: Gianluca Cecchi gianluca.cec...@gmail.com, Alon Bar-Lev 
alo...@redhat.com
Cc: users users@ovirt.org
Sent: Monday, February 10, 2014 11:31:33 AM
Subject: Re: [Users] Unable to activate iSCSI domain after crash of host

adding Alon

On 02/08/2014 05:42 PM, Gianluca Cecchi wrote:

where can I find the function that encrypts iscsi chap password and
put the encrypted value into storage_server_connections table?
So that I can try to reinsert it and verify.

You can just put plain password, it should work...

If you want to encrypt use:

echo -n 'PASSWORD' | openssl pkeyutl -encrypt -certin -inkey 
/etc/pki/ovirt-engine/certs/engine.cer  | openssl enc -a | tr -d '\n'
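A hedged sketch of re-inserting the value produced by that pipeline (the table and
column are the ones discussed in this thread; the DB name is assumed to be 'engine'
and the connection id is a placeholder to be taken from your own
storage_server_connections row):

    # run on the engine host against the engine database
    psql engine -c "UPDATE storage_server_connections SET password = 'OUTPUT_OF_OPENSSL_PIPELINE' WHERE id = 'CONNECTION_ID';"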

But Dafna, isn't there a way in the UI to re-specify the password, so it gets encrypted 
by the application?


The problem is that the storage domain already exists but is non-operational, and 
we cannot edit a storage domain in any status other than active.
So if the password changed while the domain was having issues and the luns are not 
visible, the domain cannot recover to an active state, and we also cannot edit the 
password for the domain...



Thanks
Gianluca


--
Dafna Ron


In my opinion, when I first defined the iSCSI domain and entered a wrong
password, something was not handled correctly when I then used
the correct one.
In fact, it seems there is no correspondence between the
storage_domains table and the storage_server_connections table.

If I take a glusterfs domain named gv01 I see this:

engine=# select * from storage_server_connections where id=(select
storage from storage_domains where storage_name='gv01');
   id  |  connection   | user_name |
password | iqn | port | portal | storage_type | mount_options |
vfs_type
| nfs_version | nfs_timeo | nfs_retrans
--+---+---+--+-+--++--+---+---
+-+---+-
  3b6a-aff3-47fa-b7ca-8e809804cbe2 | ovnode01:gv01 |   |
   | |  ||7 |   | glusterfs
| |   |
(1 row)

Instead for this ISCSI domain named OV01

engine=# select * from storage_server_connections where id=(select
storage from storage_domains where storage_name='OV01');
  id | connection | user_name | password | iqn | port | portal |
storage_type | mount_options | vfs_type | nfs_version | nfs_timeo |
nfs_retran
s
++---+--+-+--++--+---+--+-+---+---
--
(0 rows)


In particular:

engine=# select * from storage_domains where storage_name='OV01';
   id  |storage
 | storage_name | storage_description | storage_comment |
 storage_pool_id| available_disk_size | used_disk_size
| commited_disk_size | actual_images_size | status | storage_pool_name
|
  storage_type | storage_domain_type | storage_domain_format_type |
last_time_used_as_master | storage_domain_shared_status | recoverable
--++--+-+-+---
---+-+++++---+
--+-++--+--+-
  f741671e-6480-4d7b-b357-8cf6e8d2c0f1 |
uqe7UZ-PaBY-IiLj-XLAY-XoCZ-cmOk-cMJkeX | OV01 |
  | | 546cd2
9c-7249-4733-8fd5-317cff38ed71 |  44 |  5
| 10 |  1 |  4 | ISCSI
|
 3 |   0 | 3  |
0 |2 | t
(1 row)


engine=# select * from storage_pool where
id='546cd29c-7249-4733-8fd5-317cff38ed71';
   id  | name  | description |
storage_pool_type | storage_pool_format_type | status |
master_domain_version |
spm_vds_id | compatibility_version | _create_date  |
   _update_date  | quota_enforcement_type |
free_text_commen
t
--+---+-+---+--++---+-
---+---+---+---++-
--
  546cd29c-7249-4733-8fd5-317cff38ed71 | ISCSI | |
3 | 3|  4 | 2 |
| 3.3   | 2014-02-05 11:46:50.797079

Re: [Users] Unable to activate iSCSI domain after crash of host

2014-02-07 Thread Dafna Ron

what happens when you try to update from the UI? (edit the storage)

On 02/07/2014 02:06 PM, Gianluca Cecchi wrote:

On Fri, Feb 7, 2014 at 2:11 PM, Gianluca Cecchi wrote:


I'm going to check rdbms tables too...

Gianluca

it seems that the table is storage_server_connections

but the value seems (correctly in my opinion) encrypted... how can I
update it eventually?

engine=# select * from storage_server_connections ;
   id  |  connection
   | user_name |

   password

   |   iqn   |
port | portal | storage_type | mount_options | vfs_type  | nfs_version
|
  nfs_timeo | nfs_retrans
--+--+---+
--
--
--+-+--++--+---+---+-+
---+-


  6a5b159d-4c11-43cc-aa09-55c325de47b3 | 192.168.230.101
   | ovirt |
lf1mtw6jWq0tcO/jBeLtSdrx9WSMvLOJxMF/Z4UWsgK
W10jYKXzkxG8iPgX9xMEcOhTJCeMNtC6EQES5Tq0MjHGPfuzigwL9nejZEZwtDvOFmKZtCBSGaKoOyjQpU8hfoqq7u47jvGE5VmVwDQ40p6goXWDHMWPxdCk2IzAOBsDlsnrJGmqLioRDj
JQVya28cJsgzGoaLFHZMQD8bfW7ay3cQ6k8Hxlz99MKNpxxoV0fju1Blpfrqpa2bCSpQ5w0PrVHmJrW4eiBEd/Rg/XV497PGatAcwQr7hD5/uG/GLoqBbCMyR9S11Ot90aprL0Gd9cOlM4
VngzCD/2JqFmvhA== | iqn.2013-09.local.localdomain:c6iscsit.target11 |
3260 | 1  |3 |   |   |
|
|

Gianluca



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Will this two node concept scale and work?

2014-02-05 Thread Dafna Ron

see in line


On 02/05/2014 10:45 AM, ml ml wrote:

Hello List,

my aim is to host multiple VMs which are redundant and highly 
available. It should also scale well.


I'm assuming you are talking about an HA cluster, since redundant vms and HA 
vms are a contradiction :)


I think usually people just buy a fat iSCSI storage box and attach that. 
In my case it should scale well from very small nodes to big ones.
Therefore an iSCSI target will bring a lot of overhead (10GBit links 
and two paths, and really I should have a 2nd hot-standby SAN, too). 
This makes scalability very hard.


This post is also not meant to be an iSCSI discussion.

Since oVirt does not support DRBD out of the box, I came up with my own 
concept:


Check out the POSIX storage domain.
If it supports Gluster, you might be able to use it for DRBD.



http://oi62.tinypic.com/2550xg5.jpg

As far as I can tell I have the following advantages:

- I can start with two simple cheap nodes
- I could add more disks to my nodes. Maybe even an SSD as a dedicated 
drbd resource.
- I can connect the two nodes directly to each other with bonding or 
infiniband. I don't need a switch or anything in between.


Downside:
---
- I always need two nodes (as a couple)

Will this setup work for me? So far I think I will be quite happy with it.
Since the DRBD resources are shared in dual-primary mode, I am not sure 
if oVirt can handle it. Writing to a vm disk from both nodes at the 
same time is not allowed.


It's not true that you cannot write to the same vm disk at the same time - 
there is a shared disk option.
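For reference, a hedged sketch of the dual-primary DRBD resource described above
(DRBD 8.x syntax; the resource name, hostnames, devices and addresses are assumptions):

    resource vmstore {
      protocol C;
      net {
        allow-two-primaries;
      }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

The filesystem on top of it would then have to be cluster-aware (or exported via
Gluster, as mentioned above) before it could back a POSIX storage domain.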




The concept from Linbit 
(http://www.linbit.com/en/company/news/333-high-available-virtualization-at-a-most-reasonable-price) 
seems too much of an overhead with the iSCSI layer and pacemaker setup. 
It's just too much for such a simple task.


Please tell me that this concept is great and will work and scale well.
Otherwise I am also thankful for any hints or critical ideas.


Thanks a lot,
Mario


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-04 Thread Dafna Ron

On 02/04/2014 08:20 AM, Elad Ben Aharon wrote:

 From what I saw in the thread, libvirt pauses the VM, which affects the 
continuity of its operation.
I checked it also in one of the latest builds of 3.3 and I observed the same 
behaviour:
2014-02-04 10:18:29,441 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-22) [5cbc7ab5] VM nfs2-1 
86728f5c-3583-420f-9536-bfabaf11b235 moved from Up -- Paused

Dafna, I saw you've already opened a bug on that:
https://bugzilla.redhat.com/show_bug.cgi?id=1057587


just proves I'm getting senile :)


- Original Message -
From: Dafna Ron d...@redhat.com
To: Maor Lipchuk mlipc...@redhat.com
Cc: Steve Dainard sdain...@miovision.com, Elad Ben Aharon ebena...@redhat.com, 
Karli Sjöberg karli.sjob...@slu.se, users@ovirt.org
Sent: Tuesday, February 4, 2014 12:15:42 AM
Subject: Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

Maor, I am not saying that we are not doing a live snapshot :) I am
saying that we need a print in the log stating that the live snapshot command
was called, i.e. print in the log: LiveSnapshotCommand - this can then call
the rest of snapshotVDSCreateCommand.


On 02/03/2014 07:38 PM, Maor Lipchuk wrote:

On 02/03/2014 07:46 PM, Dafna Ron wrote:

On 02/03/2014 05:34 PM, Maor Lipchuk wrote:

On 02/03/2014 07:18 PM, Dafna Ron wrote:

Maor,

If snapshotVDSCommand is for live snapshot, what is the offline create
snapshot command?

It is the CreateSnapshotVdsCommand which calls createVolume in VDSM

but we need to be able to know that a live snapshot was sent and not an
offline snapshot.

Yes, in the logs we can see the whole process:

First a request to create a snapshot (new volume) sent to VDSM:
2014-02-02 09:41:09,557 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(pool-6-thread-49) [67ea047a] START, CreateSnapshotVDSCommand(
storagePoolId = fcb89071-6cdb-4972-94d1-c9324cebf814,
ignoreFailoverLimit = false, storageDomainId =
a52938f7-2cf4-4771-acb2-0c78d14999e5, imageGroupId =
c1cb6b66-655e-48c3-8568-4975295eb037, imageSizeInBytes = 21474836480,
volumeFormat = COW, newImageId = 6d8c80a4-328f-4a53-86a2-a4080a2662ce,
newImageDescription = , imageId = 5085422e-6592-415a-9da3-9e43dac9374b,
sourceImageGroupId = c1cb6b66-655e-48c3-8568-4975295eb037), log id: 7875f3f5

after the snapshot gets created :
2014-02-02 09:41:20,553 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(pool-6-thread-49) Ending command successfully:
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand

then the engine calls the live snapshot (see also [1])
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872


Elad, somewhere in this flow we need to know that the snapshot was taken
on a running vm :) this seems like a bug to me.

we did not say that live snapshot did not succeed :)  we said that the
vm is paused and restarted - which is something that should not happen
for live snapshot (or at least never did before).

It's not certain that the restart is related to the live snapshot, but that
should be visible in the libvirt/vdsm logs.

yes, I am sure because the user is reporting it and the logs show it...

As I wrote before, we know that vdsm is reporting the vm as paused; that
is because libvirt is reporting the vm as paused, and I think that it's
happening because libvirt is not doing a live snapshot and so pauses the
vm while taking the snapshot.

That sounds logical to me; it needs to be checked with libvirt whether that
kind of behaviour could happen.

Elad, can you please try to reproduce and open a bug to libvirt?


Dafna


On 02/03/2014 05:08 PM, Maor Lipchuk wrote:

From the engine logs it seems that indeed live snapshot is called
(The
command is snapshotVDSCommand see [1]).
This is done right after the snapshot has been created in the VM and it
signals the qemu process to start using the new volume created.

When live snapshot does not succeed we should see in the log something
like Wasn't able to live snapshot due to error:..., but it does not
appear so it seems that this worked out fine.

At some point I can see in the logs that VDSM reports to the engine
that
the VM is paused.


[1]
2014-02-02 09:41:20,564 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) START, SnapshotVDSCommand(HostName = ovirt002,
HostId
= 3080fb61-2d03-4008-b47f-9b66276a4257,
vmId=e261e707-a21f-4ae8-9cff-f535f4430446), log id: 7e0d7872
2014-02-02 09:41:21,119 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-93) VM snapshot-test
e261e707-a21f-4ae8-9cff-f535f4430446 moved from Up -- Paused
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
2014-02-02 09:41:30,238 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-6-thread-49

Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-04 Thread Dafna Ron

No, the bug is about libvirt pausing the vms for live snapshot.

I am not sure about your question on clock skew... we would have to take a lot 
of snapshots to exceed the 5-minute skew though :)



On 02/04/2014 01:45 PM, Steve Dainard wrote:
Just for clarity on that bug report; are you suggesting that the guest 
must go into a paused state to take a snapshot and that the admin 
shouldn't be aware of this state?


Although not horribly concerning for most tasks, wouldn't this lead to 
a 10 second clock skew every time there is a snapshot? Or does the 
guest sync from host hw clock on resume?


Thanks,

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog | **LinkedIn 
https://www.linkedin.com/company/miovision-technologies  | Twitter 
https://twitter.com/miovision  | Facebook 
https://www.facebook.com/miovision*


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, 
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or 
confidential. If you are not the intended recipient, please delete the 
e-mail and any attachments and notify us immediately.



On Tue, Feb 4, 2014 at 3:44 AM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


On 02/04/2014 08:20 AM, Elad Ben Aharon wrote:

 From what I saw in the thread, libvirt pauses the VM, which
affects the continuity of its operation.
I checked it also in one of the latest builds of 3.3 and I
observed the same behaviour:
2014-02-04 10:18:29,441 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-22) [5cbc7ab5] VM nfs2-1
86728f5c-3583-420f-9536-bfabaf11b235 moved from Up -- Paused

Dafna, I saw you've already opened a bug on that:
https://bugzilla.redhat.com/show_bug.cgi?id=1057587


just proves I'm getting senile :)


- Original Message -
From: Dafna Ron d...@redhat.com mailto:d...@redhat.com
To: Maor Lipchuk mlipc...@redhat.com
mailto:mlipc...@redhat.com
Cc: Steve Dainard sdain...@miovision.com
mailto:sdain...@miovision.com, Elad Ben Aharon
ebena...@redhat.com mailto:ebena...@redhat.com, Karli
Sjöberg karli.sjob...@slu.se mailto:karli.sjob...@slu.se,
users@ovirt.org mailto:users@ovirt.org
Sent: Tuesday, February 4, 2014 12:15:42 AM
Subject: Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

Maor, I am not saying that we are not doing a live snapshot :) I am
saying that we need a print in the log stating that the live snapshot
command was called, i.e. print in the log: LiveSnapshotCommand - this
can then call the rest of snapshotVDSCreateCommand.


On 02/03/2014 07:38 PM, Maor Lipchuk wrote:

On 02/03/2014 07:46 PM, Dafna Ron wrote:

On 02/03/2014 05:34 PM, Maor Lipchuk wrote:

On 02/03/2014 07:18 PM, Dafna Ron wrote:

Maor,

If snapshotVDSCommand is for live snapshot,
what is the offline create
snapshot command?

It is the CreateSnapshotVdsCommand which calls
createVolume in VDSM

but we need to be able to know that a live snapshot
was sent and not an
offline snapshot.

Yes, in the logs we can see the whole process:

First a request to create a snapshot (new volume) sent to
VDSM:
2014-02-02 09:41:09,557 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(pool-6-thread-49) [67ea047a] START, CreateSnapshotVDSCommand(
storagePoolId = fcb89071-6cdb-4972-94d1-c9324cebf814,
ignoreFailoverLimit = false, storageDomainId =
a52938f7-2cf4-4771-acb2-0c78d14999e5, imageGroupId =
c1cb6b66-655e-48c3-8568-4975295eb037, imageSizeInBytes =
21474836480,
volumeFormat = COW, newImageId =
6d8c80a4-328f-4a53-86a2-a4080a2662ce,
newImageDescription = , imageId =
5085422e-6592-415a-9da3-9e43dac9374b,
sourceImageGroupId =
c1cb6b66-655e-48c3-8568-4975295eb037), log id: 7875f3f5

after the snapshot gets created :
2014-02-02 09:41:20,553 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(pool-6-thread-49) Ending command successfully:
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand

then the engine calls the live snapshot (see also [1])
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand

Re: [Users] I can't remove VM

2014-02-04 Thread Dafna Ron

Any time.
If you manage to reproduce please let us know.

Dafna

On 02/04/2014 05:00 PM, Eduardo Ramos wrote:

Hi Dafna! Thanks for responding.

In order to collect full logs, I migrated my 61 machines from 16 to 2 
hosts. When I tried to remove, it worked without any problem. I did 
not understand why. I'm investigating.


Thanks again for your attention.

On 02/03/2014 11:43 AM, Dafna Ron wrote:

please attach full vdsm and engine logs.

Thanks,

Dafna


On 02/03/2014 12:11 PM, Eduardo Ramos wrote:

Hi all!

I'm having trouble removing virtual machines. My environment runs 
on an iSCSI storage domain. When I try to remove one, the SPM logs:


# Start vdsm SPM log #
Thread-6019517::INFO::2014-02-03 
09:58:09,293::logUtils::41::dispatcher::(wrapper) Run and protect: 
deleteImage(sdUUID='c332da29-ba9f-4c94-8fa9-346bb8e04e2a', 
spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', 
imgUUID='57ba1906-2035-4503-acbc-5f6f077f75cc', postZero='false', 
force='false')
Thread-6019517::INFO::2014-02-03 
09:58:09,293::blockSD::816::Storage.StorageDomain::(validate) 
sdUUID=c332da29-ba9f-4c94-8fa9-346bb8e04e2a
Thread-6019517::ERROR::2014-02-03 
09:58:10,061::task::833::TaskManager.Task::(_setError) 
Task=`8cbf9978-ed51-488a-af52-a3db030e44ff`::Unexpected error

Traceback (most recent call last):
  File /usr/share/vdsm/storage/task.py, line 840, in _run
return fn(*args, **kargs)
  File /usr/share/vdsm/logUtils.py, line 42, in wrapper
res = f(*args, **kwargs)
  File /usr/share/vdsm/storage/hsm.py, line 1429, in deleteImage
allVols = dom.getAllVolumes()
  File /usr/share/vdsm/storage/blockSD.py, line 972, in getAllVolumes
return getAllVolumes(self.sdUUID)
  File /usr/share/vdsm/storage/blockSD.py, line 172, in getAllVolumes
vImg not in res[vPar]['imgs']):
KeyError: '63650a24-7e83-4c0a-851d-0ce9869a294d'
Thread-6019517::INFO::2014-02-03 
09:58:10,063::task::1134::TaskManager.Task::(prepare) 
Task=`8cbf9978-ed51-488a-af52-a3db030e44ff`::aborting: Task is 
aborted: u'63650a24-7e83-4c0a-851d-0ce9869a294d' - code 100
Thread-6019517::ERROR::2014-02-03 
09:58:10,066::dispatcher::70::Storage.Dispatcher.Protect::(run) 
'63650a24-7e83-4c0a-851d-0ce9869a294d'

Traceback (most recent call last):
  File /usr/share/vdsm/storage/dispatcher.py, line 62, in run
result = ctask.prepare(self.func, *args, **kwargs)
  File /usr/share/vdsm/storage/task.py, line 1142, in prepare
raise self.error
KeyError: '63650a24-7e83-4c0a-851d-0ce9869a294d'
Thread-6019518::INFO::2014-02-03 
09:58:10,087::logUtils::41::dispatcher::(wrapper) Run and protect: 
getSpmStatus(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', 
options=None)
Thread-6019518::INFO::2014-02-03 
09:58:10,088::logUtils::44::dispatcher::(wrapper) Run and protect: 
getSpmStatus, Return response: {'spm_st': {'spmId': 14, 'spmStatus': 
'SPM', 'spmLver': 64}}
Thread-6019519::INFO::2014-02-03 
09:58:10,100::logUtils::41::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses(spUUID=None, options=None)
Thread-6019519::INFO::2014-02-03 
09:58:10,101::logUtils::44::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses, Return response: {'allTasksStatus': {}}
Thread-6019520::INFO::2014-02-03 
09:58:10,109::logUtils::41::dispatcher::(wrapper) Run and protect: 
spmStop(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
Thread-6019520::INFO::2014-02-03 
09:58:10,681::clusterlock::121::SafeLease::(release) Releasing 
cluster lock for domain c332da29-ba9f-4c94-8fa9-346bb8e04e2a
Thread-6019521::INFO::2014-02-03 
09:58:11,054::logUtils::41::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-6019521::INFO::2014-02-03 
09:58:11,054::logUtils::44::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: 
{u'51eb6183-157d-4015-ae0f-1c7ffb1731c0': {'delay': 
'0.00799298286438', 'lastCheck': '5.3', 'code': 0, 'valid': True}, 
u'c332da29-ba9f-4c94-8fa9-346bb8e04e2a': {'delay': 
'0.0197920799255', 'lastCheck': '4.9', 'code': 0, 'valid': True}, 
u'0e0be898-6e04-4469-bb32-91f3cf8146d1': {'delay': 
'0.00803208351135', 'lastCheck': '5.3', 'code': 0, 'valid': True}}
Thread-6019520::INFO::2014-02-03 
09:58:11,732::logUtils::44::dispatcher::(wrapper) Run and protect: 
spmStop, Return response: None
Thread-6019523::INFO::2014-02-03 
09:58:11,835::logUtils::41::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses(spUUID=None, options=None)
Thread-6019523::INFO::2014-02-03 
09:58:11,835::logUtils::44::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses, Return response: {'allTasksStatus': {}}
Thread-6019524::INFO::2014-02-03 
09:58:11,844::logUtils::41::dispatcher::(wrapper) Run and protect: 
spmStop(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
Thread-6019524::ERROR::2014-02-03 
09:58:11,846::task::833::TaskManager.Task::(_setError) 
Task=`00df5ff7-bbf4-4a0e-b60b-1b06dcaa7683`::Unexpected error

Traceback (most recent call last):
  File /usr/share/vdsm/storage/task.py, line 840, in _run
return fn(*args, **kargs)
  File

Re: [Users] I can't remove VM

2014-02-03 Thread Dafna Ron
 call last):
  File /usr/share/vdsm/storage/task.py, line 840, in _run
return fn(*args, **kargs)
  File /usr/share/vdsm/logUtils.py, line 42, in wrapper
res = f(*args, **kwargs)
  File /usr/share/vdsm/storage/hsm.py, line 601, in spmStop
pool.stopSpm()
  File /usr/share/vdsm/storage/securable.py, line 66, in wrapper
raise SecureError()
SecureError
Thread-6019541::INFO::2014-02-03 
09:58:39,128::task::1134::TaskManager.Task::(prepare) 
Task=`1f478485-401b-4b9b-b58b-1e7973cf64a2`::aborting: Task is 
aborted: u'' - code 100
Thread-6019541::ERROR::2014-02-03 
09:58:39,130::dispatcher::70::Storage.Dispatcher.Protect::(run)

Traceback (most recent call last):
  File /usr/share/vdsm/storage/dispatcher.py, line 62, in run
result = ctask.prepare(self.func, *args, **kwargs)
  File /usr/share/vdsm/storage/task.py, line 1142, in prepare
raise self.error
SecureError
# End vdsm SPM log #

And after, the cluster elects another SPM.

The webgui shows on 'events' tab:

# Start webgui events #
Data Center is being initialized, please wait for initialization to 
complete.
Failed to remove VM _12.147_postgresql_default.sir.inpe.br_apagar 
(User: eduardo.ramos).

# End webgui events #

Engine logs nothing but normal change of SPM.

I would like to know how I can identify what is stuck, and whether I can 
delete it by hand by removing the entry from the DB and running lvremove.
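For what it's worth, a hedged way to first see which LVs belong to that image before
touching anything by hand (on block domains the VG name is the storage domain UUID;
the UUIDs below are taken from the deleteImage call in the log above, and nothing
should be removed while the domain is in use):

    # run on the SPM host
    lvs -o lv_name,lv_tags c332da29-ba9f-4c94-8fa9-346bb8e04e2a | grep 57ba1906-2035-4503-acbc-5f6f077f75cc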


Thanks!


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Dafna Ron

Can you also post the vdsm, libvirt and qemu package versions?
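A minimal way to collect those, assuming standard package names on the host:

    rpm -qa | egrep 'vdsm|libvirt|qemu-kvm'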

Thanks,
Dafna


On 02/03/2014 04:49 PM, Steve Dainard wrote:

FYI I'm running version 3.3.2, not the 3.3.3 beta.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog | **LinkedIn 
https://www.linkedin.com/company/miovision-technologies  | Twitter 
https://twitter.com/miovision  | Facebook 
https://www.facebook.com/miovision*


Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, 
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or 
confidential. If you are not the intended recipient, please delete the 
e-mail and any attachments and notify us immediately.



On Mon, Feb 3, 2014 at 11:24 AM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


Thanks Steve.

From the logs I can see that the create snapshot succeeds and that
the vm is resumed.
The vm moves to paused as part of the libvirt flows:

2014-02-02 14:41:20.872+: 5843: debug :
qemuProcessHandleStop:728 : Transitioned guest snapshot-test to
paused state
2014-02-02 14:41:30.031+: 5843: debug :
qemuProcessHandleResume:776 : Transitioned guest snapshot-test out
of paused into resumed state

There are bugs here, but I am not sure yet if this is a libvirt
regression or an engine one.

I'm adding Elad and Maor, since in the engine logs I can't see anything
calling for live snapshot (only for snapshot) - Maor, shouldn't the
live snapshot command be logged somewhere in the logs?
Is it possible that the engine is calling create snapshot and not
create live snapshot, which is why the vm pauses?

Elad, if engine is not logging live snapshot anywhere I would open
a bug for engine (to print that in the logs).
Also, there is a bug in vdsm log for sdc where the below is logged
as ERROR and not INFO:

Thread-23::ERROR::2014-02-02
09:51:19,497::sdc::137::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain a52938f7-2cf4-4771-acb2-0c78d14999e5
Thread-23::ERROR::2014-02-02
09:51:19,497::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain a52938f7-2cf4-4771-acb2-0c78d14999e5

If the engine was sending a live snapshot, or if there is no
difference between the two commands on the engine side, then I would open a
bug against libvirt for pausing the vm during live snapshot.

Dafna


On 02/03/2014 02:41 PM, Steve Dainard wrote:

[root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
a52938f7-2cf4-4771-acb2-0c78d14999e5
uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
lver = 5
version = 3
role = Master
remotePath = gluster-store-vip:/rep1
spm_id = 2
type = NFS
class = Data
master_ver = 1
name = gluster-store-rep1


*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog | **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |
Twitter https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101,
Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient, please
delete the e-mail and any attachments and notify us immediately.


On Sun, Feb 2, 2014 at 2:55 PM, Dafna Ron d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com wrote:

please run vdsClient -s 0 getStorageDomainInfo
a52938f7-2cf4-4771-acb2-0c78d14999e5

Thanks,

Dafna



On 02/02/2014 03:02 PM, Steve Dainard wrote:

Logs attached with VM running on qemu-kvm-rhev
packages installed.

*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/
519-513-2407 ex.250

877-646-8476 (toll-free)


*Blog http://miovision.com/blog | **LinkedIn
   
https://www.linkedin.com/company/miovision-technologies  |

Twitter https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision

Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Dafna Ron

Maor,

If snapshotVDSCommand is for live snapshot, what is the offline create 
snapshot command?


we did not say that live snapshot did not succeed :)  we said that the 
vm is paused and restarted - which is something that should not happen 
for live snapshot (or at least never did before).
As I wrote before, we know that vdsm is reporting the vm as paused; that 
is because libvirt is reporting the vm as paused, and I think that it's 
happening because libvirt is not doing a live snapshot and so pauses the 
vm while taking the snapshot.


Dafna


On 02/03/2014 05:08 PM, Maor Lipchuk wrote:

 From the engine logs it seems that indeed live snapshot is called (The
command is snapshotVDSCommand see [1]).
This is done right after the snapshot has been created in the VM and it
signals the qemu process to start using the new volume created.

When live snapshot does not succeed we should see in the log something
like Wasn't able to live snapshot due to error:..., but it does not
appear so it seems that this worked out fine.

At some point I can see in the logs that VDSM reports to the engine that
the VM is paused.


[1]
2014-02-02 09:41:20,564 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) START, SnapshotVDSCommand(HostName = ovirt002, HostId
= 3080fb61-2d03-4008-b47f-9b66276a4257,
vmId=e261e707-a21f-4ae8-9cff-f535f4430446), log id: 7e0d7872
2014-02-02 09:41:21,119 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-93) VM snapshot-test
e261e707-a21f-4ae8-9cff-f535f4430446 moved from Up -- Paused
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
2014-02-02 09:41:30,238 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-6-thread-49)
[67ea047a] Ending command successfully:
org.ovirt.engine.core.bll.CreateSnapshotCommand
...

Regards,
Maor

On 02/03/2014 06:24 PM, Dafna Ron wrote:

Thanks Steve.

from the logs I can see that the create snapshot succeeds and that the
vm is resumed.
the vm moves to pause as part of libvirt flows:

2014-02-02 14:41:20.872+: 5843: debug : qemuProcessHandleStop:728 :
Transitioned guest snapshot-test to paused state
2014-02-02 14:41:30.031+: 5843: debug : qemuProcessHandleResume:776
: Transitioned guest snapshot-test out of paused into resumed state

There are bugs here but I am not sure yet if this is libvirt regression
or engine.

I'm adding Elad and Maor since in engine logs I can't see anything
calling for live snapshot (only for snapshot) - Maor, shouldn't live
snapshot command be logged somewhere in the logs?
Is it possible that engine is calling to create snapshot and not create
live snapshot which is why the vm pauses?

Elad, if engine is not logging live snapshot anywhere I would open a bug
for engine (to print that in the logs).
Also, there is a bug in vdsm log for sdc where the below is logged as
ERROR and not INFO:

Thread-23::ERROR::2014-02-02
09:51:19,497::sdc::137::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain a52938f7-2cf4-4771-acb2-0c78d14999e5
Thread-23::ERROR::2014-02-02
09:51:19,497::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain a52938f7-2cf4-4771-acb2-0c78d14999e5

If the engine was sending a live snapshot, or if there is no difference between
the two commands on the engine side, then I would open a bug against libvirt for
pausing the vm during live snapshot.

Dafna

On 02/03/2014 02:41 PM, Steve Dainard wrote:

[root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
a52938f7-2cf4-4771-acb2-0c78d14999e5
uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
lver = 5
version = 3
role = Master
remotePath = gluster-store-vip:/rep1
spm_id = 2
type = NFS
class = Data
master_ver = 1
name = gluster-store-rep1


*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog | **LinkedIn
https://www.linkedin.com/company/miovision-technologies  | Twitter
https://twitter.com/miovision  | Facebook
https://www.facebook.com/miovision*

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener,
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or
confidential. If you are not the intended recipient, please delete the
e-mail and any attachments and notify us immediately.


On Sun, Feb 2, 2014 at 2:55 PM, Dafna Ron d...@redhat.com
mailto:d...@redhat.com wrote:

 please run vdsClient -s 0 getStorageDomainInfo
 a52938f7-2cf4-4771-acb2-0c78d14999e5

 Thanks,

 Dafna



 On 02/02/2014 03:02 PM, Steve Dainard wrote:

 Logs attached with VM running on qemu-kvm-rhev packages
installed.

 *Steve Dainard *
 IT Infrastructure Manager
 Miovision http
