Re: [ovirt-users] NFS export domain still remain in "Preparing for maintenance"

2015-05-13 Thread Maor Lipchuk
Hi NUNIN,

I'm not sure that I clearly understood the problem.
You wrote that your NFS export is attached to a 6.6 cluster, though a cluster 
is mainly an entity which contains Hosts.

If it is the Host that was preparing for maintenance, it could be that there 
are VMs running on that Host which are currently in the middle of a live 
migration.
In that case you could either manually migrate those VMs, shut them down, or 
simply move the Host back to active.
Is that indeed the issue? If not, can you elaborate a bit more, please?
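
(For reference, a quick way to check what is still running on that Host, from
the Host itself, is something like the sketch below; it assumes the vdsm-cli
package is installed, and "-s 0" targets the local vdsm over SSL.)

  # List the VMs vdsm still knows about on this host, with their status
  # (migrating VMs show a migration status).
  vdsClient -s 0 list table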


Thanks,
Maor



- Original Message -
> From: "NUNIN Roberto" 
> To: users@ovirt.org
> Sent: Tuesday, May 12, 2015 5:17:39 PM
> Subject: [ovirt-users] NFS export domain still remain in "Preparing for   
> maintenance"
> 
> 
> 
> Hi all
> 
> 
> 
> We are using oVirt engine 3.5.1-0.0 on CentOS 6.6.
> 
> We have two DCs: one with hosts using vdsm-4.16.10-8.gitc937927.el7.x86_64,
> the other with vdsm-4.16.12-7.gita30da75.el6.x86_64 on CentOS 6.6.
> 
> No hosted-engine; the engine runs on a dedicated VM, outside oVirt.
> 
> 
> 
> Behavior: when we try to put into maintenance the NFS export domain that is
> currently active, attached to the 6.6 cluster and used to move VMs from one
> DC to the other, it remains indefinitely in the “Preparing for maintenance”
> phase.
> 
> 
> 
> No DNS resolution issue is in place: all parties involved resolve correctly,
> both directly and via reverse lookup.
> 
> I’ve read about the el7/IPv6 bug causing this, but here we have the problem
> on CentOS 6.6 hosts.
> 
> 
> 
> Any idea/suggestion/further investigation?
> 
> 
> 
> Can we reinitialize the NFS export domain in some way? Only by erasing its content?
> 
> Thanks in advance for any suggestion.
> 
> 
> 
> 
> 
> Roberto Nunin
> 
> Italy
> 
> 
> 
> 
> 
> This message is for the designated recipient only and may contain privileged,
> proprietary, or otherwise private information. If you have received it in
> error, please notify the sender immediately, deleting the original and all
> copies and destroying any hard copies. Any other use is strictly prohibited
> and may be unlawful.
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] missing disk after storage domain expansion

2015-05-13 Thread Maor Lipchuk
Hi Andrea,

First of all, the issue sounds quite severe; can you please attach the engine 
logs so we can try to figure out how that happened?
Second, did this disk contain any snapshots?
If not, can you try to register it back? (See 
http://www.ovirt.org/Features/ImportStorageDomain#Register_an_unregistered_disk)
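
For reference, a rough sketch of what that registration call looks like over
the REST API, per the "Register an unregistered disk" section of the page
above; the engine address, credentials, storage domain ID and disk ID below
are placeholders, so please double check them against the feature page:

  # Sketch only -- register a disk that exists on the storage domain but is
  # unknown to the engine. Replace every placeholder with your real values.
  curl -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' \
       -X POST \
       -d '<disk id="DISK_ID"/>' \
       'https://ENGINE_FQDN/api/storagedomains/STORAGE_DOMAIN_ID/disks;unregistered'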


Regards,
Maor



- Original Message -
> From: "Andrea Ghelardi" 
> To: users@ovirt.org
> Sent: Monday, May 11, 2015 12:01:17 AM
> Subject: Re: [ovirt-users] missing disk after storage domain expansion
> 
> 
> 
> Ok, so,
> 
> After _a lot_ of unsuccessful approaches, I finally connected to the
> PostgreSQL DB directly.
> 
> Browsing the tables I found “unregistered_ovf_of_entities”, where there is a
> reference to the missing disk:
> 
> 
> 
> 
> 
> @ovf:diskId:4ab070c0-fb16-452d-8521-4ff0b004aef3
> 
> @ovf:size:210
> 
> @ovf:actual_size:210
> 
> @ovf:vm_snapshot_id:2e24b255-bb84-4284-8785-e2a042045882
> 
> @ovf:fileRef:16736ce0-a9df-410f-9f29-3a28364cdd41/4ab070c0-fb16-452d-8521-4ff0b004aef3
> 
> @ovf:format: http://www.vmware.com/specifications/vmdk.html#sparse
> 
> @ovf:volume-format:RAW
> 
> @ovf:volume-type:Preallocated
> 
> @ovf:disk-interface:VirtIO
> 
> @ovf:boot:false
> 
> @ovf:disk-alias:hertz_disk5
> 
> @ovf:disk-description:disk for SYBASE installation, on SAN shared storage
> 
> @ovf:wipe-after-delete:false
> 
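
(For reference, entries like the one above can be listed straight from the
engine database with something along the lines of the sketch below; it
assumes the default database name "engine" and access as the postgres system
user, so adjust both to your setup.)

  # Sketch: dump the leftover OVF entries the engine still remembers.
  su - postgres -c "psql engine -c 'SELECT * FROM unregistered_ovf_of_entities;'"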
> 
> 
> However, I’ve been unable to find any other helpful details.
> 
> I guess the disk is not recoverable at this point?
> 
> Any guru with good oVirt DB knowledge willing to give me some advice?
> 
> 
> 
> Thanks as usual
> 
> AG
> 
> 
> 
> 
> From: Andrea Ghelardi [mailto: a.ghela...@iontrading.com ]
> Sent: Thursday, May 07, 2015 6:08 PM
> To: ' users@ovirt.org '
> Subject: missing disk after storage domain expansion
> 
> 
> 
> 
> Hi gentlemen,
> 
> 
> 
> I recently found an error on one of my storage domains: it was complaining
> about no free space, but the VM was running and the disk operational.
> 
> Since I needed to perform some maintenance on the VM, I shut it down, and at
> restart the VM couldn’t boot up properly.
> 
> I checked the VM via the console and a disk was missing. I edited fstab
> (luckily this disk was not root but heck! It had a Sybase DB on it!) and
> restarted the VM, this time OK.
> 
> 
> 
> Since the disk resides on the datastore with no free space, I expanded the
> iSCSI LUN, then refreshed multipath on the hosts, then resized the PVs, and
> now oVirt is showing the correct size (the logs no longer complain about
> free space).
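
(For reference, the host-side part of that sequence typically looks something
like the sketch below; it is illustrative only, and the multipath map name is
a placeholder for the real WWID-based device on the hosts.)

  # Illustrative sketch of growing an iSCSI-backed PV after the LUN was resized.
  iscsiadm -m session --rescan               # rescan iSCSI sessions so the larger LUN is seen
  multipathd -k"resize map MPATH_MAP_NAME"   # let multipath pick up the new path size
  pvresize /dev/mapper/MPATH_MAP_NAME        # grow the LVM physical volume to match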
> 
> 
> 
> BUUUT
> 
> 
> 
> Now the disk is missing. It is no longer shown in the Disks tab nor anywhere
> else.
> 
> The problem is that the storage shows 214 GB of occupancy (the size of the
> missing disk), so the data is there but I cannot find it anymore.
> 
> 
> 
> The logs show the original disk creation, the errors from the lack of space,
> the refresh of the storage size, and then no more references to the disk.
> 
> 
> 
> What can I do to find those missing ~210GBs?
> 
> 
> 
> Cheers
> 
> Andrea Ghelardi
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] When can we expect updated packages for Venom Vulnerability

2015-05-13 Thread Tim Macy
https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/
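
In the meantime, a quick way to see which QEMU builds are installed on each
host (a sketch; the exact package names depend on whether you use the plain
CentOS or the oVirt qemu-kvm-rhev builds) might be:

  # Check the currently installed QEMU packages and whether updates are pending.
  rpm -q qemu-kvm qemu-kvm-rhev qemu-img
  yum list updates 'qemu*'

Note that, per the advisory above, running guests need to be powered off and
started again (or migrated to an already-patched host) after the update for
the fix to take effect.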
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] providing hosts with foreman

2015-05-13 Thread Nathanaël Blanchet

Hi all,

I've set up a Foreman server, but when adding a new host via "discovered 
hosts", I can't modify the address field, which is filled by default with a 
generated "mac-DNS" name.
In the oVirt setup, I want to identify my future hosts by their IP and not by 
their unknown DNS name, as described here: 
http://www.ovirt.org/Features/ForemanIntegration.
How can I set up Foreman to do such a thing? Is the DNS proxy setup 
related?


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] [QE][ACTION REQUIRED] oVirt 3.5.3 status

2015-05-13 Thread Sandro Bonazzola
Il 13/05/2015 08:13, Sandro Bonazzola ha scritto:
> Hi,
> 
> We have no blockers for 3.5.3[1] and no blocking dependencies.
> 
> We still have 16 bugs in MODIFIED and 2 ON_QA [2]:
> 
>                MODIFIED   ON_QA   Total
> external           1        0       1
> gluster            1        0       1
> infra              5        1       6
> integration        2        1       3
> network            1        0       1
> sla                1        0       1
> storage            5        0       5
> Total             16        2      18
> 
> 
> ACTION: Testers: you're welcome to verify bugs currently ON_QA.

To allow easier testing of the fixes, a 3.5.3 RC1 will be issued 
tomorrow.



> ACTION: Testers: please add yourself to the test page [5]
> 
> We have 3 bugs currently targeted to 3.5.3[3]:
> 
> Whiteboard     NEW   POST   Total
> external        1      0      1
> infra           0      1      1
> network         0      1      1
> Total           1      2      3
> 
> 
> ACTION: Maintainers / Assignee: to review the bugs targeted to 3.5.3 and mark 
> them as blockers or postpone to 3.5.4.
> ACTION: Maintainers: to fill release notes for 3.5.3, the page has been 
> created and updated here [4]
> 
> A release management entry has been added for tracking the schedule of 
> 3.5.3[6] and tentative schedule has been proposed as follow:
> 
> 2015-05-27Release candidate
> 2015-06-09General availability
> 
> 
> [1] https://bugzilla.redhat.com/1198142
> [2] http://goo.gl/WqkJnn
> [3] 
> https://bugzilla.redhat.com/buglist.cgi?quicksearch=product%3Aovirt%20target_release%3A3.5.3
> [4] http://www.ovirt.org/OVirt_3.5.3_Release_Notes
> [5] http://www.ovirt.org/Testing/oVirt_3.5.3_Testing
> [6] http://www.ovirt.org/OVirt_3.5.z_Release_Management#oVirt_3.5.3
> 
> 
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Greetings from Argentina

2015-05-13 Thread Eyal Edri
Welcome javier!

Checkout the latest edition of "get involved in oVirt" from this month:
http://lists.ovirt.org/pipermail/users/2015-May/032747.html

I'm sure you can find something you like :)

good luck!

Eyal E.
oVirt Infra team

- Original Message -
> From: "javier coscia" 
> To: users@ovirt.org
> Sent: Wednesday, May 13, 2015 3:29:44 AM
> Subject: [ovirt-users] Greetings from Argentina
> 
> Hey guys, my name is Javier and I'm currently working for Red Hat Argentina
> as technical support for the LATAM GEO. I've been working @ RH for 2 years
> now, in the virtualization team. I'm looking forward to learning more about
> the upstream project and contributing where I can.
> 
> Cheers!!
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] messages from journal in centos7 hosts

2015-05-13 Thread Nicolas Ecarnot

On 13/05/2015 10:28, Francesco Romani wrote:

Hi,

- Original Message -

From: "Nicolas Ecarnot" 
To: users@ovirt.org
Sent: Wednesday, May 13, 2015 9:30:22 AM
Subject: Re: [ovirt-users] messages from journal in centos7 hosts

Hello list,

Coming from : https://www.mail-archive.com/users@ovirt.org/msg24878.html

I'm also being disturbed by this annoyance.
Is there a workaround?

(oVirt 3.5.1, CentOS 7 hosts, iSCSI storage)


This message should have been removed in recent VDSM versions.
Which version of VDSM and libvirt are you running?

Bests,



[root@xxx]# rpm -qa|grep -i virt
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.2.x86_64
virt-what-1.13-5.el7.x86_64
libvirt-daemon-1.2.8-16.el7_1.2.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.2.x86_64
ovirt-release35-003-1.noarch
fence-virt-0.3.2-1.el7.x86_64
libvirt-client-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.2.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.2.x86_64



[root@xxx]# rpm -qa|grep -i vdsm
vdsm-cli-4.16.14-0.el7.noarch
vdsm-python-zombiereaper-4.16.14-0.el7.noarch
vdsm-xmlrpc-4.16.14-0.el7.noarch
vdsm-jsonrpc-4.16.14-0.el7.noarch
vdsm-4.16.14-0.el7.x86_64
vdsm-python-4.16.14-0.el7.noarch
vdsm-yajsonrpc-4.16.14-0.el7.noarch


--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to Delete a VM snapshot .

2015-05-13 Thread Juan Hernández
On 05/11/2015 02:12 PM, Kumar R, Prashanth (Prashanth) wrote:
> Hi All,
> 
>  
> 
> I am facing issue in deleting a VM snapshot.
> 
> I am using ovirt-engine-sdk-java-3.5.0.5.jar
> 
>  
> 
>  
> 
> According to the SDK API, I can delete a VM snapshot using:
> 
> api.getVMs().get(vmName).getSnapshots().getById(snapshotId).delete();
> 
>  
> 
> But there is no way to fetch the snapshot ID of a particular snapshot.
> 

Why not?

  List<VMSnapshot> snapshots = vm.getSnapshots().list();
  for (VMSnapshot snapshot : snapshots) {
System.out.println(snapshot.getId());
  }

>  
> 
> Or
> 
>  
> 
> List<VMSnapshot> vmSnapshots =
>     api.getVMs().get(vmName).getSnapshots().list();
> 
> for (VMSnapshot vmSnapshot : vmSnapshots) {
>     if (vmSnapshot.getDescription().equals(snapshotdescription)) {
>         vmSnapshot.delete();
>     }
> }
> 
>  
> 
> I cannot reliably use the above snippet of code to delete a snapshot,
> because multiple snapshots for a VM can be created with the same description.
> 
>  
> 
>  
> 
> So is there a way to create a snapshot with a name assigned to it,
> 
> so that delete/restore operations can be performed based on the snapshot
> name?
> 
>  
> 
> Thanks,
> 
> Prashanth R
> 

What you can't do, if I understand correctly, is assign your own unique
symbolic name to a snapshot, like you do with the VM "name" attribute,
for example. That is a limitation of the engine: snapshots don't have a
"name" attribute.

Currently your only chance is to use the "description" attribute, but as
you said there is no guarantee that it will be unique.

Note that this happens with the Java SDK, with the Python SDK or with
any other client.
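
For completeness, the equivalent flow over the plain REST API looks roughly
like the sketch below (engine address, credentials and IDs are placeholders,
and the same caveat about non-unique descriptions applies when picking the
snapshot ID out of the listing):

  # List a VM's snapshots; each <snapshot> element carries its id and description.
  curl -k -u 'admin@internal:PASSWORD' \
       'https://ENGINE_FQDN/api/vms/VM_ID/snapshots'
  # Delete the chosen snapshot by id.
  curl -k -u 'admin@internal:PASSWORD' -X DELETE \
       'https://ENGINE_FQDN/api/vms/VM_ID/snapshots/SNAPSHOT_ID'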

-- 
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] messages from journal in centos7 hosts

2015-05-13 Thread Francesco Romani
Hi,

- Original Message -
> From: "Nicolas Ecarnot" 
> To: users@ovirt.org
> Sent: Wednesday, May 13, 2015 9:30:22 AM
> Subject: Re: [ovirt-users] messages from journal in centos7 hosts
> 
> Hello list,
> 
> Coming from : https://www.mail-archive.com/users@ovirt.org/msg24878.html
> 
> I'm also being disturbed by this annoyance.
> Is there a workaround?
> 
> (oVirt 3.5.1, CentOS 7 hosts, iSCSI storage)

This message should have been removed in recent VDSM versions.
Which version of VDSM and libvirt are you running?

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine-reports portal and cpu cores

2015-05-13 Thread Alexandr Krivulya
OK, I understand. But in the admin portal, on the host's details tab, I see
for example 32 logical cores versus 16 in the reports, because Hyper-Threading
is enabled. All VMs on this host consume 26 vCores. Is that over-allocation?
According to the report it is, but I actually think it is not.

Can you clarify?

On 13.05.2015 10:32, Shirly Radco wrote:
> Hi Alexandr,
>
> The vCores are the logical CPU cores, 
> and if you have more vCores than Host Cores it means that you have 
> over-allocation on the hosts.
> So, if one of the cores of a specific VM is at 100% usage, then all the 
> VMs using this core will show 100% as well.
>
> Best,
> --- 
> Shirly Radco 
> BI Software Engineer 
> Red Hat Israel Ltd.
>
>
>
> - Original Message -
>> From: "Alexandr Krivulya" 
>> To: users@ovirt.org
>> Sent: Tuesday, May 12, 2015 3:30:32 PM
>> Subject: [ovirt-users] engine-reports portal and cpu cores
>>
>> Hello,
>> Why do we have host cores in the reports, and not logical CPU cores? For
>> example, in the "Cluster Capacity Vs. Usage" report I always have more
>> vCores than Host Cores.
>> Thank you.
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine-reports portal and cpu cores

2015-05-13 Thread Shirly Radco
Hi Alexandr,

The vCores are the logical CPU cores, 
and if you have more vCores than Host Cores it means that you have 
over-allocation on the hosts.
So, if one of the cores of a specific VM is at 100% usage, then all the VMs 
using this core will show 100% as well.

Best,
--- 
Shirly Radco 
BI Software Engineer 
Red Hat Israel Ltd.



- Original Message -
> From: "Alexandr Krivulya" 
> To: users@ovirt.org
> Sent: Tuesday, May 12, 2015 3:30:32 PM
> Subject: [ovirt-users] engine-reports portal and cpu cores
> 
> Hello,
> Why do we have host cores in the reports, and not logical CPU cores? For
> example, in the "Cluster Capacity Vs. Usage" report I always have more
> vCores than Host Cores.
> Thank you.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] messages from journal in centos7 hosts

2015-05-13 Thread Nicolas Ecarnot

Hello list,

Coming from : https://www.mail-archive.com/users@ovirt.org/msg24878.html

I'm also being disturbed by this annoyance.
Is there a workaround?

(oVirt 3.5.1, CentOS 7 hosts, iSCSI storage)

--
Nicolas ECARNOT


Vadq Thu, 19 Feb 2015 06:24:57 -0800

Hi,

oVirt 3.5.1, engine on CentOS 7 + iSCSI storage (using LIO on one of the
vdsm hosts) + four hosts on CentOS 7.


After installing vdsm on all the CentOS 7 hosts in the cluster,
/var/log/messages gets a record like this every few seconds:

Feb 19 16:47:08 kvm05 journal: metadata not found: Requested metadata element
is not present
Feb 19 16:47:19 kvm05 journal: metadata not found: Requested metadata element
is not present
Feb 19 16:47:23 kvm05 journal: metadata not found: Requested metadata element
is not present
Feb 19 16:47:34 kvm05 journal: metadata not found: Requested metadata element
is not present
Feb 19 16:47:38 kvm05 journal: metadata not found: Requested metadata element
is not present

Is it possible to fix this without disabling systemd-journald?


--
Thanks,
Vadim

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HE setup failing: Cannot set temporary password for console connection

2015-05-13 Thread Daniel Helgenberger
Hello Didi,
this is a clean install of the first host. (I am trying a disaster scenario and 
will later restore the backup)

--
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN


www.m-box.de www.monkeymen.tv

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handeslregister: Amtsgericht Charlottenburg / HRB 112767




On Wed, May 13, 2015 at 12:22 AM -0700, "Yedidyah Bar David" 
<d...@redhat.com> wrote:

- Original Message -
> From: "Daniel Helgenberger" 
> To: users@ovirt.org
> Sent: Tuesday, May 12, 2015 8:48:42 PM
> Subject: [ovirt-users] HE setup failing: Cannot set temporary password for 
> console connection
>
> Hello,
>
> a week ago I set up a HE (3.5.2) with no problems; now this fails.

You mean you re-deploy the same host (clean setup), or adding another host?

> I
> suspect storage trouble but cannot pinpoint it. Did something change there
> with the recent async release?
>
> The storage domain is NFS3 via a 1G link to a ZFS storage appliance. The
> export is working quite fast; tested with iozone running for over 24h
> without problems.
>
> It starts with the "Verifying sanlock lockspace initialization" phase,
> which lasts about 7 min (!).
>
> In the end it seems sanlock is locking the domain down. I can only get
> it back by stopping sanlock and vdsm.
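
(For reference, the manual recovery described here boils down to something
like the sketch below on an EL7 host; checking sanlock's own view of the
lockspaces first can help confirm what is actually held.)

  # Inspect which lockspaces/resources sanlock currently holds.
  sanlock client status
  # Then stop the services, releasing the stuck lockspace.
  systemctl stop vdsmd
  systemctl stop sanlock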
>
> Attached is the full vdsm log.
>
>
> Thanks!
>
> vdsm:
>
> Thread-27::ERROR::2015-05-12
> 19:17:51,470::domainMonitor::256::Storage.DomainMonitorThread::(_monitorDomain)
> Error while collecting domain 2f83ea82-e110-465d-a2a6-ac30534a9f6a
> monitoring information
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/domainMonitor.py", line 232, in
>   _monitorDomain
> self.domain.selftest()
>   File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__
> return getattr(self.getRealDomain(), attrName)
>   File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
> return self._cache._realProduce(self._sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
> domain = self._findDomain(sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
> dom = findMethod(sdUUID)
>   File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain
> return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
>   File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath
> raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist:
> ('2f83ea82-e110-465d-a2a6-ac30534a9f6a',)
>
>
> Detector thread::DEBUG::2015-05-12
> 19:24:33,163::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http
> detected from ('127.0.0.1', 47076)
> Thread-93::DEBUG::2015-05-12
> 19:24:33,166::BindingXMLRPC::1133::vds::(wrapper) client [127.0.0.1]::call
> vmSetTicket with ('e3d350c5-6254-4908-99e9-d60ca2e046f7', '5435uOVP',
> '10800', 'disconnect', {}) {}
> Thread-93::ERROR::2015-05-12
> 19:24:33,166::BindingXMLRPC::1152::vds::(wrapper) unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 383, in vmSetTicket
> return vm.setTicket(password, ttl, existingConnAction, params)
>   File "/usr/share/vdsm/API.py", line 642, in setTicket
> return v.setTicket(password, ttl, existingConnAction, params)
>   File "/usr/share/vdsm/virt/vm.py", line 4834, in setTicket
> graphics = _domParseStr(self._dom.XMLDesc(0)).childNodes[0]. \
> AttributeError: 'NoneType' object has no attribute 'XMLDesc'
>
> and:
>
> Thread-91::DEBUG::2015-05-12
> 19:25:15,152::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
> ecode: 38 edom: 42 level: 2 message: Failed to acquire lock: No space left
> on device
> Thread-91::DEBUG::2015-05-12
> 19:25:15,154::vm::2294::vm.Vm::(_startUnderlyingVm)
> vmId=`e3d350c5-6254-4908-99e9-d60ca2e046f7`::_ongoingCreations released
> Thread-91::ERROR::2015-05-12
> 19:25:15,154::vm::2331::vm.Vm::(_startUnderlyingVm)
> vmId=`e3d350c5-6254-4908-99e9-d60ca2e046f7`::The vm start process failed
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 2271, in _startUnderlyingVm
> self._run()
>   File "/usr/share/vdsm/virt/vm.py", line 3335, in _run
> self._connection.createXML(domxml, flags),
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
>   111, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3427, in
>   createXML
> if ret is None:raise libvirtError('virDomainCreateXML() failed',
> conn=self)
> libvirtError: Failed to acquire lock: No space left on device
>
> sanlock:
> 2015-05-12 19:11:27+0200 4338 [708]: s2 lockspace
> 2f83ea82-e110-465d-a2a6-ac30534a9f6a:1:/rhev/data-center/mnt/10.11.0.30:_volumes

Re: [ovirt-users] HE setup failing: Cannot set temporary password for console connection

2015-05-13 Thread Yedidyah Bar David
- Original Message -
> From: "Daniel Helgenberger" 
> To: users@ovirt.org
> Sent: Tuesday, May 12, 2015 8:48:42 PM
> Subject: [ovirt-users] HE setup failing: Cannot set temporary password for 
> console connection
> 
> Hello,
> 
> a week ago I set up a HE (3.5.2) with no problems; now this fails.

You mean you re-deploy the same host (clean setup), or adding another host?

> I
> suspect storage trouble but cannot pinpoint it. Did something change there
> with the recent async release?
> 
> The storage domain is NFS3 via a 1G link to a ZFS storage appliance. The
> export is working quite fast; tested with iozone running for over 24h
> without problems.
> 
> It starts with the "Verifying sanlock lockspace initialization" phase,
> which lasts about 7 min (!).
> 
> In the end it seems sanlock is locking the domain down. I can only get
> it back by stopping sanlock and vdsm.
> 
> Attached is the full vdsm log.
> 
> 
> Thanks!
> 
> vdsm:
> 
> Thread-27::ERROR::2015-05-12
> 19:17:51,470::domainMonitor::256::Storage.DomainMonitorThread::(_monitorDomain)
> Error while collecting domain 2f83ea82-e110-465d-a2a6-ac30534a9f6a
> monitoring information
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/domainMonitor.py", line 232, in
>   _monitorDomain
> self.domain.selftest()
>   File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__
> return getattr(self.getRealDomain(), attrName)
>   File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
> return self._cache._realProduce(self._sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
> domain = self._findDomain(sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
> dom = findMethod(sdUUID)
>   File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain
> return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
>   File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath
> raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist:
> ('2f83ea82-e110-465d-a2a6-ac30534a9f6a',)
> 
> 
> Detector thread::DEBUG::2015-05-12
> 19:24:33,163::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http
> detected from ('127.0.0.1', 47076)
> Thread-93::DEBUG::2015-05-12
> 19:24:33,166::BindingXMLRPC::1133::vds::(wrapper) client [127.0.0.1]::call
> vmSetTicket with ('e3d350c5-6254-4908-99e9-d60ca2e046f7', '5435uOVP',
> '10800', 'disconnect', {}) {}
> Thread-93::ERROR::2015-05-12
> 19:24:33,166::BindingXMLRPC::1152::vds::(wrapper) unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 383, in vmSetTicket
> return vm.setTicket(password, ttl, existingConnAction, params)
>   File "/usr/share/vdsm/API.py", line 642, in setTicket
> return v.setTicket(password, ttl, existingConnAction, params)
>   File "/usr/share/vdsm/virt/vm.py", line 4834, in setTicket
> graphics = _domParseStr(self._dom.XMLDesc(0)).childNodes[0]. \
> AttributeError: 'NoneType' object has no attribute 'XMLDesc'
> 
> and:
> 
> Thread-91::DEBUG::2015-05-12
> 19:25:15,152::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
> ecode: 38 edom: 42 level: 2 message: Failed to acquire lock: No space left
> on device
> Thread-91::DEBUG::2015-05-12
> 19:25:15,154::vm::2294::vm.Vm::(_startUnderlyingVm)
> vmId=`e3d350c5-6254-4908-99e9-d60ca2e046f7`::_ongoingCreations released
> Thread-91::ERROR::2015-05-12
> 19:25:15,154::vm::2331::vm.Vm::(_startUnderlyingVm)
> vmId=`e3d350c5-6254-4908-99e9-d60ca2e046f7`::The vm start process failed
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 2271, in _startUnderlyingVm
> self._run()
>   File "/usr/share/vdsm/virt/vm.py", line 3335, in _run
> self._connection.createXML(domxml, flags),
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
>   111, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3427, in
>   createXML
> if ret is None:raise libvirtError('virDomainCreateXML() failed',
> conn=self)
> libvirtError: Failed to acquire lock: No space left on device
> 
> sanlock:
> 2015-05-12 19:11:27+0200 4338 [708]: s2 lockspace
> 2f83ea82-e110-465d-a2a6-ac30534a9f6a:1:/rhev/data-center/mnt/10.11.0.30:_volumes_ovirt_engine/2f83ea82-e110-465d-a2a6-ac30534a9f6a/dom_md/ids:0
> 2015-05-12 19:11:48+0200 4359 [702]: s2 host 1 1 4338
> 9f9fb274-9ae8-490e-bfba-72a743b0ebb2.hv01.sec.i
> 2015-05-12 19:11:48+0200 4359 [702]: s2 host 250 1 0
> 9f9fb274-9ae8-490e-bfba-72a743b0ebb2.hv01.sec.i
> 2015-05-12 19:11:48+0200 4359 [707]: s2:r2 resource
> 2f83ea82-e110-465d-a2a6-ac30534a9f6a:SDM:/rhev/data-center/mnt/10.11.0.30:_volumes_ovirt_engine/2f83ea82-e110-465d-a2a6-ac30534a9f6a/dom_md/leases:1048576
> for 3,11,4167
> 2015-05-12 19:23