Re: [Users] Glance with oVirt

2013-10-09 Thread Tim Hildred
I raised this bug, as I hit this issue.

https://bugzilla.redhat.com/show_bug.cgi?id=1017538

Tim Hildred, RHCE, RHCVA
Content Author II - Engineering Content Services, Red Hat, Inc.
Brisbane, Australia
Email: thild...@redhat.com
Internal: 8588287
Mobile: +61 4 666 25242
IRC: thildred

- Original Message -
> From: "Federico Simoncelli" 
> To: "Itamar Heim" 
> Cc: users@ovirt.org
> Sent: Thursday, October 3, 2013 8:34:54 AM
> Subject: Re: [Users] Glance with oVirt
> 
> - Original Message -
> > From: "Itamar Heim" 
> > To: "Riccardo Brunetti" 
> > Cc: "Jason Brooks" , users@ovirt.org, "Federico
> > Simoncelli" 
> > Sent: Wednesday, October 2, 2013 9:18:57 PM
> > Subject: Re: [Users] Glance with oVirt
> > 
> > On 09/25/2013 05:03 PM, Riccardo Brunetti wrote:
> > > On 09/24/2013 08:24 PM, Itamar Heim wrote:
> > >> On 09/24/2013 06:06 PM, Jason Brooks wrote:
> > >>
> > > Dear all.
> > > Unfortunately I only managed to get the first step of the
> > > procedure working.
> > > I can successfully define the glance external provider and from the
> > > storage tab I can list the images available in glance, but when I try to
> > > import an image (i.e. a Fedora19 qcow2 image which works inside
> > > OpenStack) the disk is stuck in "Illegal" state and in the logs
> > > (ovirt-engine) I see messages like:
> > >
> > > ...
> > >
> > > Can you help me?
> > >
> > > Thanks a lot
> > > Riccardo
> > >
> > 
> > was this resolved?
> 
> Several improvements have been introduced lately (both in engine and in
> vdsm).
> We might have two issues involved here (qcow2 case):
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1013643
> http://gerrit.ovirt.org/#/c/19222/
> 
> as far as I know the ovirt-engine-3.3 branch is in a good state now with
> regard to the glance integration (bz1013643 is still pending but it will
> be merged soon) and I expect to have a much more stable import for qcow2
> in the upcoming 3.3.1.
> 
> --
> Federico
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [libvirt] Migration issues with ovirt 3.3

2013-10-09 Thread Gianluca Cecchi
On Wed, Oct 9, 2013 at 5:52 PM, Daniel Berteaud  wrote:
> On Wednesday 09 October 2013 at 16:18 +0100, Dan Kenigsberg wrote:
>
>>
>> Since libvirt has been using this port range first, would you open a
>> bug on gluster to avoid it?
>
> Already done: https://bugzilla.redhat.com/show_bug.cgi?id=987555
>
> (not using ovirt, but the problem is easy to trigger as soon as you use
> GlusterFS and libvirt on the same boxes)
>
> Regards, Daniel
>
>>
>> Dan. (prays that Vdsm is not expected to edit libvirt's migration port
>> range on installation)

I added my comment to the bugzilla, and in my opinion everyone on oVirt
should do the same... very frustrating ;-(
It also seems not so easy to change the gluster port range.
I read in past threads that for previous versions the 24009+ range was
hard-coded.
In 3.4 there should be a client-side option such as
--volfile-server-port=PORT
but I tried a few things and couldn't get anywhere: ports 49152 and
49153 are always used for my two bricks.
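(For reference, a quick way to see where the bricks actually land, run on
one of the gluster nodes; "myvol" is just a placeholder volume name, and the
second command simply checks whether anything already listens in libvirt's
default 49152+ migration range:)

gluster volume status myvol      # the "Port" column shows 49152, 49153, ...
ss -ntlp | grep ':4915'          # confirm glusterfsd is the listener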

Furthermore, as a side note, I wanted to retry the migration several
times, since each attempt picks the next port.
I have two bricks, so the first two attempts fail; in VM.log I get
...
-incoming tcp:[::]:49152: Failed to bind socket: Address already in use
...
-incoming tcp:[::]:49153: Failed to bind socket: Address already in use

Unfortunately (tested three times with the same result), after the second
attempt the destination node goes into a loop of failing, recovering from
crash, and becoming non-responsive.
I can easily correct the situation with

systemctl restart vdsmd

on it, but this probably deserves its own bug report.
And if I then try to migrate again, it restarts from 49152... so I
cannot test further.
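(As an aside, a rough way to wait until vdsm has finished recovering before
retrying, assuming vdsClient exits non-zero while vdsm still answers with
code 99 "Recovering from crash or Initializing" as in the log below:)

# poll the local vdsm until getVdsCaps succeeds again
until vdsClient -s 0 getVdsCaps >/dev/null 2>&1; do
    echo "vdsm still recovering, waiting..."
    sleep 5
done
echo "vdsm is back"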

See the file linked below with what was written in vdsmd.log, showing the
two attempts, the failure, and the loop of this type:

Thread-61::DEBUG::2013-10-10
01:15:32,161::BindingXMLRPC::986::vds::(wrapper) return
getCapabilities with {'status': {'message': 'Recovering from crash or
Initializing', 'code': 99}}

https://docs.google.com/file/d/0BwoPbcrMv8mvamR3RmpzLU11OFE/edit?usp=sharing
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [libvirt] Migration issues with ovirt 3.3

2013-10-09 Thread Daniel Berteaud
On Wednesday 09 October 2013 at 16:18 +0100, Dan Kenigsberg wrote:

> 
> Since libvirt has been using this port range first, would you open a
> bug on gluster to avoid it?

Already done: https://bugzilla.redhat.com/show_bug.cgi?id=987555

(not using ovirt, but the problem is easy to trigger as soon as you use
GlusterFS and libvirt on the same boxes)

Regards, Daniel

> 
> Dan. (prays that Vdsm is not expected to edit libvirt's migration port
> range on installation)
> 

-- 
Daniel Berteaud
FIREWALL-SERVICES SARL.
Société de Services en Logiciels Libres
Technopôle Montesquieu
33650 MARTILLAC
Tel : 05 56 64 15 32
Fax : 05 56 64 15 32
Web : http://www.firewall-services.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt Weekly Meeting Minutes -- 2013-10-09

2013-10-09 Thread Dan Kenigsberg
On Wed, Oct 09, 2013 at 04:45:22PM +0100, Dan Kenigsberg wrote:
> On Wed, Oct 09, 2013 at 11:15:41AM -0400, Mike Burns wrote:
> > Minutes:
> > http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.html
> > Minutes (text):
> > http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.txt
> > Log:
> > http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.log.html
> > 
> > =
> > #ovirt: oVirt Weekly sync
> > =
> > 
> > 
> > Meeting started by mburns at 14:06:41 UTC. The full logs are available
> > at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.log.html
> > .
> > 
> > 
> > 
> > Meeting summary
> > ---
> > * agenda and roll call  (mburns, 14:07:00)
> >   * 3.3 updates  (mburns, 14:07:17)
> >   * 3.4 planning  (mburns, 14:07:24)
> >   * conferences and workshops  (mburns, 14:07:31)
> >   * infra update  (mburns, 14:07:34)
> > 
> > * 3.3 updates  (mburns, 14:08:42)
> >   * 3.3.0.1 vdsm packages are posted to updates-testing  (mburns,
> > 14:09:04)
> >   * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1009100
> > (sbonazzo, 14:10:33)
> >   * 2 open bugs blocking 3.3.0.1  (mburns, 14:29:35)
> >   * 1 is deferred due to qemu-kvm feature set in el6  (mburns, 14:29:49)
> >   * other is allowed versions for vdsm  (mburns, 14:30:01)
> >   * vdsm version bug will be backported to 3.3.0.1 today  (mburns,
> > 14:30:13)
> >   * ACTION: sbonazzo to build engine 3.3.0.1 tomorrow  (mburns,
> > 14:30:22)
> >   * ACTION: mburns to post 3.3.0.1 to ovirt.org tomorrow  (mburns,
> > 14:30:32)
> >   * expected release:  next week  (mburns, 14:30:46)
> >   * ACTION: danken and sbonazzo to provide release notes for 3.3.0.1
> > (mburns, 14:37:56)
> 
> """
> A vdsm bug (BZ#1007980) made it impossible to migrate or re-run a VM
> with a glusterfs-backed virtual disk if the VM was originally started
> with an empty cdrom.
> 
> If you have encountered this bug, you would have to manually find the
> affected VMs with
> 
> psql -U engine -d engine -c "select distinct vm_name from vm_static, 
> vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
> 
> and remove their junk cdrom address with
> 
> psql -U engine -d engine -c "update vm_device set address='' where 
> device='cdrom' and address ilike '%pci%';"
> """

Apparently, another bug is hampering VM migration on ovirt/gluster

Bug 987555 - Glusterfs ports conflict with qemu live migration

Could gluster choose a disjoint range of ports?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt Weekly Meeting Minutes -- 2013-10-09

2013-10-09 Thread Dan Kenigsberg
On Wed, Oct 09, 2013 at 11:15:41AM -0400, Mike Burns wrote:
> Minutes:
> http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.html
> Minutes (text):
> http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.txt
> Log:
> http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.log.html
> 
> =
> #ovirt: oVirt Weekly sync
> =
> 
> 
> Meeting started by mburns at 14:06:41 UTC. The full logs are available
> at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.log.html
> .
> 
> 
> 
> Meeting summary
> ---
> * agenda and roll call  (mburns, 14:07:00)
>   * 3.3 updates  (mburns, 14:07:17)
>   * 3.4 planning  (mburns, 14:07:24)
>   * conferences and workshops  (mburns, 14:07:31)
>   * infra update  (mburns, 14:07:34)
> 
> * 3.3 updates  (mburns, 14:08:42)
>   * 3.3.0.1 vdsm packages are posted to updates-testing  (mburns,
> 14:09:04)
>   * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1009100
> (sbonazzo, 14:10:33)
>   * 2 open bugs blocking 3.3.0.1  (mburns, 14:29:35)
>   * 1 is deferred due to qemu-kvm feature set in el6  (mburns, 14:29:49)
>   * other is allowed versions for vdsm  (mburns, 14:30:01)
>   * vdsm version bug will be backported to 3.3.0.1 today  (mburns,
> 14:30:13)
>   * ACTION: sbonazzo to build engine 3.3.0.1 tomorrow  (mburns,
> 14:30:22)
>   * ACTION: mburns to post 3.3.0.1 to ovirt.org tomorrow  (mburns,
> 14:30:32)
>   * expected release:  next week  (mburns, 14:30:46)
>   * ACTION: danken and sbonazzo to provide release notes for 3.3.0.1
> (mburns, 14:37:56)

"""
A vdsm bug (BZ#1007980) made it impossible to migrate or re-run a VM
with a glusterfs-backed virtual disk if the VM was originally started
with an empty cdrom.

If you have encountered this bug, you would have to manually find the
affected VMs with

psql -U engine -d engine -c "select distinct vm_name from vm_static, 
vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"

and remove their junk cdrom address with

psql -U engine -d engine -c "update vm_device set address='' where 
device='cdrom' and address ilike '%pci%';"
"""

>   * 3.3.1 -- vdsm should be ready for beta posting next week  (mburns,
> 14:38:53)
>   * engine looks to be in good shape for 3.3.1 (only 3 bugs, all in
> post)  (mburns, 14:43:24)
>   * plan is to post beta by next Wednesday (16-Oct)  (mburns, 14:43:35)
>   * with release around end of October  (mburns, 14:43:43)
> 
> * 3.4 planning  (mburns, 14:47:27)
>   * rough planning -- dev until end of december  (mburns, 14:57:11)
>   * stabilization/beta/etc during january  (mburns, 14:57:21)
>   * release late January or early February  (mburns, 14:57:34)
>   * ACTION: itamar to send email to board@ to discuss high level
> schedules  (mburns, 14:59:06)
> 
> * Conferences and Workshops  (mburns, 15:00:40)
>   * see the wiki home page for upcoming conferences  (mburns, 15:01:15)
>   * big one is KVM Forum in Edinburgh in 2 weeks  (mburns, 15:01:29)
> 
> * infra update  (mburns, 15:02:01)
>   * continued work on existing tasks  (mburns, 15:03:32)
>   * artifactory.ovirt.org setup  (mburns, 15:03:39)
>   * increased effort into continuous integration in jenkins  (mburns,
> 15:04:14)
>   * apuimedo and dcaro worked on checking installability of vdsm when
> spec is modified  (mburns, 15:10:36)
>   * adding ovirt-3.3 branch to vdsm jobs (instead of just master)
> (mburns, 15:11:32)
> 
> * Other Topics  (mburns, 15:12:47)
> 
> Meeting ended at 15:14:59 UTC.
> 
> 
> 
> 
> Action Items
> 
> * sbonazzo to build engine 3.3.0.1 tomorrow
> * mburns to post 3.3.0.1 to ovirt.org tomorrow
> * danken and sbonazzo to provide release notes for 3.3.0.1
> * itamar to send email to board@ to discuss high level schedules
> 
> 
> 
> 
> Action Items, by person
> ---
> * danken
>   * danken and sbonazzo to provide release notes for 3.3.0.1
> * itamar
>   * itamar to send email to board@ to discuss high level schedules
> * mburns
>   * mburns to post 3.3.0.1 to ovirt.org tomorrow
> * sbonazzo
>   * sbonazzo to build engine 3.3.0.1 tomorrow
>   * danken and sbonazzo to provide release notes for 3.3.0.1
> * **UNASSIGNED**
>   * (none)
> 
> 
> 
> 
> People Present (lines said)
> ---
> * mburns (111)
> * apuimedo (37)
> * danken (37)
> * itamar (34)
> * sbonazzo (20)
> * eedri_ (12)
> * ewoud (8)
> * orc_orc (5)
> * fsimonce` (3)
> * ovirtbot (3)
> * dneary (1)
> * lvernia (1)
> * sahina (1)
> * fabiand (1)
> 
> 
> 
> 
> Generated by `MeetBot`_ 0.1.4
> 
> .. _`MeetBot`: http://wiki.debian.org/MeetBot
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration issues with ovirt 3.3

2013-10-09 Thread Dan Kenigsberg
On Wed, Oct 09, 2013 at 04:52:20PM +0200, Gianluca Cecchi wrote:
> On Wed, Oct 9, 2013 at 3:43 PM, Dan Kenigsberg  wrote:
> > On Wed, Oct 09, 2013 at 02:42:22PM +0200, Gianluca Cecchi wrote:
> >> On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
> >>
> >> >
> >> >>
> >> >> But migration still fails
> >> >>
> >> >
> >> > It seems like an unrelated failure. I do not know what's blocking
> >> > migration traffic. Could you see if libvirtd.log and qemu logs at source
> >> > and destination have clues?
> >> >
> >>
> >> It seems that in VM.log under qemu on the dest host I have:
> >> ...
> >> -incoming tcp:[::]:49153: Failed to bind socket: Address already in use
> >
> > Is that port really taken (`ss -ntp` should tell by whom)?
> 
> yeah !
> It seems gluster uses it on both sides

Since libvirt has been using this port range first, would you open a
bug on gluster to avoid it?

Dan. (prays that Vdsm is not expected to edit libvirt's migration port
range on installation)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] oVirt Weekly Meeting Minutes -- 2013-10-09

2013-10-09 Thread Mike Burns
Minutes: 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.html
Minutes (text): 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.txt
Log: 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.log.html


=
#ovirt: oVirt Weekly sync
=


Meeting started by mburns at 14:06:41 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-09-14.06.log.html
.



Meeting summary
---
* agenda and roll call  (mburns, 14:07:00)
  * 3.3 updates  (mburns, 14:07:17)
  * 3.4 planning  (mburns, 14:07:24)
  * conferences and workshops  (mburns, 14:07:31)
  * infra update  (mburns, 14:07:34)

* 3.3 updates  (mburns, 14:08:42)
  * 3.3.0.1 vdsm packages are posted to updates-testing  (mburns,
14:09:04)
  * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1009100
(sbonazzo, 14:10:33)
  * 2 open bugs blocking 3.3.0.1  (mburns, 14:29:35)
  * 1 is deferred due to qemu-kvm feature set in el6  (mburns, 14:29:49)
  * other is allowed versions for vdsm  (mburns, 14:30:01)
  * vdsm version bug will be backported to 3.3.0.1 today  (mburns,
14:30:13)
  * ACTION: sbonazzo to build engine 3.3.0.1 tomorrow  (mburns,
14:30:22)
  * ACTION: mburns to post 3.3.0.1 to ovirt.org tomorrow  (mburns,
14:30:32)
  * expected release:  next week  (mburns, 14:30:46)
  * ACTION: danken and sbonazzo to provide release notes for 3.3.0.1
(mburns, 14:37:56)
  * 3.3.1 -- vdsm should be ready for beta posting next week  (mburns,
14:38:53)
  * engine looks to be in good shape for 3.3.1 (only 3 bugs, all in
post)  (mburns, 14:43:24)
  * plan is to post beta by next Wednesday (16-Oct)  (mburns, 14:43:35)
  * with release around end of October  (mburns, 14:43:43)

* 3.4 planning  (mburns, 14:47:27)
  * rough planning -- dev until end of december  (mburns, 14:57:11)
  * stabilization/beta/etc during january  (mburns, 14:57:21)
  * release late January or early February  (mburns, 14:57:34)
  * ACTION: itamar to send email to board@ to discuss high level
schedules  (mburns, 14:59:06)

* Conferences and Workshops  (mburns, 15:00:40)
  * see the wiki home page for upcoming conferences  (mburns, 15:01:15)
  * big one is KVM Forum in Edinburgh in 2 weeks  (mburns, 15:01:29)

* infra update  (mburns, 15:02:01)
  * continued work on existing tasks  (mburns, 15:03:32)
  * artifactory.ovirt.org setup  (mburns, 15:03:39)
  * increased effort into continuous integration in jenkins  (mburns,
15:04:14)
  * apuimedo and dcaro worked on checking installability of vdsm when
spec is modified  (mburns, 15:10:36)
  * adding ovirt-3.3 branch to vdsm jobs (instead of just master)
(mburns, 15:11:32)

* Other Topics  (mburns, 15:12:47)

Meeting ended at 15:14:59 UTC.




Action Items

* sbonazzo to build engine 3.3.0.1 tomorrow
* mburns to post 3.3.0.1 to ovirt.org tomorrow
* danken and sbonazzo to provide release notes for 3.3.0.1
* itamar to send email to board@ to discuss high level schedules




Action Items, by person
---
* danken
  * danken and sbonazzo to provide release notes for 3.3.0.1
* itamar
  * itamar to send email to board@ to discuss high level schedules
* mburns
  * mburns to post 3.3.0.1 to ovirt.org tomorrow
* sbonazzo
  * sbonazzo to build engine 3.3.0.1 tomorrow
  * danken and sbonazzo to provide release notes for 3.3.0.1
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* mburns (111)
* apuimedo (37)
* danken (37)
* itamar (34)
* sbonazzo (20)
* eedri_ (12)
* ewoud (8)
* orc_orc (5)
* fsimonce` (3)
* ovirtbot (3)
* dneary (1)
* lvernia (1)
* sahina (1)
* fabiand (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration issues with ovirt 3.3

2013-10-09 Thread Gianluca Cecchi
On Wed, Oct 9, 2013 at 3:43 PM, Dan Kenigsberg  wrote:
> On Wed, Oct 09, 2013 at 02:42:22PM +0200, Gianluca Cecchi wrote:
>> On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
>>
>> >
>> >>
>> >> But migration still fails
>> >>
>> >
>> > It seems like an unrelated failure. I do not know what's blocking
>> > migration traffic. Could you see if libvirtd.log and qemu logs at source
>> > and destination have clues?
>> >
>>
>> It seems that in VM.log under qemu on the dest host I have:
>> ...
>> -incoming tcp:[::]:49153: Failed to bind socket: Address already in use
>
> Is that port really taken (`ss -ntp` should tell by whom)?

yeah !
It seems gluster uses it on both sides

On destination
[root@f18ovn01 qemu]# ss -ntp |egrep "State|49153"
State  Recv-Q Send-Q  Local Address:Port   Peer Address:Port
ESTAB  0      0       192.168.3.1:975      192.168.3.1:49153   users:(("glusterfs",31166,7))
ESTAB  0      0       192.168.3.1:49153    192.168.3.3:972     users:(("glusterfsd",18615,14))
ESTAB  0      0       192.168.3.1:49153    192.168.3.1:965     users:(("glusterfsd",18615,13))
ESTAB  0      0       192.168.3.1:963      192.168.3.3:49153   users:(("glusterfs",31152,17))
ESTAB  0      0       192.168.3.1:49153    192.168.3.1:975     users:(("glusterfsd",18615,9))
ESTAB  0      0       192.168.3.1:49153    192.168.3.3:966     users:(("glusterfsd",18615,15))
ESTAB  0      0       192.168.3.1:965      192.168.3.1:49153   users:(("glusterfs",31152,7))
ESTAB  0      0       192.168.3.1:960      192.168.3.3:49153   users:(("glusterfs",31166,11))
...

[root@f18ovn01 qemu]# ps -ef|grep 31166
root     14950 10958  0 16:50 pts/0    00:00:00 grep --color=auto 31166
root     31166     1  0 Oct07 ?        00:00:04 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/626066f6d74e376808c27ad679a1e85c.socket --xlator-option
*replicate*.node-uuid=ebaf2f1a-65a8-409a-b911-6e631a5f182f

[root@f18ovn01 qemu]# lsof -Pp 31166|grep 49153
glusterfs 31166 root  7u  IPv4  4703891  0t0  TCP f18ovn01.mydomain:975->f18ovn01.mydomain:49153 (ESTABLISHED)
glusterfs 31166 root 11u  IPv4  4780563  0t0  TCP f18ovn01.mydomain:960->f18ovn03.mydomain:49153 (ESTABLISHED)

not so good indeed ;-)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration issues with ovirt 3.3

2013-10-09 Thread Dan Kenigsberg
On Wed, Oct 09, 2013 at 03:41:58PM +0200, Gianluca Cecchi wrote:
> On 09 Oct 2013 15:39, "Gianluca Cecchi"  wrote:
> >
> >
> > On 09 Oct 2013 15:35, "Dan Kenigsberg"  wrote:
> >
> > >
> > > On Wed, Oct 09, 2013 at 02:27:16PM +0200, Gianluca Cecchi wrote:
> > > > On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:
> > > >
> > > > >> >
> > > > >> > so now I'm able to start VM without having to select run once and
> > > > >> > attaching a cd iso
> > > > >> > (note that is only valid for newly created VMs though)
> > > > >>
> > > > >> Yes. Old VMs are trashed with the bogus address reported by the
> > > > >> buggy
> > > > >> Vdsm. Can someone from engine supply a script to clear all device
> > > > >> addresses from the VM database table?
> > > > >>
> > > > >
> > > > >
> > > > > you can use this line (assuming engine as db and user), make sure
> only 'bad' vms return:
> > > > > psql -U engine -d engine -c "select distinct vm_name from
> vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address
> ilike '%pci%';"
> > > > >
> > > > > if so, you can run this to clear the address field for them, so
> they could run again:
> > > > > psql -U engine -d engine -c "update vm_device set address='' where
> device='cdrom' and address ilike '%pci%';"
> > > > >
> > > >
> > > > I wanted to test this but for some reason it seems actually it solved
> itself.
> > > > I first ran the query and already had no value:
> > > > engine=# select distinct vm_name from vm_static, vm_device where
> > > > vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
> > > >  vm_name
> > > > -
> > > > (0 rows)
> > > >
> > > > (overall my VMs are
> > > > engine=# select distinct vm_name from vm_static;
> > > >   vm_name
> > > > ---
> > > >  Blank
> > > >  c6s
> > > >  c8again32
> > > > (3 rows)
> > >
> > > Which of these 3 is the one that was started up with an empty cdrom on a
> > > vanilla ovirt-3.3.0 vdsm? The script is expected to show only those.
> >
> > It is c6s
> 
> Note that the query suggested by Omer had 0 rows.
> My further query with 3 results was to show you all rows from vm_static at
> the moment

Yeah, understood. I have no idea how your c6s cleaned itself up.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration issues with ovirt 3.3

2013-10-09 Thread Dan Kenigsberg
On Wed, Oct 09, 2013 at 02:42:22PM +0200, Gianluca Cecchi wrote:
> On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:
> 
> >
> >>
> >> But migration still fails
> >>
> >
> > It seems like an unrelated failure. I do not know what's blocking
> > migration traffic. Could you see if libvirtd.log and qemu logs at source
> > and destination have clues?
> >
> 
> It seems that in VM.log under qemu on the dest host I have:
> ...
> -incoming tcp:[::]:49153: Failed to bind socket: Address already in use

Is that port really taken (`ss -ntp` should tell by whom)?

> 
> 
> See all:
> - In libvirtd.log of source host
> 2013-10-07 23:20:54.471+: 1209: debug :
> qemuMonitorOpenInternal:751 : QEMU_MONITOR_NEW: mon=0x7fc66412e820
> refs=2 fd=30
> 2013-10-07 23:20:54.472+: 1209: warning :
> qemuDomainObjEnterMonitorInternal:1136 : This thread seems to be the
> async job owner; entering monitor without asking for a nested job is
> dangerous
> 2013-10-07 23:20:54.472+: 1209: debug :
> qemuMonitorSetCapabilities:1145 : mon=0x7fc66412e820
> 2013-10-07 23:20:54.472+: 1209: debug : qemuMonitorSend:887 :
> QEMU_MONITOR_SEND_MSG: mon=0x7fc66412e820
> msg={"execute":"qmp_capabilities","id":"libvirt-1"}
>  fd=-1
> 2013-10-07 23:20:54.769+: 1199: error : qemuMonitorIORead:505 :
> Unable to read from monitor: Connection reset by peer
> 2013-10-07 23:20:54.769+: 1199: debug : qemuMonitorIO:638 : Error
> on monitor Unable to read from monitor: Connection reset by peer
> 2013-10-07 23:20:54.769+: 1199: debug : qemuMonitorIO:672 :
> Triggering error callback
> 2013-10-07 23:20:54.769+: 1199: debug :
> qemuProcessHandleMonitorError:351 : Received error on 0x7fc664124fb0
> 'c8again32'
> 2013-10-07 23:20:54.769+: 1209: debug : qemuMonitorSend:899 : Send
> command resulted in error Unable to read from monitor: Connection
> reset by peer
> 2013-10-07 23:20:54.770+: 1199: debug : qemuMonitorIO:638 : Error
> on monitor Unable to read from monitor: Connection reset by peer
> 2013-10-07 23:20:54.770+: 1209: debug : virFileMakePathHelper:1283
> : path=/var/run/libvirt/qemu mode=0777
> 2013-10-07 23:20:54.770+: 1199: debug : qemuMonitorIO:661 :
> Triggering EOF callback
> 2013-10-07 23:20:54.770+: 1199: debug :
> qemuProcessHandleMonitorEOF:294 : Received EOF on 0x7fc664124fb0
> 'c8again32'
> 2013-10-07 23:20:54.770+: 1209: debug : qemuProcessStop:3992 :
> Shutting down VM 'c8again32' pid=18053 flags=0
> 2013-10-07 23:20:54.771+: 1209: error :
> virNWFilterDHCPSnoopEnd:2135 : internal error ifname "vnet0" not in
> key map
> 2013-10-07 23:20:54.782+: 1209: debug : virCommandRunAsync:2251 :
> About to run /bin/sh -c 'IPT="/usr/sbin/iptables"
> $IPT -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> vnet0 -g FO-vnet0
> $IPT -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0
> $IPT -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0
> $IPT -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0
> $IPT -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT
> $IPT -F FO-vnet0
> $IPT -X FO-vnet0
> $IPT -F FI-vnet0
> $IPT -X FI-vnet0
> $IPT -F HI-vnet0
> $IPT -X HI-vnet0
> IPT="/usr/sbin/ip6tables"
> $IPT -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> vnet0 -g FO-vnet0
> $IPT -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0
> $IPT -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0
> $IPT -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0
> $IPT -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT
> $IPT -F FO-vnet0
> $IPT -X FO-vnet0
> $IPT -F FI-vnet0
> $IPT -X FI-vnet0
> $IPT -F HI-vnet0
> $IPT -X HI-vnet0
> EBT="/usr/sbin/ebtables"
> $EBT -t nat -D PREROUTING -i vnet0 -j libvirt-I-vnet0
> $EBT -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0
> EBT="/usr/sbin/ebtables"
> collect_chains()
> {
>   for tmp2 in $*; do
> for tmp in $($EBT -t nat -L $tmp2 | \
>   sed -n "/Bridge chain/,\$ s/.*-j \\([IO]-.*\\)/\\1/p");
> do
>   echo $tmp
>   collect_chains $tmp
> done
>   done
> }
> rm_chains()
> {
>   for tmp in $*; do $EBT -t nat -F $tmp; done
>   for tmp in $*; do $EBT -t nat -X $tmp; done
> }
> tmp='\''
> '\''
> IFS='\'' '\'''\''   '\''$tmp
> chains="$(collect_chains libvirt-I-vnet0 libvirt-O-vnet0)"
> $EBT -t nat -F libvirt-I-vnet0
> $EBT -t nat -F libvirt-O-vnet0
> rm_chains $chains
> $EBT -t nat -F libvirt-I-vnet0
> $EBT -t nat -X libvirt-I-vnet0
> $EBT -t nat -F libvirt-O-vnet0
> $EBT -t nat -X libvirt-O-vnet0
> '
> 2013-10-07 23:20:54.784+: 1209: debug : virCommandRunAsync:2256 :
> Command result 0, with PID 18076
> 2013-10-07 23:20:54.863+: 1209: debug : virCommandRun:2125 :
> Result exit status 255, stdout: '' stderr: 'iptables v1.4.18: goto
> 'FO-vnet0' is not a chain
> 
> Try `iptables -h' or 'iptables --help' for more information.
> iptables v1.4.18: goto 'FO-vnet0' is not a chain
> 
> Try `iptables -h' or 'iptables --help' for more information.
> iptables v1.4.18: goto 'FI

Re: [Users] Migration issues with ovirt 3.3

2013-10-09 Thread Gianluca Cecchi
On 09 Oct 2013 15:39, "Gianluca Cecchi"  wrote:
>
>
> On 09 Oct 2013 15:35, "Dan Kenigsberg"  wrote:
>
> >
> > On Wed, Oct 09, 2013 at 02:27:16PM +0200, Gianluca Cecchi wrote:
> > > On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:
> > >
> > > >> >
> > > >> > so now I'm able to start VM without having to select run once and
> > > >> > attaching a cd iso
> > > >> > (note that is only valid for newly created VMs though)
> > > >>
> > > >> Yes. Old VMs are trashed with the bogus address reported by the
> > > >> buggy
> > > >> Vdsm. Can someone from engine supply a script to clear all device
> > > >> addresses from the VM database table?
> > > >>
> > > >
> > > >
> > > > you can use this line (assuming engine as db and user), make sure
only 'bad' vms return:
> > > > psql -U engine -d engine -c "select distinct vm_name from
vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address
ilike '%pci%';"
> > > >
> > > > if so, you can run this to clear the address field for them, so
they could run again:
> > > > psql -U engine -d engine -c "update vm_device set address='' where
device='cdrom' and address ilike '%pci%';"
> > > >
> > >
> > > I wanted to test this but for some reason it seems actually it solved
itself.
> > > I first ran the query and already had no value:
> > > engine=# select distinct vm_name from vm_static, vm_device where
> > > vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
> > >  vm_name
> > > -
> > > (0 rows)
> > >
> > > (overall my VMs are
> > > engine=# select distinct vm_name from vm_static;
> > >   vm_name
> > > ---
> > >  Blank
> > >  c6s
> > >  c8again32
> > > (3 rows)
> >
> > Which of these 3 is the one that was started up with an empty cdrom on a
> > vanilla ovirt-3.3.0 vdsm? The script is expected to show only those.
>
> It is c6s

Note that the query suggested by Omer had 0 rows.
My further query with 3 results was to show you all rows from vm_static at
the moment
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration issues with ovirt 3.3

2013-10-09 Thread Gianluca Cecchi
On 09 Oct 2013 15:35, "Dan Kenigsberg"  wrote:
>
> On Wed, Oct 09, 2013 at 02:27:16PM +0200, Gianluca Cecchi wrote:
> > On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:
> >
> > >> >
> > >> > so now I'm able to start VM without having to select run once and
> > >> > attaching a cd iso
> > >> > (note that is only valid for newly created VMs though)
> > >>
> > >> Yes. Old VMs are trashed with the bogus address reported by the buggy
> > >> Vdsm. Can someone from engine supply a script to clear all device
> > >> addresses from the VM database table?
> > >>
> > >
> > >
> > > you can use this line (assuming engine as db and user), make sure
only 'bad' vms return:
> > > psql -U engine -d engine -c "select distinct vm_name from vm_static,
vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
> > >
> > > if so, you can run this to clear the address field for them, so they
could run again:
> > > psql -U engine -d engine -c "update vm_device set address='' where
device='cdrom' and address ilike '%pci%';"
> > >
> >
> > I wanted to test this but for some reason it seems actually it solved
itself.
> > I first ran the query and already had no value:
> > engine=# select distinct vm_name from vm_static, vm_device where
> > vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
> >  vm_name
> > -
> > (0 rows)
> >
> > (overall my VMs are
> > engine=# select distinct vm_name from vm_static;
> >   vm_name
> > ---
> >  Blank
> >  c6s
> >  c8again32
> > (3 rows)
>
> Which of these 3 is the one that was started up with an empty cdrom on a
> vanilla ovirt-3.3.0 vdsm? The script is expected to show only those.

It is c6s
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration issues with ovirt 3.3

2013-10-09 Thread Dan Kenigsberg
On Wed, Oct 09, 2013 at 02:27:16PM +0200, Gianluca Cecchi wrote:
> On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:
> 
> >> >
> >> > so now I'm able to start VM without having to select run once and
> >> > attaching a cd iso
> >> > (note that is only valid for newly created VMs though)
> >>
> >> Yes. Old VMs are trashed with the bogus address reported by the buggy
> >> Vdsm. Can someone from engine supply a script to clear all device
> >> addresses from the VM database table?
> >>
> >
> >
> > you can use this line (assuming engine as db and user), make sure only 
> > 'bad' vms return:
> > psql -U engine -d engine -c "select distinct vm_name from vm_static, 
> > vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
> >
> > if so, you can run this to clear the address field for them, so they could 
> > run again:
> > psql -U engine -d engine -c "update vm_device set address='' where 
> > device='cdrom' and address ilike '%pci%';"
> >
> 
> I wanted to test this but for some reason it seems actually it solved itself.
> I first ran the query and already had no value:
> engine=# select distinct vm_name from vm_static, vm_device where
> vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
>  vm_name
> -
> (0 rows)
> 
> (overall my VMs are
> engine=# select distinct vm_name from vm_static;
>   vm_name
> ---
>  Blank
>  c6s
>  c8again32
> (3 rows)

Which of these 3 is the one that was started up with an empty cdrom on a
vanilla ovirt-3.3.0 vdsm? The script is expected to show only those.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration issues with ovirt 3.3

2013-10-09 Thread Gianluca Cecchi
On Tue, Oct 8, 2013 at 10:40 AM, Dan Kenigsberg wrote:

>
>>
>> But migration still fails
>>
>
> It seems like an unrelated failure. I do not know what's blocking
> migration traffic. Could you see if libvirtd.log and qemu logs at source
> and destination have clues?
>

It seems that in VM.log under qemu on the dest host I have:
...
-incoming tcp:[::]:49153: Failed to bind socket: Address already in use


See all:
- In libvirtd.log of source host
2013-10-07 23:20:54.471+: 1209: debug :
qemuMonitorOpenInternal:751 : QEMU_MONITOR_NEW: mon=0x7fc66412e820
refs=2 fd=30
2013-10-07 23:20:54.472+: 1209: warning :
qemuDomainObjEnterMonitorInternal:1136 : This thread seems to be the
async job owner; entering monitor without asking for a nested job is
dangerous
2013-10-07 23:20:54.472+: 1209: debug :
qemuMonitorSetCapabilities:1145 : mon=0x7fc66412e820
2013-10-07 23:20:54.472+: 1209: debug : qemuMonitorSend:887 :
QEMU_MONITOR_SEND_MSG: mon=0x7fc66412e820
msg={"execute":"qmp_capabilities","id":"libvirt-1"}
 fd=-1
2013-10-07 23:20:54.769+: 1199: error : qemuMonitorIORead:505 :
Unable to read from monitor: Connection reset by peer
2013-10-07 23:20:54.769+: 1199: debug : qemuMonitorIO:638 : Error
on monitor Unable to read from monitor: Connection reset by peer
2013-10-07 23:20:54.769+: 1199: debug : qemuMonitorIO:672 :
Triggering error callback
2013-10-07 23:20:54.769+: 1199: debug :
qemuProcessHandleMonitorError:351 : Received error on 0x7fc664124fb0
'c8again32'
2013-10-07 23:20:54.769+: 1209: debug : qemuMonitorSend:899 : Send
command resulted in error Unable to read from monitor: Connection
reset by peer
2013-10-07 23:20:54.770+: 1199: debug : qemuMonitorIO:638 : Error
on monitor Unable to read from monitor: Connection reset by peer
2013-10-07 23:20:54.770+: 1209: debug : virFileMakePathHelper:1283
: path=/var/run/libvirt/qemu mode=0777
2013-10-07 23:20:54.770+: 1199: debug : qemuMonitorIO:661 :
Triggering EOF callback
2013-10-07 23:20:54.770+: 1199: debug :
qemuProcessHandleMonitorEOF:294 : Received EOF on 0x7fc664124fb0
'c8again32'
2013-10-07 23:20:54.770+: 1209: debug : qemuProcessStop:3992 :
Shutting down VM 'c8again32' pid=18053 flags=0
2013-10-07 23:20:54.771+: 1209: error :
virNWFilterDHCPSnoopEnd:2135 : internal error ifname "vnet0" not in
key map
2013-10-07 23:20:54.782+: 1209: debug : virCommandRunAsync:2251 :
About to run /bin/sh -c 'IPT="/usr/sbin/iptables"
$IPT -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
vnet0 -g FO-vnet0
$IPT -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0
$IPT -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0
$IPT -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0
$IPT -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT
$IPT -F FO-vnet0
$IPT -X FO-vnet0
$IPT -F FI-vnet0
$IPT -X FI-vnet0
$IPT -F HI-vnet0
$IPT -X HI-vnet0
IPT="/usr/sbin/ip6tables"
$IPT -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
vnet0 -g FO-vnet0
$IPT -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0
$IPT -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0
$IPT -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0
$IPT -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT
$IPT -F FO-vnet0
$IPT -X FO-vnet0
$IPT -F FI-vnet0
$IPT -X FI-vnet0
$IPT -F HI-vnet0
$IPT -X HI-vnet0
EBT="/usr/sbin/ebtables"
$EBT -t nat -D PREROUTING -i vnet0 -j libvirt-I-vnet0
$EBT -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0
EBT="/usr/sbin/ebtables"
collect_chains()
{
  for tmp2 in $*; do
for tmp in $($EBT -t nat -L $tmp2 | \
  sed -n "/Bridge chain/,\$ s/.*-j \\([IO]-.*\\)/\\1/p");
do
  echo $tmp
  collect_chains $tmp
done
  done
}
rm_chains()
{
  for tmp in $*; do $EBT -t nat -F $tmp; done
  for tmp in $*; do $EBT -t nat -X $tmp; done
}
tmp='\''
'\''
IFS='\'' '\'''\''   '\''$tmp
chains="$(collect_chains libvirt-I-vnet0 libvirt-O-vnet0)"
$EBT -t nat -F libvirt-I-vnet0
$EBT -t nat -F libvirt-O-vnet0
rm_chains $chains
$EBT -t nat -F libvirt-I-vnet0
$EBT -t nat -X libvirt-I-vnet0
$EBT -t nat -F libvirt-O-vnet0
$EBT -t nat -X libvirt-O-vnet0
'
2013-10-07 23:20:54.784+: 1209: debug : virCommandRunAsync:2256 :
Command result 0, with PID 18076
2013-10-07 23:20:54.863+: 1209: debug : virCommandRun:2125 :
Result exit status 255, stdout: '' stderr: 'iptables v1.4.18: goto
'FO-vnet0' is not a chain

Try `iptables -h' or 'iptables --help' for more information.
iptables v1.4.18: goto 'FO-vnet0' is not a chain

Try `iptables -h' or 'iptables --help' for more information.
iptables v1.4.18: goto 'FI-vnet0' is not a chain
Try `iptables -h' or 'iptables --help' for more information.
iptables v1.4.18: goto 'HI-vnet0' is not a chain

Try `iptables -h' or 'iptables --help' for more information.
iptables: Bad rule (does a matching rule exist in that chain?).
iptables: No chain/target/match by that name.
iptables: No chain/target/match by that name.
iptables: No chain/target/match

Re: [Users] Migration issues with ovirt 3.3

2013-10-09 Thread Gianluca Cecchi
On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:

>> >
>> > so now I'm able to start VM without having to select run once and
>> > attaching a cd iso
>> > (note that is only valid for newly created VMs though)
>>
>> Yes. Old VMs are trashed with the bogus address reported by the buggy
>> Vdsm. Can someone from engine supply a script to clear all device
>> addresses from the VM database table?
>>
>
>
> you can use this line (assuming engine as db and user), make sure only 'bad' 
> vms return:
> psql -U engine -d engine -c "select distinct vm_name from vm_static, 
> vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
>
> if so, you can run this to clear the address field for them, so they could 
> run again:
> psql -U engine -d engine -c "update vm_device set address='' where 
> device='cdrom' and address ilike '%pci%';"
>

I wanted to test this, but for some reason it seems it actually solved itself.
I first ran the query and already had no value:
engine=# select distinct vm_name from vm_static, vm_device where
vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
 vm_name
-
(0 rows)

(overall my VMs are
engine=# select distinct vm_name from vm_static;
  vm_name
---
 Blank
 c6s
 c8again32
(3 rows)

)

Then I tried to start the powered off VM that gave problems and it started ok.
Tried to run shutdown inside it and power on again and it worked too.
As soon as I updated the bugzilla with my comment, I restarted vdsmd
on both nodes but I wasn't able to run it...

strange... I don't see what else could have resolved it by itself.
Things done in sequence related to that VM were:

modify vm.py on both nodes and restart vdsmd while VM was powered off
verify that power on gave the error
verify that run once worked as before
shutdown and power off VM
after two days (no activity on my side at all) I wanted to try the DB
workaround to fix this VM's status, but apparently the column was
already empty and the VM was able to start normally

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] ComputeNode installation failed ovirt 3.3

2013-10-09 Thread Sven Kieske
Hi,

we have successfully deployed ovirt engine 3.3.
However, when adding a node based on oVirt Node 2.6.1,
we get the following error after registration completes successfully
and we try to activate the host via webadmin:

"Installing Host server4 Starting vdsm"
"Host server4 Installation failed. Unexpected connection termination."

I attached an excerpt from the engine.log, which shows that an ssh
command fails, but I don't know why.

manual ssh connections to the node from the management node work just fine.

The node has the IP 10.0.1.4 in the log.

Any help would be appreciated.

Regards

Sven

PS:
The same Node works fine with oVirt management 3.2.
We then used the "reinstall" feature from the node
ISO to redeploy the node to the 3.3 management server.
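(Two places that usually hold more detail on such a failure, a hedged
suggestion since paths may differ on your setup: the per-host deploy log on
the engine side and the kernel log on the node, given that exit code 139
below means the bootstrap script segfaulted:)

# on the engine, assuming the usual host-deploy log directory
ls -t /var/log/ovirt-engine/host-deploy/ | head
# on the node: confirm the setup/otopi process really hit a segfault
dmesg | grep -i segfault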
2013-10-09 10:50:49,697 INFO  [org.ovirt.engine.core.bll.InstallerMessages] (VdsDeploy) Installation 10.0.1.4: Starting vdsm
2013-10-09 10:50:49,725 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (VdsDeploy) Correlation ID: 38f064c, Call Stack: null, Custom Event ID: -1, Message: Installing Host server4. Starting vdsm.
2013-10-09 10:50:51,423 ERROR [org.ovirt.engine.core.bll.GetoVirtISOsQuery] (ajp--127.0.0.1-8702-11) ovirt ISOs directory not found. Search in: /usr/share/ovirt-node-iso
2013-10-09 10:50:52,784 ERROR [org.ovirt.engine.core.bll.VdsDeploy] (VdsDeploy) Error during deploy dialog: java.io.IOException: Unexpected connection termination
at org.ovirt.otopi.dialog.MachineDialogParser.nextEvent(MachineDialogParser.java:388) [otopi.jar:]
at org.ovirt.otopi.dialog.MachineDialogParser.nextEvent(MachineDialogParser.java:405) [otopi.jar:]
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:750) [bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$1800(VdsDeploy.java:77) [bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$45.run(VdsDeploy.java:893) [bll.jar:]
at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_25]

2013-10-09 10:50:52,785 ERROR [org.ovirt.engine.core.utils.ssh.SSHDialog] (pool-6-thread-32) SSH stderr during command root@10.0.1.4:'umask 0077; MYTMP="$(mktemp -t ovirt-XX)"; trap "chmod -R u+rwX \"${M
YTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; rm -fr "${MYTMP}" && mkdir "${MYTMP}" && tar --warning=no-timestamp -C "${MYTMP}" -x &&  "${MYTMP}"/setup DIALOG/dialect=str:machine DIALOG/cust
omization=bool:True': stderr: bash: line 1:  8635 Segmentation fault  "${MYTMP}"/setup DIALOG/dialect=str:machine DIALOG/customization=bool:True

2013-10-09 10:50:52,787 ERROR [org.ovirt.engine.core.utils.ssh.SSHDialog] (pool-6-thread-32) SSH error running command root@10.0.1.4:'umask 0077; MYTMP="$(mktemp -t ovirt-XX)"; trap "chmod -R u+rwX \"${M
YTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; rm -fr "${MYTMP}" && mkdir "${MYTMP}" && tar --warning=no-timestamp -C "${MYTMP}" -x &&  "${MYTMP}"/setup DIALOG/dialect=str:machine DIALOG/cust
omization=bool:True': java.io.IOException: Command returned failure code 139 during SSH session 'root@10.0.1.4'
at org.ovirt.engine.core.utils.ssh.SSHClient.executeCommand(SSHClient.java:508) [utils.jar:]
at org.ovirt.engine.core.utils.ssh.SSHDialog.executeCommand(SSHDialog.java:311) [utils.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.execute(VdsDeploy.java:1039) [bll.jar:]
at org.ovirt.engine.core.bll.InstallVdsCommand.installHost(InstallVdsCommand.java:192) [bll.jar:]
at org.ovirt.engine.core.bll.InstallVdsCommand.executeCommand(InstallVdsCommand.java:105) [bll.jar:]
at org.ovirt.engine.core.bll.ApproveVdsCommand.executeCommand(ApproveVdsCommand.java:49) [bll.jar:]
at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1128) [bll.jar:]
at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1213) [bll.jar:]
at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1871) [bll.jar:]
at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:174) [utils.jar:]
at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:116) [utils.jar:]
at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1231) [bll.jar:]
at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:365) [bll.jar:]
at org.ovirt.engine.core.bll.MultipleActionsRunner.executeValidatedCommand(MultipleActionsRunner.java:175) [bll.jar:]
at org.ovirt.engine.core.bll.MultipleActionsRunner.RunCommands(MultipleActionsRunner.java:156) [bll.jar:]
at org.ovirt.engine.core.bll.MultipleActionsRunner$1.run(MultipleActionsRunner.java:94) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:71) [utils.jar:]
at java.util.concur

[Users] ovirt-engine-cli 3.3.0.5-1 released

2013-10-09 Thread Michael Pasternak


More details can be found at [1].

[1] http://wiki.ovirt.org/Cli-changelog

-- 

Michael Pasternak
RedHat, ENG-Virtualization R&D


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] ovirt-engine-sdk-python 3.3.07-1 released

2013-10-09 Thread Michael Pasternak

For more details see [1].

* note for pypi users: the 3.3 sdk was renamed to ovirt-engine-sdk-python and is hosted
at [2]; the old repository [3] contains 3.2 artifacts only.

[1] http://wiki.ovirt.org/Python-sdk-changelog
[2] https://pypi.python.org/pypi/ovirt-engine-sdk-python
[3] https://pypi.python.org/pypi/ovirt-engine-sdk
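(In practice, for pypi users that means installing the 3.3 SDK with, assuming
pip is available:)

pip install ovirt-engine-sdk-python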

-- 

Michael Pasternak
RedHat, ENG-Virtualization R&D


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] ovirt-engine-sdk-java 1.0.0.18-1 released

2013-10-09 Thread Michael Pasternak

More details can be found at [1].


[1] http://www.ovirt.org/Java-sdk-changelog

-- 

Michael Pasternak
RedHat, ENG-Virtualization R&D


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users