Re: [vdsm] Test failure not free loop devices

2013-08-12 Thread David Caro
On Mon 12 Aug 2013 04:13:26 AM CEST, Zhou Zheng Sheng wrote:
 Hi David,

 on 2013-08-09 21:45, David Caro Estevez wrote:
 Sometimes we get this error when running the vdsm tests:

 ======================================================================
 ERROR: testLoopMount (mountTests.MountTests)
 ----------------------------------------------------------------------
 Traceback (most recent call last):
   File "/ephemeral0/vdsm_unit_tests_gerrit_el/tests/mountTests.py", line 69, in testLoopMount
     m.mount(mntOpts="loop")
   File "/ephemeral0/vdsm_unit_tests_gerrit_el/vdsm/storage/mount.py", line 222, in mount
     return self._runcmd(cmd, timeout)
   File "/ephemeral0/vdsm_unit_tests_gerrit_el/vdsm/storage/mount.py", line 238, in _runcmd
     raise MountError(rc, ";".join((out, err)))
 MountError: (2, ';mount: could not find any free loop device\n')
 -------------------- >> begin captured logging << --------------------
 Storage.Misc.excCmd: DEBUG: '/sbin/mkfs.ext2 -F /tmp/tmpq95svr' (cwd None)
 Storage.Misc.excCmd: DEBUG: SUCCESS: err = 'mke2fs 1.41.12 
 (17-May-2010)\n'; rc = 0
 Storage.Misc.excCmd: DEBUG: '/usr/bin/sudo -n /bin/mount -o loop 
 /tmp/tmpq95svr /tmp/tmpcS29EU' (cwd None)

 The problem is that loop devices seem not to be released (maybe when a 
 test fails?) and the system eventually runs out of devices.
 Can you take a look to see where the cleanup fails and fix it?


 I think this may be related to a known bug [1] of gvfs.

 To work around the bug:
 1. Reboot Linux.
 2. Check whether ps aux | grep gvfs produces any results; if so, kill all
 gvfs related processes.
 3. cd tests, and run the tests ./run_tests_local.sh mkimageTests.py
 mountTests.py 10 times; losetup -a should give an empty result after each
 run. When gvfs is running, losetup -a shows 2 new loop devices occupied
 after each run.

gvfs is not installed on the machines, so I doubt this is the same 
error :(
I suggest adding some cleanup code at the end of the tests to free all 
the used loop devices (if you free only the ones you used, it will not 
fail when running parallel tests).
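
For instance, something along these lines (just a sketch; the temp-image 
setup and helper names here are my own illustration, not the actual vdsm 
test code):

    import subprocess
    import tempfile
    from unittest import TestCase

    class LoopMountCleanupExample(TestCase):
        def setUp(self):
            # Stand-in for the image file the real test creates.
            img = tempfile.NamedTemporaryFile(delete=False)
            img.truncate(16 * 1024 * 1024)
            img.close()
            self.imagePath = img.name
            # addCleanup runs even when the test body fails.
            self.addCleanup(self._detachLoopDevices)

        def _detachLoopDevices(self):
            # "losetup -j" lists only the loop devices backed by our own
            # file, so parallel runs using other files are left untouched.
            out = subprocess.check_output(["losetup", "-j", self.imagePath])
            for line in out.decode().splitlines():
                device = line.split(":", 1)[0]
                subprocess.call(["sudo", "-n", "losetup", "-d", device])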

Adding more loop devices or cleaning them all manually is only a 
temporary solution.

Thanks!



--
David Caro

Red Hat Czech s.r.o.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
RHT Global #: 82-62605
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Andrew Cathrow
 - Forwarded Message -
  From: Itamar Heim ih...@redhat.com
  To: Sahina Bose sab...@redhat.com
  Cc: engine-devel engine-de...@ovirt.org, VDSM Project
  Development vdsm-devel@lists.fedorahosted.org
  Sent: Wednesday, August 7, 2013 1:30:54 PM
  Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage
  Domain
  
  On 08/07/2013 08:21 AM, Sahina Bose wrote:
   [Adding engine-devel]
  
   On 08/06/2013 10:48 AM, Deepak C Shetty wrote:
   Hi All,
   There were 2 learnings from BZ
   https://bugzilla.redhat.com/show_bug.cgi?id=988299
  
   1) Gluster RPM deps were not proper in VDSM when using Gluster Storage
   Domain. This has been partly addressed by the gluster-devel thread @
   http://lists.gnu.org/archive/html/gluster-devel/2013-08/msg8.html
   and will be fully addressed once the Gluster folks ensure their packaging
   is friendly enough for VDSM to consume just the needed bits. Once that
   happens, I will be sending a patch to vdsm.spec.in to update the gluster
   deps correctly. So this issue gets addressed in the near term.
  
   2) Gluster storage domain needs minimum libvirt 1.0.1 and qemu 1.3.
  
   libvirt 1.0.1 has the support for representing gluster as a network
   block device, and qemu 1.3 has the native support for the gluster block
   backend, which supports the gluster://... URI way of representing a
   gluster based file (aka volume/vmdisk in the VDSM case). Many distros
   (incl. CentOS 6.4 in the BZ) won't have qemu 1.3 in their distro repos!
   How do we handle this dep in VDSM ?
  
   Do we disable the gluster storage domain in oVirt engine if VDSM
   reports qemu < 1.3 as part of getCapabilities ?
   or
   Do we ensure qemu 1.3 is present in ovirt.repo, assuming ovirt.repo is
   always present on VDSM hosts, in which case when VDSM gets installed,
   the qemu 1.3 dep in vdsm.spec.in will install qemu 1.3 from the
   ovirt.repo instead of the distro repo. This means vdsm.spec.in will
   have qemu >= 1.3 under Requires.
  
   Is it possible to make this a conditional install? That is, only if
   Storage Domain = GlusterFS in the Data center, the bootstrapping of the
   host will install qemu 1.3 and its dependencies.
  
   (The question still remains as to where the qemu 1.3 rpms will be
   available)

RHEL 6.5 (and so CentOS 6.5) will get backported libgfapi support, so we 
shouldn't need to require qemu 1.3, just the appropriate qemu-kvm version 
from 6.5.

https://bugzilla.redhat.com/show_bug.cgi?id=848070

  
  
  hosts are usually installed prior to storage domain definition.
  we need to find a solution to having a qemu >= 1.3 for .el6 (or another
  version of qemu with this feature set).
  

 
   What will be a good way to handle this ?
   Appreciate your response
  
   thanx,
   deepak
  
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Deepak C Shetty

On 08/12/2013 04:51 PM, Andrew Cathrow wrote:

[snip]

RHEL 6.5 (and so CentOS 6.5) will get backported libgfapi support, so we 
shouldn't need to require qemu 1.3, just the appropriate qemu-kvm version 
from 6.5.

https://bugzilla.redhat.com/show_bug.cgi?id=848070


So IIUC this means we don't do anything special in vdsm.spec.in to 
handle the qemu 1.3 dep ?
If so... what happens when a user uses F17/F18 (as an example) on the 
VDSM host.. their repos probably won't have a qemu-kvm which has 
libgfapi support... how do we handle it? Do we just release-note it ?


thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Deepak C Shetty

On 08/12/2013 06:32 PM, Andrew Cathrow wrote:


- Original Message -

From: Deepak C Shetty deepa...@linux.vnet.ibm.com
To: vdsm-devel@lists.fedorahosted.org
Sent: Monday, August 12, 2013 8:59:37 AM
Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

[snip]

So IIUC this means we don't do anything special in vdsm.spec.in to
handle the qemu 1.3 dep ?
If so... what happens when a user uses F17/F18 (as an example) on the
VDSM host.. their repos probably won't have a qemu-kvm which has
libgfapi support... how do we handle it? Do we just release-note it ?


For the Fedora SPEC we'd need to use a >= 1.3 dependency, but for *EL6 it'd 
need to be 0.12-whatever-6.5-has.


I would love to hear how. I am waiting on some resolution for this, so 
that I can close the 3.3 blocker BZ.


For Fedora, if I put qemu-kvm >= 1.3 in vdsm.spec.in, then F17/F18 can't 
be used as a VDSM host; that may not be acceptable.


thanx,
deepak

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Deepak C Shetty

On 08/12/2013 07:22 PM, Andrew Cathrow wrote:


- Original Message -

From: Deepak C Shetty deepa...@linux.vnet.ibm.com
To: VDSM Project Development vdsm-devel@lists.fedorahosted.org
Sent: Monday, August 12, 2013 9:39:21 AM
Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

[snip]

I would love to hear how. I am waiting on some resolution for this, so
that I can close the 3.3 blocker BZ.

For Fedora, if I put qemu-kvm >= 1.3 in vdsm.spec.in, then F17/F18 can't
be used as a VDSM host; that may not be acceptable.


What options do we have for Fedora F19?
virt-preview may be an option for F18, but F17 is out of luck ..


What do you mean by 'out of luck'.. I thought virt-preview had F17/F18 
repos, no ?
Another Q to answer would be.. Do we support F17 as a valid vdsm host 
for 3.3 ?


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Andrew Cathrow


- Original Message -
 From: Deepak C Shetty deepa...@linux.vnet.ibm.com
 To: VDSM Project Development vdsm-devel@lists.fedorahosted.org
 Sent: Monday, August 12, 2013 9:55:45 AM
 Subject: Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain
 
[snip]
 
 What do you mean by 'out of luck'.. I thought virt-preview had F17/F18
 repos, no ?

As I said, virt-preview may be an option for F18 as it has a newer version of 
qemu, but F17's virt-preview doesn't have a new enough version of qemu.

 Another Q to answer would be.. Do we support F17 as a valid vdsm host
 for 3.3 ?
 
 
 
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Itamar Heim

On 08/12/2013 04:55 PM, Deepak C Shetty wrote:


What do you mean by 'out of luck'.. I thought virt-preview had F17/F18
repos, no ?
Another Q to answer would be.. Do we support F17 as a valid vdsm host
for 3.3 ?


IIRC, F17 isn't supported by Fedora once F19 is out, so no more updates 
to it. Using Fedora you are moving fast, but with a shorter 
support/update cycle, I guess.

I don't think anyone tested 3.3 on F17.
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Vijay Bellur
On Mon, Aug 12, 2013 at 9:45 PM, Itamar Heim ih...@redhat.com wrote:

 On 08/12/2013 04:55 PM, Deepak C Shetty wrote:


 What do you mean by 'out of luck'.. I thought virt-preview had F17/F18
 repos, no ?
 Another Q to answer would be.. Do we support F17 as a valid vdsm host
 for 3.3 ?


 IIRC, F17 isn't supported by Fedora once F19 is out, so no more updates to
 it. Using Fedora you are moving fast, but with a shorter support/update
 cycle, I guess.


F17 entered EOL on 07/30:

http://fedoraproject.org/wiki/End_of_life

-Vijay


 I don't think anyone tested 3.3 on F17.


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain

2013-08-12 Thread Deepak C Shetty

On 08/12/2013 09:50 PM, Vijay Bellur wrote:




On Mon, Aug 12, 2013 at 9:45 PM, Itamar Heim ih...@redhat.com wrote:

[snip]


F17 entered EOL on 07/30:

http://fedoraproject.org/wiki/End_of_life

-Vijay


Thanks all.. I was concerned about F17, since vdsm.spec still has >= F17 
in many places.
Good to know that I don't have to worry about F17. Will post vdsm.spec 
changes in a patch soon.



I don't think anyone tested 3.3 on F17.





___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] vdsm-sync meeting August 12th 2013

2013-08-12 Thread Yaniv Bronheim
Hey,

I couldn't join the call from the Israeli number.

I wanted to raise that recently we added a locking mechanism to the vdsmd.init 
script (http://gerrit.ovirt.org/#/c/17662/) that led to some regressions on 
Fedora 19.
The fixes should be provided as part of http://gerrit.ovirt.org/#/c/17926/ - 
reviews are welcome :) 

Other than that, I'll try to check on the call issues. I understand that it 
worked for some.. hope to be there next time.

If someone can share what was said, that would be great.

Thanks,
Yaniv Bronhaim
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Test failure not free loop devices

2013-08-12 Thread Zhou Zheng Sheng

Hi David,

on 2013-08-12 14:59, David Caro wrote:
 On Mon 12 Aug 2013 04:13:26 AM CEST, Zhou Zheng Sheng wrote:
 [snip]
 
 gvfs is not installed on the machines, so I doubt this is the same 
 error :(
 I suggest adding some cleanup code at the end of the tests to free all 
 the used loop devices (if you free only the ones you used, it will not 
 fail when running parallel tests).
 
 Adding more loop devices or cleaning them all manually is only a 
 temporary solution.
 
 Thanks!

From what I can see, there are three places that consume a loop device. In
tests/parted_utils_tests.py it calls losetup directly and tears down the
loop device at the end of each test. In tests/mountTests.py and
tests/mkimageTests.py, the loop device is consumed via a loop mount, and
the umount frees the loop device automatically. All three places free the
loop device at the end of the test, unless there is an exception or the
test fails. In that case we can catch the exception and free the device.
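
A minimal sketch of that idea (illustrative only, not the actual test
code; it reuses the sudo mount/umount calls the tests already make):

    import subprocess
    from contextlib import contextmanager

    @contextmanager
    def loopMounted(imagePath, mountPoint):
        # Mount through a loop device; the finally block guarantees the
        # umount (which also frees the loop device) runs even when the
        # test body raises or an assertion fails.
        subprocess.check_call(["sudo", "-n", "mount", "-o", "loop",
                               imagePath, mountPoint])
        try:
            yield mountPoint
        finally:
            subprocess.call(["sudo", "-n", "umount", mountPoint])

The test body would then run inside "with loopMounted(...)", so a failing
test still releases its loop device.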

Another reason could be some other process occupying the mount point or
loop device; in that case trying to free it again will fail and will not
solve the problem. So I think we should investigate why the device is not
released and locate the bug. Could you lsof the mount point and the device
to see if any process is occupying them? And could you run
parted_utils_tests.py, mountTests.py and mkimageTests.py separately and
check losetup -a's report after each, to see which test module consumes
but does not free the device?
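
For example, a rough helper like this (my own sketch, assuming it is run
from the tests directory):

    import subprocess

    # Run each suspect test module on its own and report which loop
    # devices are still attached afterwards.
    modules = ["parted_utils_tests.py", "mountTests.py", "mkimageTests.py"]
    for module in modules:
        subprocess.call(["./run_tests_local.sh", module])
        leftover = subprocess.check_output(["losetup", "-a"]).decode().strip()
        print("after %s: %s" % (module, leftover or "no loop devices in use"))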

 
 --
 David Caro
 
 Red Hat Czech s.r.o.
 Continuous Integration Engineer - EMEA ENG Virtualization R&D
 
 Tel.: +420 532 294 605
 Email: dc...@redhat.com
 Web: www.cz.redhat.com
 Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
 RHT Global #: 82-62605
 

-- 
Thanks and best regards!

Zhou Zheng Sheng / 周征晟
E-mail: zhshz...@linux.vnet.ibm.com
Telephone: 86-10-82454397

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel