[ovirt-users] Re: Ovirt 4.3 RC missing glusterfs-gnfs

2019-03-01 Thread Niels de Vos
On Wed, Feb 27, 2019 at 12:02:07PM +0100, Sandro Bonazzola wrote:
> On Mon, Feb 25, 2019 at 11:10, Niels de Vos wrote:
> 
> > On Mon, Feb 25, 2019 at 10:05:52AM +0100, Sandro Bonazzola wrote:
> > > On Thu, Feb 21, 2019 at 08:48, Sandro Bonazzola <sbona...@redhat.com> wrote:
> > >
> > > >
> > > >
> > > > On Wed, Feb 20, 2019 at 21:02, Strahil Nikolov <hunter86...@yahoo.com> wrote:
> > > >
> > > >> Hi Sahina, Sandro,
> > > >>
> > > >> can you guide me through bugzilla.redhat.com so I can open a bug
> > > >> for the missing package? ovirt-4.3-centos-gluster5 still lacks a
> > > >> package for 'glusterfs-gnfs' (which is a dependency of vdsm-gluster):
> > > >>
> > > >
> > > >
> > > > The issue is being tracked in
> > > > https://bugzilla.redhat.com/show_bug.cgi?id=1672711
> > > > It is still missing a fix on the Gluster side for the CentOS Storage SIG.
> > > > Niels, Yaniv, we are escalating this; can you please help get it fixed?
> > > >
> > >
> > > Update for users: a fix has been merged for Gluster 6, and the backport
> > > to Gluster 5 has been verified and reviewed and is pending merge
> > > (https://review.gluster.org/#/c/glusterfs/+/22258/).
> >
> > I plan to push an update with only this change later today. That means
> > the updated glusterfs-5 version is expected to hit the CentOS mirrors
> > tomorrow.
> >
> 
> Niels, I see glusterfs-5.4-1.el7 is tagged for testing, but not yet for
> release (https://cbs.centos.org/koji/buildinfo?buildID=25227).
> Sahina, did your team test this build to see whether it solves the upgrade
> path for glusterfs-gnfs?
> Users: your help testing this would be very useful. You can find the test
> repo here: https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-5/
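
[Editor's note: for users who want to try the testing build, a minimal
sketch of wiring up that test repo; the repo id and file name below are
illustrative, not an official repo definition.]

  cat > /etc/yum.repos.d/gluster5-test.repo <<'EOF'
  [centos-gluster5-test]
  name=CentOS Storage SIG Gluster 5 (testing)
  baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-5/
  gpgcheck=0
  enabled=1
  EOF
  # Check that the glusterfs packages now upgrade cleanly past glusterfs-gnfs.
  yum --disablerepo='*' --enablerepo=centos-gluster5-test update 'glusterfs*'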

glusterfs-5.3-2.el7.x86_64 is the build that added the Obsoletes for
glusterfs-gnfs. Upgrading is expected to work with that version (Parth
Dhanjal has confirmed this). glusterfs-5.4 was released this week and its
packages are still in Testing; they contain the same Obsoletes addition.
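
[Editor's note: a quick way to confirm that a given build carries that
Obsoletes tag, using standard rpm queries; the exact Obsoletes string may
differ from what is shown here.]

  # Query the Obsoletes of the installed glusterfs package:
  rpm -q --obsoletes glusterfs
  # Or inspect a downloaded package file before upgrading:
  rpm -qp --obsoletes glusterfs-5.3-2.el7.x86_64.rpm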

Niels


> 
> Thanks,
> 
> 
> >
> > Niels
> >
> >
> > >
> > >
> > >
> > > >
> > > >
> > > >
> > > >
> > > >>
> > > >> [root@ovirt2 ~]# yum --disablerepo=* --enablerepo=ovirt-4.3-centos-gluster5 list available --show-duplicates | grep gluster-gnfs
> > > >> Repository centos-sclo-rh-release is listed more than once in the
> > > >> configuration
> > > >> Repository centos-sclo-rh-release is listed more than once in the
> > > >> configuration
> > > >> Cannot upload enabled repos report, is this client registered?
> > > >>
> > > >>
> > > >> This leads to the following (output truncated):
> > > >> --> Finished Dependency Resolution
> > > >> Error: Package: glusterfs-gnfs-3.12.15-1.el7.x86_64
> > > >> (@ovirt-4.2-centos-gluster312)
> > > >>Requires: glusterfs(x86-64) = 3.12.15-1.el7
> > > >>Removing: glusterfs-3.12.15-1.el7.x86_64
> > > >> (@ovirt-4.2-centos-gluster312)
> > > >>glusterfs(x86-64) = 3.12.15-1.el7
> > > >>Updated By: glusterfs-5.3-1.el7.x86_64
> > > >> (ovirt-4.3-centos-gluster5)
> > > >>glusterfs(x86-64) = 5.3-1.el7
> > > >>Available: glusterfs-3.12.0-1.el7.x86_64
> > > >> (ovirt-4.2-centos-gluster312)
> > > >>glusterfs(x86-64) = 3.12.0-1.el7
> > > >>
> > > >>Available: glusterfs-3.12.1-1.el7.x86_64
> > > >> (ovirt-4.2-centos-gluster312)
> > > >>glusterfs(x86-64) = 3.12.1-1.el7
> > > >>Available: glusterfs-3.12.1-2.el7.x86_64
> > > >> (ovirt-4.2-centos-gluster312)
> > > >>glusterfs(x86-64) = 3.12.1-2.el7
> > > >>Available: glusterfs-3.12.2-18.el7.x86_64 (base)
> > > >>glusterfs(x86-64) = 3.12.2-18.el7
> > > >>Available: glusterfs-3.12.3-1.el7.x86_64
> > > >> (ovirt-4.2-centos-gluster312)

[ovirt-users] Re: Ovirt 4.3 RC missing glusterfs-gnfs

2019-02-25 Thread Niels de Vos
On Mon, Feb 25, 2019 at 10:05:52AM +0100, Sandro Bonazzola wrote:
> On Thu, Feb 21, 2019 at 08:48, Sandro Bonazzola <sbona...@redhat.com> wrote:
> 
> >
> >
> > On Wed, Feb 20, 2019 at 21:02, Strahil Nikolov <hunter86...@yahoo.com> wrote:
> >
> >> Hi Sahina, Sandro,
> >>
> >> can you guide me through bugzilla.redhat.com so I can open a bug
> >> for the missing package? ovirt-4.3-centos-gluster5 still lacks a
> >> package for 'glusterfs-gnfs' (which is a dependency of vdsm-gluster):
> >>
> >
> >
> > The issue is being tracked in
> > https://bugzilla.redhat.com/show_bug.cgi?id=1672711
> > It is still missing a fix on the Gluster side for the CentOS Storage SIG.
> > Niels, Yaniv, we are escalating this; can you please help get it fixed?
> >
> 
> Update for users: a fix has been merged for Gluster 6, and the backport
> to Gluster 5 has been verified and reviewed and is pending merge
> (https://review.gluster.org/#/c/glusterfs/+/22258/).

I plan to push an update with only this change later today. That means
the updated glusterfs-5 version is expected to hit the CentOS mirrors
tomorrow.

Niels


> 
> 
> 
> >
> >
> >
> >
> >>
> >> [root@ovirt2 ~]# yum --disablerepo=*
> >> --enablerepo=ovirt-4.3-centos-gluster5 list available --show-duplicates |
> >> grep gluster-gnfs
> >> Repository centos-sclo-rh-release is listed more than once in the
> >> configuration
> >> Repository centos-sclo-rh-release is listed more than once in the
> >> configuration
> >> Cannot upload enabled repos report, is this client registered?
> >>
> >>
> >> This leads to the following (output truncated):
> >> --> Finished Dependency Resolution
> >> Error: Package: glusterfs-gnfs-3.12.15-1.el7.x86_64
> >> (@ovirt-4.2-centos-gluster312)
> >>Requires: glusterfs(x86-64) = 3.12.15-1.el7
> >>Removing: glusterfs-3.12.15-1.el7.x86_64
> >> (@ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.15-1.el7
> >>Updated By: glusterfs-5.3-1.el7.x86_64
> >> (ovirt-4.3-centos-gluster5)
> >>glusterfs(x86-64) = 5.3-1.el7
> >>Available: glusterfs-3.12.0-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.0-1.el7
> >>
> >>Available: glusterfs-3.12.1-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.1-1.el7
> >>Available: glusterfs-3.12.1-2.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.1-2.el7
> >>Available: glusterfs-3.12.2-18.el7.x86_64 (base)
> >>glusterfs(x86-64) = 3.12.2-18.el7
> >>Available: glusterfs-3.12.3-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.3-1.el7
> >>Available: glusterfs-3.12.4-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.4-1.el7
> >>Available: glusterfs-3.12.5-2.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.5-2.el7
> >>Available: glusterfs-3.12.6-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.6-1.el7
> >>Available: glusterfs-3.12.8-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.8-1.el7
> >>Available: glusterfs-3.12.9-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.9-1.el7
> >>Available: glusterfs-3.12.11-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.11-1.el7
> >>Available: glusterfs-3.12.13-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.13-1.el7
> >>Available: glusterfs-3.12.14-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.14-1.el7
> >>Available: glusterfs-5.0-1.el7.x86_64
> >> (ovirt-4.3-centos-gluster5)
> >>glusterfs(x86-64) = 5.0-1.el7
> >>Available: glusterfs-5.1-1.el7.x86_64
> >> (ovirt-4.3-centos-gluster5)
> >>glusterfs(x86-64) = 5.1-1.el7
> >>Available: glusterfs-5.2-1.el7.x86_64
> >> (ovirt-4.3-centos-gluster5)
> >>glusterfs(x86-64) = 5.2-1.el7
> >> Error: Package: glusterfs-gnfs-3.12.15-1.el7.x86_64
> >> (@ovirt-4.2-centos-gluster312)
> >>Requires: glusterfs-client-xlators(x86-64) = 3.12.15-1.el7
> >>Removing: glusterfs-client-xlators-3.12.15-1.el7.x86_64
> >> (@ovirt-4.2-centos-gluster312)
> >>glusterfs-client-xlators(x86-64) = 3.12.15-1.el7
> >>
> >>Updated By: glusterfs-client-xlators-5.3-1.el7.x86_64
> >> (ovirt-4.3-centos-gluster5)
> >>glusterfs-client-xlators(x86-64) = 5.3-1.el7
> >>Available: glusterfs-client-xlators-3.12.0-1.el7.x86_64
> >> 

Re: [ovirt-users] [Gluster-infra] [Attention needed] GlusterFS repository down - affects CI / Installations

2016-04-28 Thread Niels de Vos
On Wed, Apr 27, 2016 at 04:51:10PM +0200, Sandro Bonazzola wrote:
> On Wed, Apr 27, 2016 at 11:09 AM, Niels de Vos <nde...@redhat.com> wrote:
> 
> > On Wed, Apr 27, 2016 at 02:30:57PM +0530, Ravishankar N wrote:
> > > @gluster infra  - FYI.
> > >
> > > On 04/27/2016 02:20 PM, Nadav Goldin wrote:
> > > >Hi,
> > > >The GlusterFS repository became unavailable this morning; as a result,
> > > >all Jenkins jobs that use the repository will fail. The common error
> > > >would be:
> > > >
> > > >
> > > >http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch/repodata/repomd.xml:
> > > >[Errno 14] HTTP Error 403 - Forbidden
> > > >
> > > >
> > > >Also, installations of oVirt will fail.
> >
> > I thought oVirt moved to using the packages from the CentOS Storage SIG?
> >
> 
> We did that for the CentOS Virt SIG builds.
> oVirt upstream is still on Gluster upstream; we'll move to the Storage SIG
> there as well.

Ah, ok, thanks!
Niels


> 
> 
> 
> > In any case, automated tests should probably use those instead of the
> > packages on download.gluster.org. We're trying to minimize the work
> > packagers need to do, and to get glusterfs and the other components into
> > the repositories provided by the different distributions.
> >
> > For more details, see the quickstart for the Storage SIG here:
> >   https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
> >
> > HTH,
> > Niels
> >
> >
> >
> 
> 
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com




Re: [ovirt-users] [Gluster-infra] [Attention needed] GlusterFS repository down - affects CI / Installations

2016-04-27 Thread Niels de Vos
On Wed, Apr 27, 2016 at 02:30:57PM +0530, Ravishankar N wrote:
> @gluster infra  - FYI.
> 
> On 04/27/2016 02:20 PM, Nadav Goldin wrote:
> >Hi,
> >The GlusterFS repository became unavailable this morning; as a result,
> >all Jenkins jobs that use the repository will fail. The common error
> >would be:
> >
> >
> > http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch/repodata/repomd.xml:
> >[Errno 14] HTTP Error 403 - Forbidden
> >
> >
> >Also, installations of oVirt will fail.

I thought oVirt moved to using the packages from the CentOS Storage SIG?
In any case, automated tests should probably use those instead of the
packages on download.gluster.org. We're trying to minimize the work
packagers need to do, and to get glusterfs and the other components into
the repositories provided by the different distributions.

For more details, see the quickstart for the Storage SIG here:
  https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
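
[Editor's note: following that quickstart, installing Gluster from the
Storage SIG on CentOS 7 is roughly as below; a sketch, assuming the
centos-release-gluster release package names from the quickstart, which
may vary by Gluster release.]

  yum install -y centos-release-gluster   # enables the Storage SIG repo
  yum install -y glusterfs-server
  systemctl enable glusterd
  systemctl start glusterd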

HTH,
Niels




Re: [ovirt-users] ovirt glusterfs performance

2016-04-12 Thread Niels de Vos
On Tue, Apr 12, 2016 at 11:11:54AM +0200, Roderick Mooi wrote:
> Hi
> 
> > It is not removed. Can you try `gluster volume set volname
> > cluster.eager-lock enable`?
> 
This works. BTW, by default this setting is “on”.

Thanks for reporting back!

> What’s the difference between “on” and “enable”?

Both are the same; you could also use "yes", "true", and possibly others.
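
[Editor's note: put together, a minimal sketch of setting and verifying
the option with the fully qualified name; the volume name gv1 is
illustrative, and `gluster volume get` is assumed available (gluster 3.7
and later).]

  gluster volume set gv1 cluster.eager-lock enable
  gluster volume get gv1 cluster.eager-lock   # confirm the effective value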

Cheers,
Niels


> 
> Thanks for the clarification.
> 
> Regards,
> 
> Roderick
> 
> > On 06 Apr 2016, at 10:56 AM, Ravishankar N  wrote:
> > 
> > On 04/06/2016 02:08 PM, Roderick Mooi wrote:
> >> Hi Ravi and colleagues
> >> 
> >> (apologies for hijacking this thread, but I'm not sure where else to
> >> report this, and it is related)
> >> 
> >> With gluster 3.7.10, running
> >> #gluster volume set  group virt
> >> fails with:
> >> volume set: failed: option : eager-lock does not exist
> >> Did you mean eager-lock?
> >> 
> >> I had to remove the eager-lock setting from /var/lib/glusterd/groups/virt
> >> to get this to work. It seems like the eager-lock setting has been removed
> >> from the latest gluster. Is this correct? Either way, is there anything
> >> else I should do?
> > 
> > It is not removed. Can you try `gluster volume set volname
> > cluster.eager-lock enable`?
> > I think the disperse (EC) translator introduced a `disperse.eager-lock`,
> > which is why you need to use the entire volume option name to avoid
> > ambiguity.
> > We probably need to fix the virt profile setting to include the entire
> > name. By the way, `gluster volume set help` should give you the list of
> > all options.
> > 
> > -Ravi
> > 
> >> 
> >> Cheers,
> >> 
> >> Roderick
> >> 
> >>> On 12 Feb 2016, at 6:18 AM, Ravishankar N wrote:
> >>> 
> >>> Hi Bill,
> >>> Can you enable the virt-profile setting for your volume and see if that
> >>> helps? You need to enable this optimization when you create the volume
> >>> using oVirt, or use the following command for an existing volume:
> >>> 
> >>> #gluster volume set  group virt
> >>> 
> >>> -Ravi
> >>> 
> >>> 
> >>> On 02/12/2016 05:22 AM, Bill James wrote:
>  My apologies, I'm showing how much of a noob I am.
>  Ignore last direct to gluster numbers, as that wasn't really glusterfs.
>  
>  
>  [root@ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com:/gv1 /mnt/tmp/
>  [root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 bs=1M count=1000 oflag=direct
>  1048576000 bytes (1.0 GB) copied, 65.8596 s, 15.9 MB/s
>  
>  That's more how I expected, it is pointing to glusterfs performance.
>  
>  
>  
>  On 02/11/2016 03:27 PM, Bill James wrote:
> > don't know if it helps, but I ran a few more tests, all from the same 
> > hardware node.
> > 
> > The VM:
> > [root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
> > 1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s
> > 
> > Writing directly to gluster volume:
> > [root@ovirt2 test ~]# time dd if=/dev/zero of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct
> > 1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s
> > 
> > 
> > Writing to NFS volume:
> > [root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/storage/qa/testfile bs=1M count=1000 oflag=direct
> > 1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s
> > 
> > NFS & Gluster are using the same interface. The tests were not run at the
> > same time.
> > 
> > This would suggest my problem isn't glusterfs, but the VM performance.
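
[Editor's note: one hypothetical way to follow up on the "VM performance"
suspicion is to inspect the running VM's disk definition on the host; the
read-only virsh call below is a sketch, and the domain name billjov1 is
illustrative — the vdsm log also contains the full XML, as Nir notes
further down.]

  virsh -r dumpxml billjov1 | grep -A5 '<disk'   # check driver/cache settings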
> > 
> > 
> > 
> > On 02/11/2016 03:13 PM, Bill James wrote:
> >> xml attached. 
> >> 
> >> 
> >> On 02/11/2016 12:28 PM, Nir Soffer wrote: 
> >>> On Thu, Feb 11, 2016 at 8:27 PM, Bill James wrote:
>  thank you for the reply. 
>  
>  We set up gluster using the names associated with NIC 2's IP:
>    Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
>    Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
>    Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
>  
>  That's NIC 2's IP.
>  Using 'iftop -i eno2 -L 5 -t':
>  
>  dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
>  1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s
> >>> Can you share the xml of this vm? You can find it in vdsm log, 
> >>> at the time you start the vm. 
> >>> 
> >>> Or you can do (on the host): 
> >>> 
> >>> # 
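
[Editor's note: as a side note on the `group virt` failure discussed
earlier in this thread, a hypothetical workaround sketch is to fully
qualify the ambiguous key in the profile file and re-apply the group.
Back up the file first; the sed expression assumes the key is spelled
exactly `eager-lock=`, and gv1 is an illustrative volume name.]

  cp /var/lib/glusterd/groups/virt /var/lib/glusterd/groups/virt.bak
  sed -i 's/^eager-lock=/cluster.eager-lock=/' /var/lib/glusterd/groups/virt
  gluster volume set gv1 group virt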

Re: [ovirt-users] QEMU GlusterFS support in oVirt

2016-03-12 Thread Niels de Vos
On Sat, Mar 12, 2016 at 05:04:16PM +0200, Nir Soffer wrote:
> On Sat, Mar 12, 2016 at 1:55 PM, Samuli Heinonen wrote:
> > Hello all,
> >
> > It seems that oVirt 3.6 is still using FUSE to access GlusterFS storage 
> > domains instead of using QEMU driver (libgfapi). As far as I know libgfapi 
> > support should be available in Libvirt and QEMU packages provided in CentOS 
> > 7.
> 
> We started to work on this during 3.6 development, but the work was
> suspended because libvirt and qemu do not support multiple gluster
> servers [1]. This means that if your single server is down, you will not
> be able to connect to gluster.
> 
> Recently Niels suggested that we use DNS for this purpose: if the DNS
> name resolves to multiple servers, libgfapi should be able to fail over
> to one of them, so connecting with a single server address should be as
> good as multiple-server support in libvirt or qemu.

And in case the local oVirt Node is part of the Gluster Trusted Storage
Pool (aka running GlusterD), qemu can use "localhost" to connect to the
storage too. It is only the initial connection that would benefit from
the added fail-over by multiple hosts. Once the connection is
established, qemu/libgfapi will connect to all the bricks that
participate in the volume. That means that only starting or attaching a
new disk to a running VM is impacted when the gluster:// URL is used
with a storage server that is down. In case oVirt/VDSM knows what
storage servers are up, it could even select one of those and not use a
server that is down.
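
[Editor's note: for reference, the gluster:// URL qemu understands has
the shape host/volume/path. A sketch, with illustrative volume and image
names; "storage.example.com" is a made-up DNS name standing in for the
multi-host fail-over idea discussed above.]

  # "localhost" works when the node itself runs glusterd, as noted above:
  qemu-img info gluster://localhost/gv1/images/vm1.img
  # Initial contact via a DNS name that resolves to several storage hosts:
  qemu-img info gluster://storage.example.com/gv1/images/vm1.img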

I've left a similar note in [1]; maybe it encourages starting with a
"single host" solution. Extending it for multiple hostnames should then
be pretty simple, and it allows us to start further testing and doing
other integration bits.

And in case someone cares about (raw) sparse files (not possible over
FUSE, only with Linux 4.5), glusterfs-3.8 will provide a huge
improvement. A qemu patch for utilizing it is under review at [4].

HTH,
Niels


> The changes needed to support this are not big, as you can see in [2]
> and [3]. However, the work was not completed and I don't know if it will
> be completed for 4.0.
> 
> > Are there any workarounds to use libgfapi with oVirt before it's
> > officially available?
> 
> I don't know about any.
> 
> [1] https://bugzilla.redhat.com/1247521
> [2] https://gerrit.ovirt.org/44061
> [3] https://gerrit.ovirt.org/33768

[4] http://lists.nongnu.org/archive/html/qemu-block/2016-03/msg00288.html

> 
> Nir




Re: [ovirt-users] [Gluster-users] Blog on Hyperconverged Infrastructure using oVirt and Gluster

2016-01-14 Thread Niels de Vos
On Tue, Jan 12, 2016 at 05:10:23PM +0530, Ramesh Nachimuthu wrote:
> Hi Folks,
> 
>   Have you ever wondered about a hyperconverged oVirt and Gluster setup?
> Here is an answer [1]: I wrote a blog post explaining how to set up oVirt
> in hyper-converged mode with Gluster.
> 
> [1] http://blogs-ramesh.blogspot.in/2016/01/ovirt-and-gluster-hyperconvergence.htm
> 

Thanks for posting this! Of course we would like to see articles like
this on http://planet.gluster.org as well. Could you send a pull request
adding the RSS feed for your Gluster-tagged posts to
https://github.com/gluster/planet-gluster/blob/master/data/feeds.yml ?

Of course, others are welcome to add their Gluster related blogs too :)

Thanks,
Niels




Re: [ovirt-users] Hosted-Engine HA problem

2014-11-01 Thread Niels de Vos
On Thu, Oct 30, 2014 at 09:07:24PM +0530, Vijay Bellur wrote:
> On 10/30/2014 06:45 PM, Jiri Moskovcak wrote:
> > On 10/30/2014 09:22 AM, Jaicel R. Sabonsolin wrote:
> > > Hi Guys,
> > >
> > > I need help with my oVirt Hosted-Engine HA setup. I am running 2
> > > oVirt hosts and 2 gluster nodes with replicated volumes. I already
> > > have VMs running on my hosts, and they can migrate normally when, for
> > > example, I power off the host that they are running on. The problem
> > > is that the engine can't migrate once I switch off the host that
> > > hosts the engine.
> > >
> > > oVirt    3.4.3-1.el6
> > > KVM      0.12.1.2 - 2.415.el6_5.10
> > > LIBVIRT  libvirt-0.10.2-29.el6_5.9
> > > VDSM     vdsm-4.14.17-0.el6
> > >
> > > Right now, I get this result from hosted-engine --vm-status:
> > >
> > >   File "/usr/lib64/python2.6/runpy.py", line 122, in _run_module_as_main
> > >     "__main__", fname, loader, pkg_name)
> > >   File "/usr/lib64/python2.6/runpy.py", line 34, in _run_code
> > >     exec code in run_globals
> > >   File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 111, in <module>
> > >     if not status_checker.print_status():
> > >   File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 58, in print_status
> > >     all_host_stats = ha_cli.get_all_host_stats()
> > >   File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 137, in get_all_host_stats
> > >     return self.get_all_stats(self.StatModes.HOST)
> > >   File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 86, in get_all_stats
> > >     constants.SERVICE_TYPE)
> > >   File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 171, in get_stats_from_storage
> > >     result = self._checked_communicate(request)
> > >   File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 199, in _checked_communicate
> > >     .format(message or response))
> > > ovirt_hosted_engine_ha.lib.exceptions.RequestError: Request failed:
> > > <type 'exceptions.OSError'>
> > >
> > > Restarting ha-broker and ha-agent normalizes the status, but
> > > eventually it becomes false again and then returns to the result
> > > above. I hope you guys can help me with this.
> >
> > Hi Jaicel,
> > please attach agent.log and broker.log from the host where you are
> > trying to run hosted-engine --vm-status. I have a feeling that you ran
> > into a known problem on gluster - a stale file descriptor; in that case
> > the only known solution at this time is to restart the broker & agent,
> > as you have already found out.
 
 
> Adding Niels and gluster-devel to troubleshoot from the Gluster NFS
> perspective.
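
[Editor's note: for readers hitting the same symptom, the restart
workaround Jiri mentions amounts to the commands below; el6-style
service invocations, with the service names from the
ovirt-hosted-engine-ha package.]

  service ovirt-ha-broker restart
  service ovirt-ha-agent restart
  hosted-engine --vm-status   # scores should look sane again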

I'd welcome any details on this stale file descriptor problem. Is there
a bug filed with some details like logs, sysrq-t output, and maybe even
tcpdumps? If there is an easy way to reproduce this behaviour, I can
surely look into it and hopefully come up with some advice or a fix.
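
[Editor's note: one hedged sketch of gathering the data Niels asks for,
run on the affected host while the problem is occurring; the output
paths and the NFS port are assumptions.]

  echo t > /proc/sysrq-trigger            # dump all task states to dmesg
  dmesg > /tmp/sysrq-t.txt
  tcpdump -i any -s0 -w /tmp/gluster-nfs.pcap port 2049 &
  tar czf /tmp/ha-logs.tgz /var/log/ovirt-hosted-engine-ha/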

Thanks,
Niels