Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-20 Thread Giuseppe Ragusa
On Tue, Dec 20, 2016, at 09:16, Ramesh Nachimuthu wrote:
> - Original Message -
> > From: "Giuseppe Ragusa" 
> > To: "Ramesh Nachimuthu" 
> > Cc: users@ovirt.org, gluster-us...@gluster.org, "Ravishankar Narayanankutty" 
> > Sent: Tuesday, December 20, 2016 4:15:18 AM
> > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> > GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> > 
> > [...]

Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-20 Thread Ramesh Nachimuthu




- Original Message -
> From: "Giuseppe Ragusa" 
> To: "Ramesh Nachimuthu" 
> Cc: users@ovirt.org, gluster-us...@gluster.org, "Ravishankar Narayanankutty" 
> 
> Sent: Tuesday, December 20, 2016 4:15:18 AM
> Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> GlusterFS volumes in HC HE oVirt 3.6.7 /
> GlusterFS 3.7.17
> 
> On Fri, Dec 16, 2016, at 05:44, Ramesh Nachimuthu wrote:
> > - Original Message -
> > > From: "Giuseppe Ragusa" 
> > > To: "Ramesh Nachimuthu" 
> > > Cc: users@ovirt.org
> > > Sent: Friday, December 16, 2016 2:42:18 AM
> > > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > GlusterFS volumes in HC HE oVirt 3.6.7 /
> > > GlusterFS 3.7.17
> > > 
> > > Giuseppe Ragusa ha condiviso un file di OneDrive. Per visualizzarlo, fare
> > > clic sul collegamento seguente.
> > > 
> > > 
> > > <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > [https://r1.res.office365.com/owa/prem/images/dc-generic_20.png]<https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > 
> > > vols.tar.gz<https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > 
> > > 
> > > 
> > > Da: Ramesh Nachimuthu 
> > > Inviato: lunedì 12 dicembre 2016 09.32
> > > A: Giuseppe Ragusa
> > > Cc: users@ovirt.org
> > > Oggetto: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> > > 
> > > On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > > > Hi all,
> > > >
> > > > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > > > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all
> > > > on
> > > > CentOS 7.2):
> > > >
> > > >  From /var/log/messages:
> > > >
> > > > Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > Internal
> > > > server error#012Traceback (most recent call last):#012  File
> > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > _serveRequest#012res = method(**params)#012  File
> > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > result
> > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > line
> > > > 117, in status#012return self._gluster.volumeStatus(volumeName,
> > > > brick,
> > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > wrapper#012rv = func(*args, **kwargs)#012  File
> > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > __call__#012return callMethod()#012  File
> > > > "/usr/share/vdsm/supervdsm.py", line 48, in #012
> > > > **kwargs)#012
> > > > File "", line 2, in glusterVolumeStatus#012  File
> > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > >   llmethod#012raise convert_to_error(kind, result)#012KeyError:
> > > >   'device'
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting
> > > > Engine
> > > > VM OVF from the OVF_STORE
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > > > path:
> > > > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > > > an OVF for HE VM, trying to convert
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > > > vm.conf from OVF_STORE
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current
> > > > state
> > > > En

Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-19 Thread Giuseppe Ragusa
On Fri, Dec 16, 2016, at 05:44, Ramesh Nachimuthu wrote:
> - Original Message -
> > From: "Giuseppe Ragusa" 
> > To: "Ramesh Nachimuthu" 
> > Cc: users@ovirt.org
> > Sent: Friday, December 16, 2016 2:42:18 AM
> > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> > GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> > 
> > [...]

Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-15 Thread Ramesh Nachimuthu




- Original Message -
> From: "Giuseppe Ragusa" 
> To: "Ramesh Nachimuthu" 
> Cc: users@ovirt.org
> Sent: Friday, December 16, 2016 2:42:18 AM
> Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> GlusterFS volumes in HC HE oVirt 3.6.7 /
> GlusterFS 3.7.17
> 
> Giuseppe Ragusa ha condiviso un file di OneDrive. Per visualizzarlo, fare
> clic sul collegamento seguente.
> 
> 
> <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> [https://r1.res.office365.com/owa/prem/images/dc-generic_20.png]<https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> 
> vols.tar.gz<https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> 
> 
> 
> Da: Ramesh Nachimuthu 
> Inviato: lunedì 12 dicembre 2016 09.32
> A: Giuseppe Ragusa
> Cc: users@ovirt.org
> Oggetto: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> 
> On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > Hi all,
> >
> > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all on
> > CentOS 7.2):
> >
> >  From /var/log/messages:
> >
> > Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal
> > server error#012Traceback (most recent call last):#012  File
> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > _serveRequest#012res = method(**params)#012  File
> > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result
> > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line
> > 117, in status#012return self._gluster.volumeStatus(volumeName, brick,
> > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > wrapper#012rv = func(*args, **kwargs)#012  File
> > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > __call__#012return callMethod()#012  File
> > "/usr/share/vdsm/supervdsm.py", line 48, in #012**kwargs)#012
> > File "", line 2, in glusterVolumeStatus#012  File
> > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> >   llmethod#012raise convert_to_error(kind, result)#012KeyError:
> >   'device'
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine
> > VM OVF from the OVF_STORE
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > path:
> > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > an OVF for HE VM, trying to convert
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > vm.conf from OVF_STORE
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state
> > EngineUp (score: 3400)
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote
> > host read.mgmt.private (id: 2, score: 3400)
> > Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal
> > server error#012Traceback (most recent call last):#012  File
> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > _serveRequest#012res = method(**params)#012  File
> > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result
> > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line
> > 117, in status#012return self._gluster.volumeStatus(volumeName, brick,
> > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > wrapper#012rv = func(*args, **kwargs)#012  File
> > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > __call__#012return callMethod()#012  File
> > "/usr/share/vdsm/supervdsm.py", line 48, in #012**kwargs)#012
> > File "", line 

Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-15 Thread Giuseppe Ragusa
Giuseppe Ragusa has shared a OneDrive file. To view it, click the following 
link.

vols.tar.gz <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>



From: Ramesh Nachimuthu 
Sent: Monday, December 12, 2016 09:32
To: Giuseppe Ragusa
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> Hi all,
> 
> [...]

Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-12 Thread Ramesh Nachimuthu



On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:

Hi all,

I'm writing to ask about the following problem (in a hyperconverged (HC), 
Hosted Engine (HE) oVirt 3.6.7 / GlusterFS 3.7.17 setup: a 3-host replica 
with arbiter and sharded volumes, all on CentOS 7.2):

From /var/log/messages:

Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest
    res = method(**params)
  File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
    return self._gluster.volumeStatus(volumeName, brick, statusOption)
  File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
    statusOption)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
KeyError: 'device'
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine VM OVF 
from the OVF_STORE
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume path: 
/rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found an 
OVF for HE VM, trying to convert
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got vm.conf 
from OVF_STORE
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state 
EngineUp (score: 3400)
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote host 
read.mgmt.private (id: 2, score: 3400)
Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest
    res = method(**params)
  File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
    return self._gluster.volumeStatus(volumeName, brick, statusOption)
  File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
    statusOption)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
KeyError: 'device'
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: INFO:mem_free.MemFree:memFree: 7392
Dec  9 15:27:50 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest
    res = method(**params)
  File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
    return self._gluster.volumeStatus(volumeName, brick, statusOpti
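
A note for anyone debugging the same trace: the KeyError: 'device' means that, 
somewhere below supervdsm's glusterVolumeStatus, a parsed brick-status record 
is indexed as record['device'] while that field is absent. Running 
"gluster volume status <volname> detail --xml" on the affected host should 
show whether some brick (for example the arbiter brick) is reported without a 
<device> element. The sketch below only illustrates the kind of defensive 
lookup that would avoid the crash; the function and field names are 
assumptions drawn from the traceback and from gluster's XML output, not 
vdsm's actual code:

# Hypothetical sketch, not vdsm's real parser (that lives in vdsm's
# gluster/cli.py): build a brick-status dict while tolerating detail
# fields that gluster omits for some bricks.
def parse_brick_detail(fields):
    return {
        'hostname': fields.get('hostname', ''),
        'device': fields.get('device', ''),     # fields['device'] would
                                                # raise KeyError: 'device'
        'fsName': fields.get('fsName', ''),
        'sizeTotal': fields.get('sizeTotal', '0'),
    }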
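
As to why the error surfaces in vdsm's jsonrpc log with a frame inside 
multiprocessing/managers.py: vdsm reaches supervdsm through a multiprocessing 
manager proxy, and an exception raised inside the manager process is pickled, 
sent back, and re-raised in the calling process by BaseProxy._callmethod() via 
convert_to_error(), which is exactly what the last two frames of the traceback 
show. A self-contained sketch of that propagation, standard library only (all 
names here are illustrative, not vdsm's):

from multiprocessing.managers import BaseManager

class FakeGluster(object):
    def volumeStatus(self):
        # Simulate the parser hitting a brick record with no 'device' field.
        brick = {'hostname': 'shockley.gluster.private'}
        return brick['device']  # KeyError: 'device' in the manager process

class Mgr(BaseManager):
    pass

# Expose a FakeGluster instance through the manager, much as supervdsm
# exposes its API object to vdsm.
Mgr.register('gluster', callable=FakeGluster)

if __name__ == '__main__':
    mgr = Mgr()
    mgr.start()
    proxy = mgr.gluster()     # proxy to the instance in the manager process
    try:
        proxy.volumeStatus()  # KeyError re-raised here, in the caller, by
                              # BaseProxy._callmethod() via convert_to_error()
    except KeyError as err:
        print('caller sees: %r' % (err,))
    finally:
        mgr.shutdown()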