[ovirt-users] (no subject)

2022-08-20 Thread parallax
oVirt version: 4.4.4.7-1.el8

I have several servers in a cluster and I am getting this error:

Data Center is being initialized, please wait for initialization to
complete.
VDSM command GetStoragePoolInfoVDS failed: PKIX path validation failed:
java.security.cert.CertPathValidatorException: validity check failed

The SPM role is constantly being transferred between the servers, and I can't do
anything.

In Storage Domains, the storage domains show an Inactive status, but the virtual
machines are running.

How do I fix this?
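The "validity check failed" part of that PKIX error usually points to an expired certificate somewhere in the engine/VDSM PKI chain. A hedged first check, assuming the default oVirt PKI paths (they can differ between versions), is to compare the certificate end dates on the engine and on each host:

# on the engine machine
openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -enddate
openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -noout -enddate

# on each hypervisor host
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -enddate

If a host certificate has expired, the usual path (hedging on exact menu names per version) is to put the host into maintenance and use Installation > Enroll Certificate in the Administration Portal to renew it.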
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P6MGHC2NU3BDQ7DJSER2VH757C2Y5YKJ/


[ovirt-users] (no subject)

2022-04-26 Thread Abe E
It seems the Gluster node that booted into emergency mode after the 4.5 upgrade did so
because it could not find the mounts for the gluster_bricks data and engine volumes;
it's as if it can't see them any more.
Once I removed those mounts from /etc/fstab it lets me in, and I can boot the node
past emergency mode, but it's an odd issue.

One of the errors:
dependency failed for /gluster_bricks/data
Subject: Unit gluster_bricks-data.mount has failed

This is node 3 of the Gluster setup, the arbiter.

[root@ovirt-3 ~]# mount -a
mount: /gluster_bricks/engine: can't find 
UUID=45349e37-7a97-4a64-81eb-5fac3fec6477.
mount: /gluster_bricks/data: can't find 
UUID=c0fa-0db2-4e08-bb79-be677530aeaa.
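A hedged first check, assuming the bricks sit on LVM as in a standard oVirt/Gluster hyperconverged install, is to confirm whether the devices and their UUIDs are visible at all and whether the brick volume group was activated at boot:

blkid
lsblk -f
vgs; lvs
# if the gluster brick volume group is present but its logical volumes are inactive:
vgchange -ay
mount -a

If blkid shows no device with those UUIDs, the problem is below the filesystem layer (missing device, changed UUIDs, or an inactive VG) rather than in /etc/fstab itself.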
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FBVU4P7VNNUNMJH64IV6GLDDEMVWWBR7/


[ovirt-users] (no subject)

2021-06-01 Thread david
Hi,

I have a 4.4 oVirt cluster with a single server in it.

The emulated chipset in this cluster is Q35.

I need to change the emulated chipset from Q35 to i440fx in order to add a new host
that doesn't support the Q35 chipset type.

How will this affect the VMs that are running in the oVirt 4.4 cluster?

What does the change process look like, and how can I do it correctly?
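For what it's worth, the chipset/BIOS type is a cluster-level setting (Edit Cluster > General > BIOS Type in the Administration Portal, with per-VM overrides under Edit VM > System; labels vary by version), and running VMs only pick up a change at their next cold reboot. Switching from Q35 to i440fx changes the guest PCI topology, so device names and Windows drivers may need attention. A hedged sketch of the same change through the REST API — assuming the 4.4 API still exposes bios_type on clusters, and with the URL, credentials and CLUSTER_ID as placeholders — could look like:

curl -k -u admin@internal:PASSWORD \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  -X PUT \
  -d '<cluster><bios_type>i440fx_sea_bios</bios_type></cluster>' \
  https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID

Testing the change on one non-critical VM before touching the rest is probably worth the extra step.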
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HE3MYODAUZSUM36F3GRE4ECYPFOIEIHE/


[ovirt-users] (no subject)

2019-08-07 Thread Crazy Ayansh
Hi Team,

I am trying to convert/import a few VMs from KVM to oVirt, but I am getting the
error message below.

Aug  7 15:56:11 IONDELSVR46 journal: authentication failed: Failed to start
SASL negotiation: -20 (SASL(-13): user not found: unable to canonify user
and get auxprops)
Aug  7 15:56:11 IONDELSVR46 journal: authentication failed: authentication
failed

Please help!
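The "user not found" SASL error usually means libvirt's SASL database on that host no longer contains the vdsm@ovirt user that VDSM registers during configuration. A hedged sketch of restoring it, assuming the default sasldb location used by libvirt on oVirt hosts, is:

# let VDSM re-apply its libvirt configuration (this recreates the SASL user)
vdsm-tool configure --force
systemctl restart libvirtd vdsmd

# or add/verify the user by hand
saslpasswd2 -a libvirt vdsm@ovirt
sasldblistusers2 -f /etc/libvirt/passwd.db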

Thanks
Shashank
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LSQYVLRS7TS3VPOIVZKC7GQ5Y7SH37D6/


[ovirt-users] (no subject)

2018-12-03 Thread Abhishek Sahni
Hello Team,


We are running a 3-way replica hyperconverged (HC) Gluster setup, configured during
the initial deployment from the Cockpit console using Ansible.

NODE1
  - /dev/sda   (OS)
  - /dev/sdb   ( Gluster Bricks )
   * /gluster_bricks/engine/engine/
   * /gluster_bricks/data/data/
   * /gluster_bricks/vmstore/vmstore/

NODE2 and NODE3 with a similar setup.

Hosted engine was running on node2.

- While moving NODE1 to maintenance mode and stopping the Gluster service
(as it prompts beforehand), the hosted engine instantly went down.

- I started the Gluster service back on NODE1 and started the hosted engine
again. It starts properly but then crashes again and again within seconds of a
successful start, apparently because the HE host itself stops glusterd on NODE1
(not sure), which I cross-verified by checking the glusterd status.

Is it possible to clear pending tasks, or to keep the HE from stopping
glusterd on NODE1?

Or can we start the HE using another Gluster node?

https://paste.fedoraproject.org/paste/Qu2tSHuF-~G4GjGmstV6mg
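One thing that may help here — a hedged sketch only, since the agent logs would tell more — is to stop the HA agents from repeatedly restarting the engine VM by entering global maintenance, then starting the engine manually on a healthy Gluster node:

# on any hosted-engine host: disable automatic engine VM restarts
hosted-engine --set-maintenance --mode=global

# on the node that should run the engine (e.g. NODE2 or NODE3)
hosted-engine --vm-start
hosted-engine --vm-status

# once the Gluster volumes and the engine are stable again
hosted-engine --set-maintenance --mode=none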





-- 

ABHISHEK SAHNI
IISER Bhopal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/26C32RPGG6OF7L2FUFGCVHYKRWWWX7K7/


[ovirt-users] (no subject)

2018-08-12 Thread jibeh dodo
Hello, there are unclaimed inheritance funds that no one has claimed on your behalf, from my deceased uncle who has the same name and nationality as you. Contact me for details.
..
Hello, there are unclaimed inheritance funds left on your behalf from my deceased client who has the same last name and nationality as you. Contact me for details.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMDQP4KMVPBOGO64WGEG2RJZLCPQ3XLH/


Re: [ovirt-users] (no subject)

2018-01-15 Thread Kasturi Narra
Hello,

Can you attach ovirt-ha-agent and ovirt-ha-broker logs ?

Thanks
kasturi

On Fri, Jan 12, 2018 at 9:38 PM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:

> Trying to fix one thing I broke another :(
>
> I fixed the mnt_options for the hosted engine storage domain and installed the latest
> security patches on my hosts and hosted engine. All VMs are up and running,
> but hosted-engine --vm-status reports issues:
>
> [root@ovirt1 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : False
> Hostname   : ovirt2
> Host ID: 1
> Engine status  : unknown stale-data
> Score  : 0
> stopped: False
> Local maintenance  : False
> crc32  : 193164b8
> local_conf_timestamp   : 8350
> Host timestamp : 8350
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=8350 (Fri Jan 12 19:03:54 2018)
> host-id=1
> score=0
> vm_conf_refresh_time=8350 (Fri Jan 12 19:03:54 2018)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineUnexpectedlyDown
> stopped=False
> timeout=Thu Jan  1 05:24:43 1970
>
>
> --== Host 2 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : False
> Hostname   : ovirt1.telia.ru
> Host ID: 2
> Engine status  : unknown stale-data
> Score  : 0
> stopped: True
> Local maintenance  : False
> crc32  : c7037c03
> local_conf_timestamp   : 7530
> Host timestamp : 7530
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=7530 (Fri Jan 12 16:10:12 2018)
> host-id=2
> score=0
> vm_conf_refresh_time=7530 (Fri Jan 12 16:10:12 2018)
> conf_on_shared_storage=True
> maintenance=False
> state=AgentStopped
> stopped=True
> [root@ovirt1 ~]#
>
>
>
> From the second host, the situation looks a bit different:
>
>
> [root@ovirt2 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : ovirt2
> Host ID: 1
> Engine status  : {"reason": "vm not running on this
> host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score  : 0
> stopped: False
> Local maintenance  : False
> crc32  : 78eabdb6
> local_conf_timestamp   : 8403
> Host timestamp : 8402
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=8402 (Fri Jan 12 19:04:47 2018)
> host-id=1
> score=0
> vm_conf_refresh_time=8403 (Fri Jan 12 19:04:47 2018)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineUnexpectedlyDown
> stopped=False
> timeout=Thu Jan  1 05:24:43 1970
>
>
> --== Host 2 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : False
> Hostname   : ovirt1.telia.ru
> Host ID: 2
> Engine status  : unknown stale-data
> Score  : 0
> stopped: True
> Local maintenance  : False
> crc32  : c7037c03
> local_conf_timestamp   : 7530
> Host timestamp : 7530
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=7530 (Fri Jan 12 16:10:12 2018)
> host-id=2
> score=0
> vm_conf_refresh_time=7530 (Fri Jan 12 16:10:12 2018)
> conf_on_shared_storage=True
> maintenance=False
> state=AgentStopped
> stopped=True
>
>
> The web GUI shows that the engine is running on host ovirt1.
> Gluster looks fine:
> [root@ovirt1 ~]# gluster volume status engine
> Status of volume: engine
> Gluster process TCP Port  RDMA Port  Online
>  Pid
> 
> --
> Brick ovirt1.telia.ru:/oVirt/engine 49169 0  Y
> 3244
> Brick ovirt2.telia.ru:/oVirt/engine 49179 0  Y
> 20372
> Brick ovirt3.telia.ru:/oVirt/engine 49206 0  Y
>

[ovirt-users] (no subject)

2018-01-12 Thread Artem Tambovskiy
Trying to fix one thing I broke another :(

I fixed the mnt_options for the hosted engine storage domain and installed the latest
security patches on my hosts and hosted engine. All VMs are up and running,
but hosted-engine --vm-status reports issues:

[root@ovirt1 ~]# hosted-engine --vm-status


--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : False
Hostname   : ovirt2
Host ID: 1
Engine status  : unknown stale-data
Score  : 0
stopped: False
Local maintenance  : False
crc32  : 193164b8
local_conf_timestamp   : 8350
Host timestamp : 8350
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=8350 (Fri Jan 12 19:03:54 2018)
host-id=1
score=0
vm_conf_refresh_time=8350 (Fri Jan 12 19:03:54 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan  1 05:24:43 1970


--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : False
Hostname   : ovirt1.telia.ru
Host ID: 2
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : c7037c03
local_conf_timestamp   : 7530
Host timestamp : 7530
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=7530 (Fri Jan 12 16:10:12 2018)
host-id=2
score=0
vm_conf_refresh_time=7530 (Fri Jan 12 16:10:12 2018)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True
[root@ovirt1 ~]#



From the second host, the situation looks a bit different:


[root@ovirt2 ~]# hosted-engine --vm-status


--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt2
Host ID: 1
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 0
stopped: False
Local maintenance  : False
crc32  : 78eabdb6
local_conf_timestamp   : 8403
Host timestamp : 8402
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=8402 (Fri Jan 12 19:04:47 2018)
host-id=1
score=0
vm_conf_refresh_time=8403 (Fri Jan 12 19:04:47 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan  1 05:24:43 1970


--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : False
Hostname   : ovirt1.telia.ru
Host ID: 2
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : c7037c03
local_conf_timestamp   : 7530
Host timestamp : 7530
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=7530 (Fri Jan 12 16:10:12 2018)
host-id=2
score=0
vm_conf_refresh_time=7530 (Fri Jan 12 16:10:12 2018)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True


The web GUI shows that the engine is running on host ovirt1.
Gluster looks fine:
[root@ovirt1 ~]# gluster volume status engine
Status of volume: engine
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick ovirt1.telia.ru:/oVirt/engine 49169 0  Y
3244
Brick ovirt2.telia.ru:/oVirt/engine 49179 0  Y
20372
Brick ovirt3.telia.ru:/oVirt/engine 49206 0  Y
16609
Self-heal Daemon on localhost   N/A   N/AY
117868
Self-heal Daemon on ovirt2.telia.ru N/A   N/AY
20521
Self-heal Daemon on ovirt3  N/A   N/AY
25093

Task Status of Volume engine
--
There are no active volume tasks

How do I resolve this issue?
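Since both hosts report "unknown stale-data" and host 2 shows state=AgentStopped, a common first step — a hedged sketch, not a guaranteed fix — is to restart the HA services on each host and, if the shared metadata stays stale, reinitialize the lockspace from global maintenance:

# on each host
systemctl restart ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status

# if the data is still stale
hosted-engine --set-maintenance --mode=global
hosted-engine --reinitialize-lockspace
hosted-engine --set-maintenance --mode=none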
_

Re: [ovirt-users] (no subject)

2017-12-14 Thread Simone Tiraboschi
On Thu, Dec 14, 2017 at 11:14 AM, Roberto Nunin  wrote:

> Another attempt to fully check deploy of ovirt-node-ng and hoste engine in
> my lab environment.
>
> Ovirt node image used: ovirt-node-ng-installer-ovirt-4.2-pre-2017121215.iso
>
> Three HPE Proliant BL680cG7, with local storage (Gluster).
>
> Full deploy from scratch, executed two times just to have confirmation.
> In both cases I had the same condition.
>
> The Gluster deploy completed without issues.
> The Hosted Engine deploy does not complete; it hangs at the end of the process, while the HE VM
> is answering correctly.
> Imported two additional hosts (that part now works correctly).
> Added the first data domain, OK.
> The hosted-engine data domain was imported automatically.
> "Failed to import the Hosted Engine VM" is the error that appears in the
> event log.
>
>
It's looping here:

2017-12-14 00:57:07,591+0100 INFO otopi.plugins.gr_he_common.vm.misc
misc._closeup:125 Shutting down the engine VM
2017-12-14 00:57:12,604+0100 DEBUG otopi.ovirt_hosted_engine_setup.tasks
tasks.wait:54 Waiting for VM down
2017-12-14 00:57:17,617+0100 DEBUG otopi.ovirt_hosted_engine_setup.tasks
tasks.wait:54 Waiting for VM down
...
2017-12-14 02:24:11,658+0100 DEBUG otopi.ovirt_hosted_engine_setup.tasks
tasks.wait:54 Waiting for VM down
2017-12-14 02:24:13,157+0100 DEBUG otopi.context context._executeMethod:143
method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in
_executeMethod
method['method']()
  File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/misc.py",
line 143, in _closeup
if not waiter.wait():
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/tasks.py", line
53, in wait
time.sleep(self.POLLING_INTERVAL)
  File "/usr/lib/python2.7/site-packages/otopi/main.py", line 53, in _signal
raise RuntimeError("SIG%s" % signum)
RuntimeError: SIG1

Could you please attach vdsm and libvirt logs from setup time?
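For reference, a hedged way to gather those logs from the host that ran the deployment, assuming the default log locations:

tar czf he-deploy-logs.tar.gz \
    /var/log/vdsm/vdsm.log* \
    /var/log/libvirt/qemu/HostedEngine*.log \
    /var/log/ovirt-hosted-engine-setup/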



> Attached the HE deploy log and HE engine.log, where are reported all the
> attempts to import HE VM:
>
> 2017-12-14 01:43:38,733+01 ERROR 
> [org.ovirt.engine.core.bll.HostedEngineImporter]
> (EE-ManagedThreadFactory-engine-Thread-259) [1193f931] Failed importing
> the Hosted Engine VM
> 2017-12-14 01:43:38,739+01 ERROR [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engine-Thread-259)
> [1193f931] EVENT_ID: HOSTED_ENGINE_VM_IMPORT_FAILED(10,457), Failed
> importing the Hosted Engine VM
> 2017-12-14 01:43:53,703+01 ERROR 
> [org.ovirt.engine.core.bll.HostedEngineImporter]
> (EE-ManagedThreadFactory-engine-Thread-266) [4d597e01] Failed importing
> the Hosted Engine VM
> 2017-12-14 01:44:08,812+01 ERROR 
> [org.ovirt.engine.core.bll.HostedEngineImporter]
> (EE-ManagedThreadFactory-engine-Thread-273) [5cef57aa] Failed importing
> the Hosted Engine VM
>
>
> This is the screenshot of the "frozen" phase:
>
> [image: embedded image 1]
>
> Any hint about finding the reason for the failed import?
>
> Thanks in advance. Available for further tests.
>
> --
> Roberto
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] (no subject)

2017-06-17 Thread khalid mahmood

Dear Luca, I reproduced the VM creation by hand: I created one VM and everything was
all right. I then created a template from that VM, created 10 new VMs from the
template, deleted them, and created 10 VMs with the same names from the template
again; everything was still all right. So I suspect my Ansible playbook. Best regards
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] (no subject)

2016-08-22 Thread Matt .
I have one newly deployed host that is having issues with the following:

[root@host-01 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2016-08-23
02:39:16,799::hosted_engine::860::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2016-08-23
02:39:21,834::hosted_engine::860::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2016-08-23
02:39:26,894::hosted_engine::860::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING
MainThread::INFO::2016-08-23
02:39:31,931::hosted_engine::860::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
VDSM domain monitor status: PENDING


Now I have tried to clean the metadata and to force it; whatever I try, I get this:


[root@host-01 ~]# hosted-engine --clean-metadata --force-cleanup
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/vds_client.py:25:
DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
deprecated, please use vdsm.jsonrpcvdscli
  from vdsm import vdscli
INFO:ovirt_hosted_engine_ha.agent.agent.Agent:ovirt-hosted-engine-ha
agent 2.0.2 started
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Found
certificate common name: host-01.my.domain
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Initializing VDSM
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Connecting
the storage
INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
storage server
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
storage server
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Refreshing
the storage domain
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Preparing images
INFO:ovirt_hosted_engine_ha.lib.image.Image:Preparing images
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
  pending = getattr(dispatcher, 'pending', lambda: 0)
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending inste
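Hedging, since the paste is cut off above: the repeated "VDSM domain monitor status: PENDING" lines suggest the hosted-engine storage domain never reaches a monitored state on this host, so before touching the metadata it may be worth re-triggering the storage connection and watching the agent, for example:

hosted-engine --connect-storage
systemctl restart ovirt-ha-broker ovirt-ha-agent
tail -f /var/log/ovirt-hosted-engine-ha/agent.log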

Re: [ovirt-users] (no subject)

2014-11-25 Thread Timothy Asir Jeyasingh
Hi Alastair,

Could you please provide the service status of vdsmd and
glusterd on all the nodes? Also, please try executing
"vdsClient -s localhost glusterHostsList"
on each node (from the command line) and share the output.
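For example, on each node (hedging on the exact init scripts for the EL6 gluster server):

service vdsmd status
service supervdsmd status
service glusterd status
vdsClient -s localhost glusterHostsList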

Regards,
Tim 

>  Original Message 
> Subject:      [ovirt-users] (no subject)
> Date: Mon, 24 Nov 2014 14:12:55 -0500
> From: Alastair Neil 
> To:   Ovirt Users 
> 
> 
> 
> I see frequent errors in the vdsm log on one of the gluster servers in
> my gluster cluster. Can anyone suggest a way to troubleshoot this?
> The host is fine in the oVirt console and gluster functions OK. The
> Gluster cluster is a replica 2 with 2 hosts. I am trying to add
> another host to move to replica 3, but I'm having trouble, and this error
> seems to cause any changes to the cluster to fail with unexpected
> errors in oVirt.
> 
> -Thanks, Alastair
> 
> 
> Gluster Server CentOS 6.6:
> vdsmd: vdsm-4.16.7-1.gitdb83943.el6
> gluster: glusterfs-3.6.1-1.el6
> 
> ovirt host Fedora 20:
> hosted ovirt-engine 3.5.0.1-1
> 
> 
> Thread-53::DEBUG::2014-11-24
> 14:11:45,362::BindingXMLRPC::1132::vds::(wrapper) client
> [xxx.xxx.xxx.39]::call hostsList with () {}
> Thread-53::ERROR::2014-11-24
> 14:11:45,363::BindingXMLRPC::1151::vds::(wrapper) unexpected error
> 
> Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
>     rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
>     return {'hosts': self.svdsmProxy.glusterPeerStatus()}
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
>     return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
>     **kwargs)
>   File "", line 2, in glusterPeerStatus
>   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in _callmethod
>     conn.send((self._id, methodname, args, kwds))
> IOError: [Errno 32] Broken pipe
> Thread-53::DEBUG::2014-11-24
> 14:03:08,721::BindingXMLRPC::1132::vds::(wrapper) client
> [xxx.xxx.xxx.39]::call volumesList with () {}
> Thread-53::ERROR::2014-11-24
> 14:03:08,721::BindingXMLRPC::1151::vds::(wrapper) unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
>     rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 78, in volumesList
>     return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
>     return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
>     **kwargs)
>   File "", line 2, in glusterVolumeInfo
>   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in _callmethod
>     conn.send((self._id, methodname, args, kwds))
> IOError: [Errno 32] Broken pipe
> 
> 
> 
> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] (no subject)

2014-11-24 Thread Alastair Neil
I see frequent errors in the vdsm log on one of the gluster servers in my
gluster cluster. Can anyone suggest a way to troubleshoot this? The host
is fine in the oVirt console and gluster functions OK. The Gluster cluster
is a replica 2 with 2 hosts. I am trying to add another host to move to
replica 3, but I'm having trouble, and this error seems to cause any changes to
the cluster to fail with unexpected errors in oVirt.

-Thanks, Alastair


Gluster Server CentOS 6.6:
vdsmd: vdsm-4.16.7-1.gitdb83943.el6
gluster: glusterfs-3.6.1-1.el6

ovirt host Fedora 20:
hosted ovirt-engine 3.5.0.1-1


Thread-53::DEBUG::2014-11-24
> 14:11:45,362::BindingXMLRPC::1132::vds::(wrapper) client
> [xxx.xxx.xxx.39]::call hostsList with () {}
> Thread-53::ERROR::2014-11-24
> 14:11:45,363::BindingXMLRPC::1151::vds::(wrapper) unexpected error

Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
> return {'hosts': self.svdsmProxy.glusterPeerStatus()}
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
> **kwargs)
>   File "", line 2, in glusterPeerStatus
>   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in
> _callmethod
> conn.send((self._id, methodname, args, kwds))
> IOError: [Errno 32] Broken pipe
> Thread-53::DEBUG::2014-11-24
> 14:03:08,721::BindingXMLRPC::1132::vds::(wrapper) client
> [xxx.xxx.xxx.39]::call volumesList with () {}
> Thread-53::ERROR::2014-11-24
> 14:03:08,721::BindingXMLRPC::1151::vds::(wrapper) unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 78, in volumesList
> return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
> **kwargs)
>   File "", line 2, in glusterVolumeInfo
>   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in
> _callmethod
> conn.send((self._id, methodname, args, kwds))
> IOError: [Errno 32] Broken pipe
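The "IOError: [Errno 32] Broken pipe" raised by the supervdsm proxy usually means vdsmd has lost its connection to the supervdsmd helper process. A hedged sketch of the usual recovery on the EL6 gluster server (the Gluster volumes themselves should keep serving I/O) is:

service supervdsmd restart
service vdsmd restart

# then confirm the gluster calls work through vdsm again
vdsClient -s localhost glusterHostsList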
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] (no subject)

2014-11-17 Thread Lior Vernia
Hi Harald,

Could you perhaps refine your question? Also, what do you mean by
console user - are you referring to a VM?

The hardware of a host is the principal factor in determining the best
computing power of a VM, but of course there'll always be some overhead
(i.e. it'll never be equal to the host's computing power).

If you elaborate we could give you a better answer.

Yours, Lior.

On 15/11/14 07:15, Harald Wolf wrote:
> Hi,
> is the hardware of a Host (oVirt Node/Hypervisor) responsible for the
> best possible computing power of the console users?
> 
> -- 
> This message was sent from my Android mobile phone with K-9 Mail.
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] (no subject)

2014-11-14 Thread Harald Wolf
Hi,
is the hardware of a Host (oVirt Node/Hypervisor) responsible for the best
possible computing power of the console users?

-- 
This message was sent from my Android mobile phone with K-9 Mail.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] (no subject)

2014-07-23 Thread Fabien CARRE
Hello,
I am experiencing an issue with my current oVirt DC.
The setup is:
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users