Re: [ovirt-users] ovirt-ha-broker issue after upgrade CentOS 7.1 => 7.2 and oVirt 3.6.0 => 3.6.1

2015-12-18 Thread Bello Florent
 

Yes, it's solved for me too. Thanks for your help!


Regards,

Florent BELLO
Service Informatique
informati...@ville-kourou.fr
0594 22 31 22
Mairie de Kourou


On 17/12/2015 22:08, Matthew Trent wrote:

> Yes! That did it. All the errors are gone, and HA seems to be functioning normally. Thanks much!
> 
> --
> Matthew Trent
> Network Engineer
> Lewis County IT Services
> 
> -
> 
> FROM: Simone Tiraboschi
> SENT: Thursday, December 17, 2015 4:50 PM
> TO: Matthew Trent
> CC: users@ovirt.org
> SUBJECT: Re: [ovirt-users] ovirt-ha-broker issue after upgrade CentOS 7.1 => 7.2 and oVirt 3.6.0 => 3.6.1
> 
> On Fri, Dec 18, 2015 at 12:32 AM, Matthew Trent wrote:
> 
>> (Sorry if this reply doesn't thread properly. Just subscribed to reply to this topic.)
>> 
>> I'm also experiencing this issue. Just upgraded to the latest packages, and both ovirt-ha-agent and ovirt-ha-broker pause for a long time when being started, then time out with errors.
> 
> Please try manually reverting this patch: https://gerrit.ovirt.org/#/c/50662/
> by removing the lines that start with PIDFile= from
> /usr/lib/systemd/system/ovirt-ha-broker.service and /usr/lib/systemd/system/ovirt-ha-agent.service
> Then run systemctl daemon-reload and restart the services.
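> A minimal sketch of those steps on one host (assuming the stock unit file paths shipped by the packages):
> 
>     sed -i '/^PIDFile=/d' /usr/lib/systemd/system/ovirt-ha-broker.service /usr/lib/systemd/system/ovirt-ha-agent.service
>     systemctl daemon-reload
>     systemctl restart ovirt-ha-broker ovirt-ha-agent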
> 
>> [root@ovirt2 ~]# systemctl start ovirt-ha-broker
>> Job for ovirt-ha-broker.service failed because a timeout was exceeded. See "systemctl status ovirt-ha-broker.service" and "journalctl -xe" for details.
>> [root@ovirt2 ~]# systemctl start ovirt-ha-agent
>> Job for ovirt-ha-agent.service failed because a timeout was exceeded. See "systemctl status ovirt-ha-agent.service" and "journalctl -xe" for details.
>> 
>> Dec 17 15:27:53 ovirt2 systemd: Failed to start oVirt Hosted Engine High Availability Communications Broker.
>> Dec 17 15:27:53 ovirt2 systemd: Unit ovirt-ha-broker.service entered failed state.
>> Dec 17 15:27:53 ovirt2 systemd: ovirt-ha-broker.service failed.
>> Dec 17 15:27:53 ovirt2 systemd: ovirt-ha-broker.service holdoff time over, scheduling restart.
>> Dec 17 15:27:53 ovirt2 systemd: Starting oVirt Hosted Engine High Availability Communications Broker...
>> Dec 17 15:27:53 ovirt2 systemd-ovirt-ha-broker: Starting ovirt-ha-broker: [ OK ]
>> Dec 17 15:27:53 ovirt2 systemd: PID 21125 read from file /run/ovirt-hosted-engine-ha/broker.pid does not exist or is a zombie.
>> Dec 17 15:29:22 ovirt2 systemd: ovirt-ha-agent.service stop-final-sigterm timed out. Killing.
>> Dec 17 15:29:22 ovirt2 systemd: Failed to start oVirt Hosted Engine High Availability Monitoring Agent.
>> Dec 17 15:29:22 ovirt2 systemd: Unit ovirt-ha-agent.service entered failed state.
>> Dec 17 15:29:22 ovirt2 systemd: ovirt-ha-agent.service failed.
>> Dec 17 15:29:22 ovirt2 systemd: ovirt-ha-agent.service holdoff time over, scheduling restart.
>> Dec 17 15:29:23 ovirt2 systemd: ovirt-ha-broker.service start operation timed out. Terminating.
>> Dec 17 15:29:24 ovirt2 systemd: Failed to start oVirt Hosted Engine High Availability Communications Broker.
>> Dec 17 15:29:24 ovirt2 systemd: Unit ovirt-ha-broker.service entered failed state.
>> Dec 17 15:29:24 ovirt2 systemd: ovirt-ha-broker.service failed.
>> Dec 17 15:29:24 ovirt2 systemd: Starting oVirt Hosted Engine High Availability Monitoring Agent...
>> Dec 17 15:29:24 ovirt2 systemd-ovirt-ha-agent: Starting ovirt-ha-agent: [ OK ]
>> Dec 17 15:29:24 ovirt2 systemd: PID 21288 read from file /run/ovirt-hosted-engine-ha/agent.pid does not exist or is a zombie.
>> Dec 17 15:29:24 ovirt2 systemd: ovirt-ha-broker.service holdoff time over, scheduling restart.
>> Dec 17 15:29:24 ovirt2 systemd: Starting oVirt Hosted Engine High Availability Communications Broker...
>> Dec 17 15:29:25 ovirt2 systemd-ovirt-ha-broker: Starting ovirt-ha-broker: [ OK ]
>> Dec 17 15:29:25 ovirt2 systemd: PID 21304 read from file /run/ovirt-hosted-engine-ha/broker.pid does not exist or is a zombie.
>> 
>> --
>> Matthew Trent
>> Network Engineer
>> Lewis County IT Services
>> 360.740.1247 - Helpdesk
>> 360.740.3343 - Direct line
>> 


[ovirt-users] ovirt-ha-broker issue after upgrade CentOS 7.1 => 7.2 and oVirt 3.6.0 => 3.6.1

2015-12-17 Thread Bello Florent
 

Hi, 

I upgraded my 3 servers to CentOS 7.2 and oVirt 3.6.1. The oVirt engine works fine and my first host was upgraded too. However, my second and third hosts have an ovirt-ha-broker issue and it doesn't start.
When I try to start the broker service, it fails with a timeout.


Here are the logs from my second upgraded server:

[root@ovirt01 ~]# systemctl status ovirt-ha-broker
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled; vendor preset: disabled)
   Active: activating (start) since jeu. 2015-12-17 17:41:46 GFT; 1min 3s ago
  Process: 15245 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-broker start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/ovirt-ha-broker.service
           └─15259 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker

déc. 17 17:41:46 ovirt01 systemd[1]: Starting oVirt Hosted Engine High Availability Communications Broker...
déc. 17 17:41:46 ovirt01 systemd[1]: PID 15252 read from file /run/ovirt-hosted-engine-ha/broker.pid does not exist or is a zombie.

[root@ovirt01 ~]# systemctl start ovirt-ha-broker
Job for ovirt-ha-broker.service failed because a timeout was exceeded. See "systemctl status ovirt-ha-broker.service" and "journalctl -xe" for details.
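(A quick way to look at the PID mismatch systemd complains about, just a diagnostic sketch run on the affected host:)

cat /run/ovirt-hosted-engine-ha/broker.pid   # the PID systemd reads from PIDFile=
pgrep -af ovirt-ha-broker                    # the broker process(es) actually running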

[root@ovirt01 ~]# tail -f /var/log/ovirt-hosted-engine-ha/broker.log
MainThread::INFO::2015-12-17 17:44:28,562::broker::57::ovirt_hosted_engine_ha.broker.broker.Broker::(run) ovirt-hosted-engine-ha broker 1.3.3.4 started
MainThread::INFO::2015-12-17 17:44:28,588::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Searching for submonitors in /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors
MainThread::INFO::2015-12-17 17:44:28,588::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
MainThread::INFO::2015-12-17 17:44:28,590::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
MainThread::INFO::2015-12-17 17:44:28,591::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
MainThread::INFO::2015-12-17 17:44:28,591::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
MainThread::INFO::2015-12-17 17:44:28,591::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-load
MainThread::INFO::2015-12-17 17:44:28,591::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
MainThread::INFO::2015-12-17 17:44:28,592::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor ping
MainThread::INFO::2015-12-17 17:44:28,592::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
MainThread::INFO::2015-12-17 17:44:28,593::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
MainThread::INFO::2015-12-17 17:44:28,593::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
MainThread::INFO::2015-12-17 17:44:28,593::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
MainThread::INFO::2015-12-17 17:44:28,594::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-load
MainThread::INFO::2015-12-17 17:44:28,594::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
MainThread::INFO::2015-12-17 17:44:28,594::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor ping
MainThread::INFO::2015-12-17 17:44:28,594::monitor::50::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Finished loading submonitors
MainThread::INFO::2015-12-17 17:44:28,595::listener::41::ovirt_hosted_engine_ha.broker.listener.Listener::(__init__) Initializing SocketServer
MainThread::INFO::2015-12-17 17:44:28,595::listener::56::ovirt_hosted_engine_ha.broker.listener.Listener::(__init__) SocketServer ready
MainThread::INFO::2015-12-17 17:45:59,215::broker::114::ovirt_hosted_engine_ha.broker.broker.Broker::(run) Server shutting down

[root@ovirt01 ~]# tail -f /var/log/vdsm/vdsm.log
Reactor thread::INFO::2015-12-17 17:48:19,800::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:44498
Reactor thread::DEBUG::2015-12-17 17:48:19,808::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-12-17 17:48:19,808::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:44

Re: [ovirt-users] Identify the vm name in storage

2015-10-14 Thread Bello Florent
 

Hi, 

Look at the VM ID and Disk ID in the oVirt web UI; a rough sketch of mapping them to the storage layout is below.
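For example (a rough sketch only: the UUIDs are placeholders and the exact mount root depends on your storage type, but on a file-based storage domain the Disk ID from the UI is the directory name under images/):

ls /rhev/data-center/mnt/<server:_export>/<storage-domain-uuid>/images/<disk-id>/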

-- 

Florent BELLO
Service Informatique
informati...@ville-kourou.fr
0594 22 31 22
Mairie de Kourou 

On 14/10/2015 07:10, Budur Nagaraju wrote:

> Hi,
> 
> How do I identify a VM instance in storage? Is there any way to identify it?
> 
> Thanks, Nagaraju
> 


Re: [ovirt-users] Another way of CTDB to mount engine

2015-09-28 Thread Bello Florent
 
Hi,

Is it really not a single point of failure? This way I see two points of failure: if the hosted engine breaks, and if the mount server address breaks. And it's not possible to migrate the engine.

Can you explain why it is not a point of failure for you?

-- 

Florent BELLO


On 28/09/2015 05:45, Simone Tiraboschi wrote:

> On Fri, Sep 25, 2015 at 5:36 PM, Bello Florent wrote:
>

>> Hi, 
>> 
>> I have 3 nodes with glusterfs installed. I configured replica 3 on my engine volume.
> 
> Hi, 
> On oVirt 3.6 we still don't support hyper-convergence, which means that you cannot use the same hosts for virtualization and for Gluster storage.
> 
> So the correct approach is renaming your hosts to:
> host 1 : "192.168.100.101 gluster1.localdomain gluster1"
> host 2 : "192.168.100.102 gluster2.localdomain gluster2"
> host 3 : "192.168.100.103 gluster3.localdomain gluster3"
> 
> On an additional host, deploy hosted-engine choosing the gluster host you prefer as your entry point: it will not be a single point of failure.
> You don't really need to use CTDB to deploy oVirt hosted-engine.
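> For example, a sketch following the naming above (the storage prompt text is the one from the setup output quoted below; answering it with one specific gluster host is the "entry point" choice):
> 
> # /etc/hosts on every node, one distinct name per host instead of a shared alias
> 192.168.100.101 gluster1.localdomain gluster1
> 192.168.100.102 gluster2.localdomain gluster2
> 192.168.100.103 gluster3.localdomain gluster3
> 
> # then during hosted-engine --deploy:
> Please specify the full shared storage connection path to use (example: host:/path): gluster1.localdomain:/engine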
> 
>> I don't want to use CTDB for my hosted engine, but when I start hosted-engine --deploy, using localhost:/engine or gluster.localdomain:/engine, gluster.localdomain is configured in all servers' /etc/hosts like:
>> host 1 : "192.168.100.101 gluster.localdomain gluster"
>> host 2 : "192.168.100.102 gluster.localdomain gluster"
>> host 3 : "192.168.100.103 gluster.localdomain gluster",
>> 
>> the setup failed with:
>> 
>> --== STORAGE CONFIGURATION ==--
>> 
>> During customization use CTRL-D to abort.
>> Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: glusterfs
>> [ INFO ] Please note that Replica 3 support is required for the shared storage.
>> Please specify the full shared storage connection path to use (example: host:/path): localhost:/engine
>> [WARNING] Due to several bugs in mount.glusterfs the validation of GlusterFS share cannot be reliable.
>> [ INFO ] GlusterFS replica 3 Volume detected
>> [ ERROR ] Failed to execute stage 'Environment customization': Connection to storage server failed
>> [ INFO ] Stage: Clean up
>> [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150925121036.conf'
>> [ INFO ] Stage: Pre-termination
>> [ INFO ] Stage: Termination
>> 
>> Do I have to use CTDB or keepalived, or is there another way?

>> -- 
>> 
>> Florent BELLO
>> Service Informatique
>> informati...@ville-kourou.fr
>> 0594 22 31 22
>> Mairie de Kourou 


[ovirt-users] Another way of CTDB to mount engine

2015-09-25 Thread Bello Florent
 

Hi, 

I have 3 nodes with glusterfs installed. I configured replica 3 on my engine volume.

I don't want to use CTDB for my hosted engine, but when I start hosted-engine --deploy, using localhost:/engine or gluster.localdomain:/engine, gluster.localdomain is configured in all servers' /etc/hosts like:
host 1 : "192.168.100.101 gluster.localdomain gluster"
host 2 : "192.168.100.102 gluster.localdomain gluster"
host 3 : "192.168.100.103 gluster.localdomain gluster",

the setup failed with:


--== STORAGE CONFIGURATION ==--

 During customization use CTRL-D to abort.
 Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: glusterfs
[ INFO ] Please note that Replica 3 support is required for the shared storage.
 Please specify the full shared storage connection path to use (example: host:/path): localhost:/engine
[WARNING] Due to several bugs in mount.glusterfs the validation of GlusterFS share cannot be reliable.
[ INFO ] GlusterFS replica 3 Volume detected
[ ERROR ] Failed to execute stage 'Environment customization': Connection to storage server failed
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150925121036.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination

Do I have to use CTDB or keepalived, or is there another way?
--


Florent BELLO
Service Informatique
informati...@ville-kourou.fr
0594 22 31 22
Mairie de Kourou