Hi,
I have resolved this in the following way.
On the engine host connect to db:
su postgres
psql -s engine
Then delete the host from the db manually:
select vds_id from vds_static where host_name = 'HOST_NAME';
delete from vds_statistics where vds_id = 'id';
delete from vds_dynamic where vds_id = 'id';
delete from vds_static where vds_id = 'id';
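Assuming the table layout above (child rows in vds_statistics and vds_dynamic referencing vds_static), the manual cleanup can be sketched as one small script. The script and host names are hypothetical, and it only prints the SQL rather than executing it, so nothing is deleted until you pipe the output into psql yourself:

```shell
#!/bin/sh
# Sketch only (hypothetical script): remove a host from the oVirt
# engine database by hand. 'HOST_NAME' is a placeholder.
HOST_NAME="${1:-HOST_NAME}"

# Child rows (vds_statistics, vds_dynamic) are deleted before the
# vds_static row they reference, inside one transaction.
SQL="BEGIN;
DELETE FROM vds_statistics
  WHERE vds_id IN (SELECT vds_id FROM vds_static WHERE host_name = '${HOST_NAME}');
DELETE FROM vds_dynamic
  WHERE vds_id IN (SELECT vds_id FROM vds_static WHERE host_name = '${HOST_NAME}');
DELETE FROM vds_static WHERE host_name = '${HOST_NAME}';
COMMIT;"

printf '%s\n' "$SQL"
# On the engine host you would feed this to psql, e.g.:
#   ./remove_host.sh myhost | su - postgres -c 'psql engine'
```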
2015-06-08 8:32 GMT+03:00 Ravishankar N ravishan...@redhat.com:
On 06/08/2015 02:38 AM, Юрий Полторацкий wrote:
Hi,
I have built a lab with the config listed below and got an unexpected
result. Could someone please tell me where I went wrong?
I am testing oVirt. Data Center has two
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: disable
nfs.disable: on
performance.readdir-ahead: on
cluster.quorum-count: 1
cluster.server-quorum-ratio: 51%
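For reference, options like the ones listed above are applied per volume with `gluster volume set`; a minimal sketch follows, where the volume name `data` is an assumption and the commands are printed rather than executed (a real run would need a live glusterd):

```shell
#!/bin/sh
# Sketch: emit the 'gluster volume set' commands for options like
# the ones quoted above. 'data' is a hypothetical volume name.
VOLUME="data"

set_opt() {
    # In a real run you would execute instead of printing:
    #   gluster volume set "$VOLUME" "$1" "$2"
    echo "gluster volume set $VOLUME $1 $2"
}

set_opt cluster.quorum-count 1
set_opt cluster.server-quorum-ratio 51%
set_opt nfs.disable on
set_opt user.cifs disable
```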
06.06.2015 12:09, Юрий Полторацкий пишет:
Hi,
I want
Hi,
Pardon me for jumping in on the subject, but...
Could you explain in a few words, or give a link where I can read about,
split-brain?
I made a lab: a cluster (virt and gluster services both) based on two servers,
with these options:
cluster.server-quorum-type none
cluster.quorum-type fixed
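Whether a replica volume actually has files in split-brain can be checked with Gluster's heal-info command. A sketch, where the volume name `data` is an assumption and the command is only printed (running it needs a live glusterd):

```shell
#!/bin/sh
# Sketch: build the command that lists files in split-brain on a
# hypothetical volume named 'data'.
VOLUME="data"
CMD="gluster volume heal $VOLUME info split-brain"

# Printed rather than executed; run it on a gluster node to see
# the per-brick list of files the replicas disagree on.
echo "$CMD"
```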
wrong?
Cheers
Soeren
From: Юрий Полторацкий y.poltorats...@gmail.com
Date: Sunday 31 May 2015 18:32
To: users@ovirt.org users@ovirt.org
Subject: Re: [ovirt-users] gluster config in 4 node cluster
Hi,
As for me, I would build one cluster with the gluster service only, based on
two nodes
I have installed a VPN server with access to the management networks; I think
this is 'best practice'.
2015-06-01 1:54 GMT+03:00 alexmcwhir...@triadic.us:
I have a dual host setup working right now. Host 1 runs the engine and is
also a node. Host 2 does DB storage and NFS storage. The
Hi.
I am testing oVirt 3.5.2 with 3 hosts (Dell R210). The storage type is
GlusterFS (replicate 3): each host has a single 3TB HDD. When I put one
host into maintenance mode, then reboot it, after the system has started
the glusterfsd process takes a long time (more than hours with gigabyte
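While such a post-reboot heal is running, its progress can be watched with the heal status commands. A sketch assuming a volume named `data` (the commands are printed, not executed, since they need a running cluster):

```shell
#!/bin/sh
# Sketch: build the commands used to watch self-heal progress on a
# hypothetical volume named 'data'.
VOLUME="data"

# Lists entries still pending heal on each brick.
INFO_CMD="gluster volume heal $VOLUME info"
# Summarises how many entries each brick still has to heal.
COUNT_CMD="gluster volume heal $VOLUME statistics heal-count"

echo "$INFO_CMD"
echo "$COUNT_CMD"
```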