Hi,
Has anybody been able to set up a Galera cluster with the latest available
version of Galera?
Can anybody paste their configuration?
I have tested it, but I have not been able to make it run resiliently.
Any help will be welcome!
Thanks a lot.
Hi,
In my environment I'm not able to bootstrap the cluster after a crash, and I
thought it could be a configuration problem at the cluster level, so I wanted
to know if anybody has been able to configure it.
Thanks a lot.
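For what it's worth, when all Galera nodes have crashed, the usual manual recovery is to find the node with the most advanced state and bootstrap from it. A sketch, assuming the MariaDB default datadir and the galera_new_cluster wrapper (adjust for your distribution):

```shell
# On each node, inspect the saved Galera state:
cat /var/lib/mysql/grastate.dat
# seqno: the node with the highest seqno has the most recent data.
# safe_to_bootstrap: 1 marks the node Galera itself considers safe.

# If no node shows safe_to_bootstrap: 1, mark the most advanced one:
sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat

# Bootstrap a new cluster from that node only:
galera_new_cluster

# Then start mysqld normally on the remaining nodes so they rejoin via SST/IST:
systemctl start mariadb
```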
2017-01-06 9:34 GMT+01:00 Damien Ciabrini :
> Hey Oscar,
>
> - Original M
tarted up!
Thanks a lot.
2017-01-06 11:14 GMT+01:00 Oscar Segarra :
> Hi,
>
> In my environment I'm not able to bootstrap the cluster after a crash and
> I thought it could be a configuration problem at cluster level and I wanted
> to know if anybody has been able to configur
Hi,
I'm using pacemaker and corosync to bootstrap the cluster from scratch.
Thanks a lot
2017-01-11 6:17 GMT+01:00 Hao QingFeng :
>
>
> On 2017-01-06 18:27, Oscar Segarra wrote:
>
> Hi, I get errors like the following:
>
> 2017-01-06 11:25:47 139902389713152 [ERROR] W
Hi,
I have configured a two-node cluster where 4 KVM guests run.
The hosts are:
vdicnode01
vdicnode02
And I have created a dedicated network card for cluster management. I have
created required entries in /etc/hosts:
vdicnode01-priv
vdicnode02-priv
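For illustration, the /etc/hosts entries for a dedicated cluster interconnect might look like this (the 10.0.0.x addresses are assumptions, not from the thread):

```shell
# /etc/hosts on both nodes - private cluster network
10.0.0.1   vdicnode01-priv
10.0.0.2   vdicnode02-priv
```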
The four guests have colocation rules in o
u cluster?
>
> 2017-01-17 9:27 GMT+01:00 Oscar Segarra :
> > Hi,
> >
> > I have configured a two-node cluster where 4 KVM guests run.
> >
> > The hosts are:
> > vdicnode01
> > vdicnode02
> >
> > And I have created a dedicate
v: grace-active=1
[root@vdicnode01 ~]#
2017-01-17 11:00 GMT+01:00 emmanuel segura :
> show your cluster configuration.
>
> 2017-01-17 10:15 GMT+01:00 Oscar Segarra :
> > Hi,
> >
> > Yes, I will try to explain myself better.
> >
> > Initially
> > On n
t
2017-01-17 15:52 GMT+01:00 Ulrich Windl :
> >>> Oscar Segarra wrote on 17.01.2017 at 10:15 in message:
> > Hi,
> >
> > Yes, I will try to explain myself better.
> >
> > *Initially*
> > On node1 (vdicnode01-priv)
> >
GMT+01:00 Ken Gaillot :
> On 01/17/2017 08:52 AM, Ulrich Windl wrote:
> >>>> Oscar Segarra wrote on 17.01.2017 at 10:15 in message:
> >> Hi,
> >>
> >> Yes, I will try to explain myself better.
> >>
> &g
Hi,
I have a two-node cluster... when I try to shut down the physical host I get
the following message on the console: "a stop job is running for pacemaker high
availability cluster manager" and it never finishes...
This is my configuration:
[root@vdicnode01 ~]# pcs config
Cluster Name: vdic-cluster
Corosy
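A "stop job is running" message that never finishes usually means systemd is waiting out pacemaker's stop timeout while resources are still being stopped. One way to inspect and, if needed, extend that timeout on a systemd-based install (a sketch; 10min is just an example value):

```shell
# See how long systemd will wait for pacemaker to stop:
systemctl show pacemaker -p TimeoutStopSec

# Extend it via a drop-in rather than editing the unit file:
mkdir -p /etc/systemd/system/pacemaker.service.d
cat > /etc/systemd/system/pacemaker.service.d/timeout.conf <<'EOF'
[Service]
TimeoutStopSec=10min
EOF
systemctl daemon-reload
```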
Hi,
A lot of files appear in /var/lib/pacemaker/pengine and fill my hard disk.
Is there any way to avoid so many files in that directory?
Thanks in advance!
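For reference, the number of policy-engine files kept in /var/lib/pacemaker/pengine can be capped with three cluster properties (a sketch; 100 is just an example value):

```shell
pcs property set pe-input-series-max=100
pcs property set pe-error-series-max=100
pcs property set pe-warn-series-max=100
```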
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/list
Hi Ken,
I have set the 3 values to 100.
I think that may be enough to diagnose problems!
Thanks a lot!
2017-02-02 21:19 GMT+01:00 Ken Gaillot :
> On 02/02/2017 12:49 PM, Oscar Segarra wrote:
> > Hi,
> >
> > A lot of files appear in /var/lib/pacemaker/pengine and fill my ha
Hi Ken,
I have checked /var/log/cluster/corosync.log and there is no information
about why the system hangs while stopping...
Can you be more specific about which logs to check?
Thanks a lot.
2017-02-02 21:10 GMT+01:00 Ken Gaillot :
> On 02/02/2017 12:35 PM, Oscar Segarra wrote:
> > Hi,
>
Thanks a lot!
2017-02-06 9:55 GMT+01:00 Ulrich Windl :
> >>> Oscar Segarra wrote on 02.02.2017 at 19:49 in message:
> > Hi,
> >
> > A lot of files appear in /var/lib/pacemaker/pengine and fill my hard
> disk.
>
> Welcome to pacema
Hi,
I have configured two virtual IP resources, but one of them does not start
and I'm not able to find the reason.
Nothing appears in the logs:
[root@vdicnode01 ~]# crm_mon -1 -r
Stack: corosync
Current DC: vdicnode02-priv (version 1.1.15-11.el7_3.2-e174ec8) - partition
with quorum
Last updated: Wed F
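When a resource refuses to start and the logs say nothing, running the agent by hand often surfaces the real error. A sketch, assuming the failing resource is named virtual-ip-2 (the name is hypothetical):

```shell
# Run the resource agent's start action in the foreground with full output:
pcs resource debug-start virtual-ip-2 --full

# Also check for a leftover failcount or a constraint pinning it down:
pcs resource failcount show virtual-ip-2
crm_mon -1 -rf
```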
Hi,
In my environment I have deployed 5 VirtualDomains as one can see below:
[root@vdicnode01 ~]# pcs status
Cluster name: vdic-cluster
Stack: corosync
Current DC: vdicnode01-priv (version 1.1.15-11.el7_3.2-e174ec8) - partition
with quorum
Last updated: Thu Feb 16 09:02:53 2017 Last chang
Hi Klaus,
What is your proposal to fix this behavior?
Thanks a lot!
On 16 Feb 2017 10:57 a.m., "Klaus Wenninger"
wrote:
On 02/16/2017 09:05 AM, Oscar Segarra wrote:
> Hi,
>
> In my environment I have deployed 5 VirtualDomains as one can see below:
> [root@vdi
Hi Klaus,
Thanks a lot, I will try to delete the stop monitor.
Nevertheless, I have 6 domains configured exactly the same... Is there any
reason why just this domain shows this behaviour?
Thanks a lot.
2017-02-16 11:12 GMT+01:00 Klaus Wenninger :
> On 02/16/2017 11:02 AM, Oscar Segarra wr
-vdicone01)[73828]: 2017/02/16_16:43:40 INFO:
Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable,
resource considered stopped.
Thanks a lot!
2017-02-16 14:04 GMT+01:00 Klaus Wenninger :
> On 02/16/2017 01:55 PM, Oscar Segarra wrote:
>
> Hi Klaus,
>
> Thanks a lot
or is
not readable.
Any help will be welcome!
Thanks a lot.
2017-02-16 16:47 GMT+01:00 Oscar Segarra :
> Hi Klaus,
>
> I have deleted the op stop:
>
> pcs resource op remove vm-vdicone01 stop interval=0s timeout=90
> pcs resource op remove vm-vdicdb01 stop interval=0s timeout=90
behaviour can be caused by those commands?
Thanks in advance!
2017-02-17 8:33 GMT+01:00 Ulrich Windl :
> >>> Oscar Segarra wrote on 16.02.2017 at 13:55 in message:
> > Hi Klaus,
> >
> > Thanks a lot, I will try to delete the stop monitor.
> >
Hi,
In my environment I have 5 guests that have to be started up in a
specified order, starting with the MySQL database server.
I have set the order constraints and the VirtualDomains start in the right
order, but the problem I have is that the second host starts up faster
than the database server an
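An order constraint like the following makes the second guest wait for the first guest's start action to return. Note, though, that VirtualDomain's start returns when the domain is defined and running, not when the MySQL server inside it accepts connections, which is exactly the gap described above (vm-vdicdb01 and vm-vdicone01 are resource names from this thread):

```shell
# Start the database guest first, then the dependent guest:
pcs constraint order start vm-vdicdb01 then start vm-vdicone01
```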
+01:00 Dejan Muhamedagic :
> Hi,
>
> On Thu, Feb 23, 2017 at 08:51:20PM +0100, Oscar Segarra wrote:
> > Hi,
> >
> > In my environment I have 5 guests that have to be started up in a
> > specified order, starting with the MySQL database server.
> >
&
mar. 2017 1:08 p. m., "Dejan Muhamedagic"
wrote:
> Hi,
>
> On Sat, Feb 25, 2017 at 09:58:01PM +0100, Oscar Segarra wrote:
> > Hi,
> >
> > Yes,
> >
> > Database server can be considered started up when it accepts mysql client
> > connectio
21PM +0100, Oscar Segarra wrote:
> Hi Dejan,
>
> In my environment, it is possible to launch the check from the hypervisor.
> A simple telnet against a specific port may be enough to check if the service
> is ready.
telnet is not so practical for scripts, better use ssh or
the mysql
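As a middle ground, a small probe run from the hypervisor can poll the MySQL port until the guest actually accepts TCP connections. A sketch using bash's /dev/tcp; the host name, port, and retry count in the example are assumptions:

```shell
#!/bin/bash
# wait_for_port HOST PORT RETRIES
# Returns 0 as soon as a TCP connection to HOST:PORT succeeds,
# 1 after RETRIES failed attempts (1 second apart).
wait_for_port() {
    local host=$1 port=$2 retries=$3 i=0
    while [ "$i" -lt "$retries" ]; do
        # The subshell opens (and, on exit, closes) fd 3 on the target port.
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Example: block a dependent start until mysqld answers (up to 60 s):
# wait_for_port vdicdb01-priv 3306 60 || exit 1
```

Note that this only proves the port is open; a check with the mysql client (e.g. mysqladmin ping) additionally proves the server answers the protocol.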
Hi,
In Ceph, by design there is no single point of failure in terms of server
roles; nevertheless, from the client point of view, one might exist.
In my environment:
Mon1: 192.168.100.101:6789
Mon2: 192.168.100.102:6789
Mon3: 192.168.100.103:6789
Client: 192.168.100.104
I have created a line in
Hi,
Is there any way to assign a Virtual IP Address to a host with a running
port?
Thanks a lot!
2017-08-28 4:10 GMT+02:00 Oscar Segarra :
> Hi,
>
> In Ceph, by design there is no single point of failure in terms of server
> roles, nevertheless, from the client point of view, it
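One common pattern for the client-side single point of failure is a floating IP that always points at a reachable mon, e.g. an IPaddr2 resource (a sketch; 192.168.100.100 is an assumed spare address in the thread's subnet):

```shell
pcs resource create ceph-mon-vip ocf:heartbeat:IPaddr2 \
    ip=192.168.100.100 cidr_netmask=24 \
    op monitor interval=10s
```

That said, a Ceph client normally fails over between mons on its own once it has learned the mon map, so a VIP mainly helps with the initial contact address.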
Hi,
In my environment, I have just two hosts, where the qemu-kvm process is
launched by a regular user (oneadmin) - open nebula -
I have created a VirtualDomain resource that starts and stops the VM
perfectly. Nevertheless, when I change the location weight in order to
force the migration, it raises
+ 0.163.6 (null)
Aug 31 23:38:31 [1531] vdicnode01cib: info: cib_perform_op: +
/cib: @num_updates=6
Aug 31 23:38:31 [1531] vdicnode01cib: info: cib_perform_op: ++
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01
other
user?
What user executes virsh migrate --live?
Is there any way to check the ssh keys?
Sorry for all these questions.
Thanks a lot
On 1 Sept 2017 0:12, "Ken Gaillot" wrote:
On Thu, 2017-08-31 at 23:45 +0200, Oscar Segarra wrote:
> Hi Ken,
>
>
> Than
with any
> > other user?
> >
> >
> > What user executes virsh migrate --live?
>
> The cluster executes resource actions as root.
>
> > Is there any way to check the ssh keys?
>
> I'd just login once to the host as root from the cluster nodes
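Since the cluster runs resource actions as root, a quick way to verify the keys the migration will use is a non-interactive root ssh between the nodes (node names from this thread; BatchMode makes a missing or unaccepted key fail instead of prompting):

```shell
# From vdicnode01, as root:
ssh -o BatchMode=yes root@vdicnode02-priv true && echo "key ok"

# Roughly what the VirtualDomain agent does for a live migration
# (the domain name here is an assumption):
# virsh migrate --live vdicdb01 qemu+ssh://vdicnode02-priv/system
```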
Hi,
I'd like to revive this thread to find out whether there is any way to
achieve this kind of simple HA setup.
Thanks a lot.
2017-08-28 4:10 GMT+02:00 Oscar Segarra :
> Hi,
>
> In Ceph, by design there is no single point of failure in terms of server
> roles, nevertheless
experience developing new resource agents.
Thanks a lot!
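Since the thread turned to writing resource agents: a minimal OCF agent is just a shell script answering start/stop/monitor/meta-data with the OCF exit codes. A self-contained sketch (the agent name, pidfile, and the `sleep` stand-in for a real daemon are all hypothetical):

```shell
#!/bin/sh
# Minimal OCF resource agent skeleton.
PIDFILE="${HA_RSCTMP:-/tmp}/myservice.pid"

OCF_SUCCESS=0
OCF_NOT_RUNNING=7

meta_data() {
    cat <<'EOF'
<?xml version="1.0"?>
<resource-agent name="myservice" version="0.1">
  <longdesc lang="en">Example skeleton agent</longdesc>
  <shortdesc lang="en">Example skeleton agent</shortdesc>
  <actions>
    <action name="start" timeout="20s"/>
    <action name="stop" timeout="20s"/>
    <action name="monitor" timeout="20s" interval="10s"/>
    <action name="meta-data" timeout="5s"/>
  </actions>
</resource-agent>
EOF
}

monitor() {
    # Running means: pidfile exists and the process is alive.
    [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null \
        && return $OCF_SUCCESS
    return $OCF_NOT_RUNNING
}

start() {
    monitor && return $OCF_SUCCESS    # start must be idempotent
    sleep 3600 & echo $! > "$PIDFILE" # stand-in for launching the real daemon
    monitor
}

stop() {
    # stop must succeed even if the service is already down.
    if monitor; then
        kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    fi
    return $OCF_SUCCESS
}

case "${1:-}" in
    start)     start ;;
    stop)      stop ;;
    monitor)   monitor ;;
    meta-data) meta_data ;;
    *)         : ;;
esac
```

Dropped into /usr/lib/ocf/resource.d/<provider>/, such a script can be driven by ocf-tester before it ever touches the cluster.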
2018-03-06 16:05 GMT+01:00 Ken Gaillot :
> On Tue, 2018-03-06 at 10:11 +0100, Oscar Segarra wrote:
> > Hi,
> >
> > I'd like to revive this thread to find out whether there is any way to
> > achieve this kind of s