Hi,
Has anybody been able to set up a Galera cluster with the latest available
version of Galera?
Could anybody paste their configuration?
I have tested it but have not been able to make it run resiliently.
Any help is welcome!
Thanks a lot.
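For comparison, a minimal sketch of a Galera resource under Pacemaker, assuming the ocf:heartbeat:galera resource agent and placeholder node names — everything here is illustrative, not a tested configuration:

```shell
# Hypothetical three-node Galera cluster managed by Pacemaker.
# node1..node3 are placeholders for the real cluster node names.
pcs resource create galera ocf:heartbeat:galera \
    enable_creation=true \
    wsrep_cluster_address="gcomm://node1,node2,node3" \
    meta master-max=3 ordered=true \
    --master
```

The agent promotes each instance to master once the Galera nodes have synced, which is why master-max matches the node count.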
___
On Wed, Jan 4, 2017 at 8:19 AM, Klaus Wenninger wrote:
> You have the attributes pcmk_host_list & pcmk_host_map to control that.
pcmk_host_list= + pcmk_host_check=static-list did the trick ;)
Thanks a lot Klaus and Ulrich for the help!!
Regards,
Ali
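For anyone searching the archives, a sketch of what that fix looks like — device and node names are placeholders, not from the thread:

```shell
# Hypothetical: one fence device per node, with a static target list
# so stonithd never has to ask the agent which hosts it can fence.
pcs stonith create fence-node1 fence_ilo \
    ipaddr="ilo-node1.example.com" login="admin" passwd="secret" \
    pcmk_host_list="node1" pcmk_host_check="static-list"
```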
__
>>> Jan Friesse wrote on 04.01.2017 at 13:52 in
>>> message
<586ceff9.7070...@redhat.com>:
[...]
>
> No, they are not enforced. 16/32 is the officially supported number of nodes.
> Basically, this is the number that was tested and known to work reliably.
> This doesn't mean corosync doesn't work with more nodes.
Hi Ulrich,
You're right, it is as if stonithd selected the incorrect device to reboot
the node. I'm using fence_ilo as the stonith agent, and reviewing the
params it takes, it is not clear which one (besides the name, which is
irrelevant for stonithd) should be used to fence each node.
In cman+rgmanage
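One way to see which parameters the agent itself accepts — as opposed to the pcmk_host_* attributes that stonithd consumes — is to dump its metadata; a sketch:

```shell
# List the agent's own parameters (ipaddr, login, passwd, ...):
pcs stonith describe fence_ilo
# Or ask the agent directly for its metadata XML:
fence_ilo -o metadata
```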
On 01/04/2017 12:09 PM, Ulrich Windl wrote:
Muhammad Sharfuddin wrote on 03.01.2017 at 17:11 in
message:
Hello,
pacemaker does not start on this machine (Fujitsu PRIMERGY RX2540 M1),
with the following error in the logs:
sbd: [13236]: ERROR: Cannot open watchdog device: /dev/watchdog: No such file or directory
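That error usually just means no watchdog driver is loaded; a troubleshooting sketch (module names vary by hardware — softdog is the generic software fallback):

```shell
# Check whether the kernel exposes a watchdog device at all:
ls -l /dev/watchdog
# If it does not exist, load a driver; a vendor module may be
# appropriate for this hardware, but the software watchdog
# works as a last resort:
modprobe softdog
ls -l /dev/watchdog
```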
Hi,
I've built corosync for Solaris and am trying to build a largish
cluster. I started corosync with the default configuration on an
increasing number of nodes, one by one. At around 70 nodes the
cluster breaks down. Below is an excerpt from the logfile on the
first node.
When the cluster breaks down
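For memberships that large, the stock totem timeouts are often too tight; a hypothetical corosync.conf fragment (the values are illustrative, not recommendations):

```
# Illustrative totem tuning for a large ring; the defaults
# assume a handful of nodes.
totem {
    version: 2
    token: 10000        # ms without the token before declaring loss
    join: 1000          # ms to wait for join messages
    merge: 1000         # ms to wait before attempting a merge
}
```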