On 7 July 2015 at 11:26, "Andrei Borzenkov" wrote:
> On Tue, Jul 7, 2015 at 12:19 PM, Nicolas S. wrote:
>
>> Hello everybody,
>>
>> I'm posting on this mailing list for the first time to ask for advice.
>>
>> I'm actually trying to build a cluster on CentOS 7.
>>
>> The cluster has 3 nodes:
>>
>> - 1 v
On 7 July 2015 at 16:10, "Ken Gaillot" wrote:
> On 07/07/2015 04:19 AM, Nicolas S. wrote:
>
>> Hello everybody,
>>
>> I'm posting on this mailing list for the first time to ask for advice.
>>
>> I'm actually trying to build a cluster on CentOS 7.
>>
>> The cluster has 3 nodes:
>>
>> - 1 virtual machine
On 07/07/2015 03:58 AM, Arjun Pandey wrote:
> Hi Ken
>
> If I look at the logs on the other node around the same time, I see this. I
> can't figure out the reason based on these. Attaching the corosync.log for
> the other node as well.
I don't see anything there either. The relevant part could be e
On 07/07/2015 04:19 AM, Nicolas S. wrote:
> Hello everybody,
>
> I'm posting on this mailing list for the first time to ask for advice.
>
> I'm actually trying to build a cluster on CentOS 7.
>
> The cluster has 3 nodes:
>
> - 1 virtual machine (machine1). This machine is supposed to be high-available
>
Andrei Borzenkov wrote on 07.07.2015 at 10:03:26:
> From: Andrei Borzenkov
> To: Cluster Labs - All topics related to open-source clustering
> welcomed
> Date: 07.07.2015 10:04
> Subject: Re: [ClusterLabs] clear pending fence operation
>
> On Tue, Jul 7, 2015 at 10:41 AM, wrote:
> > hi,
> >
>>> Muhammad Sharfuddin wrote on 07.07.2015 at
>>> 11:15 in
Message <559b98a2.5060...@nds.com.pk>:
[...]
> I don't understand the advantage of an OCFS2 file system in such a setup.
The advantage depends on the alternatives: if two nodes both want to access the
same filesystem, you can use OCF
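For context, a shared-access filesystem like OCFS2 is typically run under Pacemaker as a cloned resource stack on top of DLM, so both nodes can mount the same device concurrently. A minimal crm-shell sketch follows; the resource names, device path, and mountpoint are illustrative assumptions, not from this thread:

```shell
# Hedged sketch (assumed names/paths): DLM control daemon plus an OCFS2
# mount, grouped and cloned so every node runs its own copy.
crm configure primitive p-dlm ocf:pacemaker:controld \
    op monitor interval=60 timeout=60
crm configure primitive p-fs-ocfs2 ocf:heartbeat:Filesystem \
    params device="/dev/disk/by-id/SHARED-LUN" directory="/srv/shared" \
           fstype=ocfs2 \
    op monitor interval=20
crm configure group g-ocfs2 p-dlm p-fs-ocfs2
crm configure clone cl-ocfs2 g-ocfs2 meta interleave=true
```

The clone with `interleave=true` is what distinguishes this from a plain (single-mount) filesystem resource: each node gets its own DLM/mount instance, which is the scenario where OCFS2 pays off over ext4/XFS on shared storage.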
On Tue, Jul 7, 2015 at 12:34 PM, Michael Schwartzkopff wrote:
>> > The cluster has 3 nodes :
>> >
>> > - 1 virtual machine (machine1). This machine is supposed to be
>> > high-available
>> > - 2 identical physical machines (machine2 and machine3)
>> >
>> It's not going to work. If the host where this VM is r
On Tuesday, 7 July 2015 at 12:25:56, Andrei Borzenkov wrote:
> On Tue, Jul 7, 2015 at 12:19 PM, Nicolas S. wrote:
> > Hello everybody,
> >
> > I'm posting on this mailing list for the first time to ask for advice.
> >
> > I'm actually trying to build a cluster on CentOS 7.
> >
> > The cluster has 3 node
On Tue, Jul 7, 2015 at 12:19 PM, Nicolas S. wrote:
> Hello everybody,
>
> I'm posting on this mailing list for the first time to ask for advice.
>
> I'm actually trying to build a cluster on CentOS 7.
>
> The cluster has 3 nodes:
>
> - 1 virtual machine (machine1). This machine is supposed to be
> high-ava
Hello everybody,
I'm posting on this mailing list for the first time to ask for advice.
I'm actually trying to build a cluster on CentOS 7.
The cluster has 3 nodes:
- 1 virtual machine (machine1). This machine is supposed to be high-available
- 2 identical physical machines (machine2 and machine3)
The physica
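On CentOS 7, the usual way to bootstrap a Pacemaker/Corosync cluster like the one described is with `pcs`. A hedged sketch, assuming the node names from the post; the cluster name and password are placeholders:

```shell
# Hedged sketch: typical CentOS 7 (pcs 0.9) cluster bootstrap, to be run
# on one node after packages are installed on all three. Names are the
# ones from the post; "mycluster" and the password are placeholders.
yum install -y pacemaker pcs fence-agents-all
systemctl enable --now pcsd                    # on every node
echo "CHANGE_ME" | passwd --stdin hacluster    # on every node
pcs cluster auth machine1 machine2 machine3    # authenticate the nodes
pcs cluster setup --name mycluster machine1 machine2 machine3
pcs cluster start --all
pcs status                                     # verify quorum and membership
```

Note that a VM counted as a cluster node only provides real availability if it does not run on one of the other cluster members, which is the concern raised in the replies above.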
On 07/07/2015 12:14 PM, Ulrich Windl wrote:
Muhammad Sharfuddin wrote on 06.07.2015 at 12:14 in
Message <559a550a.8010...@nds.com.pk>:
[...]
OK, so would reducing the sbd timeout (or msgwait) provide uninterrupted
access to the OCFS2 file system on the surviving/online node? Or would i
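The sbd timeouts being discussed can be inspected and rewritten on the shared device itself. A hedged sketch; `/dev/sdX` is a placeholder for the sbd disk, and re-creating the header destroys the existing slots, so this is only safe with the cluster stopped:

```shell
# Hedged sketch: inspect and re-initialize sbd timeouts on the shared
# device. /dev/sdX is a placeholder; "create" wipes existing sbd slots,
# so run it only while the cluster is down on all nodes.
sbd -d /dev/sdX dump                  # show current watchdog/msgwait values
sbd -d /dev/sdX -1 15 -4 30 create    # -1 watchdog-timeout, -4 msgwait
```

The usual rule of thumb is that msgwait is about twice the watchdog timeout, and Pacemaker's `stonith-timeout` must be larger than msgwait, so shrinking msgwait shortens the window during which the surviving node's I/O is blocked, at the cost of less margin for slow storage.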
On Tue, Jul 7, 2015 at 10:41 AM, wrote:
> hi,
>
> is there any way to clear/remove a pending stonith operation on a cluster node?
>
> after some internal testing I got the following status:
>
> Jul 4 12:18:02 XXX crmd[1673]: notice: te_fence_node: Executing reboot
> fencing operation (179) on XXX (tim
hi,
is there any way to clear/remove a pending stonith operation on a cluster
node?
after some internal testing I got the following status:
Jul 4 12:18:02 XXX crmd[1673]: notice: te_fence_node: Executing reboot
fencing operation (179) on XXX (timeout=6)
Jul 4 12:18:02 XXX stonith-ng[1668]: n
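For the question itself, the fencing history is managed with `stonith_admin`. A hedged sketch; `NODE` is a placeholder, and `--cleanup` only exists in newer Pacemaker releases (roughly 1.1.19+), so on older builds manually confirming the fence is the usual workaround:

```shell
# Hedged sketch: inspect and clear fencing history for a node.
# NODE is a placeholder for the affected cluster node's name.
stonith_admin --history NODE              # list fencing operations for NODE
stonith_admin --cleanup --history NODE    # clear entries (newer Pacemaker)
# If the node is genuinely down and the fence cannot complete, tell the
# cluster the fence succeeded so it stops retrying (use with care):
stonith_admin --confirm NODE
```

`--confirm` should only be used when you are certain the node is really off, since it tells the cluster the kill happened without any device actually verifying it.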