Digimer writes:
> So we have a tool that watches for changes to clvmd by running
> pvscan/vgscan/lvscan, but this seems to be expensive and occasionally
> causes trouble.
What kind of trouble did you experience?
> Is there any other way to be notified or to check when something
> changes?
LV (
One more thing to add.
Two almost identical clusters, with identical asterisk primitives, produce
different crm_verify output. On one cluster it returns no warnings, whereas
the other one complains:
On the problematic one:
crm_verify --live-check -VV
warning: get_failcount_full: Setting
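One way to narrow down where two supposedly identical clusters diverge (a hypothetical workflow, not something suggested in the thread; the file names are made up) is to dump each CIB, verify it offline, and then diff the two:

```shell
# Export the live CIB on each cluster to a file
cibadmin --query > cib-cluster-a.xml    # run on cluster A
cibadmin --query > cib-cluster-b.xml    # run on cluster B

# Verify each configuration offline with the same verbosity
crm_verify --xml-file cib-cluster-a.xml -VV
crm_verify --xml-file cib-cluster-b.xml -VV

# Show exactly where the two configurations differ
crm_diff --original cib-cluster-a.xml --new cib-cluster-b.xml
```

If crm_verify warns on one file but not the other, the crm_diff output should point at the responsible configuration difference.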
I did another experiment, even simpler.
Created one node with one resource, using Pacemaker 1.1.14 on Ubuntu.
Configured failcount to 1, migration threshold to 2, failure timeout to 1
minute.
crm_mon:
Last updated: Mon Jun 19 19:43:41 2017 Last change: Mon Jun 19
19:37:09 2017 by root vi
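A setup like the one described might look roughly like this in crmsh syntax (a sketch; the resource name and agent are illustrative, not from the thread):

```shell
# Hypothetical resource "dummy" with migration-threshold=2 and a
# one-minute failure-timeout, matching the experiment described above
crm configure primitive dummy ocf:pacemaker:Dummy \
    meta migration-threshold=2 failure-timeout=1min \
    op monitor interval=10s

# failure-timeout expiry is only acted on when the cluster re-evaluates
# its state, so a short cluster-recheck-interval makes it visible sooner
crm configure property cluster-recheck-interval=1min
```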
Hi Ken,
/sorry for the long text/
I have created a relatively simple setup to localize the issue.
Three nodes, no fencing, just a master/slave mysql with two virtual IPs.
Just as a reminder, my primary issue is that on cluster recheck intervals,
the failcounts are not cleared.
I simulated a fail
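For checking whether a failcount has actually expired, these commands (resource and node names hypothetical) are a common way to inspect and, if necessary, clear the counters by hand:

```shell
# One-shot cluster status including per-resource fail counts
crm_mon -1 -f

# Manually clear the failcount and failure history for a resource
# on one node, if failure-timeout expiry never happens on its own
crm_resource --cleanup --resource mysql --node node1
```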
On 06/19/2017 10:23 AM, Lentes, Bernd wrote:
> Hi,
>
> what would you consider to be the best way for removing a node temporarily
> from the cluster, e.g. for installing updates?
> I thought "crm node maintenance node" would be the right way, but I was
> astonished that the resources keep running
So we have a tool that watches for changes to clvmd by running
pvscan/vgscan/lvscan, but this seems to be expensive and occasionally
causes trouble. Is there any other way to be notified or to check when
something changes?
cheers
--
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, some
Hi,
what would you consider to be the best way for removing a node temporarily
from the cluster, e.g. for installing updates?
I thought "crm node maintenance node" would be the right way, but I was
astonished that the resources keep running on it. I would have expected that
the resources stop.
I
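The observed behaviour matches the documented semantics: maintenance mode stops managing resources but leaves them running, while standby is what actually moves them off the node. A rough update workflow (node name hypothetical) might be:

```shell
# Put the node into standby: Pacemaker migrates or stops its resources
crm node standby node1

# ... install updates on node1, reboot if needed ...

# Bring the node back; resources may return depending on stickiness
crm node online node1
```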
On 06/16/2017 09:08 PM, Ken Gaillot wrote:
> On 06/16/2017 01:18 PM, Dan Ragle wrote:
>>
>> On 6/12/2017 10:30 AM, Ken Gaillot wrote:
>>> On 06/12/2017 09:23 AM, Klaus Wenninger wrote:
On 06/12/2017 04:02 PM, Ken Gaillot wrote:
> On 06/10/2017 10:53 AM, Dan Ragle wrote:
>> So I guess m
On 06/16/2017 07:27 PM, Eric Robinson wrote:
> Rather go with the easiest option available.
And here we come to the question of support.
For enterprises, be it just servers or High Availability, Storage or
Cloud, it's about the package, as they say in Formula 1.
Having something as Open Sour