Best regards,
Kristián Feldsam
Tel.: +420 773 303 353, +421 944 137 535
E-mail: [email protected]

www.feldhost.cz - FeldHost™ – professional hosting and server services at fair prices.

FELDSAM s.r.o.
V rohu 434/3
Praha 4 – Libuš, PSČ 142 00
Company ID (IČ): 290 60 958, VAT ID (DIČ): CZ290 60 958
File C 200350, registered at the Municipal Court in Prague

Bank: Fio banka a.s.
Account number: 2400330446/2010
BIC: FIOBCZPPXX
IBAN: CZ82 2010 0000 0024 0033 0446

> On 1 Sep 2017, at 13:15, Klechomir <[email protected]> wrote:
> 
> What I observe is that a single monitoring request for different resources,
> with different resource agents, times out.
> 
> For example, the LVM resource (the LVM RA) does this sometimes.
> Setting ridiculously high timeouts (5 minutes and more) didn't solve the
> problem, so I think I'm out of options there.
> The same goes for other I/O-related resources/RAs.
> 
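
For reference, raising a monitor timeout as described above is done per
operation. A minimal sketch with pcs, using a hypothetical resource name
(my_lvm) and volume group (vg_data):

    # Hypothetical names; adjust to your configuration.
    pcs resource create my_lvm ocf:heartbeat:LVM volgrpname=vg_data \
        op monitor interval=30s timeout=300s
    # Or raise the timeout on an existing resource's monitor operation:
    pcs resource update my_lvm op monitor interval=30s timeout=300s

If the monitor process itself never returns, a larger timeout only delays
the failure report; it does not make the missing result appear.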

Hmm, so probably there is something bad in the clvm configuration? I use clvm in a
three-node cluster without issues. Which version of CentOS do you use? I have seen
clvm problems only on pre-7.3 versions, due to a bug in libqb.
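
For what it's worth, a quick way to check for the pre-7.3 situation mentioned
above is to look at the release and the installed package versions:

    cat /etc/centos-release                       # should show CentOS 7.3 or later
    rpm -q libqb corosync pacemaker lvm2-cluster  # installed versions (clvmd ships in lvm2-cluster)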

> Regards,
> Klecho
> 
> One of the typical cases is LVM (LVM RA) monitoring.
> 
> On 1.09.2017 11:07, Jehan-Guillaume de Rorthais wrote:
>> On Fri, 01 Sep 2017 09:07:16 +0200
>> "Ulrich Windl" <[email protected]> wrote:
>> 
>>>>>> Klechomir <[email protected]> wrote on 01.09.2017 at 08:48 in message
>>> <[email protected]>:
>>>> Hi Ulrich,
>>>> Have to disagree here.
>>>> 
>>>> I have cases, when for an unknown reason a single monitoring request
>>>> never returns result.
>>>> So having bigger timeouts doesn't resolve this problem.
>>> But if your monitor hangs instead of giving a result, you also cannot ignore
>>> the result that isn't there! OTOH: isn't the operation timeout there precisely
>>> for monitors that hang? If the monitor is killed, it returns an implicit
>>> status (it failed).
>> I agree. It seems to me the problem comes from either the resource agent or
>> the resource itself. For now, this issue bothers the cluster stack, but
>> sooner or later it will blow up something else. Track down where the issue
>> comes from, and fix it.
>> 
> 
> 
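
To illustrate the point about timed-out monitors: when an operation exceeds its
timeout, Pacemaker kills the agent and records a failed (timed-out) action,
which can be inspected and, once the root cause is fixed, cleared. A sketch,
again using the hypothetical my_lvm resource name:

    crm_mon --one-shot                  # "Failed Actions" will list the monitor timeout
    pcs resource failcount show my_lvm  # per-node failure count for the resource
    pcs resource cleanup my_lvm         # clear the failure after fixing the cause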

_______________________________________________
Users mailing list: [email protected]
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
