On Wed, Sep 24, 2008 at 09:11:43AM +0200, Andrew Beekhof wrote:
>
> On Sep 24, 2008, at 12:02 AM, Dejan Muhamedagic wrote:
>
>> On Fri, Sep 19, 2008 at 05:28:42PM +0200, Andrew Beekhof wrote:
>>> On Fri, Sep 19, 2008 at 16:44, Dejan Muhamedagic <[EMAIL PROTECTED]> 
>>> wrote:
>>>> Hi,
>>>>
>>>> On Fri, Sep 19, 2008 at 11:17:34AM +0200, Andrew Beekhof wrote:
>>>>> On Fri, Sep 19, 2008 at 09:00, Satomi TANIGUCHI
>>>>> <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>> [snip]
>>>>>
>>>>> If you _really_ want to have a per-plugin value, I suggest making it
>>>>> an extra resource parameter (ie. like hostlist) and teach stonithd to
>>>>> look for and use it _instead_of_ the CRM-supplied value.
>>>>>
>>>>> Dejan: Any objections to something like this?
>>>>
>>>> No.
>>>>
>>>>>> A cascaded stonith setup can be realized at the moment by
>>>>>> setting two or more plugins in a group.
>>>>
>>>> I don't think so. Currently, all stonith plugins (coming from
>>>> various stonith resources) are tried in turn until one succeeds,
>>>> but there is no ordering. The information about them being
>>>> grouped is lost, i.e. stonithd has no idea which stonith plugin
>>>> should be invoked first. Another option (attribute) would have to
>>>> be implemented, something like "priority".
>>>>
>>>>>> As far as I have confirmed, if the first plugin in a group fails,
>>>>>> the second one is executed.
>>>>>> And if the first one succeeds, the second one is _not_ executed.
>>>>
>>>> Right.
>>>>
>>>>>> If it is an unexpected behavior, please let me know the correct one.
>>>>>
>>>>> Cool.  I had no idea.  I, like the CRM, am blissfully ignorant of how
>>>>> STONITHd works :-)
>>>>
>>>> Good for you :)
>>>>
>>>> To summarize, we'd need the following new features:
>>>>
>>>> - stonith plugin priority (ordering of stonith resources) (stonithd)
>>>
>>> ack
>>
>> This is implemented in the development branch:
>>
>> http://hg.clusterlabs.org/pacemaker/dev/rev/a3a62aa43d64
>>
>> The priorities are set with the "priority" instance attribute.
>> The lower the number, the higher the priority.
>
> cool
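
For illustration only, a stonith resource with such a priority might be
configured roughly as below. The ids, plugin type and hostlist value are
invented, and the exact CIB syntax depends on the schema version in use;
only the "priority" nvpair is the point:

```xml
<primitive id="st-node1" class="stonith" type="external/ipmi">
  <instance_attributes id="st-node1-ia">
    <!-- lower number = higher priority, per the changeset above -->
    <nvpair id="st-node1-prio" name="priority" value="1"/>
    <nvpair id="st-node1-hostlist" name="hostlist" value="node1"/>
  </instance_attributes>
</primitive>
```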
>
>>
>>
>>>> - fencing operation timeouts per stonith resource (stonithd)
>>>
>>> ack
>>
>> http://hg.clusterlabs.org/pacemaker/dev/rev/0f17d8472570
>> http://hg.clusterlabs.org/pacemaker/dev/rev/785fb0d9d821
>>
>> The timeouts are taken from the "start" operation. Even though it
>> may not be obvious that this timeout is used for the fencing
>> operations as well, I think that it still makes more sense than
>> making an extra instance attribute. Any objections?
>
> None here
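
Again just an illustrative fragment with invented ids and values: the
timeout of the "start" operation, as below, would then also bound the
fencing operations for that resource:

```xml
<primitive id="st-node1" class="stonith" type="external/ipmi">
  <operations>
    <!-- this start timeout doubles as the fencing operation timeout -->
    <op id="st-node1-start" name="start" timeout="180s"/>
  </operations>
</primitive>
```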
>
>>
>>
>> Satomi-san: It would be great if you can test these changes with
>> your setup.
>>
>>>> - global fencing timeout (crm cluster property)
>>>
>>> Or, we totally ignore this and rely solely on the per-resource value 
>>> above.
>>> I'd vote for that for the sake of consistency (otherwise it's
>>> non-obvious which value takes precedence)
>>
>> That's how it works right now. The users should take care of
>> setting the cluster_delay properly, i.e. to the maximum possible
>> stonith timeout and then double that. Right?
>
> "Why"?
> If stonithd no longer needs a timeout value from us, then the cluster can 
> just forever for stonithd to return (knowing that it eventually will)

Oh, right, that timeout (based on cluster_delay) was provided
only as a hint to stonithd about how long the fencing operation
should take. Anyway, please keep sending it (the timeout), because
stonithd falls back on it in case there's no timeout specified
for the start operation.

Thanks,

Dejan

>>
>>
>> BTW, it would probably be a good idea to have a stonith-specific
>> property. cluster_delay is namewise (and descriptionwise) a bit
>> far-fetched.
>>
>>
>>>> The global fencing timeout would have to be set either to the
>>>> total of single timeouts or maximum timeout depending on the
>>>> nature of devices.
>>>>
>>>> Any objections or pitfalls?
>>>
>>> The timing... if people want this for 1.0, then someone needs to get a
>>> patch to me by the 24th (Wednesday) at the latest.
>>> There'll be no more features after that (bugfixes are of course ok,
>>> just no new features).
>>
>> On time.
>
> Impressive :-)
>
>>
>>
>> Thanks,
>>
>> Dejan
>> _______________________________________________________
>> Linux-HA-Dev: [email protected]
>> http://lists.linux-ha.org/mailman/listinfo/linux-ha-dev
>> Home Page: http://linux-ha.org/
>