On 2008-09-03T09:32:11, Edward Capriolo <[EMAIL PROTECTED]> wrote:

> I have found that I can do minor edits to the heartbeat DTD to create
> a single ELEMENT described with ANY to store proprietary data. That
> works well.

That is so totally not supported ...

You can store anything in key/value pairs as node attributes.
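For example, a per-node key/value pair ends up in the CIB roughly like this (a sketch only — the exact ids and nesting vary by Heartbeat version, and tools such as crm_attribute manage this for you rather than hand-editing):

```xml
<node id="f5e3..." uname="node2" type="normal">
  <instance_attributes id="nodes-node2">
    <attributes>
      <!-- hypothetical attribute holding your proprietary data -->
      <nvpair id="nodes-node2-disks" name="disks" value="/stuff"/>
    </attributes>
  </instance_attributes>
</node>
```

No DTD changes needed, and the data stays attached to the node it describes.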

> A command is submitted VIA XML RPC. 'node2:Create disk drive /stuff '.
> In the end, I would like to have the disk drive be created on node2. I
> would also like Linux-HA to be aware of this so I make proprietary
> entries in the cib.

Node attribute. The CIB is not meant to be a large, exhaustive database.

But why would the cluster care? Does this affect any constraints or
resources? I'd judge it doesn't, because constraints and resources
obviously can't refer to those private extensions.

> configuration, setting up shares, database replication, etc. I want to
> be able to scale from 2-8 nodes automatically. So that is why I am
> thinking to store all my data in the linux HA-CIB.

You can store it as node attributes. If the data is so large and
complex that node attributes don't fit, the CIB is likely the wrong
place for it, unless it is relevant to the PE.

> However I ran into what might be a problem. Assume node1 and node2
> both get a request to create a disk within a few milliseconds, both
> systems create the disk, then both systems attempt to change the CIB
> and only the second set of changes is stored.

No. The DC serializes the updates, and they are incremental. As long as
the changes don't affect the same object, what you describe won't occur.
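A toy model (not actual Heartbeat code) of why this is safe: if a single coordinator applies incremental updates one at a time, and each update names only its own object, two near-simultaneous requests cannot clobber each other:

```python
# Sketch: a "DC" thread serializes incremental updates to a shared store.
# Each update touches only its own key, so concurrent requests from
# node1 and node2 never overwrite each other's changes.
import queue
import threading

cib = {}                 # stands in for the CIB attribute store
updates = queue.Queue()  # the DC drains this queue one item at a time


def dc_worker():
    while True:
        item = updates.get()
        if item is None:          # shutdown sentinel
            break
        key, value = item
        cib[key] = value          # incremental: only the named object changes


t = threading.Thread(target=dc_worker)
t.start()

# node1 and node2 submit updates "within a few milliseconds" of each other
updates.put(("node1-disks", "/stuff-a"))
updates.put(("node2-disks", "/stuff-b"))

updates.put(None)
t.join()
print(cib)  # both updates survive; neither replaces the whole store
```

A lost update would only occur if both writers replaced the entire object wholesale, which is exactly what the incremental model avoids.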

> Questions: Is my stated goal a bad idea to begin with, considering a
> tool like cfengine? Is the concurrency already there? Is there a
> better way, possibly using resource agents to detect resources?

I don't think you'll be happy if you try to make the CIB an exhaustive
store of all information which could possibly describe the nodes.

> In a nutshell - in an automated cluster environment, how to keep the
> Linux HA in the loop?

The CIB only stores information which is relevant to the Policy Engine.
You shouldn't try to overload it too much. It likely won't scale.

What is the goal, and why? Don't describe the implementation, but the
use case.


Regards,
    Lars

-- 
Teamlead Kernel, SuSE Labs, Research and Development
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems