- "Stewart Walters" wrote:
| I've just had a GFS volume massively corrupt itself.
(snip)
| Is there any way for the user to say "Yes to all"?
|
| At least if the default choice was "Yes" when the Enter key was
| pressed, the user could hold down the Enter key until the entire
| list of blocks
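As an aside, the classic Unix workaround for "answer yes to everything" is piping the stock `yes` utility into the interactive tool. A minimal, self-contained sketch; `prompt_loop` is a stand-in for an fsck-style checker and is not part of GFS:

```shell
#!/bin/sh
# Illustration only: `yes` emits an endless stream of "y" lines, so
# piping it into an interactive tool answers every prompt affirmatively.
# prompt_loop simulates a checker that asks about three bad blocks.
prompt_loop() {
  for blk in 101 102 103; do
    printf 'Fix block %s? (y/n): ' "$blk"
    read -r ans
    echo "answer=$ans"
  done
}
yes | prompt_loop
```

`yes` exits on SIGPIPE once the consumer closes the pipe, so nothing is left running.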
Hi Marek,
This patch allows applications that rely on resource-agent
metadata to parse the output of fence_X -o metadata.
The tag may not be needed, but is useful for assisting
in schema generation. We can use if this works better for
you, but the name (e.g. fence_ilo) is needed for the
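For readers unfamiliar with the format: resource-agent metadata is XML printed to stdout by `fence_X -o metadata`. A rough sketch of its general shape; the attribute values here are illustrative, not taken from the patch:

```xml
<?xml version="1.0" ?>
<resource-agent name="fence_ilo" shortdesc="Fence agent for HP iLO">
  <parameters>
    <parameter name="ipaddr" unique="1">
      <getopt mixed="-a [ip]" />
      <content type="string" />
      <shortdesc lang="en">IP address of the fence device</shortdesc>
    </parameter>
  </parameters>
  <actions>
    <action name="metadata" />
  </actions>
</resource-agent>
```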
On Mon, Feb 23, 2009 at 02:24:13PM -0600, Ryan O'Hara wrote:
> On Mon, Feb 23, 2009 at 01:09:58PM -0600, David Teigland wrote:
> > On Mon, Feb 23, 2009 at 07:52:55PM +0100, Fabio M. Di Nitto wrote:
> > > What can stop a user from running fence_node -U from another node
> > > to do remote (un)fencing?
> >
On Mon, Feb 23, 2009 at 01:09:58PM -0600, David Teigland wrote:
> On Mon, Feb 23, 2009 at 07:52:55PM +0100, Fabio M. Di Nitto wrote:
> > What can stop a user from running fence_node -U from another node
> > to do remote (un)fencing?
>
> It would work. Users can do anything they like, that's beside the
On Mon, Feb 23, 2009 at 01:36:04PM -0600, Ryan O'Hara wrote:
> What happens if unfencing fails? Is it safe to say that a node that
> fails to unfence itself will be prohibited from joining the fence
> domain? This is important for fence_scsi, since unfencing is
> equivalent to re-registering with
On Fri, Feb 20, 2009 at 03:44:32PM -0600, David Teigland wrote:
> [Note: we've talked about fence_scsi getting a device list from
> /etc/cluster/fence_scsi.conf instead of from clvm. It would require
> more user configuration, but would create fewer problems and should
> be more robust.]
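The proposed file might look something like this; the thread does not define a format, so this layout is purely hypothetical:

```
# Hypothetical /etc/cluster/fence_scsi.conf -- the thread proposes a
# static device list as an alternative to querying clvm, but no
# format is specified; one device path per line is illustrative only.
/dev/sdb
/dev/sdc
```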
Agreed.
On Mon, Feb 23, 2009 at 01:22:26PM -0600, Ryan O'Hara wrote:
> On Mon, Feb 23, 2009 at 01:09:58PM -0600, David Teigland wrote:
> > On Mon, Feb 23, 2009 at 07:52:55PM +0100, Fabio M. Di Nitto wrote:
> > > > A node unfences *itself* when it boots up. As such, power-unfencing
> > > > doesn't make sense; unfencing is only meant to reverse storage
> > > > fencing.
On Mon, Feb 23, 2009 at 01:09:58PM -0600, David Teigland wrote:
> On Mon, Feb 23, 2009 at 07:52:55PM +0100, Fabio M. Di Nitto wrote:
> > > A node unfences *itself* when it boots up. As such, power-unfencing
> > > doesn't
> > > make sense; unfencing is only meant to reverse storage fencing.
> >
>
On Mon, Feb 23, 2009 at 07:52:55PM +0100, Fabio M. Di Nitto wrote:
> > A node unfences *itself* when it boots up. As such, power-unfencing doesn't
> > make sense; unfencing is only meant to reverse storage fencing.
>
> What can stop a user from running fence_node -U from another node to
> do remote (un)fencing?
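Configuration-wise, the discussion assumes per-device unfence entries in cluster.conf. A hypothetical fragment; the element name "unfencedevice" comes from the pseudocode quoted in this thread, but the exact schema was still being designed:

```xml
<!-- Hypothetical cluster.conf fragment; not a finalized schema. -->
<unfencedevices>
  <unfencedevice name="scsi" agent="fence_scsi"/>
</unfencedevices>
```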
On Mon, 2009-02-23 at 12:40 -0600, David Teigland wrote:
> On Mon, Feb 23, 2009 at 07:31:29PM +0100, Fabio M. Di Nitto wrote:
> > Given this last example, a reasonable unfence operation would be to try
> > to poweron via apc too.
> >
> > There is no guarantee that it was only method="1" fencing the node
> > and the node could be powered off.
On Mon, Feb 23, 2009 at 07:31:29PM +0100, Fabio M. Di Nitto wrote:
> Given this last example, a reasonable unfence operation would be to try
> to poweron via apc too.
>
> There is no guarantee that it was only method="1" fencing the node and
> the node could be powered off.
>
> if we succeed in e
On Mon, 2009-02-23 at 12:15 -0600, David Teigland wrote:
> On Mon, Feb 23, 2009 at 07:27:20AM +0100, Fabio M. Di Nitto wrote:
> > > libfence:fence_node_undo(node_name) logic:
> > > for each device_name under given node_name,
> > > if an unfencedevice exists with name=device_name, then
> > > run the unfencedevice agent with first arg of "undo"
On Mon, Feb 23, 2009 at 07:27:20AM +0100, Fabio M. Di Nitto wrote:
> > libfence:fence_node_undo(node_name) logic:
> > for each device_name under given node_name,
> > if an unfencedevice exists with name=device_name, then
> > run the unfencedevice agent with first arg of "undo"
> > a
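The quoted fence_node_undo() logic can be sketched as a small shell function. This is illustrative only, not the real libfence implementation; the agent invocation is replaced by an echo so the sketch is self-contained:

```shell
#!/bin/sh
# Sketch of the quoted fence_node_undo() logic. $1 is the node name;
# the remaining arguments are the device names configured under that
# node. UNFENCE_DEVICES lists device names for which an
# <unfencedevice> entry exists (both names are illustrative).
fence_node_undo() {
  node_name=$1; shift
  for device_name in "$@"; do
    case " $UNFENCE_DEVICES " in
      *" $device_name "*)
        # run the unfencedevice agent with first arg of "undo";
        # echo stands in for exec'ing the real agent binary.
        echo "agent for $device_name: undo"
        ;;
    esac
  done
}
UNFENCE_DEVICES="scsi"
fence_node_undo node1 apc scsi
```

Only devices with a matching unfence entry are acted on, so power devices like the apc entry above are skipped, consistent with "power-unfencing doesn't make sense" earlier in the thread.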
The cluster team and its community are proud to announce the
3.0.0.alpha5 release from the STABLE3 branch.
The development cycle for 3.0.0 is about to end. With the new STABLE3
branch, which will collect only bug fixes and the minimal updates
required to build on top of the latest upstream kernel, we are