On 1 November 2010 15:01, Rick Cone rc...@securepaymentsystems.com wrote:
Dejan,
Below I had:
primitive res_stonith stonith:apcmastersnmp \
params ipaddr=192.1.1.109 port=161 community=sps \
op start interval=0 timeout=60s \
op monitor interval=60s timeout=60s \
On Fri, Oct 29, 2010 at 04:52:43PM +0200, Eberhard Kuemmerle wrote:
On 29 Oct 2010 14:43, Dejan Muhamedagic wrote:
stonith -t rcd_serial -p test /dev/ttyS0 rts 2000 test
** (process:21181): DEBUG: rcd_serial_set_config:called
Alarm clock
== RESET WORKS!
stonith -t rcd_serial
Hi,
On Tue, Nov 02, 2010 at 08:08:32AM +0100, Pavlos Parissis wrote:
On 1 November 2010 15:01, Rick Cone rc...@securepaymentsystems.com wrote:
Dejan,
Below I had:
primitive res_stonith stonith:apcmastersnmp \
params ipaddr=192.1.1.109 port=161 community=sps \
op
Hi,
On Fri, Oct 29, 2010 at 08:37:04AM +0200, Pavlos Parissis wrote:
Hi,
I wanted to check what happens when the monitor of a fencing agent
fails, so I disconnected the PDU from the network, reduced the monitor
interval, and put debug statements in the fencing script.
here is the debug
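For anyone reproducing this, a shortened monitor interval on the stonith
resource looks roughly like the snippet below; the interval and timeout
values are assumptions, not the ones actually used in the test:
primitive res_stonith stonith:apcmastersnmp \
    params ipaddr=192.1.1.109 port=161 community=sps \
    op monitor interval=10s timeout=20s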
On 2 November 2010 11:04, Dejan Muhamedagic deja...@fastmail.fm wrote:
Hi,
On Tue, Nov 02, 2010 at 08:08:32AM +0100, Pavlos Parissis wrote:
On 1 November 2010 15:01, Rick Cone rc...@securepaymentsystems.com
wrote:
Dejan,
Below I had:
primitive res_stonith
On 2 November 2010 11:22, Dejan Muhamedagic deja...@fastmail.fm wrote:
Hi,
On Fri, Oct 29, 2010 at 08:37:04AM +0200, Pavlos Parissis wrote:
Hi,
I wanted to check what happens when the monitor of a fencing agent
fails, so I disconnected the PDU from the network, reduced the monitor
Hi,
On Tue, Nov 02, 2010 at 12:10:37PM +0100, Pavlos Parissis wrote:
On 2 November 2010 11:04, Dejan Muhamedagic deja...@fastmail.fm wrote:
Hi,
On Tue, Nov 02, 2010 at 08:08:32AM +0100, Pavlos Parissis wrote:
On 1 November 2010 15:01, Rick Cone rc...@securepaymentsystems.com
wrote:
Hi,
On Tue, Nov 02, 2010 at 12:13:43PM +0100, Pavlos Parissis wrote:
On 2 November 2010 11:22, Dejan Muhamedagic deja...@fastmail.fm wrote:
Hi,
On Fri, Oct 29, 2010 at 08:37:04AM +0200, Pavlos Parissis wrote:
Hi,
I wanted to check what happens when the monitor of a fencing
Hi,
On Fri, Oct 29, 2010 at 01:51:18PM -0700, Alan Jones wrote:
I'm trying to configure a simple resource that depends on a local clone.
The configuration is below.
For those familiar with the Veritas Cluster Server, I'm trying to get
something like permanent resources.
Unfortunately, the
On 2 November 2010 12:58, Dejan Muhamedagic deja...@fastmail.fm wrote:
[...snip...]
Do you know under which conditions pacemaker initiates multiple
connections
to a fencing device?
There are no specific conditions. It can happen by chance because
individual clone instances run
On 2 November 2010 13:02, Dejan Muhamedagic deja...@fastmail.fm wrote:
[...snip...]
Definitely not. If you do the monitor action from the command
line, does that also return the unexpected exit code?
From the code I pasted you can see it returned 1.
There is a difference.
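One way to run a comparable check by hand is a status query via the
stonith(8) CLI; a sketch, reusing the device parameters from the
configuration quoted earlier:
stonith -t apcmastersnmp ipaddr=192.1.1.109 port=161 community=sps -S
echo $?   # a non-zero status here would match the failing monitor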
Hi,
On Sat, Oct 30, 2010 at 04:31:38PM +0200, Pavlos Parissis wrote:
On 30 October 2010 16:03, Pavlos Parissis pavlos.paris...@gmail.com wrote:
Hi,
Does anyone know if the fencing agent ippower9258 works with the IP
Power 9258HP PDU?
The readme file of the fencing agent mentions the
Hi,
On Tue, Nov 02, 2010 at 01:09:02PM +0100, Pavlos Parissis wrote:
On 2 November 2010 13:02, Dejan Muhamedagic deja...@fastmail.fm wrote:
[...snip...]
Definitely not. If you do the monitor action from the command
line, does that also return the unexpected exit code?
From
Hi,
On Sat, Oct 30, 2010 at 01:55:33PM -0400, Lars Kellogg-Stedman wrote:
I have a two node cluster that hosts two virtual ips on the same network:
primitive proxy_0_ip ocf:heartbeat:IPaddr \
params ip=10.10.10.20 cidr_netmask=255.255.255.0 nic=eth3
primitive proxy_1_ip
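The preview cuts off the second primitive; presumably it mirrors the
first one with a different address, along these lines (the .21 address
is a guess, not from the original post):
# 10.10.10.21 below is an assumed address for illustration
primitive proxy_1_ip ocf:heartbeat:IPaddr \
    params ip=10.10.10.21 cidr_netmask=255.255.255.0 nic=eth3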
On 2 November 2010 13:18, Dejan Muhamedagic deja...@fastmail.fm wrote:
Hi,
On Tue, Nov 02, 2010 at 01:09:02PM +0100, Pavlos Parissis wrote:
On 2 November 2010 13:02, Dejan Muhamedagic deja...@fastmail.fm wrote:
[...snip...]
Definitely not. If you do the monitor action from the
You should set the interleave=true meta attribute for the clones.
Hope that would help. You need colocation constraints as well.
Dejan,
Thanks for the suggestion. Can you elaborate a little? The documentation
for the interleave option says:
Changes the behavior of ordering constraints (between
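For what it's worth, the suggested shape in crm syntax would be roughly
the following sketch; cl_service, rsc_service, and top_rsc are
placeholder names invented for illustration:
# interleaved clone plus colocation and ordering of a dependent resource
clone cl_service rsc_service \
    meta interleave=true
colocation col_top_on_clone inf: top_rsc cl_service
order ord_clone_first inf: cl_service top_rsc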
On Tue, Nov 02, 2010 at 01:28:09PM +0100, Pavlos Parissis wrote:
On 2 November 2010 13:18, Dejan Muhamedagic deja...@fastmail.fm wrote:
Hi,
On Tue, Nov 02, 2010 at 01:09:02PM +0100, Pavlos Parissis wrote:
On 2 November 2010 13:02, Dejan Muhamedagic deja...@fastmail.fm wrote:
Hi,
On Tue, Nov 02, 2010 at 09:57:12AM -0400, Lars Kellogg-Stedman wrote:
You should set the interleave=true meta attribute for the clones.
Hope that would help. You need colocation constraints as well.
Dejan,
Thanks for the suggestion. Can you elaborate a little? The documentation
for
Hi,
I am trying to figure out how I can resolve the following scenario
Facts
3 nodes
2 DRBD ms resource
2 group resource
by default drbd1/group1 runs on node-01 and drbd2/group2 runs on node-02
drbd1/group1 can only run on node-01 and node-03
drbd2/group2 can only run on node-02 and node-03
DRBD
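In crm syntax those placement rules might look like the sketch below;
the resource names ms_drbd1 and ms_drbd2 are assumptions:
# drbd1 may not run on node-02 and prefers node-01; drbd2 mirrors that
location loc-drbd1-ban-n2 ms_drbd1 -inf: node-02
location loc-drbd1-pref-n1 ms_drbd1 100: node-01
location loc-drbd2-ban-n1 ms_drbd2 -inf: node-01
location loc-drbd2-pref-n2 ms_drbd2 100: node-02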
On 2 Nov 2010 10:59, Dejan Muhamedagic wrote:
Then I ran 'kill -9 corosync_pid' on node2, and stonith on node1
really initiated a REBOOT of node2!
BUT in /var/log/messages of node1, stonith-ng thinks that the operation
failed:
Oct 29 16:06:55 node1 stonith-ng: [31449]: WARN:
Dan Frincu wrote:
Hi,
Pavlos Parissis wrote:
Hi,
I am trying to figure out how I can resolve the following scenario
Facts
3 nodes
2 DRBD ms resource
2 group resource
by default drbd1/group1 runs on node-01 and drbd2/group2 runs on node-02
drbd1/group1 can only run on node-01 and node-03
On 2 Nov 2010 16:18, Eberhard Kuemmerle wrote:
Hi,
here is what you requested:
TEST 1:
stonith -t rcd_serial -p test /dev/ttyS0 rts 2000 test
** (process:2928): DEBUG: rcd_serial_set_config:called
Alarm clock
# echo $?
142
TEST 2:
stonith -t rcd_serial hostlist=node2
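The exit status in TEST 1 decodes as death by signal: 142 = 128 + 14,
and signal 14 is SIGALRM, which matches the "Alarm clock" message. That
would also explain stonith-ng logging a failure even though the node
really resets. A quick check in any shell:
kill -l 14            # prints ALRM
echo $((142 - 128))   # prints 14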
Hi everybody,
I've fixed my problem by inserting the following code into the
application consistency-checking script.
Please let me know if you know of a better solution.
PS. Thanks for the replies
Vladimir
...
check_marker(){
# Hardcoded to avoid extra forks.
MARKER_COUNT=8
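The listing is cut off above; a sketch of how such a function might
continue, with an assumed data file path and marker pattern (Vladimir's
actual values are not in the post):
check_marker() {
    # Hardcoded to avoid extra forks.
    MARKER_COUNT=8
    # Path and pattern below are assumptions for illustration.
    found=$(grep -c '^MARKER' /var/lib/app/data.file)
    [ "$found" -eq "$MARKER_COUNT" ]
}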
On Tue, Nov 02, 2010 at 04:26:40PM +0100, Eberhard Kuemmerle wrote:
On 2 Nov 2010 16:18, Eberhard Kuemmerle wrote:
Hi,
here is what you requested:
TEST 1:
stonith -t rcd_serial -p test /dev/ttyS0 rts 2000 test
** (process:2928): DEBUG: rcd_serial_set_config:called
Right now, I don't have a solution for this problem using
clones if you're running pacemaker v1.0.x.
I went ahead and created two sets of Route resources. Each one has to
go into a separate routing table to prevent conflicts, which
complicates things a bit, but it seems to work. If the
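In crm syntax, two Route resources pinned to separate kernel routing
tables might look like this sketch (destinations, gateway, and table
numbers are assumptions):
primitive route_0 ocf:heartbeat:Route \
    params destination=0.0.0.0/0 gateway=10.10.10.1 table=100
primitive route_1 ocf:heartbeat:Route \
    params destination=0.0.0.0/0 gateway=10.10.10.1 table=101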
On 2 November 2010 16:15, Dan Frincu dfri...@streamwide.ro wrote:
Hi,
Pavlos Parissis wrote:
Hi,
I am trying to figure out how I can resolve the following scenario
Facts
3 nodes
2 DRBD ms resource
2 group resource
by default drbd1/group1 runs on node-01 and drbd2/group2 runs on node-02
On 2 November 2010 22:07, Pavlos Parissis pavlos.paris...@gmail.com wrote:
On 2 November 2010 16:15, Dan Frincu dfri...@streamwide.ro wrote:
Hi,
Pavlos Parissis wrote:
Hi,
I am trying to figure out how I can resolve the following scenario
Facts
3 nodes
2 DRBD ms resource
2 group
On Tue, Nov 02, 2010 at 10:07:17PM +0100, Pavlos Parissis wrote:
On 2 November 2010 16:15, Dan Frincu dfri...@streamwide.ro wrote:
Hi,
Pavlos Parissis wrote:
Hi,
I am trying to figure out how I can resolve the following scenario
Facts
3 nodes
2 DRBD ms resource
2 group
Dejan,
I tested the AP7900 with the stonith command turning 2 outlets with the same
name on and off (about 20 times), and I can't get it to fail. I'm not sure
what to think, I guess. Perhaps I'll just use 1 outlet with the node name,
with a power splitter to the 2 redundant power supplies to
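For reference, an on/off exercise loop along these lines (device
parameters as in the earlier configuration, node/outlet name assumed)
can repeat that test:
for i in $(seq 1 20); do
    stonith -t apcmastersnmp ipaddr=192.1.1.109 port=161 community=sps -T off node1
    stonith -t apcmastersnmp ipaddr=192.1.1.109 port=161 community=sps -T on node1
done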