Hey guys, hopefully someone can help me here.
If I start both hosts up at the same time, they join just fine, but if I
reboot one, I have to disable iptables on both hosts
Here are the iptables rules I have in place after some troubleshooting:
# Firewall configuration written by system-conf
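The rest of the rules were cut off in the archive. For what it's worth, a cman/corosync cluster on RHEL 6 typically needs roughly the following ports open (a sketch based on the documented default ports; verify against your versions, and note corosync also needs multicast traffic allowed between the nodes):

```shell
# Sketch: open the default RHEL 6 cluster ports (verify for your setup).
iptables -I INPUT -m state --state NEW -p udp -m multiport --dports 5404,5405 -j ACCEPT  # corosync/cman
iptables -I INPUT -m state --state NEW -p tcp --dport 21064 -j ACCEPT   # dlm
iptables -I INPUT -m state --state NEW -p tcp --dport 11111 -j ACCEPT   # ricci
iptables -I INPUT -m state --state NEW -p tcp --dport 16851 -j ACCEPT   # modclusterd
service iptables save
```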
On 04/12/2012 12:04 PM, emmanuel segura wrote:
If you don't use the qdisk, who is the master in a split-brain?
Remember, DLM/rgmanager recovery is performed -after- fencing. In a two-node
cluster:
with no qdisk (two_node="1"):
- by default, both nodes go to fence at the same time :(
+ fencing
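One way people break that simultaneous-fencing race (it comes up in this thread as the "delay workaround") is a delay attribute on one node's fence device. A rough sketch, with made-up hostnames, addresses, and fence_ipmilan as the agent:

```xml
<cluster name="example" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1">
      <fence><method name="1"><device name="ipmi1"/></method></fence>
    </clusternode>
    <clusternode name="node2" nodeid="2">
      <fence><method name="1"><device name="ipmi2"/></method></fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- delay="15" on node1's fence device: fencing node1 waits 15s,
         so node1 wins if both nodes try to fence at the same time -->
    <fencedevice name="ipmi1" agent="fence_ipmilan" ipaddr="10.0.0.1"
                 login="admin" passwd="secret" delay="15"/>
    <fencedevice name="ipmi2" agent="fence_ipmilan" ipaddr="10.0.0.2"
                 login="admin" passwd="secret"/>
  </fencedevices>
</cluster>
```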
On 04/12/2012 12:04 PM, emmanuel segura wrote:
> If you don't use the qdisk, who is the master in a split-brain?
>
> Who takes the resources?
>
> What are your concerns about fence_scsi?
>
> fence_scsi in Red Hat 5.X doesn't reboot the node, so the fenced node never
> releases the resources
>
> Why Red Hat
On 04/12/2012 11:51 AM, Ryan O'Hara wrote:
> On 04/12/2012 10:18 AM, emmanuel segura wrote:
>> That's right
>>
>> you'll find your cluster partitioned, and if you use two_node="1"
>> expected_votes="1" as the Red Hat default setting, your cluster may get data
>> corruption
>
> How? What fence agent are you using? I've used this configuration for
> years and never had data corruption.
If you don't use the qdisk, who is the master in a split-brain?
Who takes the resources?
What are your concerns about fence_scsi?
fence_scsi in Red Hat 5.X doesn't reboot the node, so the fenced node never
releases the resources
Why did Red Hat make the qdisk the tie-breaker, while some people from support say
it
On 04/12/2012 11:18 AM, emmanuel segura wrote:
That's right
you'll find your cluster partitioned, and if you use two_node="1"
expected_votes="1" as the Red Hat default setting, your cluster may get data
corruption
GFS2 and rgmanager depend on fencing completion prior to service
recovery or GFS2's journal recovery.
Most two-node clusters
Hello Aderson
Talking about the delay parameter, I think the people working in Red Hat
support don't read the man pages
Because I hate the workaround of using delay in the fencing section
I found some time ago:
master_wins="0"
I
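For context, master_wins is a qdiskd option: when enabled, only the current qdisk master holds the qdisk vote, so the master's side survives a two-node partition without needing ping heuristics. A minimal sketch (the label here is made up):

```xml
<!-- sketch: let the qdiskd master win a two-node partition without heuristics -->
<quorumd label="myqdisk" master_wins="1"/>
```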
On 04/12/2012 10:18 AM, emmanuel segura wrote:
That's right
you'll found your cluster partitioned and if you "" as redhat setting our cluster maybe you get data
corruption
How? What fence agent are you using? I've used this configuration for
years and never had data corruption.
That's right
you'll find your cluster partitioned, and if you use two_node="1"
expected_votes="1" as the Red Hat default setting, your cluster may get data
corruption
Because every node can operate with one vote and reach the quorum state
To solve the fencing problem, Red Hat implemented a workaround as a permanent
solution:
a fence delay for some
I don't normally chime in, but running RHEL on a two-node cluster we have found
that the only way to ensure fencing works properly is with a 2 GB qdisk. The
option you were told about works correctly less than 50% of the time, as both nodes
attempt to fence the other. We set up a delay on the passive node s
A qdisk is just another way to maintain quorum in a cluster. There is a special
two-node cluster mode designed to allow quorum to be maintained by the
surviving node.
But it's not very clear to me what happens with fencing if both nodes get
partitioned on the network. Do they both try to fence each other?
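With a classic qdisk setup, the heuristic is what answers that question: only the partition whose heuristic still passes keeps the qdisk vote, so only one side stays quorate and wins the fence race. A sketch of such a section in cluster.conf (the ping target, intervals, and label are illustrative only):

```xml
<quorumd interval="1" tko="10" votes="1" label="myqdisk">
  <!-- a node keeps its claim to the qdisk vote only while it can ping the gateway -->
  <heuristic program="ping -c1 -w1 10.0.0.254" score="1" interval="2" tko="3"/>
</quorumd>
```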
Hello all on the list
I have a big question about qdisk in a two-node cluster
One of our clients has many clusters in a two-node configuration, and a
Red Hat technician came to us and said that there is no need to use the qdisk
I would like to know if this is true
I really think it's a bad idea
On 12/04/12 14:04, AK wrote:
> Ah, the evils of mass invite
And the evils of LinkedIn in particular.
The only way to stop getting invites is to set up a LinkedIn account
yourself, and from that point you _cannot_ opt out of receiving mail from
them from time to time.
I regard them as spammers.
Ah, the evils of mass invite
On 4/12/12 7:12 AM, anuj chauhan via LinkedIn wrote:
> LinkedIn
> anuj chauhan requested to add you as a connection on LinkedIn:
>
>
> --
>
> Krishna,
>
> I'd like to add you to my professional networ