If I plan to add more nodes later, but have only 2 right now,
is it better to make a 2-node cluster or a degraded 3-node cluster?
I recently heard that you cannot add more nodes to a 2-node cluster without a
cluster-wide reboot.
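For reference, cman's special two-node mode is enabled in cluster.conf roughly like this (a minimal sketch; the cluster and node names are made up):

```xml
<cluster name="example" config_version="1">
  <!-- two_node="1" lets a single surviving node keep quorum;
       it requires expected_votes="1" -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node01" nodeid="1" votes="1"/>
    <clusternode name="node02" nodeid="2" votes="1"/>
  </clusternodes>
</cluster>
```

Growing beyond two nodes means turning two_node off again, and at least on older cman versions that change was not accepted on a running cluster, which is presumably the cluster-wide reboot referred to above.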
--
Linux-cluster mailing list
Linux-cluster@redhat.com
I'm trying to configure a high availability cluster for Squid. There will
be no shared storage device. The problem relates to the time required for
starting and stopping the fencing daemon. Is it possible just to disable
this?
I've tried clean_start=0 and post_join_delay=-1, but without success.

and expected votes 2. A single IP ping
test as a tiebreaker is probably not the best way to use qdisk anyway,
and I would expect the two-node mode to work almost equally well.
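For what it's worth, those fenced parameters live in cluster.conf (a sketch; the values shown are the ones tried above):

```xml
<!-- clean_start="1" skips startup fencing of nodes fenced has not seen;
     post_join_delay is how long fenced waits after a node joins before
     fencing failed nodes (-1 is commonly read as "wait indefinitely") -->
<fence_daemon clean_start="0" post_join_delay="-1"/>
```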
Regards
--
Denis Braekhus
Team Lead Managed Services
Redpill Linpro AS - Changing the game
[1]
http://www.redhat.com/docs
on what may be
causing this?
This sounds similar to what I am experiencing. Whenever I do rolling
updates of the cluster, it loses quorum when one of the nodes is
taken down and the other one should take over as master.
Regards
--
Denis Braekhus
Team Leader Managed Services
Redpill Linpro
Hi,
Upgraded cluster nodes from RHEL 5.2 to 5.3 today, and the GFS(1) mount I
had was not mounted as usual.
It turns out that GFS no longer accepts noatime or noquota? After removing
these mount options I could again mount my GFS volume. Which one is
now deprecated, and why?
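For context, the mount that stopped working looked roughly like this /etc/fstab line (the device and mount point are made up):

```
/dev/clustervg/gfslv  /mnt/gfs  gfs  noatime,noquota  0 0
```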
Regards
--
Denis
Bob Peterson wrote:
| It turns out that GFS no longer accepts noatime or noquota? Removing
| these mountoptions and I could again mount my GFS volume. Which one is
| now deprecated and why?
Hi Denis,
This sounds like a bug. Can you open a bugzilla record for it?
AFAIK, it was not our
evil.
I am still scratching my head over why quorum was dissolved when booting
node B.
Regards
--
Denis Braekhus
Team Lead Managed Services
Redpill Linpro AS - Changing the game
Hi,
After getting to know the Configuring and Managing a Red Hat Cluster
documentation [1] fairly well, I have a few enhancement suggestions.
What is the best way to submit these?
[1]
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/
Regards
--
Denis
Services and
Resource Ordering [1] for usage.
Hope this is of some help to you.
[1]
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/s1-clust-rsc-testing-config-CA.html
Regards
--
Denis Braekhus
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https
        ="node03"/>
        <nfsclient ref="nfs"/>
      </nfsexport>
    </fs>
  </service>
</rm>
[1] http://sources.redhat.com/cluster/doc/nfscookbook.pdf
Best Regards
--
Denis Braekhus
connectivity through
the router (probably need to change that!?).
What kind of tests can I use for qdiskd that will prevent router-outages
from killing my cluster completely?
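One approach (an assumption on my part, not something tested here) is to give qdiskd several heuristics so a single router outage cannot zero the score; the addresses below are placeholders:

```xml
<quorumd interval="1" tko="10" votes="1" label="qdisk">
  <!-- ping the default gateway -->
  <heuristic program="ping -c1 -w1 192.168.1.1" score="1" interval="2"/>
  <!-- ping a host on the local switch, reachable without the router -->
  <heuristic program="ping -c1 -w1 192.168.1.10" score="2" interval="2"/>
</quorumd>
```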
Regards
--
Denis
David Teigland wrote:
Hi,
We're looking into how cluster.conf updates should be done in future
versions and we'd like some feedback about how you currently do this, and
what you'd like to see.
1. How often do you update cluster.conf? (Never would be valuable
feedback.)
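A typical RHEL 5 workflow (a sketch; the propagation command needs a live cluster, so it is commented out, and the file path here is a scratch stand-in) is to bump config_version and then push the file:

```shell
# Work on a scratch copy of cluster.conf (path is a stand-in).
conf=$(mktemp)
cat > "$conf" <<'EOF'
<?xml version="1.0"?>
<cluster name="example" config_version="42">
</cluster>
EOF

# Read the current config_version and increment it in place.
ver=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' "$conf")
sed -i "s/config_version=\"$ver\"/config_version=\"$((ver + 1))\"/" "$conf"
grep -o 'config_version="[0-9]*"' "$conf"   # prints config_version="43"

# On a real RHEL 5 cluster, propagate the new version with:
# ccs_tool update /etc/cluster/cluster.conf
```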
After stabilizing, it would be very interesting to cut the failover time
for my scenario too.
Is lowering the sleep time or removing it safe for clusters without NFS?
Regards
--
Denis
denis wrote:
Lorenz Pfiffner wrote:
I want to add something I recently found out about the relocation time
of IP resources, which I complained about.
The reason why it takes 10 seconds per IP can be found here:
http://sources.redhat.com/cluster/faq.html#rgm_failovertime
Can anyone add to this from their own experience?
Regards
--
Denis
Christine Caulfield wrote:
denis wrote:
What does Flags: Dirty mean? Is it anything to worry about?
http://www.redhat.com/archives/cluster-devel/2007-September/msg00091.html
NODE_FLAGS_DIRTY - This node has internal state and must not join
a cluster that also has state.
What
removed the package, and the update transaction test then
passes without errors.
The kmod-gfs2 package should probably have been removed in the
transaction too?
Is this something I should report via bugzilla?
Regards
--
Denis
mounted during updates or unmount them?
Regards
--
Denis
state: Cluster-Member
Nodes: 2
Expected votes: 3
Total votes: 3
Quorum: 2
Active subsystems: 10
Flags: Dirty
Ports Bound: 0 11 177
Node name: nodename.customername.com
Node ID: 1
What does Flags: Dirty mean? Is it anything to worry about?
Google was unhelpful.
Regards
--
Denis
Mikko Partio wrote:
On Fri, May 23, 2008 at 11:18 AM, denis denisb wrote:
Scratch that, these appear to be installed too:
kmod-gfs x86_64 0.1.23-5.el5 rhel-x86_64-server-cluster-storage-5
kmod-gfs2 x86_64 1.92-1.1.el5 rhel-x86_64-server-cluster-storage-5
Is this RHEL
cluster.conf has proper fencing setup and that the
fencing devices actually work. If the node isn't locked out of the
cluster when you issue fence_node, then there is something wrong.
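As a concrete example of what a proper fencing setup looks like in cluster.conf (the names, address and credentials below are made up):

```xml
<clusternodes>
  <clusternode name="node01" nodeid="1" votes="1">
    <fence>
      <method name="1">
        <!-- device name must match a fencedevice entry below -->
        <device name="blade-mm" blade="1"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice agent="fence_bladecenter" name="blade-mm"
               ipaddr="10.0.0.5" login="USERID" passwd="PASSW0RD"/>
</fencedevices>
```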
--
Denis
ping works fine for me out of the box.
Regards
--
Denis
between my cluster nodes (I have a SAN available)?
Regards
--
Denis
of concurrent writes /
read access / high number of files)?
Regards
--
Denis
Maciej Bogucki wrote:
denis wrote:
Maciej Bogucki wrote:
Are you certain you want to continue? [yN] y
can't open /tmp/fence_manual.fifo: No such file or directory
Do you run fence_ack_manual on the node which is master in the cluster [1]?
[1] - http://www.mail-archive.com/linux-cluster
the fence_bladecenter script could be patched to be more robust,
but how would I go about using a patched script without getting it
overwritten by updates?
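One way around that (my assumption, not something from this thread) is to install the patched copy under a different name, so package updates never touch it, and point the fence device at the new agent:

```xml
<!-- patched copy installed by hand as /sbin/fence_bladecenter_local -->
<fencedevice agent="fence_bladecenter_local" name="blade-mm"
             ipaddr="10.0.0.5" login="USERID" passwd="PASSW0RD"/>
```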
Regards
--
Denis
pointers!
I will bother you with my actual issues in another mail I guess.
Regards
--
Denis B
[1]
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/Cluster_Suite_Overview/index.html
[2]
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510
Great stuff, just what I needed.
Thanks
--
Denis
Dear developers,
is it safe to do /sbin/service clvmd restart in a working cluster with some
GFS logical volumes mounted?
What happens if I temporarily change locking in lvm.conf to 0 to fix
logical volumes incorrectly marked as clustered?
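For reference, the relevant lvm.conf knob looks like this (a sketch; whether flipping it temporarily is safe is exactly the question above):

```
# /etc/lvm/lvm.conf
global {
    # 3 = clustered locking via clvmd (normal for this setup)
    # 1 = local file-based locking
    # 0 = no locking -- allows editing metadata of clustered VGs,
    #     e.g. clearing a wrongly set clustered flag with vgchange -cn
    locking_type = 3
}
```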
Denis Medvedev