On 04/23/2015 12:11 PM, Thorvald Hallvardsson wrote:
> Hi guys,
>
> I need some help and answers related to share GFS2 file system over NFS.
> I have read the RH documentation but still some things are a bit unclear
> to me.
>
> First of all I need to build a POC for the shared storage cluster wh
On 3/5/2015 12:47 PM, Marek "marx" Grac wrote:
> Welcome to the fence-agents 4.0.16 release
>
> This release includes several bugfixes and features:
> * fence_kdump has implemented a 'monitor' action that checks if the local
> node is capable of working with kdump
> * path to snmp(walk|get|set) can be
On 11/28/2014 8:10 PM, Jan Pokorný wrote:
> On 28/11/14 00:37 -0500, Digimer wrote:
>> On 28/11/14 12:33 AM, Fabio M. Di Nitto wrote:
>>> On 11/27/2014 5:52 PM, Digimer wrote:
>>>> I just created a dedicated/fresh wiki for planning and organizing:
>>>
On 11/27/2014 1:33 PM, Kristoffer Grönlund wrote:
>
>>> On 27 Nov 2014, at 2:41 am, Lars Marowsky-Bree wrote:
>>>
>>> On 2014-11-25T16:46:01, David Vossel wrote:
>>>
>>> Okay, okay, apparently we have got enough topics to discuss. I'll
>>> grumble a bit more about Brno, but let's get the organ
On 11/24/2014 4:12 PM, Lars Marowsky-Bree wrote:
> On 2014-11-24T15:54:33, "Fabio M. Di Nitto" wrote:
>
>> dates and location were chosen to piggy-back with devconf.cz and allow
>> people to travel for more than just HA Summit.
>
> Yeah, well, devconf.cz is
On 11/24/2014 3:39 PM, Lars Marowsky-Bree wrote:
> On 2014-09-08T12:30:23, "Fabio M. Di Nitto" wrote:
>
> Folks, Fabio,
>
> thanks for organizing this and getting the ball rolling. And again sorry
> for being late to said game; I was busy elsewhere.
>
> H
hers could organize a dinner event.
Cheers
Fabio
> Lars
>
>> On 01/11/14 01:06 AM, Fabio M. Di Nitto wrote:
>>> just a kind reminder.
>>>
>>> On 9/8/2014 12:30 PM, Fabio M. Di Nitto wrote:
>>>> All,
>>>>
>>>> it'
just a kind reminder.
On 9/8/2014 12:30 PM, Fabio M. Di Nitto wrote:
> All,
>
> it's been almost 6 years since we had a face to face meeting for all
> developers and vendors involved in Linux HA.
>
> I'd like to try and organize a new event and piggy-ba
On 09/09/2014 06:31 PM, Alan Robertson wrote:
> My apologies for spamming everyone.
>
> I thought I deleted all the other email addresses.
>
> I failed.
>
> Apologies :-(
I think it's good that we have an open discussion with all parties
involved. I hardly see that as an issue.
A
known :)
Cheers
Fabio
>
> -- Alan Robertson
> al...@unix.sh
>
>
> On 09/08/2014 04:30 AM, Fabio M. Di Nitto wrote:
>> All,
>>
>> it's been almost 6 years since we had a face to face meeting for all
>> developers and vendors involved i
All,
it's been almost 6 years since we had a face-to-face meeting for all
developers and vendors involved in Linux HA.
I'd like to try and organize a new event and piggy-back with DevConf in
Brno [1].
DevConf will start Friday the 6th of Feb 2015 in Red Hat Brno offices.
My suggestion would be
On 6/24/2014 12:32 PM, Amjad Syed wrote:
> Hello
>
> I am getting the following error when I run ccs_config_validate
>
> ccs_config_validate
> Relax-NG validity error : Extra element clusternodes in interleave
You defined tempfile:12: element clusternodes: Relax-NG validity error : Element
> cl
On 06/12/2014 09:06 PM, Digimer wrote:
> Hrm, I'm not really sure that I am able to interpret this without making
> guesses. I'm cc'ing one of the devs (who I hope will poke the right
> person if he's not able to help at the moment). Lets see what he has to
> say.
>
> I am curious now, too. :)
Ch
On 03/28/2014 05:37 PM, berg...@merctech.com wrote:
>
>
> I've got a 3-node cluster under CentOS5.
>
> I'd like to add 3 additional nodes, running CentOS6.
>
> Are there any known issues, guidelines, or recommendations for having
> a single RHCS cluster with different OS releases on the nodes?
> [cluster.conf excerpt; the XML tags were stripped by the archive, leaving
> only three name="single" attributes]
>
> (manual fencing just for testing)
>
> corosync.conf:
>
> compatibility: whitetank
> totem {
> version: 2
On 02/22/2014 10:33 AM, emmanuel segura wrote:
> I know that if you need to modify anything outside the ... {used by
> rgmanager} tag in the cluster.conf file, you need to restart the whole
> cluster stack; with cman+rgmanager I have never seen how to add a node
> and remove a node from the cluster without rest
On 02/22/2014 06:21 AM, Bjoern Teipel wrote:
> Hi all,
>
> who's using CLVM with CMAN in a cluster with more than 2 nodes in
> production ?
Yeps.
> Did you guys manage to live-add a new node to the cluster
> while everything is running?
Yeps :)
> I'm only able to add nodes while the
On 01/29/2014 02:32 PM, Demetres Pantermalis wrote:
> Hello Fabio,
>
> regarding the support from RH of scsi_fence with powerpath, please have
> a look at
> https://access.redhat.com/site/articles/40112 (which is Updated November
> 21 2013 at 2:16 AM)
>
> Paragraph 4. Limitations:
> First bullet:
On 1/29/2014 12:58 PM, Demetres Pantermalis wrote:
> Hello Fabio,
>
> I was under the impression that this is supported, because there is
> extensive documentation and reference in redhat site.
Maybe I should have been clearer, but it's the fence_scsi on emcpower
path that is untested/unsuppor
On 1/29/2014 11:36 AM, Demetres Pantermalis wrote:
> Please find attached the cluster.conf file and the relevant logs from
> both servers.
I didn't have a chance to look at the whole thread but please be aware that:
is something we do not support or test at Re
On 11/15/2013 07:31 PM, Rick Stevens wrote:
> On 11/15/2013 06:09 AM, Digimer issued this missive:
>> On 15/11/13 04:31, Fabio M. Di Nitto wrote:
>>> On 11/15/2013 6:35 AM, Digimer wrote:
>>>> Hi all,
>>>>
>>>>I'm trying to use 'd
On 11/15/2013 6:35 AM, Digimer wrote:
> Hi all,
>
> I'm trying to use 'depend' in rgmanager on rhel 6.4 to delay the
> start-up of VM services until storage starts. The storage is managed as
> a separate service (drbd -> clvmd -> gfs2). The 'depend_mode' is 'soft'.
>
> I got the start-up part
On 06/21/2013 10:43 PM, Subhendu Ghosh wrote:
> On 06/21/2013 02:34 AM, Vladislav Bogdanov wrote:
>> 21.06.2013 04:46, Digimer wrote:
>>> Hi all,
>>>
>>> I want to update the Guest Fencing docs on
>>> http://clusterlabs.org/wiki/Guest_Fencing and write a little tutorial as
>>> well. The CL page s
On 6/18/2013 10:12 AM, Christine Caulfield wrote:
> On 17/06/13 21:10, Fabio M. Di Nitto wrote:
>> On 06/17/2013 09:05 PM, Hammad Siddiqi wrote:
>>> Dear Fabio,
>>>
>>> Thanks for the update. Is there any fix for this bug. Would really
>>> a
On 06/17/2013 09:05 PM, Hammad Siddiqi wrote:
> Dear Fabio,
>
> Thanks for the update. Is there any fix for this bug. Would really
> appreciate if some patch or update is provided.
>
> Thank you,
>
> Hammad Siddiqi
>
>
The fix should be upstream already.
Chrissie do you know if it's been inc
On 4/17/2013 7:09 PM, M wrote:
> I have a 4-node cluster that's running correctly aside from frequent
> fencing across all nodes. Even after turning up logging, I'm not able to find
> anything that stands out. However, the following keeps presenting itself
> in corosync.log and I don't know to what it'
On 2/13/2013 4:36 PM, Adel Ben Zarrouk wrote:
> Hello,
>
> I have a client where we have installed two different sites and in each
> site we have installed two nodes cluster using RHEL 6.2 and Red Hat HA
> adds-on to failover Oracle DB 11GR2 (Site 2 is the disaster recovery for
> site1).
>
> Plea
On 01/22/2013 06:22 PM, Robert Hayden wrote:
> I am testing RHCS 6.3 and found that the self_fence option for a file
> system resource will no longer function as expected. Before I log an
> SR with RH, I was wondering if the design changed between RHEL 5 and RHEL 6.
>
> In RHEL 5, I see logic in
lease file a ticket with GSS so that we can
access the data required to perform debugging.
Fabio
>
> Regards,
> Ashish
>
> On Tue, Dec 18, 2012 at 12:47 PM, Fabio M. Di Nitto wrote:
>
> On 12/5/2012 2:52 PM, Ashish G wrote:
>
On 12/5/2012 2:52 PM, Ashish G wrote:
> hi Experts,
> I have a few questions on ccsd:
> 1. What is the purpose of ccsd listening on IPv4 and IPv6 addresses as
> follows in my 2-node HA setup? We do not use IPv6 in our setup.
>
> netstat -antp |grep ccsd
>
> tcp0 0 0.0.0.0:
On 11/12/2012 9:37 PM, Andrew Price wrote:
> Hi,
>
> gfs2-utils 3.1.5 has been released. This version features bug fixes and
> performance enhancements for fsck.gfs2 in particular, better handling of
> symlinks in mkfs.gfs2, a small block manipulation language to aid future
> testing, a gfs2_lockc
On 10/17/2012 03:12 PM, Terance Dias wrote:
> Hi,
>
> We're trying to create a cluster in which the nodes lie in 2 different
> LANs. Since the nodes lie in different networks, they cannot resolve the
> other node by their internal IP. So in my cluster.conf file, I've
> provided their external IPs.
On 10/9/2012 2:27 AM, Chris Feist wrote:
> We've been making improvements to the pcs (pacemaker/corosync
> configuration system) command line tool over the past few months.
>
> Currently you can setup a basic cluster (including configuring corosync
> 2.0 udpu).
>
> David Vossel has also created a
On 9/28/2012 2:47 PM, Gianluca Cecchi wrote:
> On Fri, 28 Sep 2012 12:35:26 +0200 Fabio M. Di Nitto wrote:
>> - drop > (this is a general thing)
>
> thanks for your answer Fabio.
> what do you exactly mean with the sentence above? That the "device=.."
> part i
On 9/28/2012 11:40 AM, Gianluca Cecchi wrote:
> Hello,
> I saw some discussions in the past regarding the subject, such as:
> https://www.redhat.com/archives/cluster-devel/2011-September/msg00027.html
>
> My config is with an rhcs based on rhel 5.8 two nodes + quorum disk cluster
> and
> cman-2.
On 09/04/2012 08:20 PM, Terry wrote:
> On Tue, Sep 4, 2012 at 1:04 PM, Fabio M. Di Nitto wrote:
>
> On 09/04/2012 05:01 PM, Terry wrote:
> > Hello,
> >
> > I am running an NFS cluster with 3 exports distributed ac
On 09/04/2012 05:01 PM, Terry wrote:
> Hello,
>
> I am running an NFS cluster with 3 exports distributed across 2 nodes.
> When I try to relocate an NFS export, it fails. I then have to disable
> and enable it on the other node. Does anyone have any tricks to get
> around this issue? I am sure
-05-17 at 11:57 +0200, Fabio M. Di Nitto wrote:
>> Hi Colin,
>>
>> On 5/17/2012 11:47 AM, Colin Simpson wrote:
>>> Thanks for all the useful information on this.
>>>
>>> I realise the bz is not for this issue, I just included it as it has the
>&g
Maybe a stupid question..
from node1:
telnet node2 1
do you get anything? are the iptables set correctly? (and check also
from node2 to node1 and from the luci machine to both nodes)
Fabio
On 8/15/2012 11:49 PM, Chip Burke wrote:
> There is nothing in messages or secure on either node1 or
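The reachability check suggested above (telnet from each node to its peers, and from the luci machine to both) can be sketched as a small bash helper. Host names and the port are placeholders; bash's /dev/tcp redirection and coreutils' timeout are assumed available:

```shell
# Hedged sketch of the "telnet node2 1" reachability test suggested above.
# Hosts/ports are placeholders; requires bash (/dev/tcp) and coreutils timeout.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

# Run from node1 against node2, from node2 against node1, and from luci:
check_port 127.0.0.1 1   # port 1 is almost certainly closed locally
```

A "closed" result between nodes that should see each other usually points at iptables or routing, which is exactly what the reply above asks to verify.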
Welcome to the cluster 3.1.93 (Release Candidate) release.
This release addresses a few major issues. Users of previous releases
are strongly encouraged to upgrade to this version.
This release also strictly requires corosync 1.4.4 to build and run.
Unless major issues are reported, the next
On 8/2/2012 6:36 AM, Fabio M. Di Nitto wrote:
> On 08/01/2012 07:26 PM, Zama Ques wrote:
>
>> Is it not going to be a useful feature if we allow same nodes to be
>> used in different clusters ?
>
> Even if we leave aside the fact that a lot of code would need to be
On 08/01/2012 07:26 PM, Zama Ques wrote:
> Is it not going to be a useful feature if we allow same nodes to be
> used in different clusters ?
Even if we leave aside the fact that a lot of code would need to be
rewritten and retested, it's a feature that would benefit very few users, at
the cost
On 07/19/2012 12:34 AM, Mario Salcedo wrote:
> Hi. I am configuring an HA cluster with six PCs on CentOS 6.3. I want to
> share the same Apache data among the nodes with GNBD/GFS but I don't
> find gnbd in the repos of CentOS 6.3.
>
> Where can I find this package for CentOS 6.3?
GNBD has been deprecated i
On 7/12/2012 11:52 AM, Denis Medvedev wrote:
> If I plan to add more nodes later, but have only 2 right now,
> is it better to make a 2-node cluster or a degraded 3-node one?
> I recently heard that you cannot add more nodes to a 2-node cluster
> without a cluster-wide reboot.
Both have advantages a
On 07/02/2012 11:39 PM, urgrue wrote:
> On 2/7/12 19:14, Digimer wrote:
>> On 07/02/2012 01:08 PM, urgrue wrote:
>>> I'm trying to set up a 3-node cluster with clvm. Problem is, one node
>>> can't access the storage, and I'm getting:
>>> Error locking on node node3: Volume group for uuid not found:
On 05/28/2012 04:55 PM, Digimer wrote:
> On 05/28/2012 02:42 AM, Fabio M. Di Nitto wrote:
>> On 05/28/2012 12:02 AM, Digimer wrote:
>>> I'm not sure if this has come up before, but I thought it might be worth
>>> discussing.
>>>
>>> With the clu
On 05/28/2012 12:02 AM, Digimer wrote:
> I'm not sure if this has come up before, but I thought it might be worth
> discussing.
>
> With the cluster stacks merging, it strikes me that having two separate
> channels for effectively the same topic splits up folks. I know that
> #linux-ha technically
On 05/25/2012 06:20 PM, Nicolas Ross wrote:
> I am in the process of upgrading one of our cluster from RHEL 6.1 to
> 6.2. It's an 8-node cluster.
>
> I started with one node. Stop all cluster resources, cman, rgmanager et
> al. yum update, reboot, move to next. The first one did ok.
>
> On the se
ill be
available via RHN in 2/3 weeks.
Fabio
>
> Colin
>
>
> On Thu, 2012-05-17 at 10:26 +0200, Fabio M. Di Nitto wrote:
>> On 05/16/2012 08:19 PM, Colin Simpson wrote:
>>> This is interesting.
>>>
>>> We very often see the filesystems fail to umount
Emmanuel,
On 5/17/2012 10:38 AM, emmanuel segura wrote:
> Fabio
>
> The IP is the last to start; as I said before, look at
> /usr/share/cluster/service.sh
>
> I have a cluster configured like that and I can tell you I never hit the
> problem
> ==
>
of the people who moved it into kernel space for performance reasons
> in the past (that are no longer relevant):
>
> https://bugzilla.redhat.com/show_bug.cgi?id=580863#c9
>
> , but I doubt this is the fix you have in mind.
No that's a totally different issue.
>
> Coli
On 05/16/2012 08:20 PM, emmanuel segura wrote:
>
> it must be
>
> nfslock="1" recovery="relocate">
>
> ref="volume01">
>
ople who moved it into kernel space for performance reasons
> in the past (that are no longer relevant):
>
> https://bugzilla.redhat.com/show_bug.cgi?id=580863#c9
>
> , but I doubt this is the fix you have in mind.
>
> Colin
>
> On Tue, 2012-05-15 at 20:21 +0200, Fabio
On 5/16/2012 7:02 PM, Randy Zagar wrote:
> Are you sure that nfslock="1" is a valid option for ""?
Yes.
>
> There doesn't appear to be a way to add that through LUCI, which means
> I'll have to make and propagate those changes manually. I used to do
> this in EL5
>
> /sbin/ccs_tool update
On 05/15/2012 07:33 PM, Randy Zagar wrote:
>
>
>
>
>force_unmount="1" fsid="49388" fstype="ext3" mountpoint="/lvm/volume01"
> name="volume01" self_fence="0"/>
>
On 5/3/2012 12:36 PM, Ralf Aumueller wrote:
> Hello,
>
> recently there was an update of corosync and corosynclib rpms. Is it safe to
> just install these updates on a running two-node cluster or do I have to use a
> special procedure (e.g. Stop cluster services on node2; apply updates and
> rebo
On 4/4/2012 7:41 AM, Parvez Shaikh wrote:
> Hi all,
>
> As per my understanding, CMAN uses the cluster name to internally generate
> a multicast address. In my cluster.conf
>
> Having clusters with the same name on a given network leads to issues and is
> undesirable.
>
> I want to know is there anyway t
On 04/03/2012 09:58 PM, Jeff Stoner wrote:
> Any luci devs on the list? I'm looking for info on integrating fencing
> agents into luci. Once the Powers That Be allow me to release our
> fencing agent, I'd like to take a stab at making it easier to use via luci.
Let's get the agent upstream first and
On 3/28/2012 8:14 AM, Parvez Shaikh wrote:
> Hi experts,
>
> I am running into a problem in a situation where two clusters in the
> network have the same name.
>
> Node A, Node B : cluster name CLUSTER
> Node C, Node D : cluster name CLUSTER
>
> Node C and Node D's cluster is running fine however w
On 1/9/2012 2:27 PM, Alan Brown wrote:
> On 09/01/12 09:36, Fabio M. Di Nitto wrote:
>
>>> RH's advice to us is to "Big Bang" it.
>>
>> It's not really advice, as RH does not officially support this
>> upgrade method.
>
> Indeed, bu
On 1/9/2012 9:52 AM, Alan Brown wrote:
> On 09/01/12 02:38, Digimer wrote:
>
>> Technically yes, practically no. Or rather, not without a lot of
>> testing first.
>
> This is "rather a shame"
>
> I have a similar requirement (EL5 -> EL6 with GFS)
>
Well the cluster stack itself (openais/cman
On 01/02/2012 05:33 AM, Fajar A. Nugraha wrote:
> On Mon, Jan 2, 2012 at 11:20 AM, Fabio M. Di Nitto
> wrote:
>> On 01/02/2012 12:24 AM, Székelyi Szabolcs wrote:
>
>>> 3.0.12: 11-May-2010
>
>>> 3.0.12.1: 27-May-2011
>>> 3.1.2: 16-Jun-2011
>>&g
On 01/01/2012 11:04 PM, Michel Nadeau wrote:
> Hi,
>
> I upgraded to Corosync 1.4.2 and cman 6.2.0, but when I add this to my
> cluster.conf :
cman 6.2.0 does not exist anywhere.
>
>
>
> I get (when starting cman) :
>
>Starting cman... Relax-NG validity error : Extra element cman in
> i
On 01/02/2012 12:24 AM, Székelyi Szabolcs wrote:
> 3.0.11: 21-Apr-2010
> 3.0.12: 11-May-2010
> 3.0.13: 08-Jun-2010
> 3.0.14: 30-Jul-2010
> 3.0.15: 02-Sep-2010
> 3.0.16: 02-Sep-2010
> 3.0.17: 06-Oct-2010
> 3.1.0: 02-Dec-2010
> 3.1.1: 08-Mar-2011
> 3.0.12.1: 27-May-2011
> 3.1.2: 16-Jun-2011
>
> Wow
Hi Nicolas,
On 11/29/2011 09:51 PM, Nicolas Ross wrote:
>
> Starting cluster:
> Checking if cluster has been disabled at boot...[ OK ]
> Checking Network Manager... [ OK ]
> Global setup... [ OK ]
> Loading
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Hi all,
first of all I am very happy to announce that Digimer is going to be
our new release manager.
Digimer has been contributing to the cluster project in many, many
different ways, very active and helpful in the community, with a
strong dedicati
On 10/14/2011 02:38 PM, Daniele Palumbo wrote:
> On 14 Oct 2011, at 14:01, Fabio M. Di Nitto wrote:
>> This setup looks very wrong and there is a lot of work on the storage
>> side you need to do.
>>
>> I am not even sure where to start, but a few simpl
On 10/14/2011 11:07 AM, Daniele Palumbo wrote:
> On 14 Oct 2011, at 05:56, Fabio M. Di Nitto wrote:
>> What kind of shared storage are you using? Filesystem on top of lvm?
>
> I am using 2 local disks, exported via vblade (I have an AoE storage; this is
> th
On 10/14/2011 12:17 AM, Daniele Palumbo wrote:
> hi,
>
> first of all, sorry for the long subject...
> I do not know how to explain myself, so any FAQ/manual will of course be
> appreciated.
>
> Now,
> I have a test cluster, Gentoo based.
> cluster 3.1.7, corosync 1.4.2, lvm2 2.02.88.
> clvm is
On 10/04/2011 03:45 PM, Mark Hlawatschek wrote:
> Hi,
>
> we are currently building up a Cisco Nexus FCoE infrastructure together with
> Red Hat Clusters.
>
> I'd like to use the Nexus 5ks for I/O fencing operations and I'm looking for
> a fencing agent to be used together with the Nexus 5k.
>
On 10/04/2011 06:49 AM, Digimer wrote:
> Here is the answer;
>
> http://www.youtube.com/watch?v=oKI-tD0L18A
>
ROFL... finally a bit of humor on this mailing list ;)
Fabio
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
ame
> amount of memory (~129Mb).
>
> Thanks,
> Bill
> On Wed, Sep 28, 2011 at 11:23 PM, Fabio M. Di Nitto wrote:
>
> On 09/29/2011 07:24 AM, Bill G. wrote:
> > Hi Lon, and Ryan,
> >
> > If you can get back to m
.x86_64
> cluster-glue-libs-1.0.5-2.el6.x86_64
> cluster-glue-1.0.5-2.el6.x86_64
> luci-0.22.2-14.el6_0.1.x86_64
> ricci-0.16.2-13.el6.x86_64
>
> On Wed, Sep 28, 2011 at 8:47 PM, Fabio M. Di Nitto wrote:
>
> On 09/28/2011 11:13 PM,
On 09/28/2011 11:13 PM, Bill G. wrote:
> Hi List,
>
> I was wondering if you were aware of this bug, and if any of you have
> had success in with the suggested work around that is listed as the
> final comment.
>
> Currently this is happening on 5 nodes of my 9-server cluster, one
> was using 35GB of r
Welcome to the cluster 3.1.7 release.
This release addresses several bugs and especially a serious problem
introduced in the 3.1.6 release. If you are currently running 3.1.6,
it is highly recommended to upgrade to 3.1.7 as soon as possible.
The ne
For all RHEL related problems you need to contact GSS.
You also filed https://bugzilla.redhat.com/show_bug.cgi?id=741345
to track your issue.
Please provide the requested info.
Fabio
On 09/26/2011 05:55 PM, Matthew Painter wrote:
> Hi all,
>
> I have been trying to set up a cluster of 3 on R
On 09/26/2011 06:20 PM, Robert Hayden wrote:
> You might try to add the multicast stanza inside the
> stanza as well. You can specify an specific interface as well.
>
> For example,
> http://node1.company.com>" nodeid="1" votes="1">
>
>
On 09/03/2011 12:34 AM, Thomas Sjolshagen wrote:
> I've been getting:
>
> dlm: dev_write no op 48479213 18508
>
> in dmesg output after I've upgraded to the latest Fedora 15 cluster
> packages.
>
We already have a fix for this message. It is a miscommunication between
kernel and dlm_controld.
Welcome to the cluster 3.1.6 release.
The new source tarball can be downloaded here:
https://fedorahosted.org/releases/c/l/cluster/cluster-3.1.6.tar.xz
ChangeLog:
https://fedorahosted.org/releases/c/l/cluster/Changelog-3.1.6
To report bugs or is
Hi Nicolas,
On 08/19/2011 09:48 PM, Nicolas Ross wrote:
> Hi !
>
> We have a cluster of 8 nodes that are split across 2 gigabit 24-port
> network switches. Port one on each server is used for services, and port 2
> for the "totem-ring" or cluster communications.
>
> The servers are split 4 on
Hi Daniel,
this bug has already been addressed in the cman init script. I suspect
Debian 6.x ships an older version.
You can ask Debian maintainers to update or at least grab the latest
init script from STABLE31 branch.
Fabio
On 08/10/2011 11:13 AM, Daniel Meszaros wrote:
> Hi there,
>
> I had to fi
Hi Robert,
i was pointed to: https://bugzilla.redhat.com/show_bug.cgi?id=718230
not sure you have enough privileges to see the bz but the issue is known
and the fix is on its way.
Fabio
On 8/4/2011 3:48 PM, Robert Hayden wrote:
> I was attempting to add VM resources to a two node cluster with t
Welcome to the cluster 3.1.5 release.
This release addresses two issues in ccs_update_schema and relaxes the
requirements on fence-agents and resource-agents that were erroneously
introduced in the 3.1.4 release. It is still highly recommended to
upda
Welcome to the cluster 3.1.4 release.
This release fixes a few bugs and adds a new dynamic relaxng schema
creation.
In order to run this version of cman/cluster, it is strictly required to
have fence-agents at least in version 3.1.5 and resource-agents in
version 3.9.2. Alternatively you have to d
to all people that contributed to achieve this
great milestone.
Happy clustering,
Fabio
Under the hood (from 3.1.4):
Arnaud Quette (1):
eaton_snmp: add support for Eaton Switched ePDU
Fabio M. Di Nitto (3):
relaxng: ship bits required to build the schema at runtime
relaxng: drop
On 07/06/2011 07:08 PM, Nicolas Ross wrote:
> Hi !
>
> In our curent setup we have an 8-node cluster at site A. In the near
> future, we will have a different cluster at site B. Both site will be
> bridged with a lan-extension, and we plan on bridging the "service"
> vlan, the one that that cluste
Hi everybody,
The current resource agent repository [1] has been tagged to v3.9.2.
Tarballs are also available [2].
This is a quick bug fix release to address a couple of regressions
introduced during 3.9.1 development cycle.
Highlights for the LH
On 6/28/2011 7:55 AM, anderson souza wrote:
> Hi everyone,
>
> I have an Active/Passive RHCS 6.1 cluster running with 8TB of GFS2 with NFS
> on top, exporting 26 mount points to 250 NFS clients. The GFS2
> mount points are mounted with noatime, nodiratime, data=writeback and
> localflocks options,
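For reference, a mount like the one described above might look like this in /etc/fstab. The device path and mount point below are invented placeholders, not taken from the original message:

```
# GFS2 filesystem exported over NFS, mounted with the options listed above
# (device and mount point are hypothetical)
/dev/vg_cluster/lv_gfs2  /export/data  gfs2  noatime,nodiratime,data=writeback,localflocks  0 0
```

The localflocks option matches the documented guidance for GFS2 exported over NFS: flock/POSIX locks are handled by the local node rather than cluster-wide.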
Welcome to the cluster 3.1.3 release.
This release fixes a build issue in dlm_controld with any kernel older
than 3.0.
The new source tarball can be downloaded here:
https://fedorahosted.org/releases/c/l/cluster/cluster-3.1.3.tar.xz
ChangeLog:
h
Lon, what's your opinion on this one?
On 06/16/2011 04:44 PM, Gianluca Cecchi wrote:
> On Thu, Jun 16, 2011 at 3:13 PM, Fabio M. Di Nitto wrote:
>
>> Highlights for the rgmanager resource agents set:
>>
>> - oracledb: use shutdown immediate
>
> hello,
&
Welcome to the cluster 3.1.2 release.
This release contains several bug fixes and improvements. This version
must be used in conjunction with resource-agents 3.9.1.
The new source tarball can be downloaded here:
https://fedorahosted.org/releases/c
Hi everybody,
The current resource agent repository [1] has been tagged to v3.9.1.
Tarballs are also available [2].
This is the very first release of the common resource agent repository.
It is a big milestone towards eliminating duplication of effort with the
goal of improving the overall quali
Hi everybody,
The current resource agent repository [1] has been tagged to v3.9.1rc1.
Tarballs are also available [2].
This is the very first release of the common resource agent repository.
It is a big milestone towards eliminating duplication of
On 05/09/2011 01:25 PM, Fabio M. Di Nitto wrote:
> Hi all,
>
> we are in the process of moving the old cluster wiki
> (http://sourceware.org/cluster/wiki/) to:
>
> https://fedorahosted.org/cluster/wiki/HomePage
The relocation is now complete and the old wiki is redirecting us
contributed to achieve this
great milestone.
Happy clustering,
Fabio
Under the hood (from 3.1.3):
Cedric Buissart (1):
ipmilan help: login same as -l
Fabio M. Di Nitto (4):
Fix file permissions
build: add missing file from tarball release
fence_rsa: readd test info
On 05/19/2011 05:14 PM, Nicolas Ross wrote:
> Is it just a matter of taking the first node, moving its services to
> another one, yum update, reboot, and moving to the next one?
Please contact GSS that will point you to the correct documentation to
perform the upgrade.
In general:
take first node, m
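The general per-node procedure sketched above can be outlined as a dry-run helper that just prints the steps; node, service, and peer names are placeholders and nothing here executes real cluster commands:

```shell
# Dry-run outline of the rolling-upgrade loop described above.
# Node/service names are placeholders; no cluster command is actually run.
rolling_upgrade_plan() {
  for node in "$@"; do
    echo "relocate services off ${node} (e.g. clusvcadm -r <service> -m <peer>)"
    echo "on ${node}: yum update && reboot"
    echo "wait for ${node} to rejoin the cluster, then continue"
  done
}

rolling_upgrade_plan node1 node2
```

This only prints a checklist; the exact commands and supported upgrade paths are what GSS documents for a given release pair.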
Hi all,
we are in the process of moving the old cluster wiki
(http://sourceware.org/cluster/wiki/) to:
https://fedorahosted.org/cluster/wiki/HomePage
All pages from the old wiki have been imported and we are in the process
to reformat the pages to match the new trac-wiki notation.
If you own an
On 04/06/2011 04:51 PM, Nicolas Ross wrote:
>> Nicolas, please report the issue via GSS.
>>
>> Marek can start looking into it.
>>
>> Fabio
>>
>
> Sorry, what's GSS ? Is it bugzilla.redhat.com ?
Red Hat Global Support Service... the one you contact to report
customer/product related issues. No it
Nicolas, please report the issue via GSS.
Marek can start looking into it.
Fabio
On 4/6/2011 3:02 PM, Nicolas Ross wrote:
> (...)
>
>>>
>>> fence_apc -a ip -l user -p pass -n node101 -x -v
>>>
>>> and I get very rapidly :
>>>
>>> Unable to connect/login to fencing device
>>>
>>> Netstat shows m
On 4/5/2011 6:48 PM, Nicolas Ross wrote:
> Hi !
>
> I've got my cluster now set up in its final position at the colo
> facility, and we've got an APC AP-8941 power bar. At the moment, our
> fencing is configured with ipmilan via our RMM3 modules on our Intel
> servers. But I'd like to add a backup