We don't. When we were investigating the reassembly problems we had on the ProCurve 4108gl, we were specifically asked whether we were doing bonding or VLAN tagging (we were doing neither). It just looks like the ProCurve was losing packets without reporting it. We have since switched to Cisco 2960G-48s with jumbo frames and haven't had any reassembly timeouts since. Global Cache timeouts have gone down significantly. Each interconnect for Oracle 10g now has its own Cisco 2960G-48.
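In case it helps anyone chasing the same symptom: here is a minimal sketch in Python (assuming the standard "Ip:" counter line in Linux's /proc/net/snmp) that pulls out the kernel's IP reassembly counters. If ReasmTimeout/ReasmFails keep climbing while the switch claims no drops, the switch is a good suspect:

def ip_reassembly_stats(path="/proc/net/snmp"):
    # /proc/net/snmp lists each protocol group twice: an "Ip:" line of
    # field names followed by an "Ip:" line of values. Zip them together.
    with open(path) as f:
        ip_lines = [line.split()[1:] for line in f if line.startswith("Ip:")]
    stats = dict(zip(ip_lines[0], (int(v) for v in ip_lines[1])))
    # Only the reassembly-related counters are interesting here.
    keys = ("ReasmTimeout", "ReasmReqds", "ReasmOKs", "ReasmFails")
    return dict((k, stats[k]) for k in keys if k in stats)

if __name__ == "__main__":
    for name, value in ip_reassembly_stats().items():
        print("%s: %d" % (name, value))

Run it before and after a burst of interconnect traffic; the deltas matter more than the absolute numbers.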
> -----Original Message-----
> From: Sunil Mushran [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 11, 2007 15:13
> To: Ulf Zimmermann
> Cc: Randy Ramsdell; [email protected]
> Subject: Re: [Ocfs2-users] Cluster setup
>
> Use network bonding.
>
> Ulf Zimmermann wrote:
> >> -----Original Message-----
> >> From: [EMAIL PROTECTED] [mailto:ocfs2-users-
> >> [EMAIL PROTECTED] On Behalf Of Alexei_Roudnev
> >> Sent: Thursday, October 11, 2007 11:10
> >> To: Sunil Mushran; Randy Ramsdell
> >> Cc: [email protected]
> >> Subject: Re: [Ocfs2-users] Cluster setup
> >>
> >> I explained you:
> >> 1 - single heartbeat interface IS A BUG for me.
> >>
> >
> > I haven't really followed the whole discussion but that point above did
> > just come to my mind a few days ago when we replaced our HP ProCurve
> > 4108gl used for 3 separate Interconnects on 10g, where only 1 also
> > carries the OCFS2 heartbeat. So if that switch dies, OCFS2 will go down
> > while Oracle 10g could survive (if OCFS2 wouldn't die).
> >
> > I have to agree that is a bad design at this point. Heartbeat should
> > also be on at least 2 links for OCFS2.
> >
> > Ulf.
> >

_______________________________________________
Ocfs2-users mailing list
[email protected]
http://oss.oracle.com/mailman/listinfo/ocfs2-users
