Hi,

I don't have time to look at this but you could certainly get help from
the JGroups mailing list.

Emmanuel

Bogon Choi wrote:
> This is my log and configuration. I am using Sequoia 2.10.10.
>
> == Network Configuration ==
> Kernel IP routing table
> Destination   Gateway     Genmask        Flags Metric Ref Use Iface
> 10.60.10.0    *           255.255.255.0  U     0      0   0   eth0
> 192.168.10.0  *           255.255.255.0  U     0      0   0   eth2
> 169.254.0.0   *           255.255.0.0    U     0      0   0   eth2
> default       10.60.10.1  0.0.0.0        UG    0      0   0   eth0
>
> == LOG ==
> 18:53:18,164 DEBUG protocols.pbcast.NAKACK 192.168.10.203:32784: received 192.168.10.203:32784#12
> 18:53:18,164 DEBUG protocols.pbcast.STABLE received stable msg from 192.168.10.203:32784: [192.168.10.203:32784#11, 192.168.10.202:32789#50]
> 18:53:21,735 DEBUG protocols.pbcast.NAKACK 192.168.10.203:32784: sending XMIT_REQ ([51, 51]) to 192.168.10.202:32789
> 18:53:21,735 DEBUG jgroups.protocols.UDP sending msg to 192.168.10.202:32789 (src=192.168.10.203:32784), headers are {NAKACK=[XMIT_REQ, range=[51 : 51], sender=192.168.10.202:32789], UDP=[channel_name=myDB]}
> 18:53:21,767 DEBUG jgroups.protocols.UDP sending 1 msgs (80 bytes (0.12% of max_bundle_size), collected in 32ms) to 1 destination(s)
> 18:53:25,461 DEBUG jgroups.protocols.UDP received (mcast) 61 bytes from 192.168.10.202:32790
> 18:53:25,461 DEBUG jgroups.protocols.UDP message is [dst: 228.8.8.9:45566, src: 192.168.10.202:32789 (2 headers), size = 0 bytes], headers are {UDP=[channel_name=myDB], PING=[PING: type=GET_MBRS_REQ, arg=null]}
> 18:53:25,462 DEBUG jgroups.protocols.PING received GET_MBRS_REQ from 192.168.10.202:32789, sending response [PING: type=GET_MBRS_RSP, arg=[own_addr=192.168.10.203:32784, coord_addr=192.168.10.202:32789, is_server=true]]
> 18:53:25,462 DEBUG jgroups.protocols.UDP sending msg to 192.168.10.202:32789 (src=192.168.10.203:32784), headers are {PING=[PING: type=GET_MBRS_RSP, arg=[own_addr=192.168.10.203:32784, coord_addr=192.168.10.202:32789, is_server=true]], UDP=[channel_name=myDB]}
> 18:53:25,494 DEBUG jgroups.protocols.UDP sending 1 msgs (67 bytes (0.1% of max_bundle_size), collected in 32ms) to 1 destination(s)
> 18:53:26,463 DEBUG jgroups.protocols.UDP received (mcast) 61 bytes from 192.168.10.202:32790
> 18:53:26,463 DEBUG jgroups.protocols.UDP message is [dst: 228.8.8.9:45566, src: 192.168.10.202:32789 (2 headers), size = 0 bytes], headers are {UDP=[channel_name=myDB], PING=[PING: type=GET_MBRS_REQ, arg=null]}
> 18:53:26,463 DEBUG jgroups.protocols.PING received GET_MBRS_REQ from 192.168.10.202:32789, sending response [PING: type=GET_MBRS_RSP, arg=[own_addr=192.168.10.203:32784, coord_addr=192.168.10.202:32789, is_server=true]]
> 18:53:26,464 DEBUG jgroups.protocols.UDP sending msg to 192.168.10.202:32789 (src=192.168.10.203:32784), headers are {PING=[PING: type=GET_MBRS_RSP, arg=[own_addr=192.168.10.203:32784, coord_addr=192.168.10.202:32789, is_server=true]], UDP=[channel_name=myDB]}
> 18:53:26,496 DEBUG jgroups.protocols.UDP sending 1 msgs (67 bytes (0.1% of max_bundle_size), collected in 32ms) to 1 destination(s)
> 18:53:26,538 DEBUG protocols.pbcast.NAKACK 192.168.10.203:32784: sending XMIT_REQ ([51, 51]) to 192.168.10.202:32789
> 18:53:26,538 DEBUG jgroups.protocols.UDP sending msg to 192.168.10.202:32789 (src=192.168.10.203:32784), headers are {NAKACK=[XMIT_REQ, range=[51 : 51], sender=192.168.10.202:32789], UDP=[channel_name=myDB]}
> 18:53:26,570 DEBUG jgroups.protocols.UDP sending 1 msgs (80 bytes (0.12% of max_bundle_size), collected in 32ms) to 1 destination(s)
> 18:53:31,339 DEBUG protocols.pbcast.NAKACK 192.168.10.203:32784: sending XMIT_REQ ([51, 51]) to 192.168.10.202:32789
> 18:53:31,339 DEBUG jgroups.protocols.UDP sending msg to 192.168.10.202:32789 (src=192.168.10.203:32784), headers are {NAKACK=[XMIT_REQ, range=[51 : 51], sender=192.168.10.202:32789], UDP=[channel_name=myDB]}
> 18:53:31,371 DEBUG jgroups.protocols.UDP sending 1 msgs (80 bytes (0.12% of max_bundle_size), collected in 32ms) to 1 destination(s)
> 18:53:35,789 DEBUG jgroups.protocols.UDP received (mcast) 61 bytes from 192.168.10.202:32790
> 18:53:35,790 DEBUG jgroups.protocols.UDP message is [dst: 228.8.8.9:45566, src: 192.168.10.202:32789 (2 headers), size = 0 bytes], headers are {UDP=[channel_name=myDB], PING=[PING: type=GET_MBRS_REQ, arg=null]}
> 18:53:35,790 DEBUG jgroups.protocols.PING received GET_MBRS_REQ from 192.168.10.202:32789, sending response [PING: type=GET_MBRS_RSP, arg=[own_addr=192.168.10.203:32784, coord_addr=192.168.10.202:32789, is_server=true]]
> 18:53:35,790 DEBUG jgroups.protocols.UDP sending msg to 192.168.10.202:32789 (src=192.168.10.203:32784), headers are {PING=[PING: type=GET_MBRS_RSP, arg=[own_addr=192.168.10.203:32784, coord_addr=192.168.10.202:32789, is_server=true]], UDP=[channel_name=myDB]}
> 18:53:35,822 DEBUG jgroups.protocols.UDP sending 1 msgs (67 bytes (0.1% of max_bundle_size), collected in 32ms) to 1 destination(s)
> 18:53:36,141 DEBUG protocols.pbcast.NAKACK 192.168.10.203:32784: sending XMIT_REQ ([51, 51]) to 192.168.10.202:32789
> 18:53:36,141 DEBUG jgroups.protocols.UDP sending msg to 192.168.10.202:32789 (src=192.168.10.203:32784), headers are {NAKACK=[XMIT_REQ, range=[51 : 51], sender=192.168.10.202:32789], UDP=[channel_name=myDB]}
> 18:53:36,173 DEBUG jgroups.protocols.UDP sending 1 msgs (80 bytes (0.12% of max_bundle_size), collected in 32ms) to 1 destination(s)
> 18:53:36,790 DEBUG jgroups.protocols.UDP received (mcast) 61 bytes from 192.168.10.202:32790
> 18:53:36,790 DEBUG jgroups.protocols.UDP message is [dst: 228.8.8.9:45566, src: 192.168.10.202:32789 (2 headers), size = 0 bytes], headers are {UDP=[channel_name=myDB], PING=[PING: type=GET_MBRS_REQ, arg=null]}
> 18:53:36,790 DEBUG jgroups.protocols.PING received GET_MBRS_REQ from 192.168.10.202:32789, sending response [PING: type=GET_MBRS_RSP, arg=[own_addr=192.168.10.203:32784, coord_addr=192.168.10.202:32789, is_server=true]]
> 18:53:36,791 DEBUG jgroups.protocols.UDP sending msg to 192.168.10.202:32789 (src=192.168.10.203:32784), headers are {PING=[PING: type=GET_MBRS_RSP, arg=[own_addr=192.168.10.203:32784, coord_addr=192.168.10.202:32789, is_server=true]], UDP=[channel_name=myDB]}
> 18:53:36,823 DEBUG jgroups.protocols.UDP sending 1 msgs (67 bytes (0.1% of max_bundle_size), collected in 32ms) to 1 destination(s)
> 18:53:40,943 DEBUG protocols.pbcast.NAKACK 192.168.10.203:32784: sending XMIT_REQ ([51, 51]) to 192.168.10.202:32789
> 18:53:40,943 DEBUG jgroups.protocols.UDP sending msg to 192.168.10.202:32789 (src=192.168.10.203:32784), headers are {NAKACK=[XMIT_REQ, range=[51 : 51], sender=192.168.10.202:32789], UDP=[channel_name=myDB]}
> 18:53:40,976 DEBUG jgroups.protocols.UDP sending 1 msgs (80 bytes (0.12% of max_bundle_size), collected in 33ms) to 1 destination(s)
> 18:53:44,723 DEBUG jgroups.protocols.UDP received (mcast) 61 bytes from 192.168.10.202:32790
>
> == CONFIG ==
> <config>
>     <UDP mcast_port="45566"
>          mcast_addr="228.8.8.9"
>          tos="16"
>          ucast_recv_buf_size="20000000"
>          ucast_send_buf_size="640000"
>          mcast_recv_buf_size="25000000"
>          mcast_send_buf_size="640000"
>          loopback="false"
>          discard_incompatible_packets="true"
>          max_bundle_size="64000"
>          max_bundle_timeout="30"
>          use_incoming_packet_handler="true"
>          use_outgoing_packet_handler="false"
>          ip_ttl="2"
>          down_thread="false" up_thread="false"
>          enable_bundling="true"/>
>     <PING timeout="2000" num_initial_members="3"
>          down_thread="false" up_thread="false"/>
>     <MERGE2 max_interval="10000" min_interval="5000"
>          down_thread="false" up_thread="false"/>
>     <!-- VERIFY_SUSPECT timeout="1500" down_thread="false"/ -->
>     <pbcast.NAKACK max_xmit_size="60000"
>          use_mcast_xmit="false" gc_lag="0"
>          retransmit_timeout="100,200,300,600,1200,2400,4800"
>          down_thread="false" up_thread="false"
>          discard_delivered_msgs="true"/>
>     <UNICAST timeout="300,600,1200,2400,3600"
>          down_thread="false" up_thread="false"/>
>     <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
>          max_bytes="400000"
>          down_thread="false" up_thread="false"/>
>     <VIEW_SYNC avg_send_interval="60000"
>          down_thread="false" up_thread="false"/>
>     <pbcast.GMS print_local_addr="true" join_timeout="3000"
>          join_retry_timeout="2000" shun="true"
>          handle_concurrent_startup="true"
>          down_thread="false" up_thread="false"/>
>     <SEQUENCER down_thread="false" up_thread="false"/>
>     <FC max_credits="2000000" min_threshold="0.10"
>          down_thread="false" up_thread="false"/>
>     <!-- FRAG2 frag_size="60000" down_thread="false" up_thread="true"/ -->
>     <!-- pbcast.STATE_TRANSFER down_thread="false" up_thread="false"/ -->
> </config>
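
One thing that may be worth checking in the stack above: the UDP transport binds its sockets to one interface, and if JGroups picks eth0 by default, multicast traffic arriving on eth2 never reaches the channel. A minimal sketch of pinning the transport, assuming 192.168.10.203 is this controller's eth2 address (the address value is an assumption taken from the log; `bind_addr` and `receive_on_all_interfaces` are standard JGroups 2.x UDP attributes):

```xml
<!-- Pin the transport to the gigabit interface (address is an assumed
     example); alternatively, receive_on_all_interfaces="true" makes the
     channel listen on every interface. Other attributes as in the
     config above. -->
<UDP mcast_port="45566"
     mcast_addr="228.8.8.9"
     bind_addr="192.168.10.203"
     receive_on_all_interfaces="true"
     ip_ttl="2"
     down_thread="false" up_thread="false"/>
```

Some JGroups releases also honor a bind-address system property on the JVM command line (e.g. `-Djgroups.bind_addr=...`), though the property name varies across versions.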
>
>
> 2008/10/28 Emmanuel Cecchet <[EMAIL PROTECTED]>
>
>     Hi,
>
>     I would recommend that you set the JGroups logger to DEBUG in
>     log4j.properties (log4j.logger.org.jgroups=DEBUG, Console, Filetrace).
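
As a sketch, the relevant log4j.properties fragment might look like this (the `Console` appender definition below is an assumed example; any appenders already defined in the file can be referenced instead):

```properties
# Turn on JGroups debug logging, routed to the Console and Filetrace appenders
log4j.logger.org.jgroups=DEBUG, Console, Filetrace

# Example console appender (assumed; reuse existing appenders if defined).
# The pattern matches the "18:53:18,164 DEBUG protocols.pbcast.NAKACK ..."
# format seen in the log below.
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.PatternLayout
log4j.appender.Console.layout.ConversionPattern=%d{ABSOLUTE} %-5p %c{2} %m%n
```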
>
>     JGroups also uses UDP unicast to send acknowledgment messages and
>     missing packets; if the route is not properly set, it will not work.
>     Note that the demo does not use the same stack (no total order) as
>     Sequoia. Sequoia does not bind any port as far as the group
>     communication is concerned, so this is a problem in your group
>     communication setup.
>
>     You can try to seek support on the JGroups mailing list by providing
>     them with your JGroups config file and your network config. You can
>     also try running the JGroups demo with the JGroups config file you
>     use for Sequoia to see if it works.
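
For example (a sketch; the jar name and config path are assumptions, and the test classes below ship with JGroups 2.x):

```shell
# Raw multicast sanity check, bypassing the protocol stack: start the
# receiver on one controller, the sender on the other, using the same
# group address and port as the Sequoia stack.
java -cp jgroups-all.jar org.jgroups.tests.McastReceiverTest \
     -mcast_addr 228.8.8.9 -port 45566
java -cp jgroups-all.jar org.jgroups.tests.McastSenderTest \
     -mcast_addr 228.8.8.9 -port 45566

# Full-stack check with the exact configuration used by Sequoia:
java -cp jgroups-all.jar org.jgroups.demos.Draw -props ./myDB-jgroups.xml
```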
>
>     Hope this helps,
>     Emmanuel
>
>     Bogon Choi wrote:
>     > I tested with tcpdump. I have two interfaces, eth0 and eth2
>     > (gigabit).
>     > When I test with eth0 and its front-end switch, UDP multicast works
>     > perfectly. However, when I test with eth2 and the gigabit back-end
>     > switch, it still does not work.
>     >
>     > So I ran tcpdump to watch the packets on eth2 on both sides. The
>     > interesting thing is that I can see the multicast packets on both
>     > sides' eth2 interfaces. Given this, I don't know why the two
>     > controllers cannot receive those packets through that interface.
>     >
>     > So I assume that the controller does not know how to listen on the
>     > second interface, eth2. Maybe it can only listen on the first one,
>     > eth0, for UDP communication. Maybe something is hard-coded.
>     >
>     > I totally don't understand the current situation. Packets are
>     > floating around on the gigabit network, but they cannot be received
>     > on either controller. Can you give me some ideas? I also want to
>     > mention that the simple JGroups test worked with the gigabit switch.
>     >
>     > In the worst case, I have in mind switching the order of the
>     > interfaces, making the gigabit interface eth0.
>     > Thanks.
>     >
>     >
>     > 2008/10/22 Bogon Choi <[EMAIL PROTECTED]>
>     >
>     > Thanks for your advice.
>     > I tested with the JGroups sample, and it worked with the current
>     > routing table.
>     > Today I sent an email to the network administrator to check the
>     > physical switch settings.
>     > As you said, the switch might be filtering UDP multicast packets.
>     > Thanks anyway.
>     >
>     > I am waiting for a reply from the admin. In the worst case, I will
>     > have to set up tcpdump to trace all network stacks and routes.
>     >
>     >
>     >
>     > On Wed, Oct 22, 2008 at 10:55 AM, Emmanuel Cecchet
>     > <[EMAIL PROTECTED]> wrote:
>     >
>     > Hi,
>     >
>     > After showing this error, two controllers are partitioned.
>     >
>     > Actually they never saw each other and they were never in a group.
>     > Make sure that your switch supports UDP multicast. You can use
>     > the JGroups demo and test your network config until the demo
>     > works. Once you have your network settings right, you can try
>     > to use Sequoia.
>     > If you are using Linux, make sure that your kernel supports IP
>     > multicast.
>     > Use a tool like tcpdump or ethereal to check how packets are
>     > routed on your machine.
>     >
>     > Hope this helps,
>     > Emmanuel
>     >
>     >
>     > I am using Appia for UDP multicast. When I type "enable
>     > <backend>" from the other controller, the prompt gets stuck and
>     > the console shows a bunch of
>     > nakfifo.multicast.NakFifoMulticastSession nacked messages.
>     > The controllers then become partitioned from each other and
>     > finally no longer see each other at all.
>     >
>     > I was testing how to replicate a virtual database across two
>     > controllers. Each virtual database was configured to use one
>     > backend on a different node, with RAIDb-1.
>     >
>     > At first, I tried to solve this problem using JGroups, but
>     > JGroups also got stuck at enabling the backend. So I moved to
>     > Appia. Appia worked with the TCP setup, but it does not work
>     > with UDP multicast.
>     >
>     > I turned off the firewalls, so this is not a firewall problem.
>     >
>     > I added this entry to the routing table:
>     > /sbin/route add -net 224.0.0.0 netmask 240.0.0.0 dev eth2
>     >
>     >
>     > Following is the routing table of a controller:
>     >
>     > Kernel IP routing table
>     > Destination   Gateway     Genmask        Flags Metric Ref Use Iface
>     > 10.60.10.0    *           255.255.255.0  U     0      0   0   eth0
>     > 192.168.10.0  *           255.255.255.0  U     0      0   0   eth2
>     > 224.0.0.0     *           240.0.0.0      U     0      0   0   eth2
>     > default       10.60.10.1  0.0.0.0        UG    0      0   0   eth0
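
To narrow down whether the packets die on the wire or in the application, it may help to capture on eth2 with the exact group address and port, and to ask the kernel which interface it would pick for the group (a sketch; the interface name and addresses are taken from the tables and logs above):

```shell
# Watch discovery/multicast traffic on the gigabit interface
tcpdump -i eth2 -n host 228.8.8.9 and udp port 45566

# Show which interface the kernel would use for the multicast group
ip route get 228.8.8.9
```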
>     >
>     >
>     > Actually, I wanted to use gigabit ethernet for all product
>     > communication, so I forwarded gossip and UDP packets to eth2,
>     > the gigabit interface. When I used the TCP setup, it used eth0
>     > entirely. What is the problem?
>     >
>     >
>     ------------------------------------------------------------------------
>     >
>     > _______________________________________________
>     > Sequoia mailing list
>     > [email protected]
>     > https://forge.continuent.org/mailman/listinfo/sequoia
>     >
>     >
>     >
>     > --
>     > Emmanuel Cecchet
>     > FTO @ Frog Thinker Open Source Development & Consulting
>     > --
>     > Web: http://www.frogthinker.org
>     > email: [EMAIL PROTECTED]
>     > Skype: emmanuel_cecchet
>     >
>     >
>     >
>     >
>     >
>     > --
>     > May the LORD bless you and keep you;
>     > may the LORD make his face shine upon you and be gracious to you;
>     > may the LORD lift up his countenance upon you and give you peace.
>     > (Numbers 6:24-26)
>     >
>     >
>     >
>     >
>     >
>     ------------------------------------------------------------------------
>     >
>
>
>     --
>     Emmanuel Cecchet
>     FTO @ Frog Thinker
>     Open Source Development & Consulting
>     --
>     Web: http://www.frogthinker.org
>     email: [EMAIL PROTECTED]
>     Skype: emmanuel_cecchet
>
>
>
>
>
> ------------------------------------------------------------------------
>


-- 
Emmanuel Cecchet
FTO @ Frog Thinker 
Open Source Development & Consulting
--
Web: http://www.frogthinker.org
email: [EMAIL PROTECTED]
Skype: emmanuel_cecchet

_______________________________________________
Sequoia mailing list
[email protected]
https://forge.continuent.org/mailman/listinfo/sequoia
