Alternatively, try an explicit address in the Receiver configuration:
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
          address="auto"
          port="4001"
          autoBind="100"
          selectorTimeout="5000"
          maxThreads="6"/>

Instead of address="auto", try address="192.168.1.43". This should alter the log
displayed at start-up, and I would be very interested to know whether you still
have the problem.
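
For reference, here is the same Receiver element with an explicit address filled
in (a minimal sketch; 192.168.1.43 is only an example, substitute the network IP
your machine actually uses):

<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
          address="192.168.1.43"
          port="4001"
          autoBind="100"
          selectorTimeout="5000"
          maxThreads="6"/>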



On Thu, Sep 19, 2013 at 10:35 AM, Vince Stewart <stewart.vi...@gmail.com> wrote:

> Hi Nicholas,
>
> I am a bit of a novice, but I did have a very similar problem when I
> started using the clustering modules.
> My Tomcat output was referring to localhost (10.x.x.x) addresses while my
> netstat was reporting LISTEN on network addresses (192.x.x.x:400?).
> You have the same disparity. My system operated as expected after I
> registered my machine's network IP address in the Linux file /etc/hosts.
> Once I did that, the Tomcat clustering logs started reporting membership
> with network addresses instead of localhost addresses.
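
A minimal sketch of the kind of /etc/hosts entry this refers to (the hostname
and IP below are placeholders; use your machine's real hostname and its network
IP):

# /etc/hosts: map the machine's own hostname to its network IP
192.168.1.43    myhost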
>
>
>
>
>
> On Thu, Sep 19, 2013 at 2:37 AM, Mark Eggers <its_toas...@yahoo.com> wrote:
>
>> On 9/18/2013 6:00 AM, Nicholas Violi wrote:
>>
>>> Thanks Daniel.
>>>
>>> On Tue, Sep 17, 2013 at 5:30 PM, Daniel Mikusa <dmik...@gopivotal.com> wrote:
>>>
>>>>
>>>> Tried a quick two-node setup on my Mac w/out HTTPD and it worked OK. Going
>>>> to one Tomcat instance's port in Chrome increments the counter in my app.
>>>> Refresh a few times, open a second tab, and go to the second Tomcat
>>>> instance's port. The counter picks up where it left off and continues
>>>> incrementing. Flipping back and forth between tabs / servers works fine.
>>>>
>>>> Here's the cluster config that I used in case it helps.
>>>>
>>>>              <Cluster channelSendOptions="8"
>>>>                       className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
>>>>                  <Manager className="org.apache.catalina.ha.session.DeltaManager"
>>>>                           expireSessionsOnShutdown="false"
>>>>                           notifyListenersOnReplication="true"/>
>>>>                  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
>>>>                      <Membership address="228.0.0.4"
>>>>                                  className="org.apache.catalina.tribes.membership.McastService"
>>>>                                  dropTime="3000"
>>>>                                  frequency="500"
>>>>                                  port="45564"/>
>>>>                      <Receiver address="auto"
>>>>                                autoBind="100"
>>>>                                className="org.apache.catalina.tribes.transport.nio.NioReceiver"
>>>>                                maxThreads="6"
>>>>                                port="4000"
>>>>                                selectorTimeout="5000"/>
>>>>                      <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
>>>>                          <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
>>>>                      </Sender>
>>>>                      <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
>>>>                      <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
>>>>                  </Channel>
>>>>                  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
>>>>                         filter=""/>
>>>>                  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
>>>>                  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
>>>>                  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
>>>>              </Cluster>
>>>>
>>>>
>>> Just tried this, with the same results. My test for whether replication is
>>> working is to access my webapp on the two ports and monitor the session
>>> counter and session list in the Tomcat manager; as I said before, I can only
>>> see the sessions created on the server attached to that manager instance. Is
>>> that a reasonable test? With the clustering config pretty well ruled out as
>>> the culprit, maybe my webapp is not dealing with sessions appropriately?
>>> Would you mind sending me your counter test app?
>>>
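One thing worth double-checking on the webapp side (an assumption, not something
established in this thread): DeltaManager only replicates sessions for webapps
that are marked distributable, and every attribute stored in the session must
implement java.io.Serializable. A minimal web.xml sketch:

<!-- web.xml: the <distributable/> element tells Tomcat this webapp's sessions may be replicated -->
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
    <distributable/>
    <!-- session attributes must be Serializable for replication to work -->
</web-app>
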
>>> Beyond that, have you tried increasing the log levels?
>>>
>>>
>>> I found conflicting information about enabling logging. What I had
>>> previously was
>>>
>>> org.apache.catalina.tribes.level = FINE
>>> org.apache.catalina.tribes.MESSAGES = FINE
>>>
>>> in logging.properties, which was reporting the FINE log statements in my
>>> original post. I just added some more:
>>>
>>> org.apache.catalina.ha.level = FINE
>>> org.apache.catalina.ha.session.level = FINE
>>> org.apache.catalina.ha.session.DeltaManager.level = FINE
>>> org.apache.catalina.ha.tcp.level = FINE
>>> org.apache.catalina.ha.tcp.ReplicationValve.level = FINE
>>> org.apache.catalina.ha.session.ClusterSessionListener.level = FINE
>>> org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener.level = FINE
>>>
>>> And I still don't see any messages when interacting with the webapp in
>>> the
>>> browser. Are there any other classes I should be logging?
>>>
>>> Thanks,
>>> Nick
>>>
>>>
>> Copy-pasted from a message I sent to the mailing list about 3 weeks ago:
>>
>> It's been a while since I've played with this, so your mileage may vary.
>>
>> # wrapped for easier reading
>> # added one additional handler
>>
>> handlers = 1catalina.org.apache.juli.FileHandler,
>>            2localhost.org.apache.juli.FileHandler,
>>            3manager.org.apache.juli.FileHandler,
>>            4host-manager.org.apache.juli.FileHandler,
>>            java.util.logging.ConsoleHandler,
>>            5cluster.org.apache.juli.FileHandler
>>
>> # just the new cluster log handler - all others are stock
>> # logging.properties
>> # beware of the wrapping
>>
>> 5cluster.org.apache.juli.FileHandler.level = FINER
>> 5cluster.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
>> 5cluster.org.apache.juli.FileHandler.prefix = cluster.
>>
>> # just the clustering logs - all others are stock logging.properties
>> org.apache.catalina.tribes.MESSAGES.level = FINE
>> org.apache.catalina.tribes.MESSAGES.handlers = 5cluster.org.apache.juli.FileHandler
>>
>> org.apache.catalina.tribes.level = FINE
>> org.apache.catalina.tribes.handlers = 5cluster.org.apache.juli.FileHandler
>>
>> org.apache.catalina.ha.level = FINE
>> org.apache.catalina.ha.handlers = 5cluster.org.apache.juli.FileHandler
>>
>> org.apache.catalina.ha.deploy.level = INFO
>> org.apache.catalina.ha.deploy.handlers = 5cluster.org.apache.juli.FileHandler
>>
>> Set logging at the desired level.
>>
>> I think I've posted this to the mailing list before . . .
>>
>> /mde/
>>
>>
>>
>>
>
>
> --
> Vince Stewart
>



-- 
Vince Stewart
