mod_jk Connector: version question

2006-04-02 Thread Paul Smith

I notice here:

http://www.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/

that there are various 'stable' versions for certain O/S.  In
particular I notice that mod_jk 1.2.15 is considered stable for
Solaris and w32 but not Linux.


Any reason? We've done a lot of performance testing with a compiled-
from-source 1.2.15 on RedHat, but would really like a pre-compiled
version if we can move away from that (we'd rather use the one
everyone uses than one we built).


The changelog for 1.2.15 would seem to indicate that it's the version
we should be using (the issues fixed don't seem O/S-specific).


thoughts?

cheers,

Paul Smith




Re: Cluster configuration

2006-01-11 Thread Paul Smith
Aren't you missing the mcastBindAddr property?  We're just going
through the test setup for clustering with 5.5.15 and it's working
pretty well; here's a snippet from our config:


<Engine name="Aconex" defaultHost="192.168.0.219" jvmRoute="worker2">

  <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
         resourceName="UserDatabase" />

  <Host name="192.168.0.219" appBase="foo">

    <Context path="/" docBase="${catalina.home}/app/mel/"
             distributable="true">
    </Context>

    <Cluster clusterName="ClusterTest"
             className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
             service.mcastBindAddr="192.168.0.219"
             receiver.tcpListenAddress="192.168.0.219"
             tcpThreadCount="2">
    </Cluster>

  </Host>

</Engine>

The mcastBindAddr and tcpListenAddress need to be different for each
node in the cluster.
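
For example, the corresponding Cluster element on the other node should
only differ in those two addresses; a minimal sketch, assuming that node
is bound to 192.168.0.218:

<!-- Sketch only: the second node's Cluster element; the address is an
     assumption, substitute that node's own IP -->
<Cluster clusterName="ClusterTest"
         className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         service.mcastBindAddr="192.168.0.218"
         receiver.tcpListenAddress="192.168.0.218"
         tcpThreadCount="2">
</Cluster>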


Also note that I would very much recommend using the latest mod_jk
1.2.15; together with Tomcat 5.5.15 it is looking pretty good.


cheers,

Paul

On 12/01/2006, at 11:17 AM, bradley mclain wrote:


I am unable to get a 2 node cluster to work.  I am using 5.5.15, as I
saw in an email that prior versions are broken.

When I start node A I see the following written to the log:

1064  INFO [main] - Manager [localhost/cluster]: skipping state
transfer. No members active in cluster group.

I expect that.  When I start node B (after A is started) I see the same
snippet:

1219  INFO [main] - Manager [localhost/cluster]: skipping state
transfer. No members active in cluster group.

I do not expect that.

Obviously the two nodes are not finding each other.  I have pinged
228.0.0.4 from those servers, as well as 228.0.0.1, and receive
responses from each of the servers, so I _believe_ Multicast is
operating properly, though I am not a network guy.

I am using the server.xml obtained from the documentation (attached
below), and I have carefully followed all instructions in the clustering
documentation, though perhaps not carefully enough ;-).

My servers are not multi-homed, thus I have tried both with and without
the mcastBindAddress attribute, but it makes no difference.

Can anyone suggest what I may be missing here?

cheers,
Bradley McLain

<Server port="8011" shutdown="SHUTDOWN">

  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <Service name="Catalina">

    <Connector port="80" maxThreads="100" minSpareThreads="4"
               maxSpareThreads="4" />

    <Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">

      <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
             resourceName="UserDatabase" />

      <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
               doClusterLog="true"
               clusterLogName="clusterlog"
               manager.className="org.apache.catalina.cluster.session.DeltaManager"
               manager.expireSessionsOnShutdown="false"
               manager.notifyListenersOnReplication="false"
               manager.notifySessionListenersOnReplication="false"
               manager.sendAllSessions="false"
               manager.sendAllSessionsSize="500"
               manager.sendAllSessionsWaitTime="20">

        <Membership
            className="org.apache.catalina.cluster.mcast.McastService"
            mcastAddr="228.0.0.4"
            mcastClusterDomain="d10"
            mcastPort="45564"
            mcastFrequency="1000"
            mcastDropTime="3"/>

        <Receiver
            className="org.apache.catalina.cluster.tcp.ReplicationListener"
            tcpListenAddress="auto"
            tcpListenPort="9015"
            tcpSelectorTimeout="100"
            tcpThreadCount="6" />

        <Sender
            className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
            replicationMode="fastasyncqueue"
            doTransmitterProcessingStats="true"
            doProcessingStats="true"
            doWaitAckStats="true"
            queueTimeWait="true"
            queueDoStats="true"
            queueCheckLock="true"
            ackTimeout="15000"
            waitForAck="true"
            keepAliveTimeout="8"
            keepAliveMaxRequestCount="-1"/>

        <Valve

JDBC Session Persistence in a cluster problem/question

2006-01-09 Thread Paul Smith

From: [EMAIL PROTECTED]
Subject: JDBC Session persistence in a cluster problem/question
Date:   10 January 2006 5:42:51 PM
To:   [EMAIL PROTECTED]

[For context, Tomcat 5.5.12]

Hello all,

I'm having a bit of trouble understanding exactly what the
capabilities of the whole PersistentManager are and how it saves
session data.  Here's what we have configured in server.xml:


<Host name="localhost" appBase="foo">
  <Context path="/" docBase="${catalina.home}/app/mel/">
    <Manager className="org.apache.catalina.session.PersistentManager">
      <Store className="org.apache.catalina.session.JDBCStore"
             connectionURL="jdbc:inetdae7:tower.aconex.com?database=paul&amp;user=sql&amp;password=sql"
             driverName="com.inet.tds.TdsDriver"
             sessionIdCol="session_id"
             sessionValidCol="valid_session"
             sessionMaxInactiveCol="max_inactive"
             sessionLastAccessedCol="last_access"
             sessionTable="tomcat_sessions"
             sessionAppCol="app_name"
             sessionDataCol="session_data"
      />
    </Manager>
  </Context>
</Host>

Sessions are persisting, as we can see the new rows being added to  
the DB.  Fine, great.


However, in our test cluster environment we have noticed that session
variables (strings, Integers) are being lost during the failover to
the other node in the cluster.  From looking at the Javadoc of the
PersistentManager and other related info on the net, I can't see
anywhere that indicates the Session is persisted to the DB when the
session is updated/modified/added to.  It seems to only have settings
that set how long to wait before saving to the persistence store.
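
(For illustration only: the idle/backup attributes on the Manager itself
are the settings I mean; a minimal sketch with illustrative values, not
recommendations, wrapping the same Store element as above:)

<!-- Sketch only: timing attributes that control when idle sessions are
     written out to the Store -->
<Manager className="org.apache.catalina.session.PersistentManager"
         maxIdleBackup="10"
         maxIdleSwap="120"
         minIdleSwap="60">
  <!-- Store element exactly as configured above -->
</Manager>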


1) Am I correct in my understanding so far?
2) If so, is this design because of the likely performance impact of
all these session changes, and the somewhat unlikely case of the
server going down in the first place?
3) Do I have any options here?  We really do need a pretty seamless
failover, with session information being kept in sync as it fails over
to the other node in the cluster.
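
(For reference on (3): the in-memory session replication used in the
clustering thread above pushes deltas to the other node as sessions
change, rather than persisting on a timer; a minimal sketch, reusing the
settings quoted in that thread and assuming its Membership/Receiver/
Sender elements are also configured:)

<!-- Sketch only: replicate sessions with SimpleTcpCluster/DeltaManager;
     needs distributable="true" on the Context and the nested Membership,
     Receiver and Sender elements shown in the clustering thread -->
<Context path="/" docBase="${catalina.home}/app/mel/" distributable="true"/>
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         manager.className="org.apache.catalina.cluster.session.DeltaManager"
         manager.expireSessionsOnShutdown="false">
  <!-- Membership / Receiver / Sender as in the clustering thread -->
</Cluster>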


cheers,

Paul Smith



Apache+Tomcat+AJP: failover error, expected?

2006-01-09 Thread Paul Smith

context: Apache 2.0.54, Tomcat 5.5.12, jk 1.2.15, ajp 1.3

I am observing something which I think I understand, but I want to
make sure it is expected behaviour and not something I should attempt
to fix.  See the config at the end of this email for more information.


We have configured Apache 2 to have a load-balancer between 2 Tomcat  
instances and configured it as per standard configurations (what we  
believe to be standard configurations).  When doing the following  
failover test:


1) Start up both tomcat nodes, and 'warm' them up by ensuring that  
the sticky session OR round-robin logic (we've tried both) routes  
some requests to both nodes.
2) Pause the submission of requests to the cluster just for testing  
purposes, then shut down one of the tomcat nodes cleanly via shutdown  
command leaving one clean node in the cluster.
3) The _first_ request that comes into the cluster that would normally be
'stickied'/stuck/round-robin'd to the now-deceased node is shown a
500 error.  Subsequent requests are moved successfully over to the
remaining node.


If I think about this a bit, it seems reasonable to design the ajp
connectors this way, because checking the state of the worker before
sending it a request would introduce quite a lot of overhead and would
probably reduce performance.  The design is 'optimistic' in that it
assumes that for the majority of the time the tomcat node will be
there.  Unfortunately the first request in gets nuked.
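
(For what it's worth, mod_jk 1.2.x also documents per-worker probing
properties that let the load balancer notice a dead backend before a
request is forwarded; a sketch only, with illustrative rather than tuned
values:)

# Sketch only: cping/cpong-style probing so the lb worker detects a dead
# node before forwarding the request (values illustrative, not tuned)
worker.worker1.connect_timeout=10000
worker.worker1.prepost_timeout=10000
worker.worker2.connect_timeout=10000
worker.worker2.prepost_timeout=10000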


Is this the correct and expected behaviour in this scenario?  Or have  
we simply configured something incorrectly?  Perhaps that's just The  
Way It Is?


cheers,

Paul Smith

--begin configs --

[EMAIL PROTECTED] conf]# more workers.properties
# Define workers
worker.list=loadbalancer,status
worker.maintain=5

# -
# First Tomcat Server: worker1 (ajp13)
# -
#
# Talk to Tomcat listening on machine mel.neil.aconex.com at port 8009
worker.worker1.type=ajp13
worker.worker1.host=192.168.0.218
worker.worker1.port=8009
worker.worker1.lbfactor=1
# Use up to 10 sockets, which will stay no more than 10min in cache
worker.worker1.cachesize=10
worker.worker1.cache_timeout=600
# Ask operating system to send KEEP-ALIVE signal on the connection
worker.worker1.socket_keepalive=1
# Want ajp13 connection to be dropped after 10secs (recycle)
worker.worker1.socket_timeout=300

# -
# Second Tomcat Server: worker2 (ajp13)
# -
#
# Talk to Tomcat listening on machine mel.neil.aconex.com at port 8010
worker.worker2.type=ajp13
worker.worker2.host=192.168.0.219
worker.worker2.port=8009
worker.worker2.lbfactor=0
# Use up to 10 sockets, which will stay no more than 10min in cache
worker.worker2.cachesize=10
worker.worker2.cache_timeout=600
# Ask operating system to send KEEP-ALIVE signal on the connection
worker.worker2.socket_keepalive=1
# Want ajp13 connection to be dropped after 10secs (recycle)
worker.worker2.socket_timeout=300

# -
# Load Balancer worker: loadbalancer
# -
#
# The loadbalancer (type lb) worker performs weighted round-robin
# load balancing with non-sticky sessions.
# Note:
#   If a worker dies, the load balancer will check its state
#    once in a while. Until then all work is redirected to peer
#    worker.
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=worker1,worker2
worker.loadbalancer.sticky_session=false
worker.loadbalancer.recover=65
# -
# Status worker: status
# -
#
worker.status.type=status



[EMAIL PROTECTED] conf]# more httpd.conf
#
# Based upon the NCSA server configuration files originally by Rob McCool.

#


[snip]


#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.
#
# Example:
# LoadModule foo_module modules/mod_foo.so

LoadModule jk_module modules/mod_jk.so


[snip]


#
# Bring in additional module-specific configurations
#
<IfModule mod_ssl.c>
Include conf/ssl.conf
</IfModule>

#
# Configure module: mod_jk
#
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkRequestLogFormat "%w %V %T"
JkOptions +ForwardKeySize
JkShmFile share/mod_jk.shm

JkMount /* loadbalancer
JkUnMount /*.html loadbalancer
JkUnMount /*.ico loadbalancer
JkUnMount /html/*/*.html loadbalancer
JkUnMount /html/*/*.htm loadbalancer
JkUnMount /html/*/*.ico loadbalancer
JkUnMount /html/*/*.gif loadbalancer
JkUnMount /html/*/*.jpg loadbalancer
JkUnMount /html