SSL Appliance and Tomcat5.0.30/CoyoteConnector
We employ an SSL appliance in front of servers running Tomcat 5 stand-alone and wish to configure the CoyoteConnector as we have done previously with Tomcat 4:

<Connector className="org.apache.coyote.tomcat5.CoyoteConnector"
           port="8543" minProcessors="16" maxProcessors="384"
           enableLookups="false" acceptCount="128" debug="0"
           connectionTimeout="30" scheme="https" secure="true"
           disableUploadTimeout="true"
           proxyName="localhost" proxyPort="443"/>

However, since secure="true" is specified, Http11Protocol.checkSocketFactory() in Tomcat 5.0.30 attempts to create a secure SSL ServerSocketFactory. Of course, this fails since the other SSL configuration parameters are not present. Since we are behind an SSL appliance, we really want just an ordinary ServerSocketFactory to be used and request.isSecure() to return true within our web applications. As I noted above, a similar configuration used to work for us on Tomcat 4.1. Short of implementing a custom SSLImplementation and configuring it using an embedded <Factory/> tag within the Connector definition, is there any way to force Tomcat 5.0/Http11Protocol to use a default ServerSocketFactory? Randy Watler - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
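For reference, the custom-SSLImplementation route mentioned above would look roughly like the sketch below. The com.example class name is a placeholder for a factory you would have to write yourself (one that hands back a plain ServerSocketFactory while the connector still reports connections as secure); it is not a class that ships with Tomcat:

```xml
<!-- Sketch only: com.example.ApplianceServerSocketFactory is a
     hypothetical custom factory, not part of Tomcat 5. -->
<Connector className="org.apache.coyote.tomcat5.CoyoteConnector"
           port="8543" enableLookups="false"
           scheme="https" secure="true"
           proxyName="localhost" proxyPort="443">
  <Factory className="com.example.ApplianceServerSocketFactory"/>
</Connector>
```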
Tomcat5 MBeans/JMX context creation
I am currently experimenting with the Java 1.5 remote JMX/MBeans support for use in our build process. In particular, I am attempting to install a new context in a running Tomcat 5.0.18 server using an external Ant task/application (without using the manager webapp). I found code in the admin webapp to do this, but it does not seem to configure the WebappClassLoader instances associated with the context correctly. By examining the Tomcat5 source, I was able to find a workaround that seems to function correctly, but I would like to verify it with someone who knows how the MBean API is intended to be used. Here is what the admin webapp is doing to install a new context:

ObjectName host = ...;
String contextPath = "/testsite";
String webappDocBase = "/tmp/testsite";
String newContext = (String) mbeanServer.invoke(factory, "createStandardContext",
    new Object[] { host.toString(), "/testsite", "/tmp/testsite" },
    new String[] { "java.lang.String", "java.lang.String", "java.lang.String" });
mbeanServer.invoke(factory, "createWebappLoader",
    new Object[] { newContext },
    new String[] { "java.lang.String" });
mbeanServer.invoke(factory, "createStandardManager",
    new Object[] { newContext },
    new String[] { "java.lang.String" });

Accessing a JSP page or servlet in this new context results in trying to use an unstarted ClassLoader, which results in a ThreadDeath exception.
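As a sanity check on the invoke() call shape itself (ObjectName, operation name, Object[] params, String[] signature), here is a self-contained sketch against a throwaway local MBeanServer. The Greeter MBean is invented purely for illustration and has nothing to do with Tomcat's MBeanFactory:

```java
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

public class JmxInvokeDemo {
    // Standard MBean: the management interface must be named <Class>MBean.
    public interface GreeterMBean {
        String greet(String name);
    }

    public static class Greeter implements GreeterMBean {
        public String greet(String name) { return "hello " + name; }
    }

    public static String demo() throws Exception {
        MBeanServer server = MBeanServerFactory.createMBeanServer();
        ObjectName name = new ObjectName("demo:type=Greeter");
        server.registerMBean(new Greeter(), name);

        // Same invoke(ObjectName, operation, params, signature) shape
        // used against the Tomcat MBeanFactory in the post above.
        return (String) server.invoke(
            name, "greet",
            new Object[] { "world" },
            new String[] { "java.lang.String" });
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "hello world"
    }
}
```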
The following MBeans are created (note the odd Catalina:type=Loader instance):

Catalina:type=Loader,path=/testsite,host=localhost
Catalina:type=NamingResources,resourcetype=Context,path=/testsite,host=localhost
Standalone:j2eeType=Servlet,name=CookiesDump,WebModule=//localhost/testsite,J2EEApplication=none,J2EEServer=none
Standalone:j2eeType=Servlet,name=default,WebModule=//localhost/testsite,J2EEApplication=none,J2EEServer=none
Standalone:j2eeType=Servlet,name=jsp,WebModule=//localhost/testsite,J2EEApplication=none,J2EEServer=none
Standalone:j2eeType=WebModule,name=//localhost/testsite,J2EEApplication=none,J2EEServer=none
Standalone:type=Cache,host=localhost,path=/testsite
Standalone:type=Loader,path=/testsite,host=localhost
Standalone:type=Manager,path=/testsite,host=localhost

If the code above is modified to remove the call to createWebappLoader, the webapp is loaded correctly and the exported MBeans match what is available when the same webapp is deployed directly in the Tomcat5 webapps directory at startup:

Catalina:type=NamingResources,resourcetype=Context,path=/testsite,host=localhost
Standalone:j2eeType=Servlet,name=CookiesDump,WebModule=//localhost/testsite,J2EEApplication=none,J2EEServer=none
Standalone:j2eeType=Servlet,name=default,WebModule=//localhost/testsite,J2EEApplication=none,J2EEServer=none
Standalone:j2eeType=Servlet,name=jsp,WebModule=//localhost/testsite,J2EEApplication=none,J2EEServer=none
Standalone:j2eeType=WebModule,name=//localhost/testsite,J2EEApplication=none,J2EEServer=none
Standalone:type=Cache,host=localhost,path=/testsite
Standalone:type=Loader,path=/testsite,host=localhost
Standalone:type=Manager,path=/testsite,host=localhost

It seems that the invocation of the createWebappLoader operation ends up creating an extra Loader instance somehow, which confuses the ClassLoader configuration for the webapp context.
I verified that the createStandardContext operation does indeed construct a Loader when start() is invoked on the context container. Apparently, either createStandardContext is not supposed to create the Loader, or the createWebappLoader operation should not be used in the admin webapp... then again, I might be missing something! Can anyone clarify how createStandardContext should be invoked? Randy Watler Finali Corporation
Re: Tomcat5 MBeans/JMX context creation
Remy Maucherat wrote: Randy Watler wrote: I am currently experimenting with the Java 1.5 remote JMX/MBeans for use in our build process. ... Can anyone clarify how the createStandardContext should be invoked? You can look at the JBoss integration code for examples. ... There's additional code, but this is for creating a basic context and configuring its classloader. The Tomcat code which allows doing this is a lot more recent than the hardcoded admin webapp methods, and is more JavaBean-like.

Remy, Note that I am using the JMX remote out-of-the-box Java 1.5 monitoring and management, so some of the JBoss setup may not apply (like the ClassLoader configuration that passes object instances). Here is what I am trying:

// create context
mbeanServer.createMBean("org.apache.commons.modeler.BaseModelMBean", context,
    new Object[] { "org.apache.catalina.core.StandardContext" },
    new String[] { "java.lang.String" });
mbeanServer.setAttribute(context, new Attribute("docBase", "/tmp/testsite"));
mbeanServer.setAttribute(context, new Attribute("path", "/testsite"));
// start context
mbeanServer.invoke(context, "start", null, null);

Unfortunately, the createMBean call above throws the following exception:

javax.management.ReflectionException: The MBean class could not be loaded by the default loader repository
    at com.sun.jmx.mbeanserver.MBeanInstantiatorImpl.findClassWithDefaultLoaderRepository(MBeanInstantiatorImpl.java:61)
    at com.sun.jmx.mbeanserver.MBeanInstantiatorImpl.instantiate(MBeanInstantiatorImpl.java:378)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.instantiate(JmxMBeanServer.java:1011)
    at com.sun.jmx.remote.security.MBeanServerAccessController.createMBean(MBeanServerAccessController.java:167)
    at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1418)
    at javax.management.remote.rmi.RMIConnectionImpl.access$100(RMIConnectionImpl.java:81)
    at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1301)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1395)
    at javax.management.remote.rmi.RMIConnectionImpl.createMBean(RMIConnectionImpl.java:337)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:494)
    at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:294)
    at sun.rmi.transport.Transport$1.run(Transport.java:153)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.Transport.serviceCall(Transport.java:149)
    at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:460)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:701)
    at java.lang.Thread.run(Thread.java:566)
    at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:247)
    at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:223)
    at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:126)
    at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
    at javax.management.remote.rmi.RMIConnectionImpl_Stub.createMBean(Unknown Source)
    at javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.createMBean(RMIConnector.java:662)
    at InstallTask.execute(InstallTask.java:84)
    ... 1 more
Caused by: java.lang.ClassNotFoundException: org.apache.commons.modeler.BaseModelMBean
    at com.sun.jmx.mbeanserver.ClassLoaderRepositorySupport.loadClass(ClassLoaderRepositorySupport.java:208)
    at com.sun.jmx.mbeanserver.ClassLoaderRepositorySupport.loadClass(ClassLoaderRepositorySupport.java:128)
    at com.sun.jmx.mbeanserver.MBeanInstantiatorImpl.findClassWithDefaultLoaderRepository(MBeanInstantiatorImpl.java:58)
    at com.sun.jmx.mbeanserver.MBeanInstantiatorImpl.instantiate(MBeanInstantiatorImpl.java:378)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.instantiate(JmxMBeanServer.java:1011)
    at com.sun.jmx.remote.security.MBeanServerAccessController.createMBean(MBeanServerAccessController.java:167)
    at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1418)
    at javax.management.remote.rmi.RMIConnectionImpl.access$100(RMIConnectionImpl.java:81)
    at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1301)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1395
Re: [OT] Re: linux : Limit of file descriptor
Francois: We have not run into an FD limit within the JVM using Tomcat (only the aforementioned user limits that can be modified using ulimit). I too have read of OS/JVM limits surrounding some operations like select() for the NIO package, but these apparently are not limits on the Tomcat or Coyote Connector implementations AFAIK. Randy Watler Finali Corporation Francois JEANMOUGIN wrote: Can you post the contents of the following files: /proc/sys/fs/file-max As already mentioned: 209713 /proc/sys/fs/file-nr 2332 1398 209713 So, 2332 FDs opened, 1398 used. I found a message talking about a hard-coded limit of 1024 in the JVM... Is it a per-thread limit or ?
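For anyone checking their own boxes, the relevant numbers can be read as follows. This is a generic sketch; the /proc paths are Linux-specific, and the 8192 value at the end is just an illustrative choice:

```shell
# Per-process descriptor limit for this shell and any child JVM:
ulimit -n

# Linux system-wide figures: maximum FDs, then "allocated free max"
# (guarded so the commands are harmless on non-Linux systems):
cat /proc/sys/fs/file-max 2>/dev/null || true
cat /proc/sys/fs/file-nr 2>/dev/null || true

# Raise the per-process limit before launching Tomcat (needs an
# adequate hard limit, or root):
# ulimit -n 8192
```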
Re: linux : Limit of file descriptor
Francois, For what it is worth, we run vanilla RedHat 8.0 servers and have to bump up the number of file handles available to Tomcat using ulimit (otherwise it is limited to ~1024). We have a complex application that does a lot of proxied access work. Despite that, we have only run out of file descriptors once in several years, during what we think was a DOS attack. Remember, each socket gets a file descriptor as well. It seems unlikely that the FDs are being consumed by Tomcat if your log files look normal. Does the number of file handles climb steadily over time or rapidly balloon beyond norms on startup? What build of Linux is on the box in question? Randy Watler Finali Corporation Francois JEANMOUGIN wrote: -----Original Message----- From: Ralph Einfeldt [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 22, 2003 15:09 To: Tomcat Users List Subject: RE: linux : Limit of file descriptor Those are shared fd's. Nope, those are real FDs (otherwise you would have hit the limit much earlier, as it is typically something between 1024 and 4096). Nope, on this server, it is 209713. You want some? François.
Re: linux : Limit of file descriptor
Francois: Hmmm. Are you running with a fronting web server (i.e. Apache, IIS, etc.), or Tomcat stand-alone? Randy Watler Finali Corporation
Re: linux : Limit of file descriptor
Francois, This may be a red herring, but I have to wonder if the usage of POSIX threads in the RedHat enterprise servers is causing problems somehow. Are your other servers RedHat enterprise boxen, or different in other ways? What JVM are you using? Randy Watler Finali Corporation
[HOWTO] Tomcat 4.1.27 Standalone Jpackage RPMs
How To Install Tomcat 4.1.27 Standalone using Jpackage RPMs --- The Tomcat Team no longer generates monolithic RPMs for the full or LE version of Tomcat after 4.1.24. Instead, RPMs can be obtained from www.jpackage.org. Jpackage's apparent goal is to generate RPMs for common Java-based packages and to standardize the installed directory structure and environment. Because Jpackage manages many RPMs (at least one for each Java library/tool), there are over 20 RPMs that make up a working Tomcat 4.1 installation. This includes RPMs for the j2sdk and its extensions. As with other RPM-managed products, future upgrades to Tomcat may only require one or a handful of RPMs to be updated. If this sounds daunting or difficult to manage, remember that RPMs make the whole process fairly easy. As mentioned above, Jpackage attempts to unify the Java install environment by repackaging JVMs/JREs into a consistent form that allows multiple versions of Sun, Blackdown, and IBM product RPMs to be plug-compatible with other Jpackage components like Tomcat. This eliminates many configuration problems and makes user and service environments generally stable. To realize this goal, some Jpackage RPM installation scripts require that the JVM and related components be installed using Jpackage RPMs; this includes the Tomcat 4.1 RPM. There is also a Jpackage utilities RPM that must be installed to support this install architecture. For many reasons (see the site FAQ for details), Jpackage cannot distribute binary RPMs for many vendors' products or components. However, Jpackage has created nosrc RPMs that you manually compose (only once) after pulling the products' binary sources from the vendors' web sites. The nosrc designation can be misleading: by building RPMs from these templates, downloaded binary components are not rebuilt from Java or other sources; they are simply repackaged as Jpackage-compatible binary RPMs!
For a Tomcat 4.1 install, this process must be done for only the JVM and a few of its extensions. Don't let this extra step color your opinion of the Jpackage RPMs! Here are the steps I used to install Tomcat 4.1.27 using the Sun 1.4.2_01 j2sdk on a RedHat 8.0 server (YMMV): --- 1. Erase any j2sdk and obsolete Tomcat RPM installs. For example, these were the commands I used after saving any Tomcat configuration files or webapps:

su -
Password:
rpm -e j2sdk-1.4.2_01-fcs
rpm -e tomcat4-4.1.24-full.2jpp
rm -rf /var/tomcat4
rm -rf /etc/tomcat4
exit

--- 2. Download the following RPMs from www.jpackage.org:

ant-1.5.4-2jpp.noarch.rpm
jaf-1.0.2-3jpp.nosrc.rpm
jakarta-commons-beanutils-1.6.1-4jpp.noarch.rpm
jakarta-commons-collections-2.1-4jpp.noarch.rpm
jakarta-commons-daemon-1.0.cvs20030227-6jpp.noarch.rpm
jakarta-commons-dbcp-1.0-4jpp.noarch.rpm
jakarta-commons-digester-1.5-3jpp.noarch.rpm
jakarta-commons-fileupload-1.0-1jpp.noarch.rpm
jakarta-commons-logging-1.0.3-4jpp.noarch.rpm
jakarta-commons-modeler-1.1-2jpp.noarch.rpm
jakarta-commons-pool-1.0.1-5jpp.noarch.rpm
java-1.4.2-sun-1.4.2.01-7jpp.nosrc.rpm
javamail-1.3.1-1jpp.nosrc.rpm
jpackage-utils-1.5.27-1jpp.noarch.rpm
jta-1.0.1-0.b.3jpp.nosrc.rpm
mx4j-1.1.1-5jpp.noarch.rpm
regexp-1.3-1jpp.noarch.rpm
servletapi4-4.0.4-3jpp.noarch.rpm
tomcat4-4.1.27-2jpp.noarch.rpm
tyrex-1.0-3jpp.noarch.rpm
xalan-j2-2.5.1-1jpp.noarch.rpm
xerces-j2-2.4.0-3jpp.noarch.rpm
xml-commons-1.0-0.b2.6jpp.noarch.rpm
xml-commons-apis-1.0-0.b2.6jpp.noarch.rpm

If you wish to use an older JVM version, choose the nosrc RPM that you wish to use; you may also need to include the following nosrc RPMs for required extensions and libraries, depending on the JVM version:

jaas-ext-1.0.1-2jpp.nosrc.rpm
jdbc-stdext-ext-2.0-12jpp.nosrc.rpm
jndi-ext-1.2.1-10jpp.nosrc.rpm
jsse-ext-1.0.3.01-5jpp.nosrc.rpm
ldapjdk-4.1-5jpp.noarch.rpm
oro-2.0.7-1jpp.noarch.rpm

--- 3.
Download the following binary sources from the Sun j2sdk and appropriate product web sites:

j2sdk-1_4_2_01-linux-i586.bin
jaf-1_0_2.zip
javamail-1_3_1.zip
jta-1_0_1B-classes.zip

Again, you may also need to download these if you are using older JVM versions:

jaas-1_0_01.zip
jdbc2_0-stdext.jar
jndi-1_2_1.zip
jsse-1_0_3_02-do.zip (or jsse-1_0_3_02-gl.zip)

--- 4. Install the required Jpackage utility RPM:

su -
Password:
rpm -U jpackage-utils-1.5.27-1jpp.noarch.rpm
exit

--- 5. Because we are going to build binary RPMs using the nosrc RPMs and the downloaded binary sources, we need a standard host RPM directory structure. Note that the following RPM-building steps should be done in a user directory and not as root on your machine! More details for these steps can be found on the www.jpackage.org web site. Here are the commands to prepare the RPM environment:

mkdir ~/rpm
mkdir ~/rpm/BUILD
mkdir ~/rpm/RPMS
mkdir ~/rpm/RPMS/i386
mkdir ~/rpm/RPMS/i586
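The step-5 directory preparation can also be scripted. The sketch below assumes ~/rpm as the top-level build directory and fills in the rest of the conventional per-user RPM build tree (SOURCES, SPECS, SRPMS, and a noarch RPMS directory); the exact set of extra subdirectories is my assumption based on the standard RPM layout, since the original post was cut off mid-list:

```shell
# Build a per-user RPM tree, assuming ~/rpm as the %_topdir.
RPM_TOP="${RPM_TOP:-$HOME/rpm}"

mkdir -p "$RPM_TOP/BUILD" \
         "$RPM_TOP/RPMS/i386" \
         "$RPM_TOP/RPMS/i586" \
         "$RPM_TOP/RPMS/noarch" \
         "$RPM_TOP/SOURCES" \
         "$RPM_TOP/SPECS" \
         "$RPM_TOP/SRPMS"

# Point rpm/rpmbuild at the tree, unless a macros file already exists:
[ -f "$HOME/.rpmmacros" ] || echo "%_topdir $RPM_TOP" > "$HOME/.rpmmacros"

ls "$RPM_TOP"
```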
[HOWTO UPDATE] Tomcat 4.1.27 Standalone Jpackage RPMs
How To Install Tomcat 4.1.27 Standalone using Jpackage RPMs --- The Tomcat Team no longer generates monolithic RPMs for the full or LE version of Tomcat after 4.1.24. Instead, RPMs can be obtained from www.jpackage.org. Jpackage's apparent goal is to generate RPMs for common Java-based packages and to standardize the installed directory structure and environment. Because Jpackage manages many RPMs (at least one for each Java library/tool), there are over 20 RPMs that make up a working Tomcat 4.1 installation. This includes RPMs for the j2sdk and its extensions. As with other RPM-managed products, future upgrades to Tomcat may only require one or a handful of RPMs to be updated. If this sounds daunting or difficult to manage, remember that RPMs make the whole process fairly easy. As mentioned above, Jpackage attempts to unify the Java install environment by repackaging JVMs/JREs into a consistent form that allows multiple versions of Sun, Blackdown, and IBM product RPMs to be plug-compatible with other Jpackage components like Tomcat. This eliminates many configuration problems and makes user and service environments generally stable. To realize this goal, some Jpackage RPM installation scripts require that the JVM and related components be installed using Jpackage RPMs; this includes the Tomcat 4.1 RPM. There is also a Jpackage utilities RPM that must be installed to support this install architecture. For many reasons (see the site FAQ for details), Jpackage cannot distribute binary RPMs for many vendors' products or components. However, Jpackage has created nosrc RPMs that you manually compose (only once) after pulling the products' binary sources from the vendors' web sites. The nosrc designation can be misleading: by building RPMs from these templates, downloaded binary components are not rebuilt from Java or other sources; they are simply repackaged as Jpackage-compatible binary RPMs!
For a Tomcat 4.1 install, this process must be done for only the JVM and a few of its extensions. Don't let this extra step color your opinion of the Jpackage RPMs! Here are the steps I used to install Tomcat 4.1.27 using the Sun 1.4.2_01 j2sdk on a RedHat 8.0 server (YMMV): --- 1. Erase any j2sdk and obsolete Tomcat RPM installs. For example, these were the commands I used after saving any Tomcat configuration files or webapps:

su -
Password:
rpm -e j2sdk-1.4.2_01-fcs
rpm -e tomcat4-4.1.24-full.2jpp
rm -rf /var/tomcat4
rm -rf /etc/tomcat4
rm -rf /var/log/tomcat4
exit

--- 2. Download the following RPMs from www.jpackage.org:

ant-1.5.4-2jpp.noarch.rpm
jaf-1.0.2-3jpp.nosrc.rpm
jakarta-commons-beanutils-1.6.1-4jpp.noarch.rpm
jakarta-commons-collections-2.1-4jpp.noarch.rpm
jakarta-commons-daemon-1.0.cvs20030227-6jpp.noarch.rpm
jakarta-commons-dbcp-1.0-4jpp.noarch.rpm
jakarta-commons-digester-1.5-3jpp.noarch.rpm
jakarta-commons-fileupload-1.0-1jpp.noarch.rpm
jakarta-commons-logging-1.0.3-4jpp.noarch.rpm
jakarta-commons-modeler-1.1-2jpp.noarch.rpm
jakarta-commons-pool-1.0.1-5jpp.noarch.rpm
java-1.4.2-sun-1.4.2.01-7jpp.nosrc.rpm
javamail-1.3.1-1jpp.nosrc.rpm
jpackage-utils-1.5.27-1jpp.noarch.rpm
jta-1.0.1-0.b.3jpp.nosrc.rpm
mx4j-1.1.1-5jpp.noarch.rpm
regexp-1.3-1jpp.noarch.rpm
servletapi4-4.0.4-3jpp.noarch.rpm
tomcat4-4.1.27-2jpp.noarch.rpm
tyrex-1.0-3jpp.noarch.rpm
xalan-j2-2.5.1-1jpp.noarch.rpm
xerces-j2-2.4.0-3jpp.noarch.rpm
xml-commons-1.0-0.b2.6jpp.noarch.rpm
xml-commons-apis-1.0-0.b2.6jpp.noarch.rpm

If you wish to use an older JVM version, choose the nosrc RPM that you wish to use; you may also need to include the following nosrc RPMs for required extensions and libraries, depending on the JVM version:

jaas-ext-1.0.1-2jpp.nosrc.rpm
jdbc-stdext-ext-2.0-12jpp.nosrc.rpm
jndi-ext-1.2.1-10jpp.nosrc.rpm
jsse-ext-1.0.3.01-5jpp.nosrc.rpm
ldapjdk-4.1-5jpp.noarch.rpm
oro-2.0.7-1jpp.noarch.rpm

--- 3.
Download the following binary sources from the Sun j2sdk and appropriate product web sites:

j2sdk-1_4_2_01-linux-i586.bin
jaf-1_0_2.zip
javamail-1_3_1.zip
jta-1_0_1B-classes.zip

Again, you may also need to download these if you are using older JVM versions:

jaas-1_0_01.zip
jdbc2_0-stdext.jar
jndi-1_2_1.zip
jsse-1_0_3_02-do.zip (or jsse-1_0_3_02-gl.zip)

--- 4. Install the required Jpackage utility RPM:

su -
Password:
rpm -U jpackage-utils-1.5.27-1jpp.noarch.rpm
exit

--- 5. Because we are going to build binary RPMs using the nosrc RPMs and the downloaded binary sources, we need a standard host RPM directory structure. Note that the following RPM-building steps should be done in a user directory and not as root on your machine! More details for these steps can be found on the www.jpackage.org web site. Here are the commands to prepare the RPM environment:

mkdir ~/rpm
mkdir ~/rpm/BUILD
mkdir ~/rpm/RPMS
mkdir ~/rpm/RPMS/i386
mkdir ~/rpm/RPMS/i586
RE: Tomcat RPM's?
Steve, According to Henri Gomez, the Tomcat Team will no longer be providing full or LE RPMs. You can now find 4.1.27 RPMs on www.jpackage.org. I am in the middle of evaluating those and plan on publishing a short how-to here when I have managed to install the now-many RPMs required. Randy Watler Finali Corporation -----Original Message----- From: Steve Stearns To: [EMAIL PROTECTED] Sent: 10/15/03 2:44 PM Subject: Tomcat RPM's? I've noticed that there's not been an RPM release of Tomcat since the 4.1.24 release. Is the RPM version of Tomcat no longer being provided? If it is still being provided, any expectation as to when we'll see a new RPM version? I looked in the mail archives and saw people asking about it, but the only responses I saw suggested people just use the tar version instead. I could do that, but having the RPM version would be easier for my setup and I'd rather continue to use that if it will be available. ---Steve
4.1.27 Full RPMs
Our Tomcat production upgrade process has been constructed around the usage of the full RPMs that were formerly generated by Henri Gomez. Unfortunately, the binary distribution for 4.1.27 no longer seems to include the RPMs that were part of 4.1.24. Of course, www.jpackage.org does have a 4.1.27 RPM available, but it does not seem to be a full version. Before I set out to install the jpackage.org RPM set, I am wondering if the lack of a Jakarta-Tomcat RPM binary is permanent or if it simply remains to be done for 4.1.27? Randy Watler Finali Corporation
RESOLVED: Linux/RPM 4.1.18 dtomcat4 script changes
Resolved this issue by setting the CATALINA_HOME explicitly in tomcat4.conf instead of trying to override this variable for each use of dtomcat4. Probably better practice anyway since the install directory does not really change... this is how it is done in the out-of-the-box install anyway, so the problem was certainly mine. Doh! Randy Watler Finali Corporation rwatler wrote: Just wondering if anyone else is having problems with changes to the RPM dtomcat4 script from 4.1.10 to 4.1.18?
Linux/RPM 4.1.18 dtomcat4 script changes
Just wondering if anyone else is having problems with changes to the RPM dtomcat4 script from 4.1.10 to 4.1.18? We use several methods to launch Tomcat within development, including some that expect to set the CATALINA_HOME environment variable and launch using /usr/bin/dtomcat4. However, this new code in dtomcat4 overwrites the setting, resulting in CATALINA_HOME being set relative to the dtomcat4 script itself:

# resolve links - $0 may be a softlink
PRG="$0"
while [ -h "$PRG" ]; do
  ls=`ls -ld "$PRG"`
  link=`expr "$ls" : '.*-> \(.*\)$'`
  if expr "$link" : '.*/.*' > /dev/null; then
    PRG="$link"
  else
    PRG=`dirname "$PRG"`/"$link"
  fi
done

# Get standard environment variables
PRGDIR=`dirname "$PRG"`
CATALINA_HOME=`cd "$PRGDIR/.." ; pwd`
if [ -r "$CATALINA_HOME/bin/setenv.sh" ]; then
  . "$CATALINA_HOME/bin/setenv.sh"
fi

Since this script is located in /usr/bin and Tomcat is installed in /var/tomcat4 by the RPM, it would appear that this code cannot compute a correct default and should not have been added. I would expect the assignment of CATALINA_HOME to take place only if it is not already set, as in:

if [ -z "$CATALINA_HOME" ] ; then
  CATALINA_HOME=`cd "$PRGDIR/.." ; pwd`
fi

Have I missed something here? Randy Watler Finali Corporation
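The difference between the two behaviors can be demonstrated in isolation. This sketch mirrors the proposed guard (the /usr/bin path is illustrative, standing in for where the RPM installs dtomcat4):

```shell
# Mirror of the proposed fix: only compute CATALINA_HOME when the
# caller has not already set it.
PRGDIR=/usr/bin

compute_home() {
    if [ -z "$CATALINA_HOME" ] ; then
        CATALINA_HOME=`cd "$PRGDIR/.." ; pwd`
    fi
    echo "$CATALINA_HOME"
}

# Case 1: the environment setting wins.
CATALINA_HOME=/var/tomcat4
HOME1=`compute_home`

# Case 2: unset, so it falls back to the script-relative default (/usr).
CATALINA_HOME=
HOME2=`compute_home`

echo "$HOME1 $HOME2"
```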
Coyote Connector/Tomcat 4.1.12 flushBuffer()
We are using Tomcat to serve pages that can take a long time to generate (in excess of 1 minute). To prevent the browser from retrying the request, we are committing the response using flushBuffer() immediately after setting the response headers. However, it appears that the CoyoteConnector ServletResponse.flushBuffer() implementation does nothing unless the output stream or writer has been written to. We are currently working around this problem by using chunked encoding and pushing some white space out onto the stream. Of course, this does commit the response, but it is ugly. The Servlet 2.3 standard seems to imply that flushBuffer() should commit the response and flush the response headers. Furthermore, the code in tomcat4/CoyoteResponse.java appears to attempt to commit the response and flush the stream. When CoyoteResponse.flushBuffer() invokes OutputBuffer.flush(), it attempts to call OutputBuffer.realWriteBytes(null, 0, 0). This almost calls the necessary doWrite() protocol on the underlying Response object, but the cnt > 0 argument safety check in OutputBuffer.realWriteBytes():377 prevents it. Is there a bug here or are my eyes missing something subtle (or obvious)? Randy Watler Finali Corporation
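To make the suspected failure mode concrete, here is a tiny self-contained model of the code path described above. It is not Tomcat source, just an illustration of how a cnt > 0 guard swallows the zero-length write that flushing an untouched buffer produces:

```java
public class FlushGuardDemo {
    static boolean committed = false;

    // Stand-in for OutputBuffer.realWriteBytes(): the guard drops
    // zero-length writes, so the flush never reaches the code that
    // would commit the response.
    static void realWriteBytes(byte[] buf, int off, int cnt) {
        if (cnt > 0) {          // the safety check discussed above
            committed = true;   // would invoke doWrite() on the Response
        }
    }

    // Stand-in for OutputBuffer.flush() on an empty buffer.
    static boolean emptyFlushCommits() {
        committed = false;
        realWriteBytes(null, 0, 0);
        return committed;
    }

    public static void main(String[] args) {
        System.out.println("committed after empty flush: " + emptyFlushCommits());
    }
}
```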
Re: Rare request delay of 100 seconds
George, Thanks for the cycles. We have been pondering this move already and have it on the evaluation queue. Meanwhile, we will continue to try to find the root cause. Of course, we will post any findings here! Thanks again, Randy Watler Finali Corporation Sexton, George wrote: At this point, my only recommendation would be to upgrade the kernel to the current RedHat 7.2 patch release of 2.4.18-7.x. There have been a ton of things fixed in 9 kernel releases, and this could be related to one of them. -----Original Message----- From: Randy Watler [mailto:rwatler;finali.com] Sent: 11 November, 2002 10:34 AM To: Tomcat Users List Subject: Re: Rare request delay of 100 seconds Jeff, Thanks for the response. I did want to clarify... we are seeing very rare requests that are delayed by 100 seconds (not 100 ms). Anything measured in seconds seems to be very slow for protocol issues like these, no? Randy Watler Finali Corporation Jeff Tulley wrote: Could it simply be the Nagle problem? There are two algorithms commonly used with TCP/IP to try to make things more efficient: delayed ACK and Nagle's algorithm. Unfortunately, they don't work well together. You can turn off Nagle's algorithm, but that could cause a lot of tinygrams on the wire. Same thing with delayed ACKs (there the tinygram is the extra ACKs). I don't know if RedHat has implemented a fix for the problem, but there are a few proposed fixes, one of them called the Doupnik algorithm. I'm not sure if this would be your problem, but it sure sounds like it. Any time I've seen somebody's requests coming in at such a regular interval (200 ms is typical, but 100 ms is common also), it has almost always turned out to be this problem. Here are some links from a colleague on the problem and a potential solution: Look here for sample source code.
http://netlab1.usu.edu/pub/misc/newpolicy.sources/ Look here for the explanation of the problem and the solution: http://netlab1.usu.edu/pub/misc/draft-doupnik-tcpimpl-nagle-mode-00.txt Jeff Tulley ([EMAIL PROTECTED]) (801)861-5322 Novell, Inc., the leading provider of Net business solutions http://www.novell.com [EMAIL PROTECTED] 11/9/02 9:25:53 AM George, Oops! I was off by one on the RedHat version. Here is the whole story: RedHat 7.2, Linux version 2.4.9-31smp, gcc version 2.96. Sorry I forgot to include this information up front! Randy Watler Finali Corporation Sexton, George wrote: What kernel version are you running?
Re: Rare request delay of 100 seconds
Jeff, Thanks for the response. I did want to clarify... we are seeing very rare requests that are delayed by 100 seconds (not 100 ms). Anything measured in seconds seems to be very slow for protocol issues like these, no? Randy Watler Finali Corporation Jeff Tulley wrote: Could it simply be the Nagle problem? There are two algorithms commonly used with TCP/IP to try to make things more efficient: delayed ACK and Nagle's algorithm. Unfortunately, they don't work well together. You can turn off Nagle's algorithm, but that could cause a lot of tinygrams on the wire. Same thing with delayed ACKs (there the tinygram is the extra ACKs). I don't know if RedHat has implemented a fix for the problem, but there are a few proposed fixes, one of them called the Doupnik algorithm. I'm not sure if this would be your problem, but it sure sounds like it. Any time I've seen somebody's requests coming in at such a regular interval (200 ms is typical, but 100 ms is common also), it has almost always turned out to be this problem. Here are some links from a colleague on the problem and a potential solution: Look here for sample source code. http://netlab1.usu.edu/pub/misc/newpolicy.sources/ Look here for the explanation of the problem and the solution: http://netlab1.usu.edu/pub/misc/draft-doupnik-tcpimpl-nagle-mode-00.txt Jeff Tulley ([EMAIL PROTECTED]) (801)861-5322 Novell, Inc., the leading provider of Net business solutions http://www.novell.com [EMAIL PROTECTED] 11/9/02 9:25:53 AM George, Oops! I was off by one on the RedHat version. Here is the whole story: RedHat 7.2, Linux version 2.4.9-31smp, gcc version 2.96. Sorry I forgot to include this information up front! Randy Watler Finali Corporation Sexton, George wrote: What kernel version are you running?
Re: Rare request delay of 100 seconds
George, Oops! I was off by one on the RedHat version. Here is the whole story: RedHat 7.2, Linux version 2.4.9-31smp, gcc version 2.96. Sorry I forgot to include this information up front! Randy Watler Finali Corporation Sexton, George wrote: What kernel version are you running?
Rare request delay of 100 seconds
We are running Tomcat 4.1.12 standalone on RedHat Linux 7.3 servers and seeing rare HTTP requests delayed on their way into the Coyote HTTP/1.1 connectors. Packet traces show that each affected request is delayed by exactly 100 seconds but is otherwise received and responded to as one would expect. Requests immediately before and after the problematic ones are handled without any significant delay. Has anyone else noticed this problem, or anything that sounds like it?

In the Coyote connector code, it appears that the request sockets are set by default with a 100 second SO_LINGER timeout for socket close() calls. Of course, this looks suspicious given our problem, but we have not been able to identify any way a blocked close() operation could affect incoming accept() requests. Running out of processor threads in the thread pool could clearly cause such a delay, but we are not running under loads where that would happen, especially not for 100 seconds.

Any ideas out there?

Randy Watler
Finali Corporation
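For readers unfamiliar with the option mentioned above: SO_LINGER changes what close() does when unsent data remains. This sketch is not from the thread; it is a minimal illustration of setting a connector-style 100 second linger on a socket with the standard java.net API.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class LingerExample {
    // Enable SO_LINGER with a 100 second timeout, the default the poster
    // reports seeing in the Coyote connector code. With this option set,
    // close() blocks until any unsent data is acknowledged by the peer or
    // 100 seconds elapse, whichever comes first.
    static int setConnectorStyleLinger(Socket s) throws IOException {
        s.setSoLinger(true, 100);
        return s.getSoLinger(); // returns the linger time when enabled
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
            System.out.println("SO_LINGER=" + setConnectorStyleLinger(client));
        }
    }
}
```

A blocked close() ties up only the thread closing that one socket, which is why (as the poster notes) it is hard to see how it would delay a separate accept() call unless the thread pool were exhausted.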
Re: Rare request delay of 100 seconds
George,

Thanks for the query. Here is the connector configuration:

<Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
           port="8543"
           minProcessors="8"
           maxProcessors="128"
           enableLookups="false"
           acceptCount="64"
           debug="0"
           connectionTimeout="30"
           scheme="https"
           secure="true"
           useURIValidationHack="false"/>

So, I think we have it set up right, no?

Randy Watler
Finali Corporation

Sexton, George wrote:

Do you have the connector doing reverse DNS resolution of hosts?
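George's question is about the enableLookups attribute in the configuration above. Not from the original thread, but as a minimal sketch of what that setting controls: with enableLookups="false" the connector reports the raw client address and never touches DNS, while with "true" it performs a reverse DNS lookup per request, roughly equivalent to:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LookupExample {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress addr = InetAddress.getByName("127.0.0.1");
        // enableLookups="false": request.getRemoteHost() returns the same
        // string as request.getRemoteAddr() -- no DNS traffic.
        System.out.println(addr.getHostAddress());
        // enableLookups="true": a reverse lookup like this runs per request
        // and can block for seconds if the DNS server is slow or unreachable.
        System.out.println(addr.getCanonicalHostName());
    }
}
```

Slow reverse DNS is a classic cause of uniform per-request delays, which is presumably why George asked; the configuration above already has lookups disabled.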
Re: Rare request delay of 100 seconds
George,

This is a non-SSL "secure" connector that is fronted by an SSL hardware appliance, which means Tomcat sees the traffic as plain HTTP. The packet data we traced was captured at the Tomcat server itself, so the SSL hardware seems to have been eliminated from the equation. The connector is set to scheme="https" and secure="true" so that Tomcat servlets know they are being served in a secure context.

Randy Watler
Finali Corporation

Sexton, George wrote:

If you don't use SSL, do you have the same problem?

--
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org
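To illustrate the scheme="https"/secure="true" trick described above (not from the original thread): inside the webapp, getScheme() returns "https" and isSecure() returns true even though the appliance delivers plain HTTP to Tomcat, so any self-referencing URL a servlet builds from those values matches what the client actually uses. A minimal sketch, with "www.example.com" and "/app" as hypothetical values:

```java
public class SchemeExample {
    // Build a self-referencing URL the way a servlet might, from the scheme
    // and port the connector reports. Omits the port when it is the default
    // for the scheme, as browsers do.
    static String selfUrl(String scheme, String host, int port, String path) {
        boolean defaultPort = (scheme.equals("https") && port == 443)
                || (scheme.equals("http") && port == 80);
        return scheme + "://" + host + (defaultPort ? "" : ":" + port) + path;
    }

    public static void main(String[] args) {
        // With scheme="https" (and proxyPort="443" on Tomcat 5), the values
        // seen by the servlet would be:
        System.out.println(selfUrl("https", "www.example.com", 443, "/app"));
    }
}
```

This is why the connector is configured this way behind the appliance: without it, redirects and generated links would point at plain-HTTP URLs and bypass the SSL front end.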