I'm running Cocoon 2.0.3 with Resin 2.1.4, and it seems to work fine. It used to die unexpectedly quite often until I tweaked a few parameters in my Resin servlet engine conf file; it had been configured improperly for the constraints imposed on me by the web host I am using. But I still see many repetitive DEBUG entries in the Resin server error.log file, like the following (just three shown here):

[2002/10/29 01:50:04] DEBUG   (2002-10-29) 01:50.04:453   [        ] (/cocoon/status) tcpConnection-8080-0/DefaultLogKitManager: Logger for category sitemap.generator.status not defined in configuration. New Logger created and returned

[2002/10/29 01:50:04] DEBUG   (2002-10-29) 01:50.04:456   [        ] (/cocoon/status) tcpConnection-8080-0/DefaultLogKitManager: Logger for category sitemap.transformer.xslt not defined in configuration. New Logger created and returned

[2002/10/29 01:50:04] DEBUG   (2002-10-29) 01:50.04:460   [        ] (/cocoon/status) tcpConnection-8080-0/DefaultLogKitManager: Logger for category core.xslt-processor not defined in configuration. New Logger created and returned
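
From what I can tell, these are harmless: Cocoon simply creates a logger on the fly whenever a category is not declared in WEB-INF/logkit.xconf. My guess is they would go away if the parent categories were declared explicitly, something like the untested sketch below (the log-level values and the id-ref target names are my assumptions, modeled on the stock 2.0.3 file):

<!-- WEB-INF/logkit.xconf (untested sketch) -->
<categories>
  <!-- declare the parents of the complained-about categories
       (sitemap.generator.status, core.xslt-processor, ...) and
       drop them below DEBUG to quiet the logs -->
  <category name="sitemap" log-level="WARN">
    <log-target id-ref="sitemap"/>
    <log-target id-ref="error"/>
  </category>
  <category name="core" log-level="WARN">
    <log-target id-ref="core"/>
    <log-target id-ref="error"/>
  </category>
</categories>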



So then I started studying the core.log file of each webapp I am experimenting with, and noticed that when a URL is selected via the sitemap, each access produces a block of request info like the following (I have masked a few values with '<***>' below):

REQUEST: /cocoon/status

CONTEXT PATH: /cocoon
SERVLET PATH: /status
PATH INFO: null

REMOTE HOST: 63.202.<***>.<***>
REMOTE ADDRESS: 63.202.<***>.<***>
REMOTE USER: null
REQUEST SESSION ID: null
REQUEST PREFERRED LOCALE: en_US
SERVER HOST: www.<***>.net
SERVER PORT: 8080

METHOD: GET
CONTENT LENGTH: -1
PROTOCOL: HTTP/1.1
SCHEME: http
AUTH TYPE: Basic

CURRENT ACTIVE REQUESTS: 1
REQUEST PARAMETERS:

HEADER PARAMETERS:

PARAM: 'Host' VALUES: '[www.<***>.net:8080]'
PARAM: 'User-Agent' VALUES: '[Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.0.1) Gecko/20020823 Netscape/7.0]'
PARAM: 'Accept' VALUES: '[text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,text/css,*/*;q=0.1]'
PARAM: 'Accept-Language' VALUES: '[en-us, en;q=0.50]'
PARAM: 'Accept-Encoding' VALUES: '[gzip, deflate, compress;q=0.9]'
PARAM: 'Accept-Charset' VALUES: '[ISO-8859-1, utf-8;q=0.66, *;q=0.66]'
PARAM: 'Keep-Alive' VALUES: '[300]'
PARAM: 'Connection' VALUES: '[keep-alive]'
PARAM: 'Referer' VALUES: '[http://www.<***>.net:8080/cocoon/]'

SESSION ATTRIBUTES:



Then many lines such as the following appear (only three shown here):

DEBUG   (2002-10-29) 01:50.04:408   [core.manager] (/cocoon/status) tcpConnection-8080-0/ResourceLimitingPool: Got a org.apache.cocoon.components.pipeline.CachingEventPipeline from the pool.
DEBUG   (2002-10-29) 01:50.04:408   [core.manager] (/cocoon/status) tcpConnection-8080-0/ResourceLimitingPool: Got a org.apache.cocoon.components.pipeline.CachingStreamPipeline from the pool.
DEBUG   (2002-10-29) 01:50.04:451   [core.manager] (/cocoon/status) tcpConnection-8080-0/DefaultComponentFactory: ComponentFactory creating new instance of org.apache.cocoon.generation.StatusGenerator.




Then the fun starts: many, many lines like the following (only six listed):

DEBUG   (2002-10-29) 01:50.08:842   [core.store.janitor] (Unknown-URI) Unknown-thread/StoreJanitorImpl: JVM total Memory: 24309760
DEBUG   (2002-10-29) 01:50.08:842   [core.store.janitor] (Unknown-URI) Unknown-thread/StoreJanitorImpl: JVM free Memory: 5772608
DEBUG   (2002-10-29) 01:50.08:842   [core.store.janitor] (Unknown-URI) Unknown-thread/StoreJanitorImpl: Memory is low = false
DEBUG   (2002-10-29) 01:50.18:850   [core.store.janitor] (Unknown-URI) Unknown-thread/StoreJanitorImpl: JVM total Memory: 24309760
DEBUG   (2002-10-29) 01:50.18:850   [core.store.janitor] (Unknown-URI) Unknown-thread/StoreJanitorImpl: JVM free Memory: 5727272
DEBUG   (2002-10-29) 01:50.18:851   [core.store.janitor] (Unknown-URI) Unknown-thread/StoreJanitorImpl: Memory is low = false



They come in groups of three at fairly regular intervals (10 seconds on my particular server). Cocoon itself seems to run fine; my only concerns are that it may not be operating as efficiently as it could, and that these messages fill up the log files very quickly. Is there a parameter I may have missed? Perhaps it is something in the configuration of my Resin 2.1.4 server.
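
For what it's worth, the 10-second cadence matches the store-janitor block in cocoon.xconf. Quoting the stock 2.0.3 values from memory (so treat them as approximate), cleanupthreadinterval is in seconds:

<!-- cocoon.xconf: stock values quoted from memory, approximate -->
<store-janitor>
  <!-- how often the janitor thread checks free memory, in seconds -->
  <parameter name="cleanupthreadinterval" value="10"/>
  <parameter name="freememory" value="1000000"/>
  <parameter name="heapsize" value="60000000"/>
  <parameter name="threadpriority" value="5"/>
  <parameter name="percent_to_free" value="10"/>
</store-janitor>

Raising cleanupthreadinterval would make the checks less frequent, but since the three lines are logged at DEBUG under core.store.janitor, I assume lowering that category (or core) below DEBUG would keep them out of the log entirely. The only other knob I have found is the log-level init-param on the Cocoon servlet in WEB-INF/web.xml, which ships set to DEBUG; I assume dropping it, as in the untested sketch below, would cut most of this output, though I do not know whether it reaches the janitor thread:

<!-- WEB-INF/web.xml, inside Cocoon's <servlet> element (untested sketch) -->
<init-param>
  <param-name>log-level</param-name>
  <param-value>WARN</param-value>
</init-param>

Thanks for any input.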

Jon Lancelle
