Would someone please tell me that I am not going crazy. I have a remote server running today's 2.1.5-dev with a full Jetty, and I am trying to do some basic Cocoon cache testing.
Using a little shell script, I am making some requests with wget that go directly to Cocoon (no webserver in front), using full URLs. On the server machine I have 'tail -f WEB-INF/logs/access.log' running to watch the INFO requests with their access times.

When I run the request script originating from the server, I get what I expect: the first batch of requests takes a bit of time, and using samples/status.html I can see the cache fill up. Subsequent batches of requests obviously use the Cocoon cache, because the access times are now in milliseconds.

However, when I run the script originating from my local workstation (i.e. across the public net), I see no cache improvement in the logs. Why would there be any difference for Cocoon between local and remote requests?
--David