RE: Chunked encoding bug in tomcat embedded/spring MVC

2015-02-19 Thread andrew-c.brown
 From: Mark Thomas [mailto:ma...@apache.org]
 On 19/02/2015 13:05, andrew-c.br...@ubs.com wrote:
  Not sure whether the responsibility lies here or with spring so I
  thought I'd ask here first. Here's the scenario.
 
  We have a Jetty 9.2.7 async reverse proxy. It always sends back to the
  servers behind using chunked encoding.
 
  We have backend servers built around embedded 7.0.23 (also tested the
  latest 7.0.59).
 
  Jetty is configured to make SSL connections to these servers. SSL is
  not the issue, though it may make it easier to reproduce. I can
  reproduce this issue at will.
 
  Our backend servers are using Spring MVC with automatic argument
  assignment where some argument values come from decoded JSON in the
  body. For example:
 
  @RequestMapping(method = RequestMethod.PUT, value = SOME_URL)
  public @ResponseBody WebAsyncTask<SomeObject> ourMethod(@RequestBody
  @Valid final SomeObject f, @Min(1) @PathVariable(SOME_ID) final long
  someId, final HttpServletRequest request) {
 
  }
 
  Here's the issue.
 
  Using Wireshark I noticed that quite often the first TCP segment
  passed from Jetty to the backend server contained the entire PUT
  request
  **except** (and this is important) the final 7-byte chunk terminator.
  That arrives in the next segment on the wire.
 
  \r\n
  0
  \r\n
  \r\n
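Concatenated, those four lines are the 7 ASCII bytes "\r\n0\r\n\r\n": the CRLF that closes the last data chunk, the zero-size chunk line, and the final CRLF ending the (empty) trailer section. A quick sanity check (illustrative code, not from the thread):

```java
import java.nio.charset.StandardCharsets;

public class ChunkTerminator {
    // CRLF ending the last data chunk, then the zero-size chunk line
    // ("0" CRLF), then the final CRLF that ends the empty trailer section.
    static final byte[] TERMINATOR = "\r\n0\r\n\r\n".getBytes(StandardCharsets.US_ASCII);

    public static void main(String[] args) {
        // Exactly the 7 trailing bytes observed in the second TCP segment.
        assert TERMINATOR.length == 7;
    }
}
```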
 
  The nearly-complete segment causes Tomcat to wake up and start
  processing the request. To cut a very long call stack short, the
  automatic method argument assignment kicks into life and runs the
  Jackson JSON parser to read the incoming body data using
  org.apache.coyote.http11.filters.ChunkedInputFilter. Enough data is
  present in the buffer to fully process the request so our method is
  called with all the correct parameters and it does its stuff and sends
  back a response.
 
  That's where it should end, but it doesn't.
 
  The remaining 7 bytes arrive on the wire and wake up Tomcat's NIO loop
  again. Tomcat thinks it's a new request since the previous one has
  been completely handled. This causes a 400 Bad Request to be sent back
  up the wire following on from the correct response, and the connection
  is terminated which causes a closed connection to be present in
  Jetty's connection pool. That's bad.
 
  My opinion is that the Jackson JSON parser shouldn't have to care
  about the type of stream it's reading from so the responsibility
  should be with the chunked input stream to ensure that it doesn't get
  into this state. Perhaps if it were to always read ahead the next
  chunk size before handing back a completed chunk of data then it could
  ensure that the trailing zero is always consumed.
 
  Any thoughts?
 
 This sounds like a Tomcat bug but it will need some research to figure out
 what is happening and confirm that.
 
 As an aside, the JSON parser should read until it gets a -1 (end of stream). I
 suspect it is using the structure of the JSON to figure out where the data
 ends and isn't doing the final read.
 
 When the request/response is completed Tomcat should do a blocking read
 until the end chunk has been read. That this isn't happening is what makes
 me think this is a Tomcat bug.

The JSON parser is calling ObjectMapper._readMapAndClose(). This completes its 
read - as far as it's concerned it's finished - and it calls close() on its 
JsonParser parameter. That stream close() call is implemented by 
CoyoteInputStream.close(), which in turn calls 
org.apache.catalina.connector.InputBuffer.close(), which just sets a private 
'closed' flag. The filters have an end() method, and ChunkedInputFilter uses it 
to seek to the end of the body, but that's never called.

A good place to clean up the request filters held in 
org.apache.coyote.http11.AbstractInputBuffer would appear to be 
org.apache.catalina.connector.InputBuffer.close(), but I'm not familiar enough 
with the async workflow to know whether that's correct.
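The "seek to the end on close" behaviour could be expressed as a stream wrapper along these lines (a sketch only; DrainOnCloseInputStream is a hypothetical class, not part of Tomcat):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical wrapper: close() first drains the underlying stream so any
// trailing chunk metadata (the final "0\r\n\r\n") is consumed before the
// connection is handed back for the next request.
class DrainOnCloseInputStream extends FilterInputStream {
    DrainOnCloseInputStream(InputStream in) {
        super(in);
    }

    @Override
    public void close() throws IOException {
        byte[] buf = new byte[8192];
        while (in.read(buf) != -1) {
            // discard the remainder of the request body
        }
        super.close();
    }
}

public class DrainOnCloseDemo {
    // Returns true if closing the wrapper leaves the underlying stream
    // fully consumed even though the caller read only part of it.
    static boolean demo() throws IOException {
        ByteArrayInputStream raw = new ByteArrayInputStream(
                "body\r\n0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
        InputStream in = new DrainOnCloseInputStream(raw);
        in.read(new byte[4]); // the application reads only "body"
        in.close();           // the wrapper swallows the trailing 7 bytes
        return raw.read() == -1;
    }

    public static void main(String[] args) throws IOException {
        assert demo();
    }
}
```

In Tomcat itself the equivalent work would presumably be triggered from InputBuffer.close() or end-of-request processing rather than a wrapper; this only illustrates the idea.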

- Andy


Re: Chunked encoding bug in tomcat embedded/spring MVC

2015-02-19 Thread Mark Thomas
On 19/02/2015 13:05, andrew-c.br...@ubs.com wrote:
 [snip]

This sounds like a Tomcat bug but it will need some research to figure
out what is happening and confirm that.

As an aside, the JSON parser should read until it gets a -1 (end of
stream). I suspect it is using the structure of the JSON to figure out
where the data ends and isn't doing the final read.

When the request/response is completed Tomcat should do a blocking read
until the end chunk has been read. That this isn't happening is what
makes me think this is a Tomcat bug.
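The "read until -1" behaviour described above can be sketched as a small drain helper (hypothetical code, not Jackson's or Tomcat's):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class DrainToEof {
    // Read and discard everything remaining on the stream. With a chunked
    // request body this forces the chunked filter to parse the final
    // zero-length chunk, so the terminator is consumed before the
    // connection is reused.
    static long drain(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[]{1, 2, 3});
        assert drain(in) == 3; // everything consumed
        assert in.read() == -1; // stream is now at end of input
    }
}
```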

Mark


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: Chunked encoding bug in tomcat embedded/spring MVC

2015-02-19 Thread andrew-c.brown
  From: Mark Thomas [mailto:ma...@apache.org] On 19/02/2015 13:05,
  andrew-c.br...@ubs.com wrote:
  [snip]

Some more info. Inside the org.apache.coyote.http11.AbstractHttp11Processor<S>.process() 
method there is this cleanup code after the main request while() loop:

// Finish the handling of the request
rp.setStage(org.apache.coyote.Constants.STAGE_ENDINPUT);

if (!isAsync() && !comet) {
    if (error) {
        // If we know we are closing the connection, don't drain
        // input. This way uploading a 100GB file doesn't tie up the
        // thread if the servlet has rejected it.
        getInputBuffer().setSwallowInput(false);
    }
    endRequest();
}

Note the call to endRequest(). If I make my methods synchronous (i.e. remove 
the WebAsyncTask return type) then isAsync() returns false, this cleanup block 
is entered, endRequest() is called, ChunkedInputFilter.end() is called and the 
trailing metadata is consumed. All is good. Only when methods are async is this 
block skipped.

Re: chunked encoding

2012-03-26 Thread Pid
On 25/03/2012 22:55, Alex Samad - Yieldbroker wrote:
 
 
 -Original Message-
 From: Pid [mailto:p...@pidster.com]
 Sent: Monday, 26 March 2012 8:47 AM
 To: Tomcat Users List
 Subject: Re: chunked encoding

 On 25/03/2012 08:54, Alex Samad - Yieldbroker wrote:
 [snip]


 1. http://en.wikipedia.org/wiki/Chunked_transfer_encoding
 2. RFC 2616 (the specification of HTTP/1.1 protocol)
 Thanks, I had also hoped to get a bit of debate on the !experimental! nature
 of it in the connector

 What makes you think it's experimental?
 
 The documentation 
 enable_chunked_encoding  
 A string value representing a boolean. If it is set to true, chunked encoding 
 is supported by the server.
 A true value can be represented by the string 1 or any string starting with 
 the letters T or t. A false value will be assumed for 0 or any string 
 starting with F or f. The default value is false.
 This option is considered experimental and its support must be compile time 
 enabled. Use isapi_redirect.dll with chunked support enabled.
 This directive has been added in version 1.2.27

Right, got it... I thought you meant chunking in general (because I
wasn't paying proper attention to the thread).


p



-- 

[key:62590808]





Re: chunked encoding

2012-03-26 Thread Rainer Jung

On 25.03.2012 23:55, Alex Samad - Yieldbroker wrote:
[snip]


The feature was contributed by Tim and no longer needs a specially 
compiled binary since change r910424 by Mladen (released in version 
1.2.30). Some minor bugs concerning chunking have been fixed since then.

I'd say we no longer consider this experimental; the docs just haven't 
been updated correctly. Will do right now (but this will usually not 
become publicly visible before the next release).


Thanks for the question / hint.

Regards,

Rainer





RE: chunked encoding

2012-03-25 Thread Alex Samad - Yieldbroker
[snip]

 
 1. http://en.wikipedia.org/wiki/Chunked_transfer_encoding
 2. RFC 2616 (the specification of HTTP/1.1 protocol)
Thanks, I had also hoped to get a bit of debate on the !experimental! nature  
of it in the connector

How does it affect compression.  So I presume the chunking is between the 
connector and IIS and then on to the client or is it just the connector to 
tomcat/jboss ?

Alex

 



Re: chunked encoding

2012-03-25 Thread Pid
On 25/03/2012 08:54, Alex Samad - Yieldbroker wrote:
 [snip]
 

 1. http://en.wikipedia.org/wiki/Chunked_transfer_encoding
 2. RFC 2616 (the specification of HTTP/1.1 protocol)
 Thanks, I had also hoped to get a bit of debate on the !experimental! nature  
 of it in the connector

What makes you think it's experimental?


p

 How does it affect compression.  So I presume the chunking is between the 
 connector and IIS and then on to the client or is it just the connector to 
 tomcat/jboss ?
 
 Alex
 

 




RE: chunked encoding

2012-03-25 Thread Alex Samad - Yieldbroker


 -Original Message-
 From: Pid [mailto:p...@pidster.com]
 Sent: Monday, 26 March 2012 8:47 AM
 To: Tomcat Users List
 Subject: Re: chunked encoding
 
 On 25/03/2012 08:54, Alex Samad - Yieldbroker wrote:
  [snip]
 
 
  1. http://en.wikipedia.org/wiki/Chunked_transfer_encoding
  2. RFC 2616 (the specification of HTTP/1.1 protocol)
  Thanks, I had also hoped to get a bit of debate on the !experimental! nature
 of it in the connector
 
 What makes you think it's experimental?

The documentation 
enable_chunked_encoding
A string value representing a boolean. If it is set to true, chunked encoding 
is supported by the server.
A true value can be represented by the string 1 or any string starting with 
the letters T or t. A false value will be assumed for 0 or any string 
starting with F or f. The default value is false.
This option is considered experimental and its support must be compile time 
enabled. Use isapi_redirect.dll with chunked support enabled.
This directive has been added in version 1.2.27


 
 



Re: chunked encoding

2012-03-23 Thread Chema
 1. http://en.wikipedia.org/wiki/Chunked_transfer_encoding
 2. RFC 2616 (the specification of HTTP/1.1 protocol)

One question

How does the web browser know the right order of the chunks?
When the server waits to generate the whole response, I understand that
the transmission can rely on TCP and the client (web browser) can be
sure that the response is complete and all message parts are in order.

But when the server sends the response in chunks, I don't know how the
client (web browser) puts them in order.
I didn't see anything about this in the Wikipedia link.

Thanks and regards




RE: chunked encoding

2012-03-23 Thread Caldarale, Charles R
 From: Chema [mailto:demablo...@gmail.com] 
 Subject: Re: chunked encoding

 How does web browser know what is the right order of the chunks ?

The order they are passed to the client by the client's inbound TCP/IP stack is 
the correct order.

 But when server sends response by chunks I don't know how the 
 client (web browser ) puts them in order

The server application must pass the chunks to its outbound TCP/IP stack in 
order, so normal TCP sequencing takes care of it.

 - Chuck


THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY 
MATERIAL and is thus for use only by the intended recipient. If you received 
this in error, please contact the sender and delete the e-mail and its 
attachments from all computers.





Re: chunked encoding

2012-03-23 Thread Chema
 The server application must pass the chunks to its outbound TCP/IP stack in 
 order, so normal TCP sequencing takes care of it.


Thanks
But, if I'm not wrong, chunk messages belong to the application layer, so
when the server passes them to the TCP/IP stack they are different
messages. They go over the same connection, but they are different
messages at the application layer, right?

I see it like a chat conversation: if I send "Hello" and "Bye" from a
chat client, the receiving client can only know the right order if there
is some mechanism at the *application layer* to put them in order.

I can rely on the order in which the messages were sent, but that
doesn't look very reliable.

I'm sure I'm wrong, but I don't understand it.

Thanks and regards




Re: chunked encoding

2012-03-23 Thread Konstantin Kolinko
2012/3/24 Chema demablo...@gmail.com:
 The server application must pass the chunks to its outbound TCP/IP stack in 
 order, so normal TCP sequencing takes care of it.


 Thanks
 But, if I'm not wrong , chunks messages belong application layer, so
 when servers pass them to TCP/IP stack , they are different messages.
 Do it by same connection , but they are different messages on
 application layer , right ?


TCP packets are numbered (by TCP itself). Thus chunks are ordered as well.




RE: chunked encoding

2012-03-23 Thread Caldarale, Charles R
 From: Chema [mailto:demablo...@gmail.com] 
 Subject: Re: chunked encoding

 But, if I'm not wrong , chunks messages belong application layer, so
 when servers pass them to TCP/IP stack , they are different messages.

TCP/IP knows nothing about messages, only about the two byte streams for the 
connection (one inbound, one outbound).

 Do it by same connection , but they are different messages on
 application layer , right ?

It's up to the application to deliver the chunks to its outbound TCP/IP stack 
in the proper order.  If you have a multi-threaded application where each 
thread has responsibility for a different chunk, it's still up to the 
application to get them to the TCP/IP stack in the correct sequence.  However, 
that is all moot, since the processing of a given request and response in a 
servlet container is single-threaded, by definition.

 I can rely on the order which messages were sent, but it doesn't look
 very reliable

It's completely reliable, unless you take overt action to write a really, 
really convoluted application on the server.

 - Chuck







Re: chunked encoding

2012-03-23 Thread Chema

 TCP packets are numbered (by TCP itself). Thus chunks are ordered as well.


So the chunks aren't sent at the same time, but they are all sent over
the same TCP connection.
In that case it makes sense to me: a stream of chunks. Thanks




Re: chunked encoding

2012-03-23 Thread Chema
2012/3/23 Caldarale, Charles R chuck.caldar...@unisys.com:
 From: Chema [mailto:demablo...@gmail.com]
 Subject: Re: chunked encoding

 But, if I'm not wrong , chunks messages belong application layer, so
 when servers pass them to TCP/IP stack , they are different messages.

 TCP/IP knows nothing about messages, only about the two byte streams for 
 the connection (one inbound, one outbound).

Thanks.
You're right; it was my mistake.
If I think of many chunks being sent over the same TCP connection, it
makes sense to me.

I don't know why I thought the chunks went over separate connections.




Re: chunked encoding

2012-03-22 Thread Konstantin Kolinko
2012/3/23 Alex Samad - Yieldbroker alex.sa...@yieldbroker.com:
 Hi

 I saw a thread earlier about chunked encoding and why
 a) it might be better to use that
 b) that it is not experimental any more


 Can somebody explain what it is, why it might be better and maybe some pro's 
 and con's and how not experimental is it.

 Even some pointers to papers, wiki's, blog that have been written would be 
 cool.


1. http://en.wikipedia.org/wiki/Chunked_transfer_encoding
2. RFC 2616 (the specification of HTTP/1.1 protocol)
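As a concrete illustration of the wire format those references describe, a body sent as two chunks followed by the terminating zero-length chunk looks like this (chunk() is a hypothetical helper, not library code):

```java
public class ChunkedEncoder {
    // Encode one chunk: size in hex, CRLF, the data, CRLF.
    static String chunk(String data) {
        return Integer.toHexString(data.length()) + "\r\n" + data + "\r\n";
    }

    public static void main(String[] args) {
        // "Hello" and " world" as two chunks, then the zero-length
        // terminating chunk that marks the end of the body.
        String wire = chunk("Hello") + chunk(" world") + "0\r\n\r\n";
        assert wire.equals("5\r\nHello\r\n6\r\n world\r\n0\r\n\r\n");
    }
}
```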




Re: Chunked encoding not terminated with native library

2011-03-13 Thread Chris Dumoulin
Based on Mark Thomas' recommendation in another thread, I've upgraded to tomcat 
7.0.11 and the issues I was seeing seem to be fixed.

This seems to be the relevant changelog entry:
Fix issues that prevented asynchronous servlets from working when used with 
the HTTP APR connector on platforms that support TCP_DEFER_ACCEPT. (markt)

Thanks all for your comments and suggestions.

- Chris

On March 11, 2011 05:04:07 pm Christopher Schultz wrote:
 André,
 
 On 3/11/2011 4:45 PM, André Warnier wrote:
  Chris wrote:
  1. Yes, tomcat is sending the header: Transfer-Encoding: chunked
  2. I've also tried using close() instead of flush() and I've seen the
  same chunked encoding.
  3. The nginx reverse proxying with the HttpProxyModule only supports
  HTTP/1.0 according to the Synopsis section here:
  http://wiki.nginx.org/HttpProxyModule
 
  Based on my reading of the HTTP spec, closing the connection seems to
  be a valid alternative to sending the terminating 0\r\n\r\n.
  Any ideas about why the connection wouldn't be closed when using APR?
 
  I can't say exactly what, but something else also nagged at me when
  reading the original post : as I recall, HTTP 1.0 also does not support
  keep-alive connections.
  Would there not be some invalid combination of request from the client
  for a keep-alive connection together with a HTTP 1.0 protocol, or with a
  setting at the Connector side ?
  Maybe some code is getting confused at some invalid combination ?
 
 Unlikely. nginx should allow the client to use keepalive, but then use
 1.0-spec semantics when connecting to the backend.
 
 That seems like a good reason to use something other than nginx to me.
 
 -chris
 



Re: Chunked encoding not terminated with native library

2011-03-11 Thread Chris
I should also mention that I'm doing Servlet 3.0 async requests:
AsyncContext context = req.startAsync();
...
context.complete();

I've also tried using AJP instead of HTTP between tomcat and nginx, and I see 
similar behaviour; tomcat doesn't terminate the reply when using APR.

Anyone have any ideas about what's going on, any workarounds, or even places in 
the tomcat code to look for possible errors?

Thanks,
Chris

On March 10, 2011 12:37:00 pm Chris wrote:
 I've narrowed this down even further.
 
 As I mentioned below, the "0\r\n\r\n" was not being sent to nginx, although 
it was being sent to curl. The difference was that nginx was doing a GET with 
HTTP/1.0, while curl was using HTTP/1.1. If I configure curl to use HTTP/1.0 
then I get the same result: no "0\r\n\r\n" is sent.
 
 So, to summarize:
 - GET HTTP/1.0 results in no terminating 0\r\n\r\n for the chunked 
 response, regardless of whether libtcnative is being used.
 - The difference when using libtcnative is that the connection isn't 
 terminated after the response is sent.
 
 Chris
 
 On March 10, 2011 10:33:00 am Chris wrote:
  Hi All,
  Yesterday I created bug 50906 for this issue: 
  https://issues.apache.org/bugzilla/show_bug.cgi?id=50906
  
  Since then I've got some more details to add:
  
  - I'm running with nginx in front of tomcat
  - The 60 second timeout is happening in nginx and not tomcat
  - Regardless of whether or not I'm using libtcnative, I don't see the 
  terminating 0\r\n\r\n being sent by tomcat to nginx
  - When *not* using libtcnative, the connection is closed after writing the 
  response; there's a FIN, in addition to ACK on the last TCP packet from 
  tomcat. Nginx seems to use this to infer the end of the response and add 
  the 0\r\n\r\n to the reply sent to the client.
  - When using libtcnative, there is no FIN on the last TCP ACK packet, and 
  the connection stays open. One minute later nginx times out waiting for the 
  response to complete and adds the 0\r\n\r\n to the response to the client.
  - I notice that if I use curl to make a request directly to tomcat (instead 
  of going through nginx), then I do see the terminating 0\r\n\r\n. I still 
  see a difference in that tomcat disconnects immediately after the reply 
  when *not* using the native library.
  
  Any ideas?
  
  Thanks,
  Chris
  
  On March 9, 2011 04:56:22 pm Mark Thomas wrote:
   On 09/03/2011 21:49, Chris wrote:
Hi,
I'm using Tomcat 7.0.8 on Ubuntu 10.10.

When using the APR based Tomcat Native Library (libtcnative), responses 
from Tomcat are being sent with a chunked encoding, but the 0 
terminating the chunked response isn't sent until exactly 1 minute 
later.

The response is being written to an 
org.apache.catalina.connector.CoyoteOutputStream. The following calls 
are made:
out.write(resp);
out.flush();
out.close();

If I just remove the libtcnative-1.so, so that Tomcat loads without 
using it, then the response still uses chunked encoding, but the 
terminating 0 is sent immediately, with the rest of the response.

Any ideas would be appreciated.
   
   Sounds like a bug. Please create a bugzilla entry.
   
   Mark
   



Re: Chunked encoding not terminated with native library

2011-03-11 Thread Christopher Schultz

Chris,

On 3/10/2011 12:37 PM, Chris wrote:
  As I mentioned below, the 0\r\n\r\n was not being sent to nginx, although 
 it was being sent to curl. The difference was that nginx was doing a GET 
 HTTP/1.0, while curl was using HTTP/1.1. If I configure curl to use HTTP/1.0 
 then I get the same result: no 0\r\n\r\n is sent.
 
 So, to summarize:
 - GET HTTP/1.0 results in no terminating 0\r\n\r\n for the chunked 
 response, regardless of whether libtcnative is being used.
 - The difference when using libtcnative is that the connection isn't 
 terminated after the response is sent.

Good catch with the HTTP 1.0 versus 1.1.

I'm not exactly an expert at HTTP spec and compliance, but I'm pretty
sure that the following are true:

1. HTTP 1.0 does not support chunked encoding, therefore Tomcat is
   somewhat correct in its failure to send a trailing 0\r\n\r\n.
   Does Tomcat send a chunked response header?

2. Calling response.flush() often triggers the use of chunked encoding,
   which could cause Tomcat to make a poor decision when HTTP 1.0 is
   involved.

Is it possible to configure nginx to use HTTP 1.1? I would think you'd
want to use 1.1 anyway.
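
[For reference: nginx's proxy module later gained a directive to speak HTTP/1.1 to upstream servers (proxy_http_version, added in nginx 1.1.4, well after this thread). Assuming a recent enough nginx, the relevant sketch would be:]

```nginx
location / {
    proxy_pass         http://backend;
    proxy_http_version 1.1;            # speak HTTP/1.1 to the upstream
    proxy_set_header   Connection "";  # drop the forced "Connection: close"
}
```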

- -chris




Re: Chunked encoding not terminated with native library

2011-03-11 Thread Chris
1. Yes, tomcat is sending the header: Transfer-Encoding: chunked
2. I've also tried using close() instead of flush() and I've seen the same 
chunked encoding.
3. The nginx reverse proxying with the HttpProxyModule only supports HTTP/1.0 
according to the Synopsis section here: http://wiki.nginx.org/HttpProxyModule

Based on my reading of the HTTP spec, closing the connection seems to be a 
valid alternative to sending the terminating 0\r\n\r\n.
Any ideas about why the connection wouldn't be closed when using APR?

On March 11, 2011 09:35:56 am Christopher Schultz wrote:
 Chris,
 
 On 3/10/2011 12:37 PM, Chris wrote:
   As I mentioned below, the 0\r\n\r\n was not being sent to nginx, 
  although it was being sent to curl. The difference was that nginx was doing 
  a GET HTTP/1.0, while curl was using HTTP/1.1. If I configure curl to use 
  HTTP/1.0 then I get the same result: no 0\r\n\r\n is sent.
 
  So, to summarize:
  - GET HTTP/1.0 results in no terminating 0\r\n\r\n for the chunked 
  response, regardless of whether libtcnative is being used.
  - The difference when using libtcnative is that the connection isn't 
  terminated after the response is sent.
 
 Good catch with the HTTP 1.0 versus 1.1.
 
 I'm not exactly an expert at HTTP spec and compliance, but I'm pretty
 sure that the following are true:
 
 1. HTTP 1.0 does not support chunked encoding, therefore Tomcat is
somewhat correct in its failure to send a trailing 0\r\n\r\n.
Does Tomcat send a chunked response header?
 
 2. Calling response.flush() often triggers the use of chunked encoding,
which could cause Tomcat to make a poor decision when HTTP 1.0 is
involved.
 
 Is it possible to configure nginx to use HTTP 1.1? I would think you'd
 want to use 1.1 anyway.
 
 -chris
 
 
 




Re: Chunked encoding not terminated with native library

2011-03-11 Thread André Warnier

Chris wrote:

1. Yes, tomcat is sending the header: Transfer-Encoding: chunked
2. I've also tried using close() instead of flush() and I've seen the same 
chunked encoding.
3. The nginx reverse proxying with the HttpProxyModule only supports HTTP/1.0 according 
to the Synopsis section here: http://wiki.nginx.org/HttpProxyModule

Based on my reading of the HTTP spec, closing the connection seems to be a valid 
alternative to sending the terminating 0\r\n\r\n.
Any ideas about why the connection wouldn't be closed when using APR?

I can't say exactly what, but something else also nagged at me when reading the original 
post : as I recall, HTTP 1.0 also does not support keep-alive connections.
Would there not be some invalid combination of request from the client for a keep-alive 
connection together with a HTTP 1.0 protocol, or with a setting at the Connector side ?

Maybe some code is getting confused at some invalid combination ?




Re: Chunked encoding not terminated with native library

2011-03-11 Thread Christopher Schultz

Chris,

On 3/11/2011 10:19 AM, Chris wrote:
 1. Yes, tomcat is sending the header: Transfer-Encoding: chunked

Okay. Does that happen even for HTTP 1.0? If so, that looks like a spec
violation to me.

 2. I've also tried using close() instead of flush() and I've seen the same 
 chunked encoding.

Why are you doing either of those things? Tomcat can properly flush and
close the response stream as necessary. It also makes your code simpler.

 3. The nginx reverse proxying with the HttpProxyModule only supports HTTP/1.0 
 according to the Synopsis section here: 
 http://wiki.nginx.org/HttpProxyModule

:(


 Based on my reading of the HTTP spec, closing the connection seems to
 be a valid alternative to sending the terminating 0\r\n\r\n. Any
 ideas about why the connection wouldn't be closed when using APR?

I have no idea. Have you tried the NIO connector just to see how it behaves?

- -chris




Re: Chunked encoding not terminated with native library

2011-03-11 Thread Christopher Schultz

André,

On 3/11/2011 4:45 PM, André Warnier wrote:
 Chris wrote:
 1. Yes, tomcat is sending the header: Transfer-Encoding: chunked
 2. I've also tried using close() instead of flush() and I've seen the
 same chunked encoding.
 3. The nginx reverse proxying with the HttpProxyModule only supports
 HTTP/1.0 according to the Synopsis section here:
 http://wiki.nginx.org/HttpProxyModule

 Based on my reading of the HTTP spec, closing the connection seems to
 be a valid alternative to sending the terminating 0\r\n\r\n.
 Any ideas about why the connection wouldn't be closed when using APR?

 I can't say exactly what, but something else also nagged at me when
 reading the original post : as I recall, HTTP 1.0 also does not support
 keep-alive connections.
 Would there not be some invalid combination of request from the client
 for a keep-alive connection together with a HTTP 1.0 protocol, or with a
 setting at the Connector side ?
 Maybe some code is getting confused at some invalid combination ?

Unlikely. nginx should allow the client to use keepalive, but then use
1.0-spec semantics when connecting to the backend.

That seems like a good reason to use something other than nginx to me.

- -chris




Re: Chunked encoding not terminated with native library

2011-03-10 Thread Chris
Hi All,
Yesterday I created bug 50906 for this issue: 
https://issues.apache.org/bugzilla/show_bug.cgi?id=50906

Since then I've got some more details to add:

- I'm running with nginx in front of tomcat
- The 60 second timeout is happening in nginx and not tomcat
- Regardless of whether or not I'm using libtcnative, I don't see the 
terminating 0\r\n\r\n being sent by tomcat to nginx
- When *not* using libtcnative, the connection is closed after writing the 
response; there's a FIN, in addition to ACK on the last TCP packet from tomcat. 
Nginx seems to use this to infer the end of the response and add the 
0\r\n\r\n to the reply sent to the client.
- When using libtcnative, there is no FIN on the last TCP ACK packet, and the 
connection stays open. One minute later nginx times out waiting for the 
response to complete and adds the 0\r\n\r\n to the response to the client.
- I notice that if I use curl to make a request directly to tomcat (instead of 
going through nginx), then I do see the terminating 0\r\n\r\n. I still see a 
difference in that tomcat disconnects immediately after the reply when *not* 
using the native library.
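
[For anyone eyeballing captures like the above: the terminator is the five bytes "0" CR LF CR LF, i.e. a zero-length chunk followed by an empty trailer. A minimal helper (my own sketch, not Tomcat code) to check whether a captured response ends with it:]

```java
import java.nio.charset.StandardCharsets;

public class ChunkTerminator {
    // The last-chunk marker that ends a chunked message body:
    // a zero-length chunk ("0\r\n") followed by the empty trailer ("\r\n").
    private static final byte[] TERMINATOR =
            "0\r\n\r\n".getBytes(StandardCharsets.ISO_8859_1);

    /** Returns true if the captured bytes end with the chunked terminator. */
    public static boolean endsWithTerminator(byte[] captured) {
        if (captured.length < TERMINATOR.length) {
            return false;
        }
        for (int i = 0; i < TERMINATOR.length; i++) {
            if (captured[captured.length - TERMINATOR.length + i] != TERMINATOR[i]) {
                return false;
            }
        }
        return true;
    }
}
```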

Any ideas?

Thanks,
Chris

On March 9, 2011 04:56:22 pm Mark Thomas wrote:
 On 09/03/2011 21:49, Chris wrote:
  Hi,
  I'm using Tomcat 7.0.8 on Ubuntu 10.10.
  
  When using the APR based Tomcat Native Library (libtcnative), responses 
  from Tomcat are being sent with a chunked encoding, but the 0 terminating 
  the chunked response isn't sent until exactly 1 minute later.
  
  The response is being written to an 
  org.apache.catalina.connector.CoyoteOutputStream. The following calls are 
  made:
  out.write(resp);
  out.flush();
  out.close();
  
  If I just remove the libtcnative-1.so, so that Tomcat loads without using 
  it, then the response still uses chunked encoding, but the terminating 0 
  is sent immediately, with the rest of the response.
  
  Any ideas would be appreciated.
 
 Sounds like a bug. Please create a bugzilla entry.
 
 Mark
 
 
 




Re: Chunked encoding not terminated with native library

2011-03-10 Thread Chris
I've narrowed this down even further.

 As I mentioned below, the 0\r\n\r\n was not being sent to nginx, although it 
was being sent to curl. The difference was that nginx was doing a GET HTTP/1.0, 
while curl was using HTTP/1.1. If I configure curl to use HTTP/1.0 then I get 
the same result: no 0\r\n\r\n is sent.

So, to summarize:
- GET HTTP/1.0 results in no terminating 0\r\n\r\n for the chunked response, 
regardless of whether libtcnative is being used.
- The difference when using libtcnative is that the connection isn't terminated 
after the response is sent.

Chris

On March 10, 2011 10:33:00 am Chris wrote:
 Hi All,
 Yesterday I created bug 50906 for this issue: 
 https://issues.apache.org/bugzilla/show_bug.cgi?id=50906
 
 Since then I've got some more details to add:
 
 - I'm running with nginx in front of tomcat
 - The 60 second timeout is happening in nginx and not tomcat
 - Regardless of whether or not I'm using libtcnative, I don't see the 
 terminating 0\r\n\r\n being sent by tomcat to nginx
 - When *not* using libtcnative, the connection is closed after writing the 
 response; there's a FIN, in addition to ACK on the last TCP packet from 
 tomcat. Nginx seems to use this to infer the end of the response and add the 
 0\r\n\r\n to the reply sent to the client.
 - When using libtcnative, there is no FIN on the last TCP ACK packet, and the 
 connection stays open. One minute later nginx times out waiting for the 
 response to complete and adds the 0\r\n\r\n to the response to the client.
 - I notice that if I use curl to make a request directly to tomcat (instead 
 of going through nginx), then I do see the terminating 0\r\n\r\n. I still 
 see a difference in that tomcat disconnects immediately after the reply when 
 *not* using the native library.
 
 Any ideas?
 
 Thanks,
 Chris
 
 On March 9, 2011 04:56:22 pm Mark Thomas wrote:
  On 09/03/2011 21:49, Chris wrote:
   Hi,
   I'm using Tomcat 7.0.8 on Ubuntu 10.10.
   
   When using the APR based Tomcat Native Library (libtcnative), responses 
   from Tomcat are being sent with a chunked encoding, but the 0 
   terminating the chunked response isn't sent until exactly 1 minute later.
   
   The response is being written to an 
   org.apache.catalina.connector.CoyoteOutputStream. The following calls are 
   made:
   out.write(resp);
   out.flush();
   out.close();
   
   If I just remove the libtcnative-1.so, so that Tomcat loads without using 
   it, then the response still uses chunked encoding, but the terminating 
   0 is sent immediately, with the rest of the response.
   
   Any ideas would be appreciated.
  
  Sounds like a bug. Please create a bugzilla entry.
  
  Mark
  
  
  
 




Re: Chunked encoding not terminated with native library

2011-03-09 Thread Mark Thomas
On 09/03/2011 21:49, Chris wrote:
 Hi,
 I'm using Tomcat 7.0.8 on Ubuntu 10.10.
 
 When using the APR based Tomcat Native Library (libtcnative), responses from 
 Tomcat are being sent with a chunked encoding, but the 0 terminating the 
 chunked response isn't sent until exactly 1 minute later.
 
 The response is being written to an 
 org.apache.catalina.connector.CoyoteOutputStream. The following calls are 
 made:
 out.write(resp);
 out.flush();
 out.close();
 
 If I just remove the libtcnative-1.so, so that Tomcat loads without using it, 
 then the response still uses chunked encoding, but the terminating 0 is 
 sent immediately, with the rest of the response.
 
 Any ideas would be appreciated.

Sounds like a bug. Please create a bugzilla entry.

Mark




Re: chunked encoding

2009-07-15 Thread Anthony J. Biacco
I'd like to re-open my initial chunking problem briefly here, and maybe move it 
to apache/modules-dev if it ends up not applying here anymore.
If you all remember, I was having the problem of not being able to get my 
request response NOT-chunked from my webapps via apache-mod_jk-tomcat.
So our dev team has added a content-length header to our webapp response.
The results below show that chunking was still taking place under normal 
circumstances. The only way to get the content length passed through was to 
turn gzip/deflate off, which for me isn't an option since if I turn it off, 
then the header doesn't get passed to my CDN which means it doesn't go to 
end-users which means my bandwidth usage skyrockets.
It seems that Rainer might be right, per his earlier response below: gzip 
could be suppressing the Content-Length header because of streaming.
Here would be my question:
If that's true, why wouldn't it do the same for static files? If both a static 
file's data and a dynamic webapp's data size is known through the 
content-length, what would make one different than the other to cause the gzip 
module to behave this way? Assuming that's the cause.

If you guys are confident this no longer relates to tomcat, I'll move my 
problem over to the apache lists.
I appreciate your help, if you have any other ideas.

-Tony

1. With gzip on, http/1.1, the response was still coming back chunked, no 
content length header (note: compressed bytes is 536)
Date: Wed, 15 Jul 2009 16:09:38 GMT
Server: Apache
Last-Modified: Wed, 15 Jul 2009 16:09:38 GMT
Cache-Control: max-age=300, must-revalidate
Expires: Wed, 15 Jul 2009 16:14:38 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Connection: close
Transfer-Encoding: chunked
Content-Type: text/javascript
Content-Language: en
2. With gzip on, http/1.0, the response was not coming back with a chunked 
header, but also not coming back with a content-length header (note: compressed 
bytes is 536)
Date: Wed, 15 Jul 2009 16:09:38 GMT
Server: Apache
Last-Modified: Wed, 15 Jul 2009 16:09:38 GMT
Cache-Control: max-age=300, must-revalidate
Expires: Wed, 15 Jul 2009 16:14:38 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Connection: close
Content-Type: text/javascript
Content-Language: en
3. With gzip off, http/1.1, the response is not coming back chunked and has a 
content length header
Date: Wed, 15 Jul 2009 15:59:19 GMT
Server: Apache
Last-Modified: Wed, 15 Jul 2009 15:59:19 GMT
Content-Length: 3444
Cache-Control: max-age=300, must-revalidate
Expires: Wed, 15 Jul 2009 16:04:19 GMT
Connection: close
Content-Type: text/javascript
Content-Language: en
4. With gzip off, http/1.0, the response is not coming back chunked and has a 
content length header
Date: Wed, 15 Jul 2009 15:59:19 GMT
Server: Apache
Last-Modified: Wed, 15 Jul 2009 15:59:19 GMT
Content-Length: 3444
Cache-Control: max-age=300, must-revalidate
Expires: Wed, 15 Jul 2009 16:04:19 GMT
Connection: close
Content-Type: text/javascript
Content-Language: en


-Tony
---
Manager, IT Operations
Format Dynamics, Inc.
303-573-1800x27
abia...@formatdynamics.com
http://www.formatdynamics.com

 -Original Message-
 From: Rainer Jung [mailto:rainer.j...@kippdata.de]
 Sent: Friday, June 12, 2009 3:44 AM
 To: Tomcat Users List
 Subject: Re: chunked encoding
 
 On 12.06.2009 10:43, Markus Schönhaber wrote:
  Anthony J. Biacco:
 
   Hence the idea about downgrading to HTTP 1.0. But that doesn't get me
   the content length header still (which in itself is strange),
 
  No, it's not strange at all. If the length of the response body is not
  known when the response headers are sent, you obviously can't add a
  Content-Length header. That has nothing to do with the HTTP version used.
 
 ... true, but an HTTP/1.0 client can also just read until the connection
 is closed. That's another way of handling content of unknown length.
 
 BTW: IIRC, the OP mentioned mod_deflate compression. It comes last in
 the response handling. I'm not totally sure how mod_deflate changes the
 headers (whether content-length is for the uncompressed or compressed
 size), but I expect mod_deflate to also change content of fixed length
 to chunked encoding, because in general (not small content) it does not
 know the final length in advance. mod_deflate streams, i.e. it doesn't
 first read the full response and then compresses

Re: chunked encoding

2009-07-15 Thread André Warnier

Anthony J. Biacco wrote:

I'd like to re-open my initial chunking problem briefly here, and maybe move it 
to apache/modules-dev


Maybe more like the Apache users list though.

Let's maybe summarise the issue first.
Your configuration is :
client - apache httpd with mod_deflate - mod_jk - Tomcat

The responses are generated by Tomcat, but you have to see this a bit 
differently.  As far as Apache httpd is concerned, the response 
handler or content generator is mod_jk.  Apache does not really know 
or care that there is a Tomcat behind.  It just passes a request to 
mod_jk, and gets a response in return.


The real culprit here for your chunked encoding and lack of 
content-length header is mod_deflate (as Rainer indicated).
It has to do that, because it compresses the response on-the-fly, and 
does not know the compressed response size in advance.


There are quite a few possibilities to tailor the behaviour of 
mod_deflate, so maybe you want to have a look again at

http://httpd.apache.org/docs/2.2/mod/mod_deflate.html
before anything else.






RE: chunked encoding

2009-07-15 Thread Anthony J. Biacco
 
 The real culprit here for your chunked encoding and lack of
 content-length header is mod_deflate (as Rainer indicated).
 It has to do that, because it compresses the response on-the-fly, and
 does not know the compressed response size in advance.
 

Which would be fine (well not fine, but I'd understand), if it had the
same behavior when compressing static files also.

-Tony

 There are quite a few possibilities to tailor the behaviour of
 mod_deflate, so maybe you want to have a look again at
 http://httpd.apache.org/docs/2.2/mod/mod_deflate.html
 before anything else.

Yep, checked those and tried with various values, no dice.

-Tony





Re: chunked encoding

2009-07-14 Thread charliehnabble



awarnier wrote:
 
 charliehnabble wrote:
 What if I WANT chunked?
 
 I have a device that sends http POST header and xml request payload in
 one
 packet. Tomcat responds chunked. However when an intervening router
 decides
 to split the http POST header and xml request into two packets, Tomcat
 responds non-chunked.
 
 On the face of it, that does not seem to make any sense.  Whether Tomcat 
 sends the response chunked or not, shouldn't have anything to do with 
 the way the request comes in.  At least not with whether it comes in as 
 one or two packets (definition of packet needed here).
 It could have something to do with an accept-encoding HTTP request 
 header however.

Excuse me, by packet I meant IP datagram. See, there's a router in the
path that splits my POST into two IP datagrams, one containing the http
header and one containing the http payload (an xml message). It also adds
a Connection: close header. Apparently splitting the http message and
adding Connection: close causes Tomcat to send non-chunked.

BTW, Tomcat also doesn't send a content-length header, so if it's not
chunked I don't know how long the message is.







RE: chunked encoding

2009-07-14 Thread Caldarale, Charles R
 From: charliehnabble [mailto:nab...@hand-family.org]
 Subject: Re: chunked encoding
 
 Excuse me, by packet I meant IP datagram.

Just a terminology nit: datagram normally refers to a UDP packet, and we're 
using TCP here.

 - Chuck








Re: chunked encoding

2009-07-14 Thread André Warnier

Caldarale, Charles R wrote:

From: charliehnabble [mailto:nab...@hand-family.org]
Subject: Re: chunked encoding

Excuse me, by packet I meant IP datagram.


Just a terminology nit: datagram normally refers to a UDP packet, and we're 
using TCP here.

I'll add another nit: if the router is smart enough to split the 
request into headers and content, and in addition to insert an extra 
HTTP header, then it's not a simple router.  It's probably a HTTP proxy.

And it doesn't like persistent connections..mm.
Does it by any chance also downgrade the request to HTTP/1.0 ?




Re: chunked encoding

2009-07-14 Thread Christopher Schultz

Charlie,

On 7/14/2009 9:11 AM, charliehnabble wrote:
 See, there's a router in the
 path that splits my POST into two IP datagrams, one containing the http
 header and one containing the http payload (an xml message). It also adds
 a Connection: close header. Apparently splitting the http message and
 adding Connection: close causes Tomcat to send non-chunked.

You have a router that is modifying HTTP messages? I think that's called
something other than a router.

- -chris




Re: chunked encoding

2009-07-14 Thread charliehnabble


awarnier wrote:
 
 Caldarale, Charles R wrote:
 From: charliehnabble [mailto:nab...@hand-family.org]
 Subject: Re: chunked encoding

 Excuse me, by packet I meant IP datagram.
 
 Just a terminology nit: datagram normally refers to a UDP packet, and
 we're using TCP here.
 
 I'll add another nit: if the router is smart enough to split the 
 request into headers and content, and in addition to insert an extra 
 HTTP header, then it's not a simple router.  It's probably a HTTP proxy.
 And it doesn't like persistent connections..mm.
 Does it by any chance also downgrade the request to HTTP/1.0 ?
 

No, 1.1.

Here's my original header:

POST /BlackBoxServer/VR350 HTTP/1.1
Host:
Content-Type:text/xml;charset=utf8
Content-Length:263

And the modified header:

POST /BlackBoxServer/VR350 HTTP/1.1
Host:
Content-Type:text/xml;charset=utf8
Connection:Close
Content-Length:263

Looks like I should be sending connection:close anyway, since I'm through
with the connection after I get the response. Maybe it is connection:close
that makes Tomcat not send a chunk length. I don't know why Tomcat doesn't
put a content-length header in that case.

Here's Tomcat's response to the request with the Connection: close header:

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: text/xml;charset=utf-8
Date: Mon, 13 Jul 2009 20:09:31 GMT
Connection: close

<?xml version="1.0" ?><S:Envelope> etc, etc





RE: chunked encoding

2009-07-14 Thread charliehnabble


Caldarale, Charles R wrote:
 
 From: charliehnabble [mailto:nab...@hand-family.org]
 Subject: Re: chunked encoding
 
 Excuse me, by packet I meant IP datagram.
 
 Just a terminology nit: datagram normally refers to a UDP packet, and
 we're using TCP here.
 
 

http://en.wikipedia.org/wiki/Packet_(information_technology)#Packets_vs._datagrams

It's on the Internet, it must be true. :-)






Re: chunked encoding

2009-07-14 Thread André Warnier

charliehnabble wrote:

 Maybe it is connection:close
 that makes Tomcat not send a chunk length. I don't know why Tomcat doesn't
 put a content-length header in that case.

Now that I believe is normal.  As I recall the HTTP RFC (2616?), that is 
the only case where the server does not have to send a content length : 
when it closes the connection at the end of the response anyway.
The point is, the client must have a non-ambiguous way of knowing when 
the response content ends.

So there is either :
- a persistent connection with a content-length header
- a persistent connection with chunked encoding (where each chunk 
indicates a length, and there is a last chunk of 0 length)
- or a connection closing at the end of the response body, with or 
without a content-length header (kind of, without is in that case tolerated)
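
Those three cases can be checked mechanically. A sketch (the names are mine, not from any library) of deciding how a response body is delimited, given its headers:

```java
import java.util.Map;

public class BodyDelimiter {
    public enum Kind { CONTENT_LENGTH, CHUNKED, CONNECTION_CLOSE }

    /**
     * Decide how the message body of a response is delimited, per the
     * three cases above. Header names are assumed already lower-cased.
     * Transfer-Encoding takes precedence over Content-Length.
     */
    public static Kind classify(Map<String, String> headers) {
        String te = headers.get("transfer-encoding");
        if (te != null && te.toLowerCase().contains("chunked")) {
            return Kind.CHUNKED;  // read chunks until the 0-length chunk
        }
        if (headers.containsKey("content-length")) {
            return Kind.CONTENT_LENGTH;  // read exactly that many bytes
        }
        // No explicit length: the server must close the connection to
        // mark the end of the body.
        return Kind.CONNECTION_CLOSE;
    }
}
```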


Where this all leaves your problem, I don't know.

You could, in your webapp, force a content-length header to be output no 
matter what. (Assuming you know in advance what this length is going to 
be, before you write the first byte of it to the response output stream)
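
The usual way to know the length in advance is to render the body into a buffer first and only write headers afterwards. A plain-Java sketch of the idea (in a servlet you'd call response.setContentLength(body.length) instead of writing raw headers):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class BufferedBody {
    /** Render the body into memory so its exact length is known up front. */
    public static byte[] render(String body) {
        return body.getBytes(StandardCharsets.UTF_8);
    }

    /** Write headers (with Content-Length) and then the buffered body. */
    public static void write(OutputStream out, byte[] body) throws IOException {
        String headers = "HTTP/1.1 200 OK\r\n"
                + "Content-Length: " + body.length + "\r\n"
                + "\r\n";
        out.write(headers.getBytes(StandardCharsets.ISO_8859_1));
        out.write(body);
        out.flush();
    }
}
```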


It seems to me that you should find out precisely what your device or 
router does not like : maybe it just does not like a non-chunked 
answer /without/ a content-length header (which should technically be 
ok, but maybe it is a bit picky).








RE: chunked encoding

2009-07-13 Thread charliehnabble

I debated putting this in a separate thread, but there seems to be so much
expertise focused on this thread:

What if I WANT chunked?

I have a device that sends http POST header and xml request payload in one
packet. Tomcat responds chunked. However when an intervening router decides
to split the http POST header and xml request into two packets, Tomcat
responds non-chunked. My device wants chunked response.

Any way to force Tomcat to respond chunked?





Re: chunked encoding

2009-07-13 Thread André Warnier

charliehnabble wrote:

I debated putting this in a separate thread, but there seems to be so much
expertise focused on this thread:

What if I WANT chunked?

I have a device that sends http POST header and xml request payload in one
packet. Tomcat responds chunked. However when an intervening router decides
to split the http POST header and xml request into two packets, Tomcat
responds non-chunked.


On the face of it, that does not seem to make any sense.  Whether Tomcat 
sends the response chunked or not, shouldn't have anything to do with 
the way the request comes in.  At least not with whether it comes in as 
one or two packets (definition of packet needed here).
It could have something to do with an accept-encoding HTTP request 
header however.


 My device wants chunked response.

Hate to say this, but your device is wrong. It should be able to accept 
a response chunked or not.




Any way to force Tomcat to respond chunked?


I believe Tomcat will send the response chunked, if it doesn't know how 
long the response will be when it sends out the first response headers.
If your app tells Tomcat by setting a Content-length header, then you're 
cooked.
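
For the curious, the framing Tomcat emits in that case is simple. A minimal encoder sketch (my own, not Tomcat's actual chunked output filter) for a single-chunk body:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ChunkedEncoder {
    /** Frame a payload as one chunk followed by the last-chunk marker. */
    public static byte[] encode(byte[] payload) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // chunk-size in hex, CRLF, chunk-data, CRLF
        out.write(Integer.toHexString(payload.length)
                .getBytes(StandardCharsets.ISO_8859_1));
        out.write('\r'); out.write('\n');
        out.write(payload);
        out.write('\r'); out.write('\n');
        // last chunk: "0" CRLF, then an empty trailer section CRLF
        out.write("0\r\n\r\n".getBytes(StandardCharsets.ISO_8859_1));
        return out.toByteArray();
    }
}
```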





Re: chunked encoding

2009-06-15 Thread Christopher Schultz

Martin,

On 6/13/2009 8:57 PM, Martin Gainty wrote:
 how do you accommodate multi-MB size files?

You do one of the following:

1. Use some means to determine the file size a priori (like using a
   static file, but in a database, so you can ask the db how big it is)

2. Buffer the file on the disk instead of in-memory

3. Just move the file to the disk instead of serving it out of a
   database

4. Suck it up and not deliver the Content-Length header, disabling
   CDN caching :(
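
Option 2 can be sketched with stdlib temp files: stream the unknown-length source to disk once, and the resulting file size is the exact Content-Length (names here are illustrative, not from any framework):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class DiskBuffer {
    /**
     * Spool an input stream of unknown length to a temp file and return
     * the path; Files.size(path) then gives the exact Content-Length,
     * without holding the whole payload in memory.
     */
    public static Path spool(InputStream in) throws IOException {
        Path tmp = Files.createTempFile("resp-", ".buf");
        tmp.toFile().deleteOnExit();
        Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
        return tmp;
    }
}
```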

- -chris




RE: chunked encoding

2009-06-15 Thread Martin Gainty

mg(hopefully) brief response

 Date: Mon, 15 Jun 2009 12:22:28 -0400
 From: ch...@christopherschultz.net
 To: users@tomcat.apache.org
 Subject: Re: chunked encoding
 
 
 Martin,
 
 On 6/13/2009 8:57 PM, Martin Gainty wrote:
  how do you accommodate multi-MB size files?
 
 You do one of the following:
 
 1. Use some means to determine the file size a priori (like using a
static file, but in a database, so you can ask the db how big it is)

MG: lob_loc = ((OracleResultSet)rset).getBFILE(1); long length = lob_loc.length();
MG: http://download-west.oracle.com/docs/cd/B13789_01/appdev.101/b10796/adlob_bf.htm#1014966
 
 2. Buffer the file on the disk instead of in-memory
MG: DiskFileUpload upload = new DiskFileUpload();
MG: http://svn.apache.org/viewvc/commons/proper/fileupload/trunk/xdocs/overview.xml?revision=560660
 
 3. Just move the file to the disk instead of serving it out of a
database
MG: what most companies do (I usually do this myself), then insert a reference 
URL into the DB
MG: but what if the server goes down? ..uh oh..
 
 4. Suck it up and not deliver the Content-Length header, disabling
CDN caching :(
MG: very interesting.. I'd like to hear more about this..
 
 - -chris
MG: --martin

 


Re: chunked encoding

2009-06-13 Thread Rainer Jung
On 12.06.2009 17:48, Anthony J. Biacco wrote:
 Rainer Jung:

 On 12.06.2009 10:43, Markus Schönhaber wrote:
 No, it's not strange at all. If the length of the response body is not
 known when the response headers are sent, you obviously can't add a
 Content-Length header. That has nothing to do with the HTTP version used.
 ... true, but an HTTP/1.0 client can also just read until the connection
 is closed. That's another way of handling content of unknown length.
 Yes, that's exactly what I was pointing at.
 IOW, using HTTP/1.0 doesn't magically add a Content-Length header (as
 the OP seems to have expected) in situations where the size of the
 
 I was 1/2 hoping it would add the content-length header and 1/2 hoping it'd
 just stop chunking. Getting both was a pipe dream :-)
 
 response body isn't known beforehand. The difference between HTTP/1.1
 and HTTP/1.0 wrt this situation is simply what has to be done to enable
 the client to know about the end of transmission. While 1.1 will need to
 transfer the body chunked (at least with keep-alive), 1.0 doesn't know
 nor care about chunked because the server will close the underlying TCP
 connection when the response is completely sent.
 
 Yes, and I think that with keep-alive off, apache should not chunk (or at
 least give the option to) since it knows I am closing the connection right
 after the response is finished.

I suggest using the environment variables downgrade-1.0 and nokeepalive,
maybe also no-gzip. You can set them via mod_rewrite dynamically. So you
can support keep-alive for normal requests and the other configuration
for the CDN. Of course this will only help, if you can determine a CDN
request, e.g. via the user-agent, IP or similar.
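A hypothetical httpd.conf fragment along these lines (the CDN User-Agent pattern is an assumption; BrowserMatch is the simplest way to set these variables per-request, mod_rewrite with [E=...] flags works too):

```apache
# Force HTTP/1.0 semantics, no keep-alive and no compression, but only for
# requests that look like they come from the CDN (hypothetical UA pattern).
BrowserMatch "ExampleCDN" downgrade-1.0 nokeepalive no-gzip
```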

Regards,

Rainer




Re: chunked encoding

2009-06-13 Thread Rainer Jung
On 13.06.2009 14:51, Rainer Jung wrote:
Sorry, I forgot the link:

http://httpd.apache.org/docs/2.2/env.html




RE: chunked encoding

2009-06-13 Thread Anthony J. Biacco


 
 Yes, and I think that with keep-alive off, apache should not chunk (or at
 least give the option to) since it knows I am closing the connection right
 after the response is finished.

 I suggest using the environment variables downgrade-1.0 and nokeepalive,
 maybe also no-gzip. You can set them via mod_rewrite dynamically. So you
 can support keep-alive for normal requests and the other configuration
 for the CDN. Of course this will only help, if you can determine a CDN
 request, e.g. via the user-agent, IP or similar.

I don't do keep-alives anywhere and never saw much benefit in our situation, 
since we don't serve a web page to end users, just this separate JS content, 
which is like 1-3 requests per IP. Keeping those connections open 
seems like a waste.
In any case, yeah, I've been testing out the protocol downgrade for the CDN 
(they hit us on a particular VH so I can differentiate). The no-gzip probably 
won't work, since if I don't pass on gzip headers then they don't pass them on 
to end-users.

Thanx,

-Tony



Re: chunked encoding

2009-06-13 Thread Bill Barker

Christopher Schultz ch...@christopherschultz.net wrote in message 
news:4a32c4e3.6060...@christopherschultz.net...

 Anthony,

 On 6/12/2009 1:47 PM, Anthony J. Biacco wrote:
 Well, they used to be static JS files, then we decide we wanted more
 flexibility in the content that went into them, so we stuck them in a
 database and decided to generate them as needed.

 Er.. SELECT LENGTH(content), content FROM content_table?

 Or are you saying that you get some kind of template from the database
 and fill-in the details dynamically.

 I have to imagine that you could figure out the length of this data
 before you start streaming it back to the client. In that case, you
 simply have to provide your own Content-Length header.

 This just sounds like you're making it harder than it is.


Generally going to agree with Chris here that you're making it harder than 
it is.  If you are sending files on the order of about 12Kb (as specified in 
another post), then just put a Filter in front of it that wraps the response 
and the wrapper buffers the content, and then sets the content-length header 
when control returns to the Filter.  I did one of these as a toy a few years 
back (meaning that the coding style is awful), but it worked fine.  Of 
course, this doesn't work well if you expect to send multi MB sized files.
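The buffering idea behind such a Filter can be sketched in isolation (standalone Java; the servlet-api calls appear only as comments, since this is not the actual Filter referred to above): writes accumulate in memory, and only at close(), when the total size is known, does anything reach the real stream.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Buffering wrapper sketch: every write is held in memory; at close(),
// when the byte count is known, the body is flushed downstream. In a real
// servlet Filter, close() is where response.setContentLength() would go.
class BufferingStream extends FilterOutputStream {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    BufferingStream(OutputStream target) { super(target); }

    @Override public void write(int b) { buffer.write(b); }  // buffer, don't send

    @Override public void close() throws IOException {
        byte[] body = buffer.toByteArray();
        // In the real Filter: response.setContentLength(body.length);
        out.write(body);   // now forward the complete, sized body
        out.flush();
    }
}

public class FilterSketch {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        try (BufferingStream bs = new BufferingStream(wire)) {
            bs.write("var x = 1;".getBytes());  // simulated servlet output
        }
        System.out.println(wire.size());  // length known only after buffering
    }
}
```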

 - -chris







RE: chunked encoding

2009-06-13 Thread Martin Gainty

how do you accommodate multi-MB size files?

Martin Gainty 





Re: chunked encoding

2009-06-12 Thread Markus Schönhaber
Anthony J. Biacco:

 Hence the idea about downgrading to http 1.0. But that doesn't get me
 the content length header still (which in itself is strange),

No, it's not strange at all. If the length of the response body is not
known when the response headers are sent, you obviously can't add a
Content-Length header. That has nothing to do with the HTTP version used.

 though I could (although I'm sure to get yelled at for) fake the
 content-length header with something in apache like: Header add
 Content-length 5 Where 5 is some number larger than my
 largest possible response. Again, probably not the greatest idea.

Probably not. Did you try using ServletResponse#setBufferSize as I
suggested in my other post?

BTW: For Tomcat's NIO Connector I see the socket.appWriteBufSize
property which seems to set the output buffer size globally.

-- 
Regards
  mks




Re: chunked encoding

2009-06-12 Thread Rainer Jung
On 12.06.2009 10:43, Markus Schönhaber wrote:
 Anthony J. Biacco:
 
 Hence the idea about downgrading to http 1.0. But that doesn't get me
 the content length header still (which in itself is strange),
 
 No, it's not strange at all. If the length of the response body is not
 known when the response headers are sent, you obviously can't add a
 Content-Length header. That has nothing to do with the HTTP version used.

... true, but an HTTP/1.0 client can also just read until the connection
is closed. That's another way of handling content of unknown length.

BTW: IIRC, the OP mentioned mod_deflate compression. It comes last in
the response handling. I'm not totally sure how mod_deflate changes the
headers (whether Content-Length refers to the uncompressed or the compressed
size), but I expect mod_deflate to also change content of fixed length
to chunked encoding, because in general (for non-small content) it does not
know the final length in advance. mod_deflate streams, i.e. it doesn't
first read the full response and then compress it.

Regards,

Rainer




Re: chunked encoding

2009-06-12 Thread Markus Schönhaber
Rainer Jung:

 On 12.06.2009 10:43, Markus Schönhaber wrote:

 No, it's not strange at all. If the length of the response body is not
 known when the response headers are sent, you obviously can't add a
 Content-Length header. That has nothing to do with the HTTP version used.
 
 ... true, but an HTTP/1.0 client can also just read until the connection
 is closed. That's another way of handling content of unknown length.

Yes, that's exactly what I was pointing at.
IOW, using HTTP/1.0 doesn't magically add a Content-Length header (as
the OP seems to have expected) in situations where the size of the
response body isn't known beforehand. The difference between HTTP/1.1
and HTTP/1.0 wrt this situation is simply what has to be done to enable
the client to know about the end of transmission. While 1.1 will need to
transfer the body chunked (at least with keep-alive), 1.0 doesn't know
nor care about chunked because the server will close the underlying TCP
connection when the response is completely sent.

-- 
Regards
  mks





Re: chunked encoding

2009-06-12 Thread André Warnier

Markus Schönhaber wrote:
 Rainer Jung:
 On 12.06.2009 10:43, Markus Schönhaber wrote:
 No, it's not strange at all. If the length of the response body is not
 known when the response headers are sent, you obviously can't add a
 Content-Length header. That has nothing to do with the HTTP version used.
 ... true, but an HTTP/1.0 client can also just read until the connection
 is closed. That's another way of handling content of unknown length.
 Yes, that's exactly what I was pointing at.
 IOW, using HTTP/1.0 doesn't magically add a Content-Length header (as
 the OP seems to have expected) in situations where the size of the
 response body isn't known beforehand. The difference between HTTP/1.1
 and HTTP/1.0 wrt this situation is simply what has to be done to enable
 the client to know about the end of transmission. While 1.1 will need to
 transfer the body chunked (at least with keep-alive), 1.0 doesn't know
 nor care about chunked because the server will close the underlying TCP
 connection when the response is completely sent.


In summary thus :

- making the request be HTTP 1.0, no matter how it's done, is not going 
to magically make Tomcat send the response in one chunk nor add a 
Content-Length header.
(it may just /prevent/ it from adding a Transfer-Encoding: 
chunked header, yes ?)


- the first-choice solution would be to have the CDN fix their software, 
or select another CDN which can handle chunked content.


- the second-best would be :
(presuming the OP knows at some point the real size of the data chunk 
that has to be sent back.)
Write a servlet which obtains the data, then uses 
response.setContentLength(nnn), then does a 
response.getWriter/getOutputStream, then writes the data there. Yes ?


- if the above is not acceptable/practical, then another solution would 
be to intercept and buffer the full response somewhere, calculate its 
size, and then forward it unchunked, preceded by a proper Content-Length 
header.
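For concreteness, that second-best option might be sketched like this (standalone Java; the servlet-api calls appear only as comments, and the sample JS body is invented): render the whole body into a byte buffer first, so the exact length is known before any output is committed.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

// Sketch of the "buffer first, then set Content-Length" approach: all
// dynamic generation goes into an in-memory buffer, so the byte count is
// known a priori and chunking is never needed.
public class ContentLengthSketch {
    static byte[] renderBody(String content) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (OutputStreamWriter w = new OutputStreamWriter(buf, StandardCharsets.UTF_8)) {
            w.write(content);  // all dynamic generation goes through here
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] body = renderBody("var config = {\"ttl\": 300};");
        // response.setContentLength(body.length);  // must happen before...
        // response.getOutputStream().write(body);  // ...any body bytes go out
        System.out.println(body.length);
    }
}
```

The key point is ordering: setContentLength must be called before the first body byte is written, because headers are committed as soon as output starts flowing.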


I have written a couple of (simplistic, beginner-level) servlets which 
produce output.  I cannot recall ever having had to add any code myself to 
add Transfer-Encoding headers or to chunk the response.

So at what level in Tomcat does that happen ?
I mean, would a servlet filter intercepting the response be early 
enough, or late enough to do that ?
Or does Tomcat already decide to add this chunking header at the moment 
the servlet does the response.getWriter/getOutputStream, if it hasn't 
seen yet a Content-Length header by then ?







Re: chunked encoding

2009-06-12 Thread Markus Schönhaber
André Warnier:

 In summary thus :
 
 - making the request be HTTP 1.0, no matter how it's done, is not going 
 to magically make Tomcat send the response in one chunk nor add a 
 Content-Length header.

Exactly.

 (it may just /prevent/ it from adding a Transfer-Encoding: 
 chunked header, yes ?)

It may prevent it from sending chunked content (and adding the
appropriate header) in 100% of the cases, since there's no chunked
transfer encoding in HTTP/1.0. IOW, you may replace "may" with "will" in
the above sentence ;-).

 - the first-choice solution would be to have the CDN fix their software, 
 or select another CDN which can handle chunked content.

I agree.

 - the second-best would be :
 (presuming the OP knows at some point the real size of the data chunk 
 that has to be sent back.)
 Write a servlet which obtains the data, then uses 
 response.setContentLength(nnn), then does a 
 response.getWriter/getOutputStream, then writes the data there. Yes ?
 
 - if the above is not acceptable/practical, then another solution would 
 be to intercept and buffer the full response somewhere, calculate its 
 size, and then forward it unchunked, preceded by a proper Content-Length 
 header.

Yes.
I just noticed that the OP said he was going to experiment with setting
the bufferSize attribute of the AJP Connector to a higher value.
That might indeed be the easiest workaround - provided the output his
servlets/JSPs generate does not exceed the buffer size - and this
attribute really does what I understand it does.
Using ServletResponse#setBufferSize, which I already mentioned, might
work too - on a per-servlet level. But if increasing the value of the
bufferSize attribute of the Connector works, it's much less hassle.
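For example, a hypothetical server.xml fragment (attribute behavior as understood above; the value is illustrative):

```xml
<!-- Raise the output buffer so responses smaller than it can get a
     Content-Length header instead of being chunked. -->
<Connector port="8009" protocol="AJP/1.3" bufferSize="65536" />
```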

-- 
Regards
  mks




RE: chunked encoding

2009-06-12 Thread Anthony J. Biacco
 
 Rainer Jung:
 
  On 12.06.2009 10:43, Markus Schönhaber wrote:
 
  No, it's not strange at all. If the length of the response body is not
  known when the response headers are sent, you obviously can't add a
  Content-Length header. That has nothing to do with the HTTP version used.
 
  ... true, but an HTTP/1.0 client can also just read until the connection
  is closed. That's another way of handling content of unknown length.
 
 Yes, that's exactly what I was pointing at.
 IOW, using HTTP/1.0 doesn't magically add a Content-Length header (as
 the OP seems to have expected) in situations where the size of the

I was 1/2 hoping it would add the content-length header and 1/2 hoping it'd 
just stop chunking. Getting both was a pipe dream :-)

 response body isn't known beforehand. The difference between HTTP/1.1
 and HTTP/1.0 wrt this situation is simply what has to be done to enable
 the client to know about the end of transmission. While 1.1 will need to
 transfer the body chunked (at least with keep-alive), 1.0 doesn't know
 nor care about chunked because the server will close the underlying TCP
 connection when the response is completely sent.

Yes, and I think that with keep-alive off, apache should not chunk (or at least 
give the option to) since it knows I am closing the connection right after the 
response is finished.

-Tony

 
 --
 Regards
   mks
 
 



RE: chunked encoding

2009-06-12 Thread Anthony J. Biacco
 
  - the first-choice solution would be to have the CDN fix their software,
  or select another CDN which can handle chunked content.
 
 I agree.

And you know how easy that will be :-)

  - the second-best would be :
  (presuming the OP knows at some point the real size of the data chunk
  that has to be sent back.)
  Write a servlet which obtains the data, then uses
  response.setContentLength(nnn), then does a
  response.getWriter/getOutputStream, then writes the data there. Yes ?
 
  - if the above is not acceptable/practical, then another solution would
  be to intercept and buffer the full response somewhere, calculate its
  size, and then forward it unchunked, preceded by a proper Content-Length
  header.
 
 Yes.

Yes, that's probably possible, I just don't know how it would affect the 
load-time of the app or any timeouts, since then I'd be writing no data 
until the very end of the servlet, when all the output is collected, sized, and 
then shipped out.

 I just noticed that the OP said he was going to experiment with setting
 the bufferSize attribute of the AJP Connector to a higher value.
 That might indeed be the easiest workaround - provided the output his
 servlets/JSPs generate do not exceed the buffer size - and this
 attribute really does what I understand it does.
 Using ServletResponse#setBufferSize, which I already mentioned, might
 work too - on an per servlet level. But if increasing the value of the
 bufferSize attribute of the Connector works, it's much less hassle.
 

Yeah, the bufferSize attribute on the AJP connector (and HTTP connector) didn't 
help. I set it to 16384, and a 12K response still got chunked.
I haven't tried the servlet method, mostly because I'm not a Java programmer. I 
could figure it out, but I'd much rather have our dev team handle it when I 
can get them free for a couple of minutes.

-Tony



RE: chunked encoding

2009-06-12 Thread Anthony J. Biacco

 
 BTW: IIRC, the OP mentioned mod_deflate compression. It comes last in
 the response handling. I'm not totally sure, how mod_deflate changes the
 headers (whether content-length is for the uncompressed or compressed
 size), but I expect mod_deflate to also change content of fixed length
 to chunked encoding, because in general (not small content) it does not
 know the final length in advance. mod_deflate streams, i.e. it doesn't
 first read the full response and then compresses.

Yes, I am using mod_deflate. It doesn't set the Content-Length to the
length of the compressed content. My 12K response (original content
size) compresses to 3K, and even though that's not large enough that it
should be chunked, it STILL chunks it. It's as if the chunking decision
carries over from the original content size: even though the compressed
size is small enough that it doesn't need to be chunked anymore, it still
does it.


COMPRESSED (3K) apache-tomcat
Date: Fri, 12 Jun 2009 16:03:45 GMT
Server: Apache
Cache-Control: max-age=300, must-revalidate
Expires: Fri, 12 Jun 2009 16:08:45 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Connection: close
Transfer-Encoding: chunked
Content-Type: text/javascript
Content-Language: en

NOT-COMPRESSED (12K) apache-tomcat
Date: Fri, 12 Jun 2009 16:06:12 GMT
Server: Apache
Cache-Control: max-age=300, must-revalidate
Expires: Fri, 12 Jun 2009 16:11:12 GMT
Connection: close
Transfer-Encoding: chunked
Content-Type: text/javascript
Content-Language: en
X-Pad: avoid browser bug

NOT-COMPRESSED (12K) tomcat
Content-Type: text/javascript
Transfer-Encoding: chunked
Date: Fri, 12 Jun 2009 16:20:38 GMT
Server: Apache

-Tony






RE: chunked encoding

2009-06-12 Thread Anthony J. Biacco

 
 Maybe your idea of making this be a HTTP 1.0 request, or say set
 whatever internal flag Tomcat would itself set if it had been an HTTP
 1.0 request.  Perhaps a servlet filter is soon enough, or if not, a Valve.
 Provided that would do the trick, it is also something you could do at
 the Apache level, before proxying to Tomcat.
 I am wondering about possible side-effects though.  The chunked encoding
 is probably not the only difference between 1.0 and 1.1. For example, if
 your Tomcat has Virtual Hosts, it may be an issue.
 

Yeah, I do have a VH in addition to the normal localhost, I'll have to test 
that out with the 1.0 connection.

 Now, maybe another stupid question : do you /have/ to generate these
 javascripts with Tomcat ? Couldn't your front-end generate them by
 itself ?
 

Well, they used to be static JS files; then we decided we wanted more 
flexibility in the content that went into them, so we stuck them in a database 
and decided to generate them as needed. But you know, you make something better 
in some ways, you make it worse in others, and while moving it to Tomcat made 
it better on the front-end, on the back-end it kind of hurts me in performance 
and simplicity.
So the short answers are no and yes, respectively.. but it's not likely to 
happen.

It's amazing how complicated things get when you start caching and compressing 
stuff. It all seems like an easy thing to do. Turn on a couple apache modules, 
parameters, and off you go. But oh wait, this old browser doesn't support 
compression, and shit, my CDN needs this header, and I got a cache at the CDN, 
and a cache in my apache which creates a zillion cache files for one real file 
because I got a Vary header, and then the cache in tomcat, and really how long 
do I have to wait for content to be fresh, and and..

-Tony



Re: chunked encoding

2009-06-12 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Anthony,

On 6/12/2009 1:47 PM, Anthony J. Biacco wrote:
 Well, they used to be static JS files, then we decide we wanted more
 flexibility in the content that went into them, so we stuck them in a
 database and decided to generate them as needed.

Er.. SELECT LENGTH(content), content FROM content_table?

Or are you saying that you get some kind of template from the database
and fill-in the details dynamically.

I have to imagine that you could figure out the length of this data
before you start streaming it back to the client. In that case, you
simply have to provide your own Content-Length header.

This just sounds like you're making it harder than it is.

- -chris
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkoyxOMACgkQ9CaO5/Lv0PD/GQCgwEZm3lMeEoSww1P/4gBysiQi
8lcAnitGUVWQNzCA2LNVT+jwdnAZDQAF
=tS84
-END PGP SIGNATURE-




Re: chunked encoding

2009-06-11 Thread Markus Schönhaber
Anthony J. Biacco:

 Here's my problem. When the request is to a servlet (static apache files
 and JSPs through mod_jk are fine) in the form of a GET, instead of
 sending a Content-Length response header, I get a Transfer-Encoding:
 chunked header
 I'd like to know:
 1) What are the causes of either Tomcat (or Apache is it?) enabling
 chunking on the connection?

Tomcat, probably, since you're talking about a servlet-created response.
And it's not chunking the connection but transferring the response body
chunked.

If at the time the response headers are sent the size of the response
body is already known (for example, if it's just the contents of a
file), it's easy to send a Content-Length response header.
OTOH, how big the output of a servlet will be is generally not known
before the servlet has finished. If you want to send a Content-Length
header anyway, I see two (well, really only one) alternatives:
1. Cache the complete servlet output and count the bytes - which isn't
very practical.[1]
2. Don't send a Content-Length header.
Alternative 2. creates another problem:
With HTTP/1.0 a client can quite reliably determine when the entire
response body is transferred, even if no content-length header is sent:
when the server closes the underlying TCP connection.
With HTTP/1.1 this isn't the case any more since the TCP connection may
be left open to be used to transfer additional requests/responses
(keep-alive). To enable the client to determine when the entire response
was transmitted, you'll have to transfer it chunked.

 2) How do I get a Content-Length reponse header instead? Do I need to
 downgrade the client to HTTP/1.0 or is there another way?

What's the point in caching dynamically created responses?

 FYI, the reason I'm trying to do this is that I use a CDN, and they
 won't cache my data without the presence of a Content-Length response
 header, so my servlet data isn't getting cached at the CDN.

What's a CDN?

[1] Tomcat will, by default, cache some output of servlets. IIRC the
default buffer size is 8k. So, if your servlet creates output of no more
then 8k, a Content-Length header will be sent. Otherwise chunked
encoding will be used.
This might be the reason why you see Content-Length headers from your
JSPs - their output is probably small enough.
-- 
Regards
  mks




RE: chunked encoding

2009-06-11 Thread Anthony J. Biacco
 
  Here's my problem. When the request is to a servlet (static apache files
  and JSPs through mod_jk are fine) in the form of a GET, instead of
  sending a Content-Length response header, I get a Transfer-Encoding:
  chunked header
  I'd like to know:
  1) What are the causes of either Tomcat (or Apache is it?) enabling
  chunking on the connection?
 
 Tomcat, probably, since you're talking about a servlet-created response.
 And it's not chunking the connection but transferring the response body
 chunked.
 
 

The only thing that makes me question this is that if I query the servlet 
directly on port 8080 instead of through mod_jk/ajp, it doesn't get chunked. 
Well, I should say I don't get a Transfer-Encoding header.
But I don't get a Content-Length through there either.

 If at the time the response headers are sent the size of the response
 body is already known (for example, if it's just the contents of a
 file), it's easy to send a Content-Length response header.
 OTOH, how big the output of a servlet will be is generally not known
 before the servlet has finished. If you want to send a Content-Length
 header anyway, I see two (well, really only one) alternatives:
 1. Cache the complete servlet output and count the bytes - which isn't
 very practical.[1]

agreed
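For responses as small as the ones discussed here, though, the buffer-and-count approach is simple enough. A plain-Java sketch of the core idea (in a webapp this would sit in a Filter wrapping the response; the helper below is made up for illustration, not Tomcat code):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Alternative 1 from above: buffer the complete output, count the bytes,
// and only then emit the status line and headers with a Content-Length.
public class BufferAndCount {
    static String respond(String body) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
        buf.write(bytes, 0, bytes.length); // buffer everything first
        return "HTTP/1.1 200 OK\r\n"
             + "Content-Length: " + buf.size() + "\r\n"
             + "\r\n"
             + new String(buf.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(respond("hello"));
    }
}
```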

 2. Don't send a Content-Length header.
 Alternative 2. creates another problem:
 With HTTP/1.0 a client can quite reliably determine when the entire
 response body is transferred, even if no content-length header is sent:
 when the server closes the underlying TCP connection.
 With HTTP/1.1 this isn't the case any more since the TCP connection may
 be left open to be used to transfer additional requests/responses
 (keep-alive). To enable the client to determine when the entire
 response
 was transmitted, you'll have to transfer it chunked.
 

Well, to accomplish #2 I wouldn't have to do anything, since this is my problem 
:-)
I don’t do keep-alives on apache, FWIW. I have it specifically turned off.
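For reference, the chunked framing described above looks like this on the wire; a minimal encoder sketch (one chunk per write, ASCII content assumed so character count equals byte count):

```java
// Minimal chunked transfer-coding framing per HTTP/1.1: each chunk is a
// hex size line, CRLF, the data, CRLF; a zero-size chunk ends the body.
public class ChunkedFraming {
    static String chunk(String data) {
        // Assumes single-byte (ASCII) characters so length() == byte count.
        return Integer.toHexString(data.length()) + "\r\n" + data + "\r\n";
    }

    static String lastChunk() {
        return "0\r\n\r\n"; // terminator: tells the client the body is complete
    }

    public static void main(String[] args) {
        System.out.print(chunk("hello") + lastChunk());
    }
}
```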

  2) How do I get a Content-Length response header instead? Do I need to
  downgrade the client to HTTP/1.0 or is there another way?
 
 What's the point in caching dynamically created responses?
 

While they are dynamically created (from database and other sources), the 
content doesn't change on every request, so internally we set the cache-control 
to something like 300 seconds, or whatever we deem appropriate based on how 
constant the content stays. And the same goes for our CDN, which is a Content 
Delivery Network (limelight, akamai, etc..). With requests in the millions per 
day, we're talking a substantial bandwidth and load savings if we can cache 
this content at the CDN. But according to the CDN, without a content-length 
header, the caching won't happen.
I wonder, if I downgrade the connection to HTTP 1.0, whether it applies to every 
hop. The way the CDN works is that a request is made to it by the client; if it 
has it in its cache, it serves it to the client; if not, it requests the file 
from the origin server (my web server), I'm assuming by some proxy mechanism. 
So if I downgrade to 1.0, will that apply to the connection from the client to 
the CDN, the CDN to me, or both?

  FYI, the reason I'm trying to do this is that I use a CDN, and they
  won't cache my data without the presence of a Content-Length response
  header, so my servlet data isn't getting cached at the CDN.
 
 What's a CDN?
 
 [1] Tomcat will, by default, cache some output of servlets. IIRC the
 default buffer size is 8k. So, if your servlet creates output of no
 more
 then 8k, a Content-Length header will be sent. Otherwise chunked
 encoding will be used.
 This might be the reason why you see Content-Length headers from your
 JSPs - their output is probably small enough.

I tested with an 8K JSP and did get it chunked.
Do you happen to know the parameter for changing the buffer size? Perhaps I can 
increase it to a number representing the largest length of my servlet content, 
which isn't too big, maybe 20K.

Thanx,

-Tony



RE: chunked encoding

2009-06-11 Thread Anthony J. Biacco
 I tested with a 8K jsp and did get it chunked.
 Do you happen to know the parameter for changing the buffer size?
 Perhaps I can increase it to a number representing the largest length
 of my servlet content. Which isn't too big, maybe 20K.

NM on this, I found bufferSize for the AJP connector. I'll test it out.

-Tony



RE: chunked encoding

2009-06-11 Thread Anthony J. Biacco
No dice. I tried a bufferSize of 16384 and an 11K response still got chunked. 
Even tried using packetSize and max_packet_size (mod_jk).

-Tony
---
Manager, IT Operations
Format Dynamics, Inc.
303-573-1800x27
abia...@formatdynamics.com
http://www.formatdynamics.com





Re: chunked encoding

2009-06-11 Thread Markus Schönhaber
Anthony J. Biacco:

 The only thing that makes me question this, is that if I query the
 servlet directly on port 8080 instead of through mod_jk/ajp, it
 doesn't get chunked. Well, I don’t get a transfer-encoding header I
 should say. But I don’t get a content length through there either.

And which HTTP version is used?

 But according to the CDN, without a content-length
 header, the caching won't happen. I wonder if I downgrade the
 connection to http 1.0 if it applies to every hop? The way the CDN
 works is that, a request is made to it by the client, if it has it in
 its cache, it serves it to the client, if not, it requests the file
 from the origin server (my web server), I'm assuming by some proxy
 mechanism. So if I downgrade to 1.0, will that apply to the
 connection from the client to the CDN, the CDN to me, or both?

I don't know. But you could use a network sniffer and check.

 I tested with a 8K jsp and did get it chunked. Do you happen to know
 the parameter for changing the buffer size? Perhaps I can increase it
 to a number representing the largest length of my servlet content.
 Which isn't too big, maybe 20K.

You could try
javax.servlet.ServletResponse#setBufferSize
http://java.sun.com/javaee/5/docs/api/javax/servlet/ServletResponse.html#setBufferSize%28int%29

There may be even a configuration parameter to change Tomcat's default
buffer size globally. But I don't know if there really is one and if so,
which (and I'm too lazy too check atm).

-- 
Regards
  mks




RE: chunked encoding

2009-06-11 Thread Martin Gainty

mks is correct

you can set maxPostSize to a value <= 2097152 for the HTTP Connector in 
$TOMCAT_HOME/conf/server.xml
http://spdn.ifas.ufl.edu/docs/config/http.html

and yes, your connector will need HTTP/1.1 support for chunked encoding
http://spdn.ifas.ufl.edu/docs/config/http.html#Connector%20Comparison

Mit freundlichen Grüßen
Martin 





Re: chunked encoding

2009-06-11 Thread André Warnier

Anthony J. Biacco wrote:

No dice. I tried a bufferSize of 16384 and an 11K response still got chunked. 
Even tried using packetSize and max_packet_size (mod_jk).


I think we need Rainer here.

In the meantime, just as an intellectual exercise, let's take the 
problem from the other end.


A client gets a page (directly or not) from your site.
In that page, is a link to a javascript.
I understand that this link points to the CDN instead of your site.

The client thus requests this javascript from the CDN.

The CDN looks in their cache if they have it.
If they do, they serve it.
If not, they issue a request to your site for it, and your site delivers 
it to the CDN.  The CDN anyway delivers it to the client.
If the response of your site to the CDN is not chunked, they cache it, 
otherwise they don't.


Presumably, you know when a request for such a javascript comes from the 
CDN (as opposed to from a client directly), and you know exactly what 
such a request looks like (I mean, there is pattern to these URLs).


Since you are mentioning mod_jk, I also presume that the CDN sends its 
request to an Apache httpd front-end on your site, which in turn proxies 
it to Tomcat via mod_jk.
And it seems to be AJP/mod_jk which (sometimes) chunks the content prior 
to returning it to Apache.


It seems to me that in such a case, one should be able to do something 
at your Apache httpd front-end level, to de-chunk this response and 
re-create a content-length header, prior to returning it to the CDN.
(As per your earlier message, we're not talking about megabyte 
responses, we're talking about 20 Kb or so).


Maybe Apache httpd could even cache it, which I guess may ensure that it 
is returned non-chunked to the CDN.






RE: chunked encoding

2009-06-11 Thread Caldarale, Charles R
 From: Martin Gainty [mailto:mgai...@hotmail.com]
 Subject: RE: chunked encoding
 
 you can set MaxPostSize to a value  =2097152 for HttpConnector in
 $TOMCAT_HOME/conf/server.xml

Which has absolutely nothing to do with the issue under discussion.

maxPostSize is for processing of POST requests, the topic here is chunked 
*output*.

 - Chuck







RE: chunked encoding

2009-06-11 Thread Anthony J. Biacco
 
 The client thus requests this javascript from the CDN.
 
 The CDN looks in their cache if they have it.
 If they do, they serve it.
 If not, they issue a request to your site for it, and your site
 delivers
 it to the CDN.  The CDN anyway delivers it to the client.
 If the response of your site to the CDN is not chunked, they cache it,
 otherwise they don't.
 

And has a content-length header, but essentially, yes, all true.

 Presumably, you know when a request for such a javascript comes from
 the
 CDN (as opposed to from a client directly), and you know exactly what
 such a request looks like (I mean, there is pattern to these URLs).
 

True

 Since you are mentioning mod_jk, I also presume that the CDN sends its
 request to an Apache httpd front-end on your site, which in turn
 proxies
 it to Tomcat via mod_jk.
 And it seems to be AJP/mod_jk which (sometimes) chunks the content
 prior
 to returning it to Apache.

True, obviously we're assuming AJP/mod_jk is doing the chunking, which Rainer 
could confirm or disconfirm.

 
 It seems to me that in such a case, one should be able to do something
 at your Apache httpd front-end level, to de-chunk this response and
 re-create a content-length header, prior to returning it to the CDN.
 (As per your earlier message, we're not talking about megabyte
 responses, we're talking about 20 Kb or so).
 

That'd be ideal, yes. I haven't found any such parameters in Apache so far 
though.
Hence the idea about downgrading to HTTP 1.0. But that still doesn't get me the 
content-length header (which in itself is strange), though I could (although I'm 
sure to get yelled at for it) fake the content-length header with something in 
Apache like: Header add Content-Length 5
Where 5 is some number larger than my largest possible response. Again, 
probably not the greatest idea.

 Maybe Apache httpd could even cache it, which I guess may ensure that
 it
 is returned non-chunked to the CDN.

I in fact, do use mod_mem_cache to cache this data (in production). For the 
purposes of testing, I have it turned off in dev.

-Tony

 
 



Re: chunked encoding

2009-06-11 Thread André Warnier

Anthony J. Biacco wrote:



That'd be ideal, yes. I haven't found any such parameters in Apache so far 
though.


I wasn't necessarily thinking about an existing parameter or module.
More of a custom add-on, which would make the request to Tomcat, buffer 
the response, and return it in one chunk with a proper content-length 
header to the CDN caller.
It would be something quite easy to prototype with mod_perl and test the 
concept.
That would of course only be worth the effort if fixing this would 
require a major effort at the Tomcat or AJP/mod_jk level (which it might 
well be).







Re: chunked encoding

2009-06-11 Thread André Warnier

Maybe something else worth trying..

I think you mentioned earlier that this did not happen when you accessed 
the link directly via the Tomcat HTTP connector.


Since at the Apache level, you can recognise those calls, why don't you 
try to proxy those calls specifically via mod_proxy_http, to the (or 
a) Tomcat HTTP connector ?

(and let other calls still go though mod_jk).




RE: chunked encoding

2009-06-11 Thread Anthony J. Biacco
It turned out I just wasn't using a response big enough. Once I did something 
like 10K I then got a chunked header from Tomcat.

-Tony
---
Manager, IT Operations
Format Dynamics, Inc.
303-573-1800x27
abia...@formatdynamics.com
http://www.formatdynamics.com





Re: chunked encoding

2009-06-11 Thread Tim Funk

http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html

3.6.1
All HTTP/1.1 applications MUST be able to receive and decode the 
chunked transfer-coding, and MUST ignore chunk-extension extensions 
they do not understand.


So you have to jump through big hoops to not use chunked encoding.

[IIRC - This thread had to do with a CDN not caching due to chunked 
encoding. A good CDN should be able to cache content if you pass the 
appropriate cache friendly headers. (Like Etag, expires, etc) And handle 
the chunked encoding for you.]
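A minimal decoder for the framing the RFC mandates, as a sketch of what every HTTP/1.1 receiver has to do (it skips chunk extensions and trailer headers, and assumes single-byte characters):

```java
public class ChunkedDecoder {
    // Decode a chunked transfer-coded body: repeatedly read a hex chunk
    // size line, then that many bytes, until a zero-size chunk ends it.
    static String decode(String body) {
        StringBuilder out = new StringBuilder();
        int pos = 0;
        while (true) {
            int eol = body.indexOf("\r\n", pos);
            String sizeLine = body.substring(pos, eol);
            int semi = sizeLine.indexOf(';'); // strip any chunk extension
            if (semi >= 0) sizeLine = sizeLine.substring(0, semi);
            int size = Integer.parseInt(sizeLine.trim(), 16);
            pos = eol + 2;
            if (size == 0) break; // final chunk: body is complete
            out.append(body, pos, pos + size);
            pos += size + 2; // skip the chunk data and its trailing CRLF
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("b\r\nhello world\r\n0\r\n\r\n"));
    }
}
```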


-Tim

Anthony J. Biacco wrote:

No dice. I tried a bufferSize of 16384 and an 11K response still got chunked. 
Even tried using packetSize and max_packet_size (mod_jk).







Re: chunked encoding

2009-06-11 Thread André Warnier

Anthony J. Biacco wrote:

It turned out I just wasn't using a response big enough. Once I did something 
like 10k I then got a chunked header from tomcat.


Ok, so it isn't mod_jk/AJP specifically, it's deeper.
It was a bit to be expected, since the server has no real way to know 
when your servlet is going to stop sending more bytes.

Well, that leaves the Apache module solution.

Maybe your idea of making this be a HTTP 1.0 request, or say set 
whatever internal flag Tomcat would itself set if it had been an HTTP 
1.0 request.  Perhaps a servlet filter is soon enough, or if not, a Valve.
Provided that would do the trick, it is also something you could do at 
the Apache level, before proxying to Tomcat.
I am wondering about possible side-effects though.  The chunked encoding 
is probably not the only difference between 1.0 and 1.1. For example, if 
your Tomcat has Virtual Hosts, it may be an issue.


Now, maybe another stupid question: do you /have/ to generate these 
javascripts with Tomcat? Couldn't your front-end generate them by itself?







Re: chunked encoding

2009-06-11 Thread Bill Barker

André Warnier a...@ice-sa.com wrote in message 
news:4a317d8d.3060...@ice-sa.com...
 Anthony J. Biacco wrote:
 No dice. I tried a bufferSize of 16384 and an 11K response still got 
 chunked. Even tried using packetSize and max_packet_size (mod_jk).

 I think we need Rainer here.


No, the various AJP Connectors don't auto-set the content-length header 
(unlike the HTTP/1.1 Connectors).  They just pass on (or not) the value set 
by the Servlet. 



