At 04:53 3/5/2007, you wrote:
Peter Kennard wrote:
At 23:07 3/4/2007, you wrote:
But since you can't send the response without finishing the reading of the input stream - the entire question doesn't seem to make sense.
If the input pipe is slow (e.g. a cellphone on a slow link) and you are sending a transaction whose first part initiates a big operation (like a database lookup), the lookup can be happening while the rest of the data is still being read in. In other words, it is useful to be able to read input in small chunks as it comes in, and the client can be tuned to chunk appropriately for the transaction(s).
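To make the idea concrete, here is a minimal sketch of incremental reading: a loop that consumes the stream in deliberately small pieces so that work can start on the first piece before the last one arrives. A `ByteArrayInputStream` stands in for the servlet's request stream (in a real servlet you would call `request.getInputStream()` instead); the class and method names here are illustrative, not any real API.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ChunkReader {
    // Read the stream in small pieces and hand each piece off as soon
    // as it arrives, instead of buffering the whole body first.
    static int processIncrementally(InputStream in) throws IOException {
        byte[] buf = new byte[4];          // deliberately tiny buffer
        int total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            // In a servlet, this is where the expensive work (e.g. the
            // database lookup) would be kicked off for the first piece,
            // while later pieces are still in flight on the slow pipe.
            total += n;
        }
        return total;                      // bytes consumed
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a slow request body.
        InputStream fake = new ByteArrayInputStream("hello slow pipe".getBytes());
        System.out.println(processIncrementally(fake));   // 15
    }
}
```

The point is only that `read()` returns as soon as *some* bytes are available, so the servlet thread is free to act on them immediately.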


It's not really useful for Tomcat though, given that the server is designed to be a Servlet Container rather than a multipurpose network application.

Yes - it's a matter of what is *considered* useful in the context.
HTTP is being forced to evolve.

Tomcat mainly handles two cases: 1) read headers & THEN send response e.g. GET, 2) read headers & process body, THEN send response e.g. POST.

I will let it lie after this rant - I have enough info for now, and I thank list participants for their answers :)

Right - Tomcat was initially designed wholly around the HTTP/1.0 "single IP connection hit" paradigm and disavowed the feature of having a bidirectional connection. In essence HTTP made UDP out of TCP, and in the process may have inadvertently scuttled IPV2. This was a major flaw in the initial HTTP conception, and as the IETF and W3C move forward, things like "Transfer-Encoding: chunked" have been added, which, along with the evolution of HTTP proxies and load balancers, has led to the absurdity of what one might call "HTTP URL Addressed - High Speed Routers" as a major part of the infrastructure.

available() may work for this, depending on Tomcat's buffering scheme and
...
how chunks may be juggled by any proxy or other front end. I am simply dealing with how you *can* handle them on the receiving end.

Why would the servlet API need to do that, when chunking is something that happens during the response rather than the request?

It *can* happen both ways as defined by the HTTP/1.1 spec; it's just that current browsers don't support it. Tomcat HAS [I might say partial] support for it in the HTTP/1.1 connector now, because it was "required" to build practical proxy "routers" and load balancers. Some resources I consider a valuable part of the "Tomcat infrastructure" I want to leverage for this project.
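For reference, the chunked framing itself is simple: each piece is prefixed with its size in hexadecimal plus CRLF, and a zero-length chunk terminates the body. A minimal sketch of a sender's framing (the class and method names are made up for illustration; this ignores trailers and chunk extensions):

```java
public class ChunkedFraming {
    // Frame body pieces the way an HTTP/1.1 sender would with
    // "Transfer-Encoding: chunked": hex size, CRLF, data, CRLF,
    // then a zero-length chunk to terminate the stream.
    static String frame(String... pieces) {
        StringBuilder sb = new StringBuilder();
        for (String p : pieces) {
            sb.append(Integer.toHexString(p.length())).append("\r\n")
              .append(p).append("\r\n");
        }
        sb.append("0\r\n\r\n");            // terminating zero-length chunk
        return sb.toString();
    }

    public static void main(String[] args) {
        // Two pieces of a request body, sent as they become available.
        System.out.print(frame("query=", "12345"));
        // 6\r\nquery=\r\n5\r\n12345\r\n0\r\n\r\n
    }
}
```

Nothing in the framing is direction-specific, which is why the spec allows chunked request bodies as well as responses.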

Your analysis is from the point of view of someone who's (if you'll forgive the analogy) trying to force a car to work like a canoe.

In some sense yes, but then almost all uses of HTTP are doing this to one extent or another, because initially HTTP was so shallowly conceived. I would say I'm trying to make a Mississippi riverboat into a hydrofoil and keep the ballroom :)

Given that, I'd suggest that if your app client is sending a large amount of data that can be processed in discrete chunks, you might as well split it (the request) into a series of separate smaller requests.

Actually in this case it is a small amount of data over a slow pipe, but in a lot of small pieces, and in this case a "hit" becomes high overhead. The HTTP hacked-on solution to a similar problem, the many-hit hyperlinked page, is to use "keep alive", duplicate really fat HTTP headers for hundreds of "multiplexed" hits coming back from complexly linked-up pages of frames and tables, and have this integrated into the socket handling of both browsers and servers.

If you've got control of the client you could set an X-Header that indicates its position in the series, and another that groups them.
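A sketch of the server side of that scheme: collect pieces keyed by a position header, and report the first gap so the client can be told what to resend. The header names (`X-Series-Id`, `X-Series-Pos`) and the whole class are hypothetical - the client and server just have to agree on them; none of this is part of the servlet spec.

```java
import java.util.Map;
import java.util.TreeMap;

public class SeriesAssembler {
    // Hypothetical header names the client and server agree on.
    static final String GROUP = "X-Series-Id";
    static final String POS   = "X-Series-Pos";

    private final Map<Integer, String> parts = new TreeMap<>();
    private final int expected;

    SeriesAssembler(int expected) { this.expected = expected; }

    // Server side: store each piece under its position header value.
    void accept(int pos, String body) { parts.put(pos, body); }

    // First missing position, or -1 if the series is complete -- this
    // is what the server would echo back so the client can resend.
    int firstMissing() {
        for (int i = 0; i < expected; i++)
            if (!parts.containsKey(i)) return i;
        return -1;
    }

    // TreeMap iterates in key order, so this reassembles correctly
    // even if pieces arrived out of order.
    String assembled() { return String.join("", parts.values()); }

    public static void main(String[] args) {
        SeriesAssembler s = new SeriesAssembler(3);
        s.accept(0, "foo");
        s.accept(2, "baz");
        System.out.println(s.firstMissing());  // 1
        s.accept(1, "bar");
        System.out.println(s.assembled());     // foobarbaz
    }
}
```

This is the robustness win mentioned below: out-of-order or dropped pieces are detectable per group rather than failing the whole transaction.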

Understood - yes, a "session" ID to maintain state if that is needed.

At least then you gain some robustness and your server can indicate to the client if it's missing one of the series.

Yes I understand these issues *all*too* well, hence why I want to do this ;^>

Having said all that, though, I'd have started from scratch or built a web service as I'm not sure what I'd really be gaining by using Tomcat.

Actually a whole lot, and the people involved should be proud it is so useful, robust, and well documented. Even if I have to write a "custom protocol" handler for the front end for some ports.

Tomcat has already built a very good, administrable server-side application structure, packaging scheme, remote deployment system, developer administration and access scheme, application manager web UI, and a "standards based remote platform independent UI system", i.e. HTTP/1.1 support + JSP + libraries for use of web desktops. And if I remain AJP compliant internally, I can take advantage of load balancers and a lot of other "off the shelf" stuff. This includes all the O'Reilly books and documentation, the pool of available personnel who know how to use it, etc. (and this list, BTW). I can hire people and, with an API supported by a subclass of Servlet, integrate them with IDE support, Eclipse plugins, etc. to work on creating multiple modules quickly.

Because of this I can much more quickly get a complex system up with a team, even if I have to "tweak" it a bit, because 90% of it remains extremely useful unchanged. If the complex system in the long run is a killer, then we will have the time to rebuild big chunks of it as custom modules if needed. And I can donate to the Apache/Jakarta/Debian etc. folks.

*****

It is interesting that HTTP is in a sense "tunneled" inside TCP/IP.
And what I am doing for some ports is tunneling "our stuff" inside HTTP due to what one might call "market" or ecological economic forces.

I posted this a long time ago on a thread about firewalls and the way things seem to evolve. The world is a whacky place. You might find it amusing.
PK

*****
Posted 2002 on another list:

(and some ISPs) -- the Cisco VPN solution promises to "get around that" by actually using HTTP on port 80 to get past most firewalls -- but as firewalls mature and move from simple port-based to protocol-based filtering, these too will start making VPN from a client site harder as the non-HTTP traffic on those ports gets dropped. In the end, a customer is going to protect their

hmmmm - port 80

It's an interesting evolution.

In the beginning there was the IP address,
People wanted to do more than one thing with a server, so they invented ports, with 64k identifiers to indicate what they were asking for.

But some people wanted to do nasty things with some ports so people invented firewalls that restricted which ports were usable. Now many only allow one.

But people still want to do many things, and now want to use port 80 for all of them. So people put large-bytage HTTP headers on port 80 messages to indicate what they want to do.

Some common things people will do with HTTP will not be what is desired,
so firewalls will now restrict which HTTP headers are allowed (and they are STATEFUL [buzzz] as well).

We will of course be in the same place, but with a large useless baggage header (old versions of the request or type differentiator(s)) which has now been narrowed to one possibility. But at the highest-level protocol there will still be the desire to "tunnel", which will require an additional differentiator :)

Maybe this process is why there are so many "useless" genes in our cells :O

Peter K





---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
