On Mon, Oct 01, 2001 at 10:13:16PM -0700, Ian Holsman wrote:
> * Simple 'GET/Post/Head' function just does it and returns
>   the contents in a bucket-brigade

Sure.  I'd like to make it able to speak any protocol that
has similar functionality to HTTP but isn't quite HTTP/1.1 - i.e. 
try to be as protocol-agnostic as we can make it.  If HTTP 2.0 is 
ever developed (rather when), I'd like to see this client API be 
able to handle that as well.  (This was one of the goals of flood -
to be able to rip out the protocol engine and replace it with the
next generation whenever we have a spec for it - most likely we'd
be the first ones on the block with an implementation...)

> * Async version of above.. (like LWP::Parallel) I just push
>   GET requests and it notifies me when there is an event for
>   one of them.

I'm just not sure that we really want to make the API event-based.
I'm really not a fan of event-based or callback-based programming
(if you want event-based, use libwww in its various incarnations).
I think it makes things too confusing.  I much prefer the
parallelism to be at the thread-level rather than select/poll-based.
I know that Dean has spoken at length on new-httpd about this
paradigm and for the most part httpd/apr is not event-driven.  I 
also find that most web tasks don't need to be event-driven - 
they are all essentially parallel tasks.  I think you sacrifice 
too much of the API to get callbacks and async.
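To make the thread-level model concrete, here's a minimal sketch of
thread-per-request with POSIX threads.  Everything here is made up for
illustration - `fetch_url` is a stub, not a real serf/apr call - the
point is just that "completion" is a thread exiting, with no event loop
or callback registration anywhere:

```c
#include <pthread.h>
#include <stdio.h>

/* Placeholder for a blocking HTTP fetch; a real client would do
 * connect/send/recv here.  Each thread runs one request to
 * completion, so no select/poll loop is needed. */
struct request {
    const char *url;
    int status;          /* filled in by the worker */
};

static void *fetch_url(void *arg)
{
    struct request *req = arg;
    req->status = 200;   /* pretend every fetch succeeds */
    return NULL;
}

/* Run up to 16 requests in parallel, one thread each. */
int run_parallel(struct request *reqs, int n)
{
    pthread_t tids[16];
    int i;

    if (n > 16)
        return -1;
    for (i = 0; i < n; i++)
        pthread_create(&tids[i], NULL, fetch_url, &reqs[i]);
    for (i = 0; i < n; i++)
        pthread_join(tids[i], NULL);   /* completion == thread exit */
    return 0;
}
```

The caller just fills an array of requests and joins; error handling
per-request lives in the `request` struct rather than in callbacks.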

> * filters used for 'serf' as similiar in syntax/use to HTTPD

One thing I'd like to see is that we use the same filter framework 
that is already in httpd.  This is good in two respects - we force 
ourselves to clean up the filtering API in httpd (most of us are
active in httpd as well) and we can reuse the same code in both 
places.  This would mean that all of the filter code may need 
to get moved to a neutral repository.  Not sure I know where 
it'd go or if this is even a good idea.  But, most likely, any
filters that we write will be implemented in both places - that
seems a bit silly.

> * HTTP/1.1 Support

As Greg said, we should only send out HTTP/1.1 requests.  This is
the way we did it with flood.  The only thing we don't handle
correctly as part of HTTP/1.1 is chunking.  And, I've got a start
on it in my tree, but I got sidetracked by the input filtering
in httpd and now this.
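The chunked coding itself is small; the fiddly part is doing it
incrementally inside a filter.  A stand-alone sketch of just the
decoding step (RFC 2616 3.6.1), assuming the whole body sits in one
buffer and ignoring trailers - nothing here is flood's or httpd's
actual code:

```c
#include <stdlib.h>
#include <string.h>

/* Decode an HTTP/1.1 chunked body held entirely in `in`.  Writes the
 * de-chunked payload into `out` and returns its length, or -1 on a
 * malformed stream.  A real input filter would have to cope with a
 * chunk-size line split across reads; this sketch assumes one
 * contiguous buffer and skips any chunk extensions. */
int dechunk(const char *in, char *out)
{
    int total = 0;

    for (;;) {
        char *end;
        long len = strtol(in, &end, 16);   /* chunk-size in hex */

        if (end == in || len < 0)
            return -1;
        in = strstr(end, "\r\n");          /* end of the size line */
        if (in == NULL)
            return -1;
        in += 2;
        if (len == 0)
            return total;                  /* last-chunk */
        memcpy(out + total, in, len);
        total += len;
        in += len;
        if (strncmp(in, "\r\n", 2) != 0)   /* CRLF after chunk-data */
            return -1;
        in += 2;
    }
}
```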

> * SSL Support

flood already does this with OpenSSL, so I think it should be
straightforward as most of the heavy lifting has already been
done.  Client certs might need to be added, but that's not too
hard.

> * Connection Pooling per server (ie we keep a max of n open connections
>   to a server and we re-use these on different requests)

Eh, I'm not terribly sure about this though.  I'm not sure how you
would get the lifetimes right.  Ideally, the connections should 
timeout themselves.  I guess we could enforce the lifetimes with 
socket timeouts (but that'd only be on our side not the server 
side).  I think that if you want connection-pooling, that *might* be 
outside of the scope of this library.  But, I'm not really sure.
Thoughts?

FWIW, I think RFC 2616 8.1.4 says that you should only keep two 
connections per server open at any time for single-client machines.
Proxy<->Proxy connections may be at 2*N (N == number of concurrent 
users) per server.  I don't expect that we should have knowledge
of this within apr-serf - I just think this is something left to
the program that uses apr-serf.  My $.02.
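Either way, the bookkeeping for the 8.1.4 limit is small.  A toy
sketch of the cap, with connections stubbed as a counter and all names
invented for illustration (no reuse or timeout handling, which is the
actual hard part raised above):

```c
#include <string.h>

#define MAX_PER_SERVER 2   /* RFC 2616 8.1.4 guideline, single client */

struct conn_pool {
    char server[64];
    int open;              /* currently-open connections to server */
};

/* Try to open one more connection to `server`; returns 0 on success
 * or -1 if we're already at the limit.  A real pool would reuse idle
 * connections and expire them on timeout instead of just counting. */
int pool_acquire(struct conn_pool *p, const char *server)
{
    if (strcmp(p->server, server) != 0) {
        /* different server: start a fresh count */
        strncpy(p->server, server, sizeof(p->server) - 1);
        p->server[sizeof(p->server) - 1] = '\0';
        p->open = 0;
    }
    if (p->open >= MAX_PER_SERVER)
        return -1;
    p->open++;
    return 0;
}

void pool_release(struct conn_pool *p)
{
    if (p->open > 0)
        p->open--;
}
```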

> * send requests to a server from a list of servers/ports (via round-robin)

Oh, Flood does this.  =) -- justin
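For reference, the round-robin selection itself is tiny - something
like the following sketch (not flood's actual code; the names are made
up):

```c
/* Cycle through a fixed list of "host:port" strings, one per
 * request, wrapping back to the first after the last. */
struct server_ring {
    const char **servers;
    int count;
    int next;              /* index of the server to hand out next */
};

const char *ring_next(struct server_ring *r)
{
    const char *s = r->servers[r->next];
    r->next = (r->next + 1) % r->count;
    return s;
}
```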
