Dean Michael Berris wrote:
> Hi John,
>
> On Tue, Jan 13, 2009 at 2:02 PM, John P. Feltz <[email protected]> wrote:
>   
>> Dean Michael Berris wrote:
>>     
>>> Hi John,
>>>
>>> On Mon, Jan 12, 2009 at 11:50 PM, John P. Feltz <[email protected]> 
>>> wrote:
>>>
>>>       
>>>> This would replace get() and other applicable base post/put/del/head's.
>>>>
>>>>
>>>>         
>>> Are you sure you really want to make the APIs more complicated than
>>> they currently stand?
>>>
>>>
>>>       
>> In this case: Yes, and this is only the tip of the iceberg.
>>     
>
> I think we may be running into some needless rigidity or complexity
> here. Let me address some of the issues you've raised below.
>
>   
>>> I like the idea of doing:
>>>
>>> using http::request;
>>> using http::response;
>>> typedef client<message_tag, 1, 1, policies<persistent, caching,
>>> cookies> > http_client;
>>> http_client client_;
>>> request request_("http://boost.org");
>>> response response_ = client_.get(request_);
>>>
>>> And keeping the API simple.
>>>
>>>       
>> I do not want to confuse things by trying to equate the above exactly
>> with basic_client in my branch. First, from implementation experience,
>> there is no such thing as a 1.1 or 1.0 client. At best you have an old
>> 1.0, a revised 1.0, and 1.1. The specifications this basic_client
>> works from are the only feasible way to identify and delineate its
>> behavior. This is why in my branch they take the form of policies
>> based on either one specification or a certain combination of them.
>>
>>     
>
> I agree that based on experience there aren't strictly 1.0 or 1.1
> clients -- but the original intent of the library was to keep it
> simple: so simple in fact that much of the nuances of the HTTP
> protocol are hidden from the user. The aim is not to treat users
> as "stupid" but to give them a consistent and simple API around
> which they can write their more important application logic.
>
>   
I'm not suggesting getting rid of this simple API. RFC-based policies 
are crucial as a foundation for defining client behavior at a certain 
level; I'm suggesting that level be the implementation. A wrapper can 
be provided with certain predefined options, be those policies or 
non-policies, to do what was the original design goal for the client class.
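The split being proposed (policy-driven implementation underneath, a simple wrapper with predefined choices on top) can be sketched roughly as follows. All names here are illustrative stand-ins, not the branch's actual classes, and the "request" is a toy string rather than real I/O:

```cpp
#include <string>

// Hypothetical low-level client: behavior is fixed entirely by an
// RFC-based policy chosen at compile time.
template <class RfcPolicy>
struct basic_client_sketch {
    std::string get(std::string const & url) {
        return RfcPolicy::process("GET", url);
    }
};

// One concrete policy: an RFC 1945 (HTTP/1.0) request strategy.
struct rfc1945_policy {
    static std::string process(std::string const & method,
                               std::string const & url) {
        return method + " " + url + " HTTP/1.0";
    }
};

// The simple wrapper the original design goal asked for: predefined
// policy choices, minimal surface area for the casual user.
struct simple_http_client {
    std::string get(std::string const & url) {
        basic_client_sketch<rfc1945_policy> impl;
        return impl.get(url);
    }
};
```

The point is that users who need to target a particular specification reach for the lower layer, while everyone else sees only the wrapper.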
> I appreciate the idea of using policies, but the idea of the policies
> as far as generic programming and design goes allows for well-defined
> points of specialization. Let me try an example (untested):
>
> template <
>   class Tag,
>   class Policies,
>   unsigned version_major,
>   unsigned version_minor
> >
> class basic_client : public mpl::inherit_linearly<Policies>::type {
>   private:
>     typedef typename mpl::inherit_linearly<Policies>::type base;
>   // ...
>   public:
>   basic_response<Tag> get(basic_request<Tag> const & request) {
>     shared_ptr<socket> socket_ = base::init_connection(request);
>     return base::perform_request(request, "GET");
>   }
> };
> Here 'get' dispatches to whatever implementation of init_connection
> the chosen policies provide. Of course this assumes that the policies
> used are orthogonal. Design-wise it would be nice to enforce at
> compile-time the concepts of the policies valid for the basic_client,
> but that's for later. ;)
>   
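The linearized-inheritance idea quoted above can be sketched without Boost.MPL, e.g. with C++11 variadics; every name below is illustrative rather than the library's, and the policy bodies are toy stubs:

```cpp
#include <string>

// Minimal stand-in for mpl::inherit_linearly: fold the policy list
// into a single base class by chaining inheritance.
template <class... Policies> struct linearize {};
template <class Head, class... Tail>
struct linearize<Head, Tail...> : Head, linearize<Tail...> {};

struct connection_policy {
    std::string init_connection(std::string const & host) {
        return "connected:" + host;
    }
};

struct request_policy {
    std::string perform_request(std::string const & method) {
        return method + " / HTTP/1.1";
    }
};

// The client inherits every policy, so get() can call whichever
// init_connection / perform_request the chosen policies supply.
template <class Base>
struct client_sketch : Base {
    std::string get(std::string const & host) {
        this->init_connection(host);          // from connection_policy
        return this->perform_request("GET");  // from request_policy
    }
};
```

Swapping a policy in the `linearize<...>` list changes the client's behavior without touching `client_sketch` itself, which is the extension point being argued for.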
Of the necessary concerns, I see the need to provide one for supplying 
the resolved endpoints, the sockets, and the logic governing both. These 
seem non-orthogonal, so a facade is more appropriate for right now. 
Additionally, I don't write code through the lens of a policy-based 
strategy until I've seen what the implementation requires.
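A facade of the kind described might look like the following toy sketch: resolution, socket supply, and the logic tying them together sit behind one interface rather than three independent policies. There is no real DNS or socket here, and all names are hypothetical:

```cpp
#include <map>
#include <string>

class connection_facade {
    std::map<std::string, std::string> resolved_; // host -> address cache
  public:
    std::string resolve(std::string const & host) {
        // Pretend DNS: cache the answer so a second lookup is a hit.
        std::map<std::string, std::string>::iterator it = resolved_.find(host);
        if (it != resolved_.end()) return it->second;
        std::string address = "10.0.0.1"; // placeholder lookup result
        resolved_[host] = address;
        return address;
    }
    // Socket acquisition depends on resolution, which is why the two
    // concerns sit behind a single interface here instead of being
    // decomposed into orthogonal policies.
    std::string open_socket(std::string const & host) {
        return "socket->" + resolve(host);
    }
};
```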
> The lack of an existing HTTP Client that's "really 1.0" or "really
> 1.1" is not an excuse to come up with one. ;-)
>   
I'm sure users wishing to optimize requests to legacy servers, or to 
servers which do not implement the complete 1.1 spec, will find that 
rather frustrating, especially since it is something so simple to 
design for (see branch).
> So what I'm saying really is, we have a chance (since this is
> basically from scratch anyway) to be able to do something that others
> have avoided: writing a simple HTTP client which works when you say
> it's really 1.0, or that it's really 1.1 -- in the simplest way
> possible. The reason they're templates is that if you don't like how
> they're implemented, you can specialize them to your liking. :-)
>   
This is a fine goal; however, assuming that the simplest design will 
address what I believe to be all the complex concerns is flawed. Let us 
solve the complex issues _first_, then simplify. There is nothing wrong 
with basic_client's API in principle, though I think it should address 
complex users' needs first, and so delegating that class to single 
request processing, plus a wrapper incorporating certain policies with 
predefined behavior, is the best course for now.
>> Speaking of icebergs, while I do appreciate the original intentions of
>> simplicity behind the client's API, due to expanding implementation
>> concerns and overlooked error-handling issues, this view might warrant
>> changing. Consider the case where a prior-cached connection fails: a
>> connection which was retrieved from a supplier external to the client,
>> with the supplier possibly shared by other clients. This poses a
>> problem for the user. If the connection was retrieved and used as part
>> of a forwarded request several levels deep, the resulting error isn't
>> going to be something easily identifiable or managed. While this is
>> perhaps a case for encapsulating the client completely, it all depends
>> on how oblivious we expect users of this basic_client to be. At the
>> moment, I had planned for the next branch release that auto-forwarding
>> and dealing with persistent connections be removed from the
>> basic_client. Instead, an optional wrapper would perform the "driving"
>> of a forwarded request and additionally encapsulate the
>> connection/resolved cache. This would take the shape of your previous
>> source example, and I don't see this as a significant change. If this
>> could be done at the basic_client level through a policy configuration
>> then I would support that as well; however, for _right now_, I don't
>> see an easy way to do that.
>>     
>
> Actually when it comes to error-handling efficiency (i.e. avoiding
> exceptions which I think are perfectly reasonable to have), I would
> have been happy with something like this:
>
> template <...>
> class basic_client {
>   basic_response<Tag> get(basic_request<Tag> const & request);
>   tuple<basic_response<Tag>, error_code>
>   get(basic_request<Tag> const & request, no_throw_t (*)());
> };
>
> This way, if you call 'get' with the nothrow function pointer
> argument, then you've got yourself covered -- and instead of a
> throwing implementation, the client can return a pair containing the
> (possibly empty) basic_response<Tag> and a (possibly
> default-constructed) error_code.
>   
I don't see where function pointers belong in a client which intends to 
remain simple. I'm also against a no-throw parameter because it is less 
explicit. If an exception is thrown, the user knows there's an issue 
and a valid response was not returned. If a response is returned either 
way, then it is easier for the user to miss.
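For comparison, the overload-based error handling under discussion resembles the convention Boost.Asio uses: a throwing form plus a non-throwing form taking an error_code out-parameter. The toy below is a sketch of that pattern only; `error_code`, `get`, and the "response" strings are stand-ins, not the library's API:

```cpp
#include <stdexcept>
#include <string>

// Toy error_code standing in for boost::system::error_code.
struct error_code {
    int value;
    error_code() : value(0) {}
};

// Shared implementation; an empty URL plays the role of a failure.
std::string get_impl(std::string const & url, error_code & ec) {
    if (url.empty()) { ec.value = 1; return std::string(); }
    ec.value = 0;
    return "response-for:" + url;
}

// Throwing form: callers cannot silently miss a failure.
std::string get(std::string const & url) {
    error_code ec;
    std::string response = get_impl(url, ec);
    if (ec.value) throw std::runtime_error("request failed");
    return response;
}

// Non-throwing form: failure is reported through the out-parameter
// rather than a tag or function-pointer argument.
std::string get(std::string const & url, error_code & ec) {
    return get_impl(url, ec);
}
```

The out-parameter keeps the non-throwing call explicit at the call site, which is the property being debated here.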
> About expecting the users to be oblivious, yes this is part of the
> point -- I envisioned not making the user worry about recoverable
> errors from within the client. There's even a way to make this happen
> without making the implementation too complicated. I can say I'm
> working on my own refactorings, but I'm doing it as part of Friendster
> at the moment so I need to clear them first before releasing as open
> source.
>
> It's more or less what I'd call a "just work" API -- and in case of
> unrecoverable failures, throw/return an error.
>
>   
>>>> The deviations would be based off two criteria:
>>>> -The specification(ie: rfc1945) by which the request_policy processes
>>>> the request (it's coupled with policy)
>>>>
>>>>         
>>> The get/put/head/delete/post(...) functions don't have to be too
>>> complicated. If it's already part of the policies chosen, we can have
>>> a default deviation as part of the signature. At most we can provide
>>> an overload to the existing API instead of replacing the simple API
>>> we've already been presenting.
>>>
>>> The goal is really simplicity more than sticking hard to standards.
>>>
>>>
>>>       
>> A default is fine.
>>     
>>>> -In cases that, while still allowing processing of a get/post() etc,
>>>> would do something counter to what the user expects from the interface,
>>>> such as a unmatched http version or persistence.
>>>>
>>>>
>>>>         
>>> Actually, if you notice HTTP 1.1 is meant to be backwards compatible
>>> to HTTP 1.0. At best, you just want to make the version information
>>> available in the response and let the users deal with a different HTTP
>>> version in the response rather than making the library needlessly
>>> complicated in terms of API.
>>>
>>>       
>> If the user receives a version that is out of spec, in many cases they
>> have a strong reason not to complete the request. This is important for
>> both efficiency and compliance.
>>     
>
> Actually, there's nothing in the HTTP 1.0 spec that says a response
> that's HTTP 1.x where x != 0 is an invalid response. There's also
> nothing in the spec that says that HTTP 1.1 requests cannot be
> completed when the HTTP 1.0 response is received.
>   
These are cases that I'm not concerned with.
> There are however some servers which will not accept certain HTTP
> versions -- sometimes you can write an HTTP Server that will send an
> HTTP error 505 
> (http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.6)
> when it doesn't support the version you're trying to use as far as
> HTTP is concerned. I don't know of any servers though which will not
> honor HTTP 1.0 client requests and respond accordingly.
>
> BTW, if you notice, HTTP 1.1 enables some new things that were not
> possible with HTTP 1.0 (or were technically unsupported) like
> streaming encodings and persistent connections / request pipelining on
> both the client and server side. The specification is clear about what
> the client should and should not do when certain things happen in
> these persistent connections and pipelined request scenarios.
>
> So I don't see why strict compliance wouldn't be possible with the
> current API and the current design of the client.
>
>   
>>> I can understand the need for the asynchronous client to be a little
>>> bit more involved (like taking a reference to an io_service at
>>> construction time and allowing the io_service object to be run
>>> externally of the HTTP client) and taking functions that deal with raw
>>> buffers or even functions that deal with already-crafted
>>> http::response objects. However even in these situations, let's try to
>>> be consistent with the theme of simplicity of an API.
>>>
>>>       
>> I have no comment regarding async usage, as I've not looked into that
>> issue in depth; I've only tried to make existing policy functions as
>> re-usable as possible for that case.
>>     
>
> Which I think is the wrong approach if you ask me.
>
>   
By "existing" I was referring to stateless items such as the append_* 
functions in the branch. If these can't be used asynchronously I would 
be curious as to why. I've expended no other effort in this regard.
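A helper in the spirit of those append_* functions might look like the sketch below (the branch's real signatures may differ). Because it touches only its arguments, it carries no per-client state, which is what would let synchronous and asynchronous code paths share it without locking:

```cpp
#include <string>

// Hypothetical stateless header-append helper: writes one header line
// into the caller-supplied buffer and nothing else.
inline void append_header(std::string & buffer,
                          std::string const & name,
                          std::string const & value) {
    buffer += name + ": " + value + "\r\n";
}
```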
> The idea (and the process) of generic programming will usually lead
> you from specific to generic. My best advice (?) would be to implement
> the specific first, write the tests/specifications (if you're doing
> Behavior Driven Development) and then refactor mercilessly in the end.
>
> The aim of the client is to be generic as well as compliant up to a
> point. HTTP 1.0 doesn't support persistent connections, although
> extensions were made with 'Connection: Keep-Alive' and the
> 'Keep-Alive: ...' headers -- these can be supported, but they're going
> to be different specializations of the basic_client.
>
>   
>>> I particularly don't like the idea that I need to set up deviations
>>> when I've already chosen which HTTP version I want to use -- and that
>>> deviations should complicate my usage of an HTTP client when I only
>>> usually want to get the body of the response instead of sticking hard
>>> to the standard. ;)
>>>
>>>       
>> A default is fine.
>>     
>>> If you meant that these were to be overloads instead of replacements
>>> (as exposed publicly through the basic_client<> specializations) then
>>> I wouldn't have any objections to them. At this time, I need to see
>>> the code closer to see what you intend to do with it. :)
>>>
>>> HTH
>>>
>>>
>>>       
>> Derived overloads might work, though you run into cases of
>> non-orthogonal policies (at least I have with this). That would also
>> require specialization and/or sub-classing of the deviation and
>> non-deviation rfc policies in my branch, and I would prefer to keep
>> the current set for now.
>>
>>     
>
> I think if you really want to be able to decompose the client into
> multiple orthogonal policies, you might want to look into introducing
> more specific extension points in the code instead of being hampered
> by earlier (internal) design decisions. ;)
>
>   
I am growing tired of discussing this and would prefer to just get on 
with implementing _something_ in the branch. After tomorrow I'll be 
going back to my day job; time for working on this will be limited, and 
I don't want to spend it debating design decisions on which there can 
be very little compromise.

_______________________________________________
Cpp-netlib-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/cpp-netlib-devel
