Hi Martin,
Thank you for the feedback and your comments during the session. I agree
there is a tradeoff in how RESTful we really want to make this, and this is
exactly the discussion we were hoping to have with the WG in order to arrive
at a good, clean specification. Please see below for some specific comments:
On Monday 22 March 2010 9:57:44 pm Thomson, Martin wrote:
> Taking Lisa's mic comments a little further, I'd like to point out that
> what is in this draft isn't a particularly faithful embodiment of a
> RESTful service.
I'd agree with this, but as mentioned below, we may not actually want
something RESTful to the extent that _everything_ is a resource.
> In my opinion, a RESTful service would have more resources than are
> identified in the protocol. Of course, the degree to which anything can
> be considered "RESTful" seems to be highly subjective. To me, this seems
> more like XXX over HTTP than a use of HTTP that follows the REST axioms.
>
> The question is, whether you are able to model your application (i.e. ALTO)
> with resources. I'm not sure how that can be done within the constraints
> that you have, but it seems that more could be done.
Certain services, such as the Map service, already use resources. However, a
couple of other current limitations in the Map service (that might be
addressed by exposing more resources) are:
1) It only provides maps for the default cost type:
This might be rectified by having an interface such as:
/map/core/pid/cost/routingcost
/map/core/pid/cost/hopcount
/map/core/pid/cost/<some other cost type...>
where each resource is a map of a different cost type.
2) Depending on how we handle IPv4/v6, we might also parametrize maps by the
address family provided in them. (However, given the WG discussion on
Monday, we still have some work to do in figuring out the use cases and
what types of preferences need to be conveyed before working to specify
anything.)
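As a rough sketch of points (1) and (2) above, a client-side helper selecting the map resource for a given cost type might look like the following. The path layout is itself only a proposal, and the "af" query parameter for address family is invented here purely for illustration:

```python
# Hypothetical sketch of points (1) and (2): one map resource per cost
# type, optionally parametrized by address family. The path layout and
# the "af" query parameter are invented for illustration only.
def map_path(cost_type, address_family=None):
    """Build the resource path for the cost map of a given cost type."""
    path = "/map/core/pid/cost/" + cost_type
    if address_family is not None:
        path += "?af=" + address_family
    return path

map_path("routingcost")       # -> "/map/core/pid/cost/routingcost"
map_path("hopcount", "ipv6")  # -> "/map/core/pid/cost/hopcount?af=ipv6"
```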
Currently, we have taken the approach of making just the Network Maps and the
Cost Maps the resources exposed by the server (with the simplification that
the Map Service provides a view of the Cost Map with only the default cost
type).
To go a bit further, perhaps we can identify other ways to define which
resources the ALTO Server exposes (or alternatively, which parameters make
the most sense to put in the path instead of the query string or body).
One way is as described above, where there is a single Network Map resource
and multiple Cost Map resources (one for each cost type defined by the ALTO
Server).
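For illustration, that resource layout could be captured in a small index document enumerating what the server exposes. All of the paths and cost-type names below are invented, not taken from any draft:

```python
# Illustrative index of a server's exposed resources: one Network Map
# resource plus one Cost Map resource per cost type. All names and
# paths here are hypothetical.
cost_types = ["routingcost", "hopcount"]

index = {
    "network-map": "/map/core/pid",
    "cost-maps": {ct: "/map/core/pid/cost/" + ct for ct in cost_types},
}
```

An index of this shape is also the kind of resource that could make server capabilities discoverable, rather than baked into the client.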
> For instance, included among those resources would be indexes, which might
> solve your capabilities/multiple server problem. That makes the server
> capabilities discoverable. There might be a combinatorial explosion if
> you allow for freedom on many axes, but I don't actually think that this
> is a high risk, given my limited understanding of the current solution.
This is certainly something we can think about. Related to this point,
another thing we were trying to do in the design (adopted from the
Infoexport proposal) was to make deployment of the required ALTO interfaces
as simple as distributing a few static files from a web server. Personally,
I see this as appealing since it can lower the barrier to deploying an ALTO
Server for an ISP: all that would need to be done is to encode the ALTO
information in a particular format and post it to a web server. To get
certain added functionality, another solution might be needed, but at least
the required parts would be extremely simple.
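To make the "static files" deployment model concrete, here is a minimal sketch: encode the ALTO data as JSON and drop it in a web server's document root. The field names below are invented for illustration; the actual wire format is whatever the WG specifies.

```python
# Sketch of the "static files" deployment model: write the ALTO network
# map as a JSON file that any plain web server can serve as-is, with no
# dynamic server-side code. Field names are invented for illustration.
import json
import os
import tempfile

network_map = {
    "map-version": "1",
    "pids": {
        "pid-1": {"ipv4": ["192.0.2.0/24"]},
        "pid-2": {"ipv4": ["198.51.100.0/24"]},
    },
}

docroot = tempfile.mkdtemp()            # stand-in for the web server's docroot
path = os.path.join(docroot, "networkmap.json")
with open(path, "w") as f:
    json.dump(network_map, f)
```

At that point the ISP's only operational task is regenerating the file when the map changes.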
> However, if clients are given the ability, as the draft describes, to
> specify multiple input values to query the database, as is the case with
> map filtering, then REST is less applicable. Such aspects of the protocol
> might be hard to fit into a pure resource model.
>
> The risk here is that the protocol reinforces client-server dependencies.
Agreed. I also agree with you and Lisa that we may want to extend the
resource model and look at putting some of the parameters in the path
instead.
> One potential thing to look at is URI templates, which constrain the form
> of URIs based on server preferences (rather than client). While it still
> gives the client greater control over the server than might be ideal, it
> at least reduces some of the degrees of freedom and (slightly) loosens the
> coupling.
Thanks for the suggestion here. This is something that we can at least
explore.
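As a tiny sketch of the idea: with a URI template, the server publishes the shape of the URI and the client fills in only the variables the server allowed. The template below is invented, and only simple (level-1, RFC 6570 style) expansion is shown:

```python
# Minimal sketch of a server-advertised URI template: the server decides
# the URI structure; the client only substitutes permitted variables.
# The template string itself is invented for illustration.
template = "/map/core/pid/cost/{cost_type}"

def expand(template, **values):
    """Level-1 URI template expansion is just simple string substitution."""
    return template.format(**values)

expand(template, cost_type="routingcost")  # -> "/map/core/pid/cost/routingcost"
```

This keeps URI construction server-driven while still letting the client vary the one axis the server opened up.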
> I recommend a little more background reading on REST- you may come to the
> conclusion that ALTO over HTTP is preferable, or you might find a subset
> of REST-like capabilities that best fit this.
The idea in the design thus far was to go down the REST route and see how
far it made sense to take it. It seemed clear (to me at least) that we had a
number of cases where either (1) quite a few parameters needed to be passed,
(2) an arbitrary-length list of values needed to be passed, or (3) the data
being passed to the ALTO Server actually had some structure to it. In that
sense, forcing the design into a pure REST interface appeared to be too
limiting for our needs.
>
> --Martin
>
> ...and while I'm at it...
>
> I also have to support Lisa's suggestion on content negotiation (on media
> type) for version negotiation. That works very well, in my experience.
I would tend to agree as well. The tradeoff here is that an implementation
may no longer be as simple as "copy a few files to a web server". While I
find it attractive to do it that way, I also realize that a good, clean
design may be contrary to that implementation.
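To sketch how media-type-based version negotiation might work: the client lists the versions it understands in the Accept header, and the server replies with the newest one it also supports. The vendor media type string below is invented for illustration:

```python
# Sketch of version negotiation via media types. The client advertises
# acceptable versions in Accept; the server picks its preferred match.
# The "application/alto+json" media type here is invented.
SUPPORTED = [
    "application/alto+json;version=2",  # server's preferred (newest) first
    "application/alto+json;version=1",
]

def negotiate(accept_header):
    """Return the first server-supported type the client accepts, or None."""
    accepted = {t.strip() for t in accept_header.split(",")}
    for media_type in SUPPORTED:
        if media_type in accepted:
            return media_type
    return None
```

A version-1-only client and a version-2-capable client can then share one URI, with the representation chosen per request.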
> It is revealing when you talk about "persisting" responses. That's a
> caching function and there's good support for that in HTTP.
>
> Comments on the mic about JSON and XML and validation were particularly
> good. The "schema" definition in the document is nice, but hard to use in
> tools. XML schema has good tool support, which is not necessarily true
> for the JSON stuff. I'm not up to date on the mapping status between XML
> and JSON, but I do know that Jersey and JAXB provide these functions.
Yes, certainly. I would personally be a bit hesitant to constrain ourselves
to a particular tool where there is no standard. In particular, I think it
would be dangerous to go the route of defining an XML schema and then using
a particular (non-standardized) tool to convert it to JSON. Such an approach
leaves us at the mercy of that particular translation, both in terms of
whether it is "clean" and in terms of tool updates that might affect the
translation.
--
Richard Alimi
Department of Computer Science
Yale University
_______________________________________________
alto mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/alto