Hi Richard,

A few minor additions and suggestions, but I think that you have the right idea 
:)

>For certain services, such as the Map service, this is already using
>resources.  However, a couple of other current limitations in the Map service
>(that might be exposed by providing more Resources) are:
>1) It only provides maps for the default cost type:
>   This might be rectified by having an interface such as:
>     /map/core/pid/cost/routingcost
>     /map/core/pid/cost/hopcount
>     /map/core/pid/cost/<some other cost type...>
>   where each resource is a map of a different cost type.

Might I suggest that you stop using these sorts of names, and start talking 
about the document formats that you intend to create?  The actual URL used for 
any particular resource is unimportant - the data is the important thing.  With 
hyperlinking, the server is able to choose how to name resources.

For all these "cost" resources, they might share a common format, but the 
context that hyperlinks to these resources might include additional supporting 
data that identifies the resource, e.g.

{ "costmaps" : [
    { "url" : "/this/is/not/important/", "type" : "routingcost" },
    { "url" : "whatever", "type" : "hopcount" },
    ... ] }
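To make the point concrete, a client consuming such an index keys on the
"type" field and treats the URL as opaque.  A rough sketch (the index document
and its URLs are the illustrative ones invented above, nothing normative):

```python
import json

# The hypothetical index document from above; URLs are illustrative only.
index_doc = """
{ "costmaps" : [ { "url" : "/this/is/not/important/", "type" : "routingcost" },
                 { "url" : "whatever", "type" : "hopcount" } ] }
"""

def find_costmap(doc, cost_type):
    """Return the URL advertised for the given cost type, or None."""
    for entry in json.loads(doc)["costmaps"]:
        if entry["type"] == cost_type:
            return entry["url"]
    return None

# The client selects by "type" and never hard-codes the URL.
print(find_costmap(index_doc, "hopcount"))
```

The server remains free to rename or relocate any of these resources; only the
index location needs to be stable.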

> 2) Depending on how we handle IPv4/v6, we might also parametrize maps by the
>   address family provided in them. (However, given the WG discussion on
>   Monday, we still have some work to do in figuring out the use cases and
>   what types of preferences need to be conveyed before working to specify
>   anything.)

That leads to more permutations, and so more resources.  The risk you run is 
combinatorial explosion.  That's where templating might help.  (I'm not a big 
fan of URI templating; it's only a marginal improvement over query 
parameters.)
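For what it's worth, the templating approach amounts to the server advertising
a template instead of enumerating every (cost type, address family)
permutation, and the client filling in the axes it cares about.  A toy sketch,
using plain string substitution as a stand-in for real URI Template expansion
(the path shape is invented):

```python
from string import Template

# Hypothetical template a server might advertise; the path layout is invented.
COSTMAP_TEMPLATE = "/costmap/${cost_type}/${family}"

def expand(template, **params):
    """Fill in a template's variables; stand-in for URI Template expansion."""
    return Template(template).substitute(params)

print(expand(COSTMAP_TEMPLATE, cost_type="routingcost", family="ipv4"))
```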

>Currently we have taken the approach to just make the Network Maps and the
>Cost Maps as the resources exposed by the server (with the simplification that
>the Map Service provides a view of the Cost Map with only the default cost
>type.)

These sorts of constraints can help curb resource proliferation.

>To go a bit further, perhaps we can identify other ways to define the ALTO
>Server's resources that are exposed (or alternatively, which parameters make
>the most sense to put in the path instead of the query-string or body).

The way that the current draft describes the use of request bodies is 
worrisome.  It comes close to implying bodies in GET requests, which is 
inadvisable.  For your more complicated requests, you may want a way to create 
a filtered resource; for that, using POST is probably necessary.

>> For instance, included among those resources would be indexes, which might
>> solve your capabilities/multiple server problem.  That makes the server
>> capabilities discoverable.  There might be a combinatorial explosion if
>> you allow for freedom on many axes, but I don't actually think that this
>> is a high risk, given my limited understanding of the current solution.
>
>This is certainly something that we can think about here.  Related to this
>point, another thing that we were trying to do in the design (adopted from the
>Infoexport proposal) was to make deployment of the required ALTO interfaces as
>simple as distributing a few static files from a web server.  Personally, I
>see this as appealing since it can lower the barrier towards deploying an ALTO
>Server for an ISP, since all that would need to be done is encode the ALTO
>information in a particular format and post it to a web server.  To get
>certain added functionality, another solution might be needed, but at least
>the required parts would be extremely simple.

See the above example JSON for a concrete instance of what an index might look 
like.  This doesn't preclude deployment of static documents, it actually makes 
it easier in some sense, because you can put files where it is convenient.  An 
index solves all your deployment problems.

That also obviates all the "here's another server" and the "and here are my 
capabilities" functions.  You provide a URL, describe its characteristics and 
it doesn't matter where that URL is hosted.

>> The risk here is that the protocol reinforces client-server dependencies.
>
>Agreed, but I also agree with you and Lisa that we may want to extend the
>Resource model and look at putting some of the parameters in the path instead.

It helps if you don't think of it like this. "Parameters in the path" leads to 
the coupling I'm talking about.  Think of it more in terms of resources that 
each have various characteristics.  An _implementation_ might use path 
components or even query strings, but that's a server and implementation choice.

>The idea in the design thus far was to go down the REST route, and see how far
>it made sense to go.  It seemed clear (to me at least) that we had a number of
>cases where either (1) there were quite a few parameters that needed to be
>passed, (2) an arbitrary-length list of values needed to be passed, or (3) the
>data being passed to the ALTO Server actually had some structure to it.  In
>that sense, forcing the design into a pure REST interface appeared to be too
>limiting for our needs.

And that's where you might come to the conclusion that REST doesn't work for 
you.  But it's not necessarily doomed.

One potential pattern for dealing with arbitrary request complexity for an 
application like this is:

 - The client sends a POST request whose body carries the complex document; 
the response is a 303 (See Other) with a Location header identifying a new 
resource
 - The client sends a GET to the identified resource to retrieve the data

This adds a round trip, though not necessarily: the 303 response body could 
include the desired data.

This leaves the server with control over the resources and how they are 
identified.  It also means that the client can repeat the request without the 
POST.  The response is cacheable.
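The POST-then-GET pattern can be sketched in miniature.  This is a toy
in-process model, not a real HTTP server; the resource naming scheme and the
filter document are invented:

```python
class FilteredMapServer:
    """Toy in-memory model of the POST-then-GET pattern (not a real server)."""

    def __init__(self):
        self._resources = {}   # location -> response body
        self._next_id = 0

    def post_filter(self, filter_doc):
        """Accept a complex filter document and mint a resource for it.

        Returns (status, location), as a real server would via the Location
        header of a 303 (See Other) response.
        """
        # The server chooses the name; the client must never guess it.
        location = "/filtered/%d" % self._next_id
        self._next_id += 1
        # Pretend to compute the filtered cost map from the request body.
        self._resources[location] = {"filtered-by": filter_doc}
        return 303, location

    def get(self, location):
        """The follow-up GET: plain, repeatable, and cacheable."""
        return 200, self._resources[location]

server = FilteredMapServer()
status, location = server.post_filter(
    {"cost-type": "routingcost", "pids": ["PID1", "PID2"]})
status2, body = server.get(location)   # repeatable without re-POSTing
```

The key property is that the server hands out the location, so it retains
control over how filtered results are identified and cached.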

>> I also have to support Lisa's suggestion on content negotiation (on media
>> type) for version negotiation.  That works very well, in my experience.
>
>I would tend to agree as well.  The tradeoff here is that an implementation
>may no longer be as simple as "copy a few files to a web server".  While I
>find it attractive to do that way, I also realize that a good, clean design
>may be contrary to that implementation.

Version negotiation is going to require some work.  That's probably 
unavoidable.  If you need static files, you can handle the content 
negotiation with a redirection that points at the appropriate static file.
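As a sketch of that redirection approach: the negotiating endpoint inspects
the Accept header and answers with a redirect to a version-specific static
file.  The media type names and file paths here are invented, and real Accept
parsing with q-values is elided:

```python
# Hypothetical media types; the real ones would be registered by the WG.
STATIC_FILES = {
    "application/alto-costmap+json":    "/static/costmap-v1.json",
    "application/alto-costmap-v2+json": "/static/costmap-v2.json",
}

def negotiate(accept_header):
    """Return (status, location) for a GET carrying the given Accept header.

    A real server would do full Accept parsing with q-values; this simply
    takes the first recognized type and answers 303 to a static file.
    """
    for media_type in (t.strip().split(";")[0] for t in accept_header.split(",")):
        if media_type in STATIC_FILES:
            return 303, STATIC_FILES[media_type]
    return 406, None   # Not Acceptable

print(negotiate("application/alto-costmap-v2+json, */*;q=0.1"))
```

The static files themselves stay dumb; only the one negotiating resource
needs any logic.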

>I would personally be a bit hesitant to constrain ourselves
>to using a particular tool where there is no standard.  In particular, I think
>it would be dangerous to go the route of defining an XML schema, and then
>using a particular (non-standardized) tool to convert it to JSON.

Oh no, you would never want to do that; that was merely an existence proof.  
You would have to define (or steal) a mapping.  It's actually relatively 
straightforward - the JAXB JSR (I forget the number) effectively defines this 
mapping and it's not that big.

--Martin

>--
>Richard Alimi
>Department of Computer Science
>Yale University
_______________________________________________
alto mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/alto
