> I think the best compromise is what Google ends up doing with many of
> their APIs: allow access without an API key, but with a fairly minimal
> number of accesses per time period allowed (a couple hundred a day is
> what I think Google often does).
-- Agreed.

I certainly didn't mean to suggest that there are no legitimate use cases for API keys. That said, my gut (plus experience sitting in multiple meetings during which the need for an access mechanism landed on the table as a primary requirement) says people decide they need an API key before alternatives have been fully considered, and even before there is an actual, defined need for one. Server logs often reveal most of the "usage statistics" service operators are interested in, and there are ways to throttle traffic at the caching level (though the latter can be a little tricky to implement).
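For example, a rough sketch (assuming the common Apache/nginx "combined" log format; the file name is made up) of pulling per-path request counts out of an access log:

    # count_requests.py -- rough per-path usage counts from an access log
    # (sketch; assumes the combined log format, where the request line is
    # the quoted field: "GET /path HTTP/1.1")
    import re
    from collections import Counter

    LOG_LINE = re.compile(r'"(?:GET|POST|HEAD) (\S+) HTTP/[\d.]+"')

    counts = Counter()
    with open("access.log") as log:          # hypothetical file name
        for line in log:
            match = LOG_LINE.search(line)
            if match:
                counts[match.group(1)] += 1

    for path, n in counts.most_common(10):
        print(n, path)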

Yours,
Kevin


On 12/02/2013 12:38 PM, Jonathan Rochkind wrote:
There are plenty of non-free APIs that need some kind of access control. A separate side discussion is which forms of access control put the least barrier in front of developers while still being secure (a lot of services mess this up in both directions!).

However, there are also some free APIs which still require API keys, perhaps because the owners want to track usage or throttle usage or what have you.

Sometimes you need to do that too, and you need to restrict access; so be it. But it is probably worth recognizing that you are sometimes adding barriers to successful client development here -- it seems like a trivial barrier from the perspective of the service's developers, because they use the service so often. But to a client developer working with a dozen different APIs, the extra burden of getting and dealing with the API key and the access control mechanism can be non-trivial.

I think the best compromise is what Google ends up doing with many of their APIs: allow access without an API key, but with a fairly minimal number of accesses per time period allowed (a couple hundred a day is what I think Google often does). This lets the developer evaluate the API, explore/debug it in the browser, and write automated tests against it, without worrying about API keys. But it still requires an API key for "real" use, so the host can do whatever tracking or throttling they want.
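Sketched in Python (the limit and names here are made up, not Google's actual mechanism), that compromise is roughly:

    # quota.py -- sketch of keyless-but-limited access (hypothetical limits)
    import datetime
    from collections import Counter

    DAILY_ANON_LIMIT = 200        # "a couple hundred a day"
    anon_counts = Counter()       # (client_ip, date) -> requests today

    def allow_request(client_ip, api_key=None):
        """Allow keyed requests; cap keyless ones at DAILY_ANON_LIMIT/day."""
        if api_key is not None:
            return True           # "real" use: track/throttle per key instead
        today = datetime.date.today()
        anon_counts[(client_ip, today)] += 1
        return anon_counts[(client_ip, today)] <= DAILY_ANON_LIMIT

    # e.g. allow_request("203.0.113.9") -> True for the first 200 calls today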

Jonathan

On 12/2/13 12:18 PM, Ross Singer wrote:
I'm not going to defend API keys, but not all APIs are open or free. You need to have *some* way to track usage.

There may be alternative ways to implement that, but you can't just hand-wave away the rather large use case for API keys.

-Ross.


On Mon, Dec 2, 2013 at 12:15 PM, Kevin Ford <k...@3windmills.com> wrote:

Though I have some quibbles with Seth's post, I think it's worth drawing attention to his repeatedly calling out API keys as a very significant barrier to use, or at least to entry. Most of the posts here have given little attention to the issue API keys present. I can say that I have quite often looked elsewhere, or simply stopped pursuing my idea, the moment I discovered an API key was mandatory.

As for the presumed difficulty of implementing content negotiation (and, especially, caching on top of it): if you can implement an entire system to manage the assignment of and access by API keys, then I do not understand how content negotiation and caching are significantly "harder to implement."

In any event, APIs and content negotiation are not mutually exclusive. One should be able to use the HTTP URI to access multiple representations of the resource without recourse to a custom API.
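As a rough sketch of that point (Python stdlib only; the resource, data, and port are made up), one URI serving two representations off the Accept header:

    # conneg_sketch.py -- one URI, two representations (hypothetical resource)
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    RECORD = {"id": "1", "title": "Example record"}   # made-up data

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # naive substring check; real conneg is messier (q values, *s)
            accept = self.headers.get("Accept", "")
            if "application/json" in accept:
                body = json.dumps(RECORD).encode()
                ctype = "application/json"
            else:
                body = ("<h1>%s</h1>" % RECORD["title"]).encode()
                ctype = "text/html"
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.send_header("Vary", "Accept")   # tell caches Accept matters
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()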

Yours,
Kevin





On 11/29/2013 02:44 PM, Robert Sanderson wrote:

(posted in the comments on the blog and reposted here for further discussion, if there's interest)


While I couldn't agree more with the post's starting point -- URIs identify (concepts) and use HTTP as your API -- I couldn't disagree more with the "use content negotiation" conclusion.

I'm with Dan Cohen in his comment regarding using different URIs for different representations, for several reasons below.

It's harder to implement content negotiation than your own API, because you get to define your own API, whereas you have to follow someone else's rules when you implement conneg. You can't get your own API wrong. I agree with Ruben that HTTP is better than rolling your own proprietary API, but we disagree that conneg is the correct solution. The choice is between conneg and regular HTTP, not conneg and a proprietary API.

Secondly, you need to look at the HTTP headers and parse quite a complex structure to determine what is being requested. You can't just put a file in the file system, as you can with separate URIs for distinct representations, where it just works; instead you need server-side processing. This also makes it much harder to cache the responses, as the cache needs to determine whether or not the representation has changed -- the cache also needs to parse the headers rather than just comparing URI and content. For large-scale systems like DPLA and Europeana, caching is essential for quality of service.

How do you find out which formats are supported by conneg? By reading the documentation. Which could just say "add .json on the end". The Vary header tells you that negotiation in the format dimension is possible, just not what to do to actually get anything back. There isn't a way to find this out from HTTP automatically, so now you need to read both the site's docs AND the HTTP docs. APIs, on the other hand, can do this. Consider OAI-PMH's ListMetadataFormats and SRU's Explain response.
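For instance, a single request against any OAI-PMH endpoint (the base URL below is made up) enumerates every supported format:

    # list_formats.py -- ask an OAI-PMH endpoint what it supports
    # (endpoint URL is hypothetical; ListMetadataFormats is a standard
    # OAI-PMH verb, and the XML response lists each metadataPrefix)
    from urllib.request import urlopen

    url = "http://example.org/oai?verb=ListMetadataFormats"
    print(urlopen(url).read().decode())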

Instead you can have a separate URI for each representation and link them with Link headers, or just a simple rule like "add '.json' on the end". No need for complicated content negotiation at all. Link headers can be added with a simple Apache configuration rule, and as they're static they're easy to cache. So the server side is easy, and the client side is trivial -- compared to being difficult at both ends with content negotiation.
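Such an Apache rule might look roughly like this (requires mod_headers; the URI and paths are made up):

    # Apache sketch: advertise the JSON alternate of a record as a
    # static Link header (mod_headers; hypothetical URI and paths)
    <Location "/records/1">
        Header add Link "<http://example.org/records/1.json>; rel=\"alternate\"; type=\"application/json\""
    </Location>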

It can be useful to make statements about the different representations, especially if you need to annotate the structure or content. Or to share one -- you can't email someone a link that includes the right Accept headers to send; as in the post, you need to send them a command line like curl with -H.
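That is, instead of a plain link you end up mailing something like (URI made up):

    curl -H "Accept: application/json" http://example.org/records/1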

An experiment for fans of content negotiation: have both .json and 302-style conneg from your original URI to that .json file. Advertise both. See how many people do the conneg. If it's non-zero, I'll be extremely surprised.

And a challenge: even with libraries there's still complexity in figuring out how and what to serve. Find me sites that correctly implement *-based fallbacks, or that even process q values. I'll bet I can find 10 that do content negotiation wrong for every 1 that does it correctly. I'll start: dx.doi.org touts its content negotiation for metadata, yet doesn't implement q values or *s. You have to go to the documentation to figure out what Accept headers it will do string-equality tests against.
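For a sense of what "correct" entails, a rough sketch (ignoring media-type parameters and other RFC 7231 subtleties) of the q-value and wildcard matching a server is supposed to do:

    # accept_match.py -- rough sketch of q-value + wildcard Accept matching
    # (ignores media-type parameters and other RFC 7231 subtleties)
    def best_match(accept_header, supported):
        """Pick the supported type the client most prefers, or None."""
        prefs = []
        for part in accept_header.split(","):
            fields = part.strip().split(";")
            mtype = fields[0].strip()
            q = 1.0
            for param in fields[1:]:
                name, _, value = param.strip().partition("=")
                if name == "q":
                    q = float(value)
            prefs.append((mtype, q))

        def quality(offered):
            otype, osub = offered.split("/")
            best = 0.0
            for mtype, q in prefs:
                wtype, _, wsub = mtype.partition("/")
                if (wtype, wsub) in ((otype, osub), (otype, "*"), ("*", "*")):
                    best = max(best, q)
            return best

        scored = [(quality(t), t) for t in supported]
        q, winner = max(scored)
        return winner if q > 0 else None

    # e.g. best_match("text/*;q=0.5, application/json",
    #                 ["text/html", "application/json"]) -> "application/json"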

Rob



On Fri, Nov 29, 2013 at 6:24 AM, Seth van Hooland <svhoo...@ulb.ac.be> wrote:


Dear all,

I guess some of you will be interested in the blog post by my colleague and co-author Ruben regarding misunderstandings about the use and abuse of APIs in a digital libraries context, including a description of both good and bad practices from Europeana, DPLA, and the Cooper Hewitt museum:


http://ruben.verborgh.org/blog/2013/11/29/the-lie-of-the-api/

Kind regards,

Seth van Hooland
President of the Master in Information and Communication Sciences and Technologies (MaSTIC)

Université Libre de Bruxelles
Av. F.D. Roosevelt, 50 CP 123  | 1050 Bruxelles
http://homepages.ulb.ac.be/~svhoolan/
http://twitter.com/#!/sethvanhooland
http://mastic.ulb.ac.be
0032 2 650 4765
Office: DC11.102



