A while back I was frustrated with an OS library method that had a well-known 
quirk that never got fixed.  When I cornered an engineer at a dev conference he 
told me something profound: once an API has been released, it's locked in.  It's 
being used by tens of thousands of apps and millions of users.  Changing the 
behavior in any way, even to improve it, is likely to hurt more than it helps.

Last night Twitter added a new field to the search results.  I use a publicly 
available JSON parser bolted on to MGTwitterEngine (both popular choices, I 
think).  The parser had an odd trigger for detecting an early end of the Results 
dictionary and trying to fail gracefully, and the new field tripped it.  In 
other words, the lib author made a bad assumption.  Fortunately it was a very 
easy bug to find and correct (even at 2am).

However, it brings me to my point: that the search results *did* change.  They 
are to spec of course, and the JSON lib *should* have been flexible.  But it 
wasn't.  And there was little chance of anyone finding out until it affected 
everyone using my software.  For once, I was grateful that I only have a few 
thousand users.
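To illustrate the kind of flexibility I mean, here's a minimal sketch (in Python rather than the Objective-C of MGTwitterEngine, and with illustrative field names): a parser that only extracts the keys it knows about shrugs off a new field instead of tripping over it.

```python
import json

# A sketch of the tolerant behavior the parser should have had:
# pull out only the keys the app actually uses, and silently ignore
# anything new the server starts sending.
KNOWN_FIELDS = ("text", "from_user", "created_at")  # illustrative names

def parse_result(raw: str) -> dict:
    """Parse one search result, keeping only the fields we know about."""
    data = json.loads(raw)
    return {k: data[k] for k in KNOWN_FIELDS if k in data}

# A new, unknown field in the payload changes nothing:
old = '{"text": "hi", "from_user": "isaiah", "created_at": "now"}'
new = '{"text": "hi", "from_user": "isaiah", "created_at": "now", "geo": null}'
assert parse_result(old) == parse_result(new)
```

Any correct JSON parser will happily hand back the extra key; it's the layer that maps JSON onto app objects that has to be written to tolerate it.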



But doesn't every change have this potential?  The potential to trigger a 
well-hidden bug in a client that **is** popular, with hugely catastrophic 
results.  Am I wrong in thinking this?



To make this critique a bit more constructive, I'd offer some suggestions based 
on what I see in other popular APIs:
1.  Every change must be opted into or deprecated away.
2.  Deprecation periods need to last for months, not days.
3.  Version the endpoints to allow for a one-shot way to know which behavior to 
expect.
4.  Document the version differences in the API docs.
5.  Add a sandbox for clients to test new endpoints or new changes in a safe 
way.
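Suggestion 3 is the cheapest insurance from the client side.  Here's a minimal sketch, assuming a hypothetical versioned API (the host, path, and "v1" segment are made up for illustration, not Twitter's real endpoints):

```python
# Pin the API version explicitly in every request URL, so a behavior
# change on the server can never surprise the client.  Bump the version
# only after testing against the new behavior (ideally in a sandbox,
# per suggestion 5).
API_BASE = "https://api.example.com"   # hypothetical host
API_VERSION = "v1"                     # pinned; bumped deliberately

def endpoint(resource: str) -> str:
    """Build a versioned endpoint URL for the given resource."""
    return f"{API_BASE}/{API_VERSION}/{resource}.json"

print(endpoint("search"))  # https://api.example.com/v1/search.json
```

With the version in the URL, the provider can ship "v2" behavior without touching anyone still on "v1", and the docs (suggestion 4) have an obvious place to hang the differences.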


isaiah
http://twitter.com/isaiah


