On Sun, Aug 9, 2009 at 2:23 AM, John Kalucki <jkalu...@gmail.com> wrote:
> There also may be some interesting scaling issues with a Request-
> Response push mechanism that are avoided with a streaming approach.
> We'd need quite a farm of threads to have sufficient outbound
> throughput against the RTT latency of an HTTP post. I would have to
> assume that nearly all high-volume updaters and most mid-volume
> updaters would be pushed to a non-trivial number of hubs. Tractable,
> but it would require some effort, especially to deal with unreliable
> and slow hubs.
No, not necessarily. With HTTP pipelining and persistent connections, the
cost on your end should be relatively small, possibly even less than what
you are spending now, and you would be building on an open standard
everyone is already familiar with.
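To make the point concrete, here is a minimal sketch of what the push side could look like over a single persistent connection, so each POST to a hub avoids a fresh TCP handshake and the RTT cost John mentions is paid once per connection rather than per update. The hub path, port, and payload format are placeholders, not anything Twitter or the hubs actually specify:

```python
# Sketch: push several updates to one hub over a single keep-alive
# HTTP connection. Hub host/path and payload format are hypothetical.
import http.client

def push_updates(host, port, path, updates):
    """POST each update body over one persistent connection.

    Returns the list of HTTP status codes, one per update.
    """
    conn = http.client.HTTPConnection(host, port)
    statuses = []
    try:
        for body in updates:
            conn.request(
                "POST", path,
                body=body.encode("utf-8"),
                headers={"Content-Type": "application/atom+xml",
                         "Connection": "keep-alive"},
            )
            resp = conn.getresponse()
            resp.read()  # drain the response so the socket can be reused
            statuses.append(resp.status)
    finally:
        conn.close()
    return statuses
```

The point isn't this exact code, just that commodity HTTP stacks already amortize connection setup for you, so the "farm of threads" cost is smaller than it first appears.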
My reason for suggesting this, while I understand you already have a
working mechanism, is that it builds your API on existing protocols. That
means less development cost on your end, less development cost for the
developers who want to integrate, and Twitter becomes more of a utility
and less of a walled garden on the streaming feed. In the end, with
community (and Twitter's) involvement, I think you'll see far less cost
by adopting an open standard like this than by maintaining your own
solution.
I'd really like to see Twitter join the rest of the community building on
these open standards. Regardless, I think it would be of huge value to
the open standards community.
Add to that the potential for distribution during an event like this
DDoS. Twitter could quite simply use Feedburner and other hubs to
distribute its content in real time, with even less load on its
production environment and more developers embracing the platform.
Twitter could even do this selectively if the intent is to monetize the
full firehose, making only user timelines pubsub-accessible and available
to third-party hubs like Feedburner. I think it would be a huge win for
Twitter.
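For what it's worth, the publisher side of PubSubHubbub is tiny: after updating a feed, the publisher just pings the hub, and the hub fetches the feed and fans it out to subscribers. A minimal sketch, assuming a form-encoded publish ping per the PubSubHubbub spec (the hub and topic URLs below are placeholders):

```python
# Sketch: a PubSubHubbub "publish" ping. The publisher tells the hub
# that a topic (feed) URL has new content; the hub does the fan-out.
import urllib.parse
import urllib.request

def ping_hub(hub_url, topic_url):
    """Notify a PubSubHubbub hub that topic_url has fresh content.

    Per the spec, the hub replies 204 No Content on success.
    Returns the HTTP status code.
    """
    data = urllib.parse.urlencode({
        "hub.mode": "publish",
        "hub.url": topic_url,
    }).encode("ascii")
    req = urllib.request.Request(hub_url, data=data)  # data= makes it a POST
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

That one POST per updated topic is the entire cost on the production side; everything else (subscriber management, retries, slow consumers) lives in the hub.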