Given a reasonable stack, it shouldn't be all that hard to build something
robust. Our internal streaming client, which transits every tweet that you
see on the Streaming API, seems to work just fine through various forms of
abuse, and it's roughly a few hundred lines wrapped around Apache
HttpClient.

On the other hand, I suspect that dependability is all but impossible on
some stacks, or will require some heroism on the part of a library
developer.

As a community, we need clients that make robustness trivial across a
variety of stacks. We'll get there soon enough.
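To make the pattern concrete, here's a minimal sketch of the durable-buffer approach described above: a persistent process drains a line-delimited stream into rotated log files so the application can restart at will without data loss. This is an illustration, not our internal client; the function names, the rotation threshold, and the backoff constants are all assumptions, and `connect` stands in for whatever HTTP client actually holds the streaming connection open.

```python
# Sketch of the "persistent process writes to a durable buffer" pattern.
# Nothing here is Twitter-specific: `connect` is any callable returning an
# iterable of line-delimited statuses (names and constants are illustrative).
import os
import time


def drain_stream(lines, log_path, max_bytes=50_000_000):
    """Append each non-empty line from `lines` to `log_path`, rotating the
    file once it exceeds `max_bytes`. Returns the number of statuses written."""
    written = 0
    f = open(log_path, "a", encoding="utf-8")
    try:
        for line in lines:
            line = line.strip()
            if not line:          # streams send blank keep-alive lines; skip them
                continue
            f.write(line + "\n")
            f.flush()             # durable enough for a sketch; fsync if paranoid
            written += 1
            if f.tell() > max_bytes:
                f.close()         # rotate: timestamped rename, then reopen
                os.rename(log_path, log_path + "." + str(int(time.time())))
                f = open(log_path, "a", encoding="utf-8")
    finally:
        f.close()
    return written


def run_forever(connect, log_path):
    """Reconnect with exponential backoff when the connection drops.
    `connect` is assumed to raise IOError on network failure."""
    backoff = 1
    while True:
        try:
            drain_stream(connect(), log_path)
            backoff = 1           # clean disconnect: reset the backoff
        except IOError:
            time.sleep(backoff)
            backoff = min(backoff * 2, 240)
```

The application then consumes the rotated files on its own schedule, completely decoupled from the fetching process.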

On Sat, Jan 16, 2010 at 10:05 PM, M. Edward (Ed) Borasky
<zzn...@gmail.com> wrote:

>
>
> On Jan 16, 7:28 pm, John Kalucki <j...@twitter.com> wrote:
> > I'd strongly suggest consuming the Streaming API only from persistent
> > processes that write into some form of durable asynchronous queue (of any
> > type) for your application to consume. Running curl periodically is
> unlikely
> > to be a robust solution.
> >
> > Select one of the existing Streaming API clients out there and wrap it in
> a
> > durable process. Write to rotated log files, a message queue, or whatever
> > other mechanism that you choose, to buffer the arrival of new statuses
> > before consumption by your application. This will allow you to restart
> your
> > application at will without data loss.
>
> I don't know that there are any open source libraries out there yet
> that are robust enough to do that. At the moment, I'm working
> exclusively in Perl, and "AnyEvent::Twitter::Stream" seems to be the
> only Perl Streaming API consumer with any kind of mileage on it. As
> you point out, real-time programming for robustness is a non-trivial
> exercise. It would be nice if someone would build a C library and SWIG
> ".i" files. ;-)
>
> --
> M. Edward (Ed) Borasky
> http://borasky-research.net/smart-at-znmeb
>
> "A mathematician is a device for turning coffee into theorems." ~ Paul
> Erdős
>
