David, I don't know Ruby, so I don't know if this is possible.
But, if possible, you need to edit your copy of the Twitter API wrapper and set the user agent to something that is unique to your service. If you use the same user agent as everyone else who is using that wrapper, then you are going to suffer if they do naughty things.

Dewald

On Aug 11, 2:45 pm, David Fisher <tib...@gmail.com> wrote:
> The user agent for each search request is the same. I'm using the Ruby
> Twitter API wrapper, so sending anything else with search requests
> isn't possible unless that is now deprecated.
>
> dave
>
> On Aug 11, 10:36 am, Andrew Badera <and...@badera.us> wrote:
> > On Tue, Aug 11, 2009 at 10:30 AM, David Fisher <tib...@gmail.com> wrote:
> > > While I haven't done scientific testing of this, I was able to run up
> > > to 3-4 instances of my search script at a time before it told me
> > > to enhance my calm. Now I'm barely able to run one without hitting the
> > > limit. I can put delays in my code to slow it down, but I'm wondering
> > > if this is just a symptom of the aftermath of the DDoS attack or
> > > something else? My server has a dedicated IP and no one else runs code
> > > from it, so it isn't other people on my IP hitting the Search API.
> > >
> > > Maybe I need to talk about Search API whitelisting...
> > >
> > > dave
> > > http://webecologyproject.org
> >
> > Are you sending a unique client ID header with each request, per
> > previous Search API throttling conversations? (Not sure that it
> > matters, that seemed pretty fuzzy when discussed ...)
> >
> > ∞ Andy Badera
> > ∞ This email is: [ ] bloggable [x] ask first [ ] private
> > ∞ Google me: http://www.google.com/search?q=(andrew+badera)+OR+(andy+badera)
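(For what it's worth, if the wrapper won't let you set the header, you can always build the search request yourself with Ruby's standard Net::HTTP and attach your own User-Agent. A rough sketch of the idea below — the agent string and search URL are just placeholders for your own service, not anything the wrapper itself exposes:)

```ruby
require 'net/http'
require 'uri'

# Hypothetical Search API request with a service-specific User-Agent.
# Replace the agent string with something unique to your own service.
uri = URI.parse('http://search.twitter.com/search.json?q=ruby')
request = Net::HTTP::Get.new(uri.request_uri)
request['User-Agent'] = 'MyService/1.0 (http://example.com)'

# To actually send it you'd do something like:
#   response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
puts request['User-Agent']
```

That way every request from your service is distinguishable from everyone else using the default wrapper agent.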