[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Michael Ivey
If the User-Agent/Referrer says "Twitpay", and it's really me, when Twitter
contacts me, I'll answer, and we'll work it out.
If the User-Agent/Referrer says "Twitpay", and it's *not* really me, when
Twitter contacts me, I'll tell them, and they'll block the IP.

It's a starting point for figuring things out, not an authorization scheme.

 -- ivey


On Tue, Jun 16, 2009 at 2:39 PM, Stuart  wrote:

> 2009/6/16 Naveen Kohli 
>
>> Redefining the HTTP spec, eh? :-)
>> Whatever floats Twitter's boat. Let's hope for the best. Just concerned
>> that some firewalls or proxies tend to remove "referrer".
>>
>
> What a completely ridiculous thing to say. It's not "redefining" anything.
> If Twitter want to require something in order to access their service they
> absolutely have that right. It's not like they're saying every HTTP server
> should start requiring these headers.
>
> It's true that some firewalls and proxies remove the referrer header, and
> some also remove the user agent header.
>
> I'm somewhat unclear on exactly how this stuff is supposed to help. If an
> application sets out to abuse the system they'll simply set the headers so
> they look like a normal browser. I don't see what purpose requiring these
> headers to be something useful will actually serve. IMHO you might as well
> "require" the source parameter for all API requests that use basic auth,
> which is simple for all apps to implement; OAuth clearly carries
> identification with it already.
>
> -Stuart
>
> --
> http://stut.net/projects/twitter


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Stuart
2009/6/16 Naveen Kohli 

> Redefining the HTTP spec, eh? :-)
> Whatever floats Twitter's boat. Let's hope for the best. Just concerned
> that some firewalls or proxies tend to remove "referrer".


What a completely ridiculous thing to say. It's not "redefining" anything.
If Twitter want to require something in order to access their service they
absolutely have that right. It's not like they're saying every HTTP server
should start requiring these headers.

It's true that some firewalls and proxies remove the referrer header, and
some also remove the user agent header.

I'm somewhat unclear on exactly how this stuff is supposed to help. If an
application sets out to abuse the system they'll simply set the headers so
they look like a normal browser. I don't see what purpose requiring these
headers to be something useful will actually serve. IMHO you might as well
"require" the source parameter for all API requests that use basic auth,
which is simple for all apps to implement; OAuth clearly carries
identification with it already.

-Stuart

-- 
http://stut.net/projects/twitter



[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Naveen Kohli
Redefining the HTTP spec, eh? :-)
Whatever floats Twitter's boat. Let's hope for the best. Just concerned
that some firewalls or proxies tend to remove "referrer".


On Tue, Jun 16, 2009 at 1:05 PM, Stuart  wrote:

>
> It's optional in the HTTP spec, but mandatory for the Twitter Search
> API. I don't see a problem with that.
>
> Doug: Presumably the body of the 403 response will contain a suitable
> descriptive error message in the usual format?
>
> -Stuart
>
> --
> http://stut.net/projects/twitter



-- 
Naveen K Kohli
http://www.netomatix.com


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread burton

Hey guys.

This has already been banged out in the RSS wars (of which I'm a
veteran and have the battle scars).

Don't use a Referrer unless it's literally a page with a link or
search page.

You should use a User-Agent here (which is what it is designed for).

The browser should generally send the Referrer ..

We send a User-Agent

Kevin

On Jun 16, 10:04 am, Stuart  wrote:
> The logical thing would be to set the referrer to the domain name of
> your application. If it doesn't have one I'd say use your Twitter user
> URL (i.e. http://twitter.com/stut).
>
> Most HTTP libs in most languages will set a default user agent, and
> it's usually pretty easy to override it. I'd suggest appname/0.1 where
> appname is something that identifies your app and is a valid user
> agent - Google can help you there. I doubt the version number is
> important to anyone but you.
>
> -Stuart
>
> -- http://stut.net/projects/twitter


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Stuart
2009/6/16 Chad Etzel 

>
> On Tue, Jun 16, 2009 at 1:05 PM, Stuart wrote:
> >
> > It's optional in the HTTP spec, but mandatory for the Twitter Search
> > API. I don't see a problem with that.
>
> Erm, for sites like TweetGrid, TweetChat, etc, which are all
> browser-based client-side driven sites, the users' browser will make
> the request.  In this case the HTTP Referrer can be (and often is)
> unset.  The user-agent, however, is usually set for all browsers, but
> sometimes people use plugins to mask or delete that, even. Just an
> FYI that not all of us have control over this.


Where a request is made from one page to another, even if it's via JS, most
browsers will set the referrer to the current URL.

Besides, I wasn't claiming that it wouldn't be an issue for anyone. I was
just commenting on the fact that just because it's optional in HTTP doesn't
in any way stop Twitter from making it mandatory for their APIs. That's the
only point I was trying to make. It's like saying "my car doesn't make me
wear a seatbelt so neither can you".

-Stuart

-- 
http://stut.net/projects/twitter


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Brooks Bennett

Thanks for chiming in on this Chad!

On Jun 16, 12:10 pm, Chad Etzel  wrote:
> On Tue, Jun 16, 2009 at 1:05 PM, Stuart wrote:
>
> > It's optional in the HTTP spec, but mandatory for the Twitter Search
> > API. I don't see a problem with that.
>
> Erm, for sites like TweetGrid, TweetChat, etc, which are all
> browser-based client-side driven sites, the users' browser will make
> the request.  In this case the HTTP Referrer can be (and often is)
> unset.  The user-agent, however, is usually set for all browsers, but
> sometimes people use plugins to mask or delete that, even. Just an
> FYI that not all of us have control over this.
>
> -Chad


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Brooks Bennett

I checked and TweetGrid was setting a referrer (on the page I tested,
it was http://tweetgrid.com/grid?l=0), and as Matt said all should be
fine for us Client-side Search API peeps.

Brooks

On Jun 16, 12:10 pm, Chad Etzel  wrote:
> On Tue, Jun 16, 2009 at 1:05 PM, Stuart wrote:
>
> > It's optional in the HTTP spec, but mandatory for the Twitter Search
> > API. I don't see a problem with that.
>
> Erm, for sites like TweetGrid, TweetChat, etc, which are all
> browser-based client-side driven sites, the users' browser will make
> the request.  In this case the HTTP Referrer can be (and often is)
> unset.  The user-agent, however, is usually set for all browsers, but
> sometimes people use plugins to mask or delete that, even. Just an
> FYI that not all of us have control over this.
>
> -Chad


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Matt Sanford


Hi all,

Let me clarify a bit. For server-side processing please set the
User-Agent header. I recommend using your domain name, or if you don't
have one (which is odd), your app name: something like "myapp.com" or
"myapp". By using a domain name we'll be able to check out the site and
reach out to people if we suspect them of abuse. Spammers often don't
respond to questions from the services they abuse, and if someone is
using your user agent falsely you'll be able to say "That's not me,
I'm not on App Engine". For client-side processing like TweetGrid the
browser will send a User-Agent and referrer unless you're doing
something exceedingly odd, so you should be fine.


This change is mostly to combat an increasing amount of spam coming
from "cloud" services like EC2 and App Engine. At first we'll only be
applying this restriction to those IP addresses, but it may need to be
broadened as time goes on. If you're writing client software please
add a user agent in case we end up having to widen this in the future.
This seems like a better plan than the Media Temple fiasco we went
through last time we blocked a shared service for hosting spammers [1].


Thanks;
 – Matt Sanford / @mzsanford
 Twitter Dev

[1] - https://twitter.com/mzsanford/status/1924718435
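Matt's server-side advice boils down to one header on each request. A minimal sketch using only the Python standard library; "myapp.com" and the function name are illustrative placeholders, not anything Twitter prescribes:

```python
import urllib.parse
import urllib.request

# Hypothetical identifier -- substitute your own domain (preferred) or app name.
USER_AGENT = "myapp.com"

def build_search_request(query: str) -> urllib.request.Request:
    """Build a Search API request carrying an identifying User-Agent."""
    url = "http://search.twitter.com/search.json?q=" + urllib.parse.quote(query)
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENT})

req = build_search_request("twitter")
```

The request can then be passed to `urllib.request.urlopen(req)` as usual; the only change from a default request is the identifying header.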

On Jun 16, 2009, at 10:10 AM, funkatron wrote:



Totally understand the need. I asked for clearer criteria because in
message one, you state you'll require

"a valid HTTP Referrer" or "a meaningful and unique user agent"

I can probably define a valid HTTP Referrer as containing a URL that
exists, but a meaningful/unique user agent is somewhat in the eye of
the beholder. In the second message, you say you'll require

"a valid HTTP Referrer and/or a User Agent"

I'm not sure how to define a "valid" user agent. That's why I'd like
to see *your* definition for these things so we can meet your
criteria.

--
Ed Finkler
http://funkatron.com
Twitter:@funkatron
AIM: funka7ron
ICQ: 3922133
XMPP:funkat...@gmail.com





[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread funkatron

Totally understand the need. I asked for clearer criteria because in
message one, you state you'll require

"a valid HTTP Referrer" or "a meaningful and unique user agent"

I can probably define a valid HTTP Referrer as containing a URL that
exists, but a meaningful/unique user agent is somewhat in the eye of
the beholder. In the second message, you say you'll require

"a valid HTTP Referrer and/or a User Agent"

I'm not sure how to define a "valid" user agent. That's why I'd like
to see *your* definition for these things so we can meet your
criteria.

--
Ed Finkler
http://funkatron.com
Twitter:@funkatron
AIM: funka7ron
ICQ: 3922133
XMPP:funkat...@gmail.com


On Jun 16, 12:56 pm, Doug Williams  wrote:
> All we ask is that you include a valid HTTP Referrer and/or a User Agent
> with each request which is easy to do in almost every language. Both would
> be helpful but we only require one at this time. We simply want to be able
> to identify apps and have the ability to communicate with the authors.
>
> Thanks,
> Doug


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Chad Etzel

On Tue, Jun 16, 2009 at 1:05 PM, Stuart wrote:
>
> It's optional in the HTTP spec, but mandatory for the Twitter Search
> API. I don't see a problem with that.

Erm, for sites like TweetGrid, TweetChat, etc, which are all
browser-based client-side driven sites, the users' browser will make
the request.  In this case the HTTP Referrer can be (and often is)
unset.  The user-agent, however, is usually set for all browsers, but
sometimes people use plugins to mask or delete that, even. Just an
FYI that not all of us have control over this.

-Chad


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Stuart

It's optional in the HTTP spec, but mandatory for the Twitter Search
API. I don't see a problem with that.

Doug: Presumably the body of the 403 response will contain a suitable
descriptive error message in the usual format?

-Stuart

-- 
http://stut.net/projects/twitter

2009/6/16 Naveen Kohli :
> Why would you make decision based on "Referrer" which is an OPTIONAL header
> field in HTTP protocol? Making decision based on something that is
> "REQUIRED" may be more appropriate.
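Stuart's question about the 403 body suggests one practical precaution: surface that body to the caller so a blocked app can see why it was rejected. A sketch under stated assumptions -- `fetch_search` is a hypothetical helper, and the error-body format is exactly what the question above asks Twitter to confirm:

```python
import urllib.error
import urllib.request

def fetch_search(url: str, user_agent: str = "myapp.com") -> bytes:
    """Fetch a Search API URL, surfacing the 403 error body if the request is rejected."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read()
    except urllib.error.HTTPError as err:
        if err.code == 403:
            # Assumption: the 403 body carries a descriptive error message.
            body = err.read().decode("utf-8", "replace")
            raise RuntimeError("403 Forbidden from Search API: " + body)
        raise
```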


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Stuart

The logical thing would be to set the referrer to the domain name of
your application. If it doesn't have one I'd say use your Twitter user
URL (i.e. http://twitter.com/stut).

Most HTTP libs in most languages will set a default user agent, and
it's usually pretty easy to override it. I'd suggest appname/0.1 where
appname is something that identifies your app and is a valid user
agent - Google can help you there. I doubt the version number is
important to anyone but you.

-Stuart

-- 
http://stut.net/projects/twitter

2009/6/16 funkatron :
>
> Indeed, some clearer criteria would be most appreciated.
>
> --
> Ed Finkler
> http://funkatron.com
> Twitter:@funkatron
> AIM: funka7ron
> ICQ: 3922133
> XMPP:funkat...@gmail.com


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Naveen Kohli
Why would you make decision based on "Referrer" which is an OPTIONAL header
field in HTTP protocol? Making decision based on something that is
"REQUIRED" may be more appropriate.


On Tue, Jun 16, 2009 at 12:33 PM, Doug Williams  wrote:

> Hi all,
> The Search API will begin to require a valid HTTP Referrer, or at the very
> least, a meaningful and unique user agent with each request. Any request not
> including this information will be returned a 403 Forbidden response code by
> our web server.
>
> This change will be effective within the next few days, so please check
> your applications using the Search API and make any necessary code changes.
>
> Thanks,
> Doug
>



-- 
Naveen K Kohli
http://www.netomatix.com


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Justyn Howard
Thanks, pretty sure we do both. Will this new (or newly enforced) policy
help clean up some garbage?


On 6/16/09 11:56 AM, "Doug Williams"  wrote:

> All we ask is that you include a valid HTTP Referrer and/or a User Agent with
> each request which is easy to do in almost every language. Both would be
> helpful but we only require one at this time. We simply want to be able to
> identify apps and have the ability to communicate with the authors.
> 
> Thanks,
> Doug



[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread funkatron

Indeed, some clearer criteria would be most appreciated.

--
Ed Finkler
http://funkatron.com
Twitter:@funkatron
AIM: funka7ron
ICQ: 3922133
XMPP:funkat...@gmail.com


On Jun 16, 12:51 pm, Justyn Howard  wrote:
> Thanks Doug - Any additional info to help us know if we comply? My dev is
> out of the country on vacation and I want to make sure we don't miss anything.


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Doug Williams
All we ask is that you include a valid HTTP Referrer and/or a User Agent
with each request which is easy to do in almost every language. Both would
be helpful but we only require one at this time. We simply want to be able
to identify apps and have the ability to communicate with the authors.

Thanks,
Doug
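Setting both headers really is a one-liner in most HTTP libraries. A Python-stdlib sketch; the app name and domain are placeholders, and note that on the wire the header is spelled "Referer" (the HTTP spec's historical misspelling), not "Referrer":

```python
import urllib.request

def identified_request(url: str) -> urllib.request.Request:
    """Attach both identifying headers; the values here are placeholders."""
    return urllib.request.Request(url, headers={
        "User-Agent": "myapp/0.1",          # unique, meaningful app identifier
        "Referer": "http://myapp.example",  # HTTP spells the header "Referer"
    })
```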




On Tue, Jun 16, 2009 at 9:51 AM, Justyn Howard wrote:

>  Thanks Doug - Any additional info to help us know if we comply? My dev is
> out of the country on vacation and want to make sure we don’t miss anything.


[twitter-dev] Re: Search API to require HTTP Referrer and/or User Agent

2009-06-16 Thread Justyn Howard
Thanks Doug - Any additional info to help us know if we comply? My dev is
out of the country on vacation and I want to make sure we don't miss anything.


On 6/16/09 11:33 AM, "Doug Williams"  wrote:

> Hi all,
> The Search API will begin to require a valid HTTP Referrer, or at the very
> least, a meaningful and unique user agent with each request. Any request not
> including this information will be returned a 403 Forbidden response code by
> our web server.
> 
> This change will be effective within the next few days, so please check your
> applications using the Search API and make any necessary code changes.
> 
> Thanks,
> Doug
> 



[twitter-dev] Re: Search problems for from:username searches

2009-06-06 Thread Barry Hess
Unfortunately I'm not in the position to file a ticket with support for
every "spam" user on Twitter. You may want to consider a more thorough
algorithm for your spam filtering. For instance, I'm pretty sure public
radio assets are not taking part in spam activities.
We'll just need to code around the issue and provide a message to our users
to get in touch with you guys if they think this is in error.

Sorry for the recursive loop I posted before.  Apparently I don't understand
how to use email. :)

Thanks,
--
Barry Hess
http://bjhess.com
http://iridesco.com


On Fri, Jun 5, 2009 at 7:50 AM, bjhess  wrote:

> We have had some users complain about not being able to find
> themselves on http://followcost.com.  I've dug into the code and it
> appears the failure is happening on queries to the search API of the
> form "from:username".
>
> A couple example queries that return zero results:
>
>  http://search.twitter.com/search.json?q=from%3A1918
>  http://search.twitter.com/search.json?q=from%3Athecurrent
>
> Yet clearly these users are active, and legitimate, Twitter users:
>
>  http://twitter.com/1918
>  http://twitter.com/thecurrent
>
> But sadly, is it that these users are not being indexed at all in the
> search DB?  I get zero results doing a simple from:username search for
> the same users:
>
>  http://search.twitter.com/search?q=from%3A1918
>  http://search.twitter.com/search?q=from%3Athecurrent
>
> These are just a couple examples.  Is it common for legitimate,
> upstanding Twitter users to be unindexed in the search DB?
>
> --
> Barry Hess
> http://followcost.com
> http://bjhess.com
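The from:username queries quoted above only need the operator percent-encoded (":" becomes "%3A"). A small sketch of building them with the Python standard library; the function name is illustrative:

```python
import urllib.parse

SEARCH_BASE = "http://search.twitter.com/search.json"

def from_user_query(username: str) -> str:
    """Build a Search API URL for a from:username query, encoding the ':' operator."""
    return SEARCH_BASE + "?q=" + urllib.parse.quote("from:" + username)

# e.g. from_user_query("thecurrent")
# -> http://search.twitter.com/search.json?q=from%3Athecurrent
```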


[twitter-dev] Re: search is acting strange

2009-06-05 Thread Jonas

Matt,

It looks like this problem has diminished but not gone away.  When I
do several consecutive searches, the latest tweet is never more than a
few minutes out of sync.  I guess there will always be some small
amount of time that the twitter servers will be out of sync.  I'm just
wondering if there will be any more improvement, or if this is the
best it will be.

Thanks,
Jonas

On Jun 3, 1:20 pm, Matt Sanford  wrote:
> Hi there,
>
>      This is a known issue [1] we're working on. Some servers are  
> behind and we're trying to get them back up to date. Mark the Google  
> Code issue [1] with a star to get updates … no need to leave comments  
> in the ticket.
>
> Thanks;
>   – Matt Sanford / @mzsanford
>       Twitter Dev
>
> [1] - http://code.google.com/p/twitter-api/issues/detail?id=646


[twitter-dev] Re: search is acting strange

2009-06-05 Thread 0 3
Matt,

This is a critical issue for the app I am working on.  Do you have any
information as to when this might get resolved?

Thanks,
Jonas

On Wed, Jun 3, 2009 at 1:20 PM, Matt Sanford  wrote:

>
> Hi there,
>
>This is a known issue [1] we're working on. Some servers are behind and
> we're trying to get them back up to date. Mark the Google Code issue [1]
> with a star to get updates … no need to leave comments in the ticket.
>
> Thanks;
>  – Matt Sanford / @mzsanford
> Twitter Dev
>
> [1] - http://code.google.com/p/twitter-api/issues/detail?id=646
>


[twitter-dev] Re: Search problems for from:username searches

2009-06-05 Thread Abraham Williams
My experience interacting with http://help.twitter.com this year has been
that nothing happens for two months until the ticket auto-closes. Support is
hard to scale for 40 million accounts.

On Fri, Jun 5, 2009 at 10:33, Howard Siegel  wrote:

> Doug,
>
> I've been having a problem seeing my own tweets in search for quite a few
> months, and I know my tweets were not showing up in a hashtag search at a
> conference I was at a few weeks ago (which made it really hard to
> participate in the conference's twitter conversation!).  I did file a help
> ticket a while back and was basically put off by the response from support
> (essentially it said "too bad, so sad") and they closed the ticket on me.  I
> have not had the time nor patience to follow up on it, though, as I know
> that my tweets are getting out since people do respond to them.  Would be
> nice if my tweets showed in searches, though.
>
> - h


-- 
Abraham Williams | http://the.hackerconundrum.com
Hacker | http://abrah.am | http://twitter.com/abraham
Project | http://fireeagle.labs.poseurtech.com
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Madison, Wisconsin, United States


[twitter-dev] Re: Search problems for from:username searches

2009-06-05 Thread Howard Siegel
Doug,

I've been having a problem seeing my own tweets in search for quite a few
months, and I know my tweets were not showing up in a hashtag search at a
conference I was at a few weeks ago (which made it really hard to
participate in the conference's twitter conversation!).  I did file a help
ticket a while back and was basically put off by the response from support
(essentially it said "too bad, so sad") and they closed the ticket on me.  I
have not had the time or patience to follow up on it, though, as I know
that my tweets are getting out since people do respond to them.  Would be
nice if my tweets showed in searches, though.

- h

On Fri, Jun 5, 2009 at 08:20, Doug Williams  wrote:

> Please file a help ticket at http://help.twitter.com. @thecurrent's tweets
> almost always have links that point back to the same source. This is
> normally indicative of spam, which may explain why the account is no longer
> in search. The folks in support can help you take care of issues like these.
> Thanks,
> Doug
>
> On Fri, Jun 5, 2009 at 8:04 AM, Barry Hess  wrote:
>>
>>
>> http://groups.google.com/group/twitter-development-talk/browse_thread/thread/f3859409fb05127c
>> --
>> Barry Hess
>> http://bjhess.com
>> http://iridesco.com
>>
>>
>>
>> On Fri, Jun 5, 2009 at 7:50 AM, bjhess  wrote:
>>
>>> We have had some users complain about not being able to find
>>> themselves on http://followcost.com.  I've dug into the code and it
>>> appears the failure is happening on queries to the search API of the
>>> form "from:username".
>>>
>>> A couple example queries that return zero results:
>>>
>>>  http://search.twitter.com/search.json?q=from%3A1918
>>>  http://search.twitter.com/search.json?q=from%3Athecurrent
>>>
>>> Yet clearly these users are active, and legitimate, Twitter users:
>>>
>>>  http://twitter.com/1918
>>>  http://twitter.com/thecurrent
>>>
>>> But sadly, is it that these users are not being indexed at all in the
>>> search DB?  I get zero results doing a simple from:username search for
>>> the same users:
>>>
>>>  http://search.twitter.com/search?q=from%3A1918
>>>  http://search.twitter.com/search?q=from%3Athecurrent
>>>
>>> These are just a couple examples.  Is it common for legitimate,
>>> upstanding Twitter users to be unindexed in the search DB?
>>>
>>> --
>>> Barry Hess
>>> http://followcost.com
>>> http://bjhess.com
>>
>>
>>
>


[twitter-dev] Re: Search problems for from:username searches

2009-06-05 Thread Doug Williams
Please file a help ticket at http://help.twitter.com. @thecurrent's tweets
almost always have links that point back to the same source. This is
normally indicative of spam, which may explain why the account is no longer
in search. The folks in support can help you take care of issues like these.
Thanks,
Doug

On Fri, Jun 5, 2009 at 8:04 AM, Barry Hess  wrote:
>
>
> http://groups.google.com/group/twitter-development-talk/browse_thread/thread/f3859409fb05127c
> --
> Barry Hess
> http://bjhess.com
> http://iridesco.com
>
>
>
> On Fri, Jun 5, 2009 at 7:50 AM, bjhess  wrote:
>
>> We have had some users complain about not being able to find
>> themselves on http://followcost.com.  I've dug into the code and it
>> appears the failure is happening on queries to the search API of the
>> form "from:username".
>>
>> A couple example queries that return zero results:
>>
>>  http://search.twitter.com/search.json?q=from%3A1918
>>  http://search.twitter.com/search.json?q=from%3Athecurrent
>>
>> Yet clearly these users are active, and legitimate, Twitter users:
>>
>>  http://twitter.com/1918
>>  http://twitter.com/thecurrent
>>
>> But sadly, is it that these users are not being indexed at all in the
>> search DB?  I get zero results doing a simple from:username search for
>> the same users:
>>
>>  http://search.twitter.com/search?q=from%3A1918
>>  http://search.twitter.com/search?q=from%3Athecurrent
>>
>> These are just a couple examples.  Is it common for legitimate,
>> upstanding Twitter users to be unindexed in the search DB?
>>
>> --
>> Barry Hess
>> http://followcost.com
>> http://bjhess.com
>
>
>


[twitter-dev] Re: Search problems for from:username searches

2009-06-05 Thread Abraham Williams
O! I love recursion!

On Fri, Jun 5, 2009 at 10:04, Barry Hess  wrote:

>
> http://groups.google.com/group/twitter-development-talk/browse_thread/thread/f3859409fb05127c
> --
> Barry Hess
> http://bjhess.com
> http://iridesco.com
>
>
>
> On Fri, Jun 5, 2009 at 7:50 AM, bjhess  wrote:
>
>> We have had some users complain about not being able to find
>> themselves on http://followcost.com.  I've dug into the code and it
>> appears the failure is happening on queries to the search API of the
>> form "from:username".
>>
>> A couple example queries that return zero results:
>>
>>  http://search.twitter.com/search.json?q=from%3A1918
>>  http://search.twitter.com/search.json?q=from%3Athecurrent
>>
>> Yet clearly these users are active, and legitimate, Twitter users:
>>
>>  http://twitter.com/1918
>>  http://twitter.com/thecurrent
>>
>> But sadly, is it that these users are not being indexed at all in the
>> search DB?  I get zero results doing a simple from:username search for
>> the same users:
>>
>>  http://search.twitter.com/search?q=from%3A1918
>>  http://search.twitter.com/search?q=from%3Athecurrent
>>
>> These are just a couple examples.  Is it common for legitimate,
>> upstanding Twitter users to be unindexed in the search DB?
>>
>> --
>> Barry Hess
>> http://followcost.com
>> http://bjhess.com
>
>
>


-- 
Abraham Williams | http://the.hackerconundrum.com
Hacker | http://abrah.am | http://twitter.com/abraham
Project | http://fireeagle.labs.poseurtech.com
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Madison, Wisconsin, United States


[twitter-dev] Re: Search problems for from:username searches

2009-06-05 Thread Barry Hess
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/f3859409fb05127c
--
Barry Hess
http://bjhess.com
http://iridesco.com


On Fri, Jun 5, 2009 at 7:50 AM, bjhess  wrote:

> We have had some users complain about not being able to find
> themselves on http://followcost.com.  I've dug into the code and it
> appears the failure is happening on queries to the search API of the
> form "from:username".
>
> A couple example queries that return zero results:
>
>  http://search.twitter.com/search.json?q=from%3A1918
>  http://search.twitter.com/search.json?q=from%3Athecurrent
>
> Yet clearly these users are active, and legitimate, Twitter users:
>
>  http://twitter.com/1918
>  http://twitter.com/thecurrent
>
> But sadly, is it that these users are not being indexed at all in the
> search DB?  I get zero results doing a simple from:username search for
> the same users:
>
>  http://search.twitter.com/search?q=from%3A1918
>  http://search.twitter.com/search?q=from%3Athecurrent
>
> These are just a couple examples.  Is it common for legitimate,
> upstanding Twitter users to be unindexed in the search DB?
>
> --
> Barry Hess
> http://followcost.com
> http://bjhess.com


[twitter-dev] Re: Search API results return more info

2009-06-04 Thread Abraham Williams
This would probably be heavier for Twitter to add, since the to_user info is
already included in the status info but the profile_image_url would have to
be looked up to be added. It should be easy once the APIs merge.

You should open an issue in the Google Code project so Twitter can keep
track of the request.

On Mon, Jun 1, 2009 at 12:27, Coderanger  wrote:

>
> The search results already have "profile_image_url", which is the
> "from_user" profile image, but could the "to_user" have its profile
> image URL included as well? This would avoid me making multiple calls and
> hitting your server unnecessarily.
>
> It would clearly only be included/filled when there is a "to_user". I
> think the bandwidth overhead would be minimal, if not it could be
> added thru an extra parameter "&extended_user_data=1" or
> "&include_to_user_profile" so it could be selectively used if
> required.
>
> This addition enables me (http://coderanger.com/twitcher) or other
> apps to properly attribute the search results like a normal user
> status.
>



-- 
Abraham Williams | http://the.hackerconundrum.com
Hacker | http://abrah.am | http://twitter.com/abraham
Project | http://fireeagle.labs.poseurtech.com
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Madison, Wisconsin, United States


[twitter-dev] Re: search is acting strange

2009-06-03 Thread Matt Sanford


Hi there,

This is a known issue [1] we're working on. Some servers are  
behind and we're trying to get them back up to date. Mark the Google  
Code issue [1] with a star to get updates … no need to leave comments  
in the ticket.


Thanks;
 – Matt Sanford / @mzsanford
 Twitter Dev

[1] - http://code.google.com/p/twitter-api/issues/detail?id=646

On Jun 3, 2009, at 10:10 AM, Jonas wrote:



When I do: http://search.twitter.com/search.atom?q=blah

The most recent tweet is a couple of minutes old.  The next time I do
it the most recent is an hour old.  The next time a half hour old.
The next time a minute old, etc, etc.




[twitter-dev] Re: search is acting strange

2009-06-03 Thread Chad Etzel

There are many, many search servers which serve the results (around 30
last time I prodded around).  search.twitter.com is actually a
load balancer which redirects each request to one of these servers.
They may not all be synchronized with each other, which would
produce the results you are seeing.  If Matt/Doug want to pile on more
info on this topic, great... but I'll stop short here.
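One client-side way to cope with search servers that are slightly out of sync is to remember the highest status id already processed and discard anything at or below it, so a lagging server never re-delivers old tweets. This is a sketch under that assumption, not part of the API itself:

```python
def fresh_tweets(results, last_seen_id):
    """Return (new_tweets, updated_last_seen_id).

    results: list of dicts with an 'id' key, as parsed from one
    poll of search.json; last_seen_id: highest id already handled.
    Tweets from a lagging search server (id <= last_seen_id) are
    dropped, so out-of-order polls never produce duplicates.
    """
    new = [t for t in results if t["id"] > last_seen_id]
    if new:
        last_seen_id = max(t["id"] for t in new)
    return new, last_seen_id
```

Callers run this after every poll and persist `last_seen_id` between polls.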

-Chad

On Wed, Jun 3, 2009 at 1:10 PM, Jonas  wrote:
>
> When I do: http://search.twitter.com/search.atom?q=blah
>
> The most recent tweet is a couple of minutes old.  The next time I do
> it the most recent is an hour old.  The next time a half hour old.
> The next time a minute old, etc, etc.
>


[twitter-dev] Re: Search API

2009-06-01 Thread Matt Sanford


Hi there,

To get more results you'll need to paginate. We cannot offer an  
API that returns thousands (or millions) of results in one request.
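As an illustration, paging through results could look like the sketch below. The `page` and `rpp` parameter names come from the Search API discussion elsewhere in this archive; actually fetching each URL is left to the caller:

```python
from urllib.parse import urlencode

BASE = "http://search.twitter.com/search.json"

def page_urls(query, pages=15, rpp=100):
    """Build one URL per result page for a search query.

    The Search API caps what one request returns, so a client
    loops over pages (here up to 15 pages of 100 results)
    rather than asking for everything at once.
    """
    return [
        BASE + "?" + urlencode({"q": query, "page": p, "rpp": rpp})
        for p in range(1, pages + 1)
    ]
```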


Thanks;
 – Matt Sanford / @mzsanford
 Twitter Dev

On May 31, 2009, at 5:53 PM, Joseph wrote:



If I do a search the API, is there an easier way to get all output,
than doing multiple calls, each specifying a page number?




[twitter-dev] Re: Search Twitter (Java, C#) - Language Preferences?

2009-05-27 Thread Merrows



On May 26, 3:10 pm, Andrew Badera  wrote:
> The language you're using is going to be pretty much irrelevant to the
> performance of search.twitter.com. You're dealing with a loosely
> coupled architecture over an Internet WAN connection ... and nothing
> you do will change the base performance of search.twitter.com itself.
>
> The specific API you select could be an entirely different story, but
> with a RESTful API, most APIs are going to have a hard time making
> performance mistakes.
>
> If you have a lot of client-side processing, C# may be your best bet
> on a Windows x86 or x64 machine, with Java equal, or a close second.
> (Java's only faster on Java processors, and really only at scale.) Any
> interpreted languages are going to have a much harder time doing
> in-memory or I/O bound work with the same level of performance, if
> that's what you're after.
>
> Thanks-
> - Andy Badera
> - and...@badera.us
> - Google me:http://www.google.com/search?q=andrew+badera
> - This email is: [ ] bloggable [x] ask first [ ] private
>
>
>
> On Tue, May 26, 2009 at 8:32 AM, Merrows  wrote:
>
> > I have a system already written in C# and .NET which I started in
> > 2003. I have been happy with using c# and .NET as it has a good class
> > structure, and also Winforms works well for writing client-server
> > applications. Recently, I have seen much less interest in C# from
> > developers.
>
> > I want to integrate search results from twitter into the current
> > system and I am thinking of what languages to use.
>
> > I have googled what language to use, and the limits of JSON and ATOM
> > have placed some restrictions on what I can do. Especially, some
> > developers have complained about performance issues using C# and .NET
> > related to serialization of the data.
>
> > Does anyone have any experience of the Twitter APIs, and especially the
> > search? If so, are there machine performance issues, or issues
> > with finding open source code?

Actually, the efficiency concern arose from a blog post. Apparently the
blogger said many developers had complained about the slowness of C# code
when using the Twitter search API.



[twitter-dev] Re: Search Twitter (Java, C#) - Language Preferences?

2009-05-27 Thread Brendan O'Connor

On Tue, May 26, 2009 at 5:32 AM, Merrows  wrote:
>
> I have a system already written in C# and .NET which I started in
> 2003. I have been happy with using c# and .NET as it has a good class
> structure, and also Winforms works well for writing client-server
> applications. Recently, I have seen much less interest in C# from
> developers.
>
> I want to integrate search results from twitter into the current
> system and I am thinking of what languages to use.
>
> I have googled what language to use, and the limits of JSON and ATOM
> have placed some restrictions on what I can do. Especially, some
> developers have complained about performance issues using C# and .NET
> related to serialization of the data.

C or C++ will be faster, but those are pretty much the only mainstream
programming languages faster than C# and Java.  Unless your C# JSON or
XML/ATOM libraries are a bottleneck, which I doubt...

-- 
Brendan O'Connor - http://anyall.org


[twitter-dev] Re: Search API rpp parameter

2009-05-26 Thread Jim Whimpey

Genius Chad, problem solved, thank you!

On May 27, 10:45 am, Chad Etzel  wrote:
> My guess is that you have something like
>
> curl http://search.twitter.com/search.json?q=hello&rpp=50
>
> That '&' in there is sneaky and will be interpreted by your shell as a
> meta-character to background the process.
>
> Try wrapping the URL in quotes and see what happens:
>
> curl "http://search.twitter.com/search.json?q=hello&rpp=50"
>
> -Chad
>
>
>
> On Tue, May 26, 2009 at 8:39 PM, Matt Sanford  wrote:
>
> > Hi Jim,
>
> >    There is no known issue but if you can provide the curl command you're
> > using we might be able to help.
>
> > Thanks;
> >  – Matt Sanford / @mzsanford
> >     Twitter Dev
>
> > On May 26, 2009, at 5:31 PM, Jim Whimpey wrote:
>
> >> The API seems to be ignoring my rpp parameter. On the website I change
> >> it in the URL and the value is respected, I copy that exact same URL
> >> into a cURL call and the parameter is ignored, I'm returned 15
> >> results, no matter what the rpp value is set to.


[twitter-dev] Re: Search API rpp parameter

2009-05-26 Thread Jim Whimpey

curl -s -x http://gatekeeper:8080 
http://search.twitter.com/search.json?q=test&rpp=2

Returns 15 results.

On May 27, 10:39 am, Matt Sanford  wrote:
> Hi Jim,
>
>      There is no known issue but if you can provide the curl command  
> you're using we might be able to help.
>
> Thanks;
>   – Matt Sanford / @mzsanford
>       Twitter Dev
>
> On May 26, 2009, at 5:31 PM, Jim Whimpey wrote:
>
>
>
>
>
> > The API seems to be ignoring my rpp parameter. On the website I change
> > it in the URL and the value is respected, I copy that exact same URL
> > into a cURL call and the parameter is ignored, I'm returned 15
> > results, no matter what the rpp value is set to.


[twitter-dev] Re: Search API rpp parameter

2009-05-26 Thread Chad Etzel

My guess is that you have something like

curl http://search.twitter.com/search.json?q=hello&rpp=50

That '&' in there is sneaky and will be interpreted by your shell as a
meta-character to background the process.

Try wrapping the URL in quotes and see what happens:

curl "http://search.twitter.com/search.json?q=hello&rpp=50"
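Building the URL programmatically sidesteps shell quoting entirely. This sketch uses only Python's standard library (nothing Twitter-specific) to encode the same query:

```python
from urllib.parse import urlencode

def search_url(query, rpp=50):
    """Encode search parameters so '&' never reaches a shell.

    urlencode percent-escapes the query text and joins the
    parameters with '&' inside one string, so there are no
    exposed shell metacharacters to background a process.
    """
    return "http://search.twitter.com/search.json?" + urlencode(
        {"q": query, "rpp": rpp}
    )
```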

-Chad


On Tue, May 26, 2009 at 8:39 PM, Matt Sanford  wrote:
>
> Hi Jim,
>
>    There is no known issue but if you can provide the curl command you're
> using we might be able to help.
>
> Thanks;
>  – Matt Sanford / @mzsanford
>     Twitter Dev
>
> On May 26, 2009, at 5:31 PM, Jim Whimpey wrote:
>
>>
>> The API seems to be ignoring my rpp parameter. On the website I change
>> it in the URL and the value is respected, I copy that exact same URL
>> into a cURL call and the parameter is ignored, I'm returned 15
>> results, no matter what the rpp value is set to.
>
>


[twitter-dev] Re: Search API rpp parameter

2009-05-26 Thread Matt Sanford


Hi Jim,

There is no known issue but if you can provide the curl command  
you're using we might be able to help.


Thanks;
 – Matt Sanford / @mzsanford
 Twitter Dev

On May 26, 2009, at 5:31 PM, Jim Whimpey wrote:



The API seems to be ignoring my rpp parameter. On the website I change
it in the URL and the value is respected, I copy that exact same URL
into a cURL call and the parameter is ignored, I'm returned 15
results, no matter what the rpp value is set to.




[twitter-dev] Re: Search Twitter (Java, C#) - Language Preferences?

2009-05-26 Thread Andrew Badera

The language you're using is going to be pretty much irrelevant to the
performance of search.twitter.com. You're dealing with a loosely
coupled architecture over an Internet WAN connection ... and nothing
you do will change the base performance of search.twitter.com itself.

The specific API you select could be an entirely different story, but
with a RESTful API, most APIs are going to have a hard time making
performance mistakes.

If you have a lot of client-side processing, C# may be your best bet
on a Windows x86 or x64 machine, with Java equal, or a close second.
(Java's only faster on Java processors, and really only at scale.) Any
interpreted languages are going to have a much harder time doing
in-memory or I/O bound work with the same level of performance, if
that's what you're after.

Thanks-
- Andy Badera
- and...@badera.us
- Google me: http://www.google.com/search?q=andrew+badera
- This email is: [ ] bloggable [x] ask first [ ] private



On Tue, May 26, 2009 at 8:32 AM, Merrows  wrote:
>
> I have a system already written in C# and .NET which I started in
> 2003. I have been happy with using c# and .NET as it has a good class
> structure, and also Winforms works well for writing client-server
> applications. Recently, I have seen much less interest in C# from
> developers.
>
> I want to integrate search results from twitter into the current
> system and I am thinking of what languages to use.
>
> I have googled what language to use, and the limits of JSON and ATOM
> have placed some restrictions on what I can do. Especially, some
> developers have complained about performance issues using C# and .NET
> related to serialization of the data.
>
> Does anyone have any experience of the Twitter APIs, and especially the
> search? If so, are there machine performance issues, or issues
> with finding open source code?
>


[twitter-dev] Re: Search Twitter (Java, C#) - Language Preferences?

2009-05-26 Thread Pavlo Zahozhenko
I've integrated a huge ASP.NET (C#) system with Twitter and had no problems
with performance or open-source tools. For an open-source C# Twitter API
library, I recommend Twitterizer. It is quite easy to get started with and
very flexible.
As for performance, it is just fine. Performance bottleneck is Twitter API
itself, which is sometimes slow, but that doesn't depend on your programming
language.

I haven't used the search API though, so I cannot comment on it.

On Tue, May 26, 2009 at 3:32 PM, Merrows  wrote:

>
> I have a system already written in C# and .NET which I started in
> 2003. I have been happy with using c# and .NET as it has a good class
> structure, and also Winforms works well for writing client-server
> applications. Recently, I have seen much less interest in C# from
> developers.
>
> I want to integrate search results from twitter into the current
> system and I am thinking of what languages to use.
>
> I have googled what language to use, and the limits of JSON and ATOM
> have placed some restrictions on what I can do. Especially, some
> developers have complained about performance issues using C# and .NET
> related to serialization of the data.
>
> Does anyone have any experience of the Twitter APIs, and especially the
> search? If so, are there machine performance issues, or issues
> with finding open source code?
>


[twitter-dev] Re: Search: Resolution of Since, and howto avoid pulling redundant search results

2009-05-22 Thread Jeffrey Greenberg
I've got a working solution as far as pulling in tweets, doing pretty much as
I said, except that it will fail when there is a burst of tweets.  For some
very active search term, say something that exceeds the 1,500-result search
limit (15 pages x 100 tweets/page) per day, tweets will be missed.  For my
application, the odds are that missing a small quantity of tweets isn't
earth shattering, but there's a *chance* it could be.  I think of this as a
Twitter shortcoming.  Wondering if it's worth filing a low-priority bug
for it?



On Fri, May 22, 2009 at 1:24 PM, Doug Williams  wrote:

> As the docs [1] state, the correct format is since:YYYY-MM-DD, which gives you
> resolution down to a day.  Any further processing must be done on the client
> side. Given the constraints, utilizing a combination of since: and since_id
> sounds like a great solution.
> 1. http://search.twitter.com/operators
>
> Thanks,
> Doug
> --
>
> Doug Williams
> Twitter Platform Support
> http://twitter.com/dougw
>
>
>
>
>
> On Fri, May 22, 2009 at 8:05 AM, Jeffrey Greenberg <
> jeffreygreenb...@gmail.com> wrote:
>
>> What is the resolution of the 'since' operator?  It appears to be by the
>> day, but I'd sure like it to be by the minute or second.
>> Can't seem to find this in the docs.
>>
>> The use case is that I want to minimize pulling search results that I've
>> already got.   My solution is to record the time of the last search and the
>> last status_id, and ask for subsequent searches from the status_id. If that
>> fails because it's out of range, I'll ask by the last search date.  Is this
>> the way to go?
>>
>>
>> http://www.tweettronics.com
>> http://www.jeffrey-greenberg.com
>>
>>
>


[twitter-dev] Re: Search: Resolution of Since, and howto avoid pulling redundant search results

2009-05-22 Thread Doug Williams
As the docs [1] state, the correct format is since:YYYY-MM-DD, which gives you
resolution down to a day.  Any further processing must be done on the client
side. Given the constraints, utilizing a combination of since: and since_id
sounds like a great solution.
1. http://search.twitter.com/operators
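A sketch of that suggestion: prefer since_id when one has been recorded, and fall back to the coarser day-resolution since: operator otherwise. The parameter and operator names follow the operators page cited above; the rest is illustrative:

```python
def search_params(query, last_status_id=None, last_search_date=None):
    """Combine since_id with the day-resolution since: operator.

    Prefer since_id (an exact cutoff).  If none is recorded yet,
    fall back to appending since:YYYY-MM-DD to the query text,
    accepting that day resolution may re-fetch some tweets.
    """
    params = {"q": query}
    if last_status_id is not None:
        params["since_id"] = last_status_id
    elif last_search_date is not None:
        params["q"] = "%s since:%s" % (query, last_search_date)
    return params
```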

Thanks,
Doug
--

Doug Williams
Twitter Platform Support
http://twitter.com/dougw




On Fri, May 22, 2009 at 8:05 AM, Jeffrey Greenberg <
jeffreygreenb...@gmail.com> wrote:

> What is the resolution of the 'since' operator?  It appears to be by the
> day, but I'd sure like it to be by the minute or second.
> Can't seem to find this in the docs.
>
> The use case is that I want to minimize pulling search results that I've
> already got.   My solution is to record the time of the last search and the
> last status_id, and ask for subsequent searches from the status_id. If that
> fails because it's out of range, I'll ask by the last search date.  Is this
> the way to go?
>
>
> http://www.tweettronics.com
> http://www.jeffrey-greenberg.com
>
>


[twitter-dev] Re: Search not returning all updates

2009-05-02 Thread Abraham Williams
The user might be flagged as spam. Those accounts don't show up as results
in search.

On Sat, May 2, 2009 at 15:37, Andy  wrote:

>
> I was missing some results in my API search, so I tried it on the
> twitter web site (http://search.twitter.com) and I'm having the same
> problems. I cannot find tweets from certain users, but can find from
> others. For example, an update about the Hamptons and another about
> Rome at http://twitter.com/drenert
> I tried various searches:
>
> Wondering what the Hamptons
> Hamptons
> #Rome
> #localyte
> localyte
>
> No luck with any. I can see his profile, so I know it's not a private
> account. Can anyone help me or tell me what I'm missing?
>
> Thanks!
> Andy
>



-- 
Abraham Williams | http://the.hackerconundrum.com
Hacker | http://abrah.am | http://twitter.com/abraham
Web608 | Community Evangelist | http://web608.org
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Milwaukee, WI, United States


[twitter-dev] Re: Search if a user profile exists based on his email id / name

2009-04-27 Thread Cameron Kaiser

> > > Is it possible to know if a user (profile) exists based on email id .
> >
> > No.
>
> There is an API for search; can't we use it to check whether a user exists
> or not? I think Twitter gives you one. I would like to use it on my
> website as a widget where I can search for a user based on email ID or
> username and then proceed to his Twitter page.

The Search API does not allow searching for users by e-mail. As for the
username, you can simply query the username and see if it exists.
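One way to act on that advice is to fetch the user's profile page and map the HTTP status code to existence. The interpretation below (200 means the account exists, 404 means it does not) is an assumption of this sketch, and performing the actual request is left to the caller:

```python
def user_exists(status_code):
    """Interpret the HTTP status of GET twitter.com/<username>.

    200 -> account exists; 404 -> no such account;
    anything else (rate limiting, server error) is inconclusive,
    so None is returned rather than guessing.
    """
    if status_code == 200:
        return True
    if status_code == 404:
        return False
    return None
```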

-- 
 personal: http://www.cameronkaiser.com/ --
  Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
-- The idea is to die young as late as possible. -- Ashley Montagu 


[twitter-dev] Re: Search if a user profile exists based on his email id / name

2009-04-27 Thread king

There is an API for search; can't we use it to check whether a user exists
or not? I think Twitter gives you one. I would like to use it on my
website as a widget where I can search for a user based on email ID or
username and then proceed to his Twitter page.

Thank you

On Apr 25, 7:10 pm, Cameron Kaiser  wrote:
> > Is it possible to know if a user (profile) exists based on email id .
>
> No.
>
> --
>  personal: http://www.cameronkaiser.com/ --
>   Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
> -- DON'T PANIC! ---


[twitter-dev] Re: Search if a user profile exists based on his email id / name

2009-04-25 Thread Cameron Kaiser

> Is it possible to know if a user (profile) exists based on email id .

No.

-- 
 personal: http://www.cameronkaiser.com/ --
  Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
-- DON'T PANIC! ---


[twitter-dev] Re: search API issue : "source:" doesn't work in some case

2009-04-23 Thread Yusuke

Thanks for the super quick response!

I'll fit my testcase then.

Cheers,
Yusuke

On 4月24日, 午前1:01, Matt Sanford  wrote:
> Hi Yusuke,
>
>  Unfortunately the "source:" operator as it is currently  
> implemented has a few shortcomings. One is that it requires a query,  
> and the second is that it can only search the last 7 days. This is a  
> known performance issue and we're still looking for a way we can  
> remove the restriction. I'll talk to Doug about updating the docs.
>
> Thanks;
>- Matt Sanford / @mzsanford
>Twitter API Developer
>
> On Apr 23, 2009, at 08:55 AM, Yusuke wrote:
>
>
>
> > Hi,
>
> > Today I noticed that my Twitter4J automated testcase for the search
> > API started to fail.
>
> > query: "thisisarondomstringforatestcase" returns 1 tweet.
> >http://search.twitter.com/search?q=thisisarondomstringforatestcase
>
> > But the query "source:web thisisarondomstringforatestcase" returns 0
> > tweets even though the above tweet was posted via web.
> >http://search.twitter.com/search?q=source%3Aweb+thisisarondomstringfo...
> > It used to return one single tweet.
>
> > Is there any problem with the search API?
>
> > Best regards,
> > Yusuke


[twitter-dev] Re: search API issue : "source:" doesn't work in some case

2009-04-23 Thread Matt Sanford

Hi Yusuke,

Unfortunately the "source:" operator as it is currently  
implemented has a few shortcomings. One is that it requires a query,  
and the second is that it can only search the last 7 days. This is a  
known performance issue and we're still looking for a way we can  
remove the restriction. I'll talk to Doug about updating the docs.


Thanks;
  – Matt Sanford / @mzsanford
  Twitter API Developer



On Apr 23, 2009, at 08:55 AM, Yusuke wrote:



Hi,

Today I noticed that my Twitter4J automated testcase for the search
API started to fail.

query: "thisisarondomstringforatestcase" returns 1 tweet.
http://search.twitter.com/search?q=thisisarondomstringforatestcase

But the query "source:web thisisarondomstringforatestcase" returns 0
tweets even though the above tweet was posted via web.
http://search.twitter.com/search?q=source%3Aweb+thisisarondomstringforatestcase
It used to return one single tweet.

Is there any problem with the search API?

Best regards,
Yusuke




[twitter-dev] Re: Search API returns HTTP 406 Not Acceptable

2009-04-21 Thread Ho John Lee

Never mind. I figured out the problem: I had switched queries to
".xml" instead of ".json", and XML isn't one of the formats offered by the
search API.
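A small guard like this avoids the mistake up front. The supported-format list reflects what this thread reports (JSON and Atom work, XML gets 406 Not Acceptable) and is an assumption of the sketch:

```python
SEARCH_FORMATS = {"json", "atom"}  # formats this thread reports as served

def search_endpoint(fmt):
    """Return a search URL for a supported response format.

    Raises ValueError locally for anything else, instead of
    letting the server answer 406 Not Acceptable.
    """
    if fmt not in SEARCH_FORMATS:
        raise ValueError("Search API does not serve .%s" % fmt)
    return "http://search.twitter.com/search.%s" % fmt
```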

On Apr 21, 12:43 pm, hjl  wrote:
> I'm doing some testing this morning with the search API, which was
> working for a while but now is returning HTTP 406 Not Acceptable. Is
> this a symptom of the search API rate limiting? I ran a few queries
> with curl by hand, then ran a loop to see how far back the results
> pages go.
>
> The search API docs says rate limited requests should see 503 Service
> Unavailable, was wondering if it changed.
>
> I'll try it again in an hour or so and see if the search API starts
> responding again. But would still like to know if the response code
> for search rate limiting has changed.


[twitter-dev] Re: Search friends timeline

2009-04-21 Thread Doug Williams
Integrating search into your friends_timeline is something we want to do in
the future. With the separation of the Search and REST APIs, it isn't a
trivial feature. For now, you have to filter results out of timelines
client-side.

Doug Williams
Twitter API Support
http://twitter.com/dougw


On Tue, Apr 21, 2009 at 9:44 AM, mikejablonski  wrote:

>
> That was my plan for now. It just makes it harder to get the next X
> friend status messages that have "XYZ" in them. I'm surprised this
> isn't a more requested feature. Thanks!
>
> On Apr 21, 8:28 am, Chad Etzel  wrote:
> > You can't.
> >
> > Just get the friends timeline and filter it client-side.  You'll have
> > more granular control over the filtering that way anyway.
> >
> > -Chad
> >
> > On Tue, Apr 21, 2009 at 11:16 AM, mikejablonski 
> wrote:
> >
> > > I've looked at the docs and searched the group, but I can't find any
> > > way to search your friends timeline. How can I get a filtered set of
> > > friend status messages based on a query? Is this possible? I know I
> > > could use the search API and throw away all my non-friends, but that
> > > won't work well for a lot of reasons. Thanks!
>


[twitter-dev] Re: Search friends timeline

2009-04-21 Thread mikejablonski

That was my plan for now. It just makes it harder to get the next X
friend status messages that have "XYZ" in them. I'm surprised this
isn't a more requested feature. Thanks!

On Apr 21, 8:28 am, Chad Etzel  wrote:
> You can't.
>
> Just get the friends timeline and filter it client-side.  You'll have
> more granular control over the filtering that way anyway.
>
> -Chad
>
> On Tue, Apr 21, 2009 at 11:16 AM, mikejablonski  wrote:
>
> > I've looked at the docs and searched the group, but I can't find any
> > way to search your friends timeline. How can I get a filtered set of
> > friend status messages based on a query? Is this possible? I know I
> > could use the search API and throw away all my non-friends, but that
> > won't work well for a lot of reasons. Thanks!


[twitter-dev] Re: Search friends timeline

2009-04-21 Thread Chad Etzel

You can't.

Just get the friends timeline and filter it client-side.  You'll have
more granular control over the filtering that way anyway.
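A minimal client-side filter over a fetched timeline might look like this. The status dicts' "text" key matches the JSON the REST API returns; everything else here is illustrative:

```python
def filter_timeline(statuses, term):
    """Keep only statuses whose text contains term, case-insensitively.

    statuses: list of status dicts already fetched from
    friends_timeline.  Filtering locally gives full control
    over matching (case rules, regexes, multiple terms).
    """
    needle = term.lower()
    return [s for s in statuses if needle in s["text"].lower()]
```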

-Chad

On Tue, Apr 21, 2009 at 11:16 AM, mikejablonski  wrote:
>
> I've looked at the docs and searched the group, but I can't find any
> way to search your friends timeline. How can I get a filtered set of
> friend status messages based on a query? Is this possible? I know I
> could use the search API and throw away all my non-friends, but that
> won't work well for a lot of reasons. Thanks!
>
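
Chad's suggestion above (fetch the friends timeline, then filter client-side) can be sketched roughly as follows. This assumes statuses are dicts with a "text" key, as in the JSON timeline format; the case-insensitive substring match is an illustrative choice, not anything the API mandates:

```python
def filter_statuses(statuses, query):
    """Client-side filter over an already-fetched friends timeline.

    `statuses` is assumed to be a list of dicts with a "text" key, as
    decoded from the JSON timeline response. The match is a simple
    case-insensitive substring test; swap in whatever predicate you need,
    which is exactly the "more granular control" Chad mentions.
    """
    q = query.lower()
    return [s for s in statuses if q in s.get("text", "").lower()]
```

Because the filtering happens after the fetch, you still pay for retrieving the full timeline; the win is that any predicate (regex, multiple terms, per-user rules) works without API support.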


[twitter-dev] Re: Search API Rate Limited even with OAuth

2009-04-20 Thread Doug Williams
Please see our article on rate limiting [1]. You will learn why the Search
API does not have a notion of authentication and how its rate limiting
differs from the REST API.

1. http://apiwiki.twitter.com/Rate-limiting

Thanks,
Doug Williams
Twitter API Support
http://twitter.com/dougw


On Mon, Apr 20, 2009 at 3:14 PM, Ammo Collector wrote:

>
> Hello,
>
> We're getting 503 rate limit responses from Search API even when
> passing in OAuth tokens.  The same tokens used on friends/followers/
> statuses go through fine so we know the tokens are good.  It appears
> we're getting IP limited even with OAuth...
>
> Klout.net
>
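
Since the Search API rate-limits by IP and answers over-limit clients with a 503, a client can back off and retry instead of hammering the endpoint. A minimal sketch of that policy; the `fetch`/`sleep` callables and the doubling delay schedule are assumptions for illustration, not documented Twitter behavior:

```python
def fetch_with_backoff(fetch, sleep, max_tries=5, base_delay=2.0):
    """Retry `fetch` with exponential backoff while the server answers 503.

    `fetch` is any callable returning (status_code, body); `sleep` is
    injected (normally time.sleep) so the policy is easy to test. The
    delay schedule (2s, 4s, 8s, ...) is an illustrative assumption.
    """
    for attempt in range(max_tries):
        status, body = fetch()
        if status != 503:
            return status, body
        # Rate-limited: wait longer each time before retrying.
        sleep(base_delay * (2 ** attempt))
    return status, body
```

In production you would also honor a Retry-After header if the response carries one, rather than relying purely on the fixed schedule.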



[twitter-dev] Re: Search by in_reply_to_status_id

2009-04-18 Thread lordofthelake

Thanks for the link.

On Apr 18, 7:40 pm, Abraham Williams <4bra...@gmail.com> wrote:
> http://code.google.com/p/twitter-api/issues/detail?id=142
>
>
>
> On Sat, Apr 18, 2009 at 12:32, lordofthelake  wrote:
>
> > Hello.
> > I started a project whose goal is to allow users to track the reaction
> > of the crowd to their posts. This includes showing all the replies and
> > retweets born as reaction to the original message, organizing the data
> > in a threaded schema. While finding retweets of a particular message
> > is fairly easy using the Search API (Query: "RT @user  > the message>"), finding and filtering all the replies can become a non-
> > trivial work quite fast.
>
> > While tracking the replies given directly to you isn't particularly
> > hard, though not very efficient (find posts directed to you via search
> > API -- "to:user since_id:" -- and then filter by
> > in_reply_to_status_id), it becomes a nightmare when you want to track
> > what your followers' friends have answered to the replies you got from
> > your own followers.
>
> > Example of conversation:
> > Me: any idea about how to track the whole conversation originated from
> > this tweet?
> > MyFollower: @Me try posting in the twitter dev talk, maybe they can
> > help you
> > AFollowerOf_MyFollower: @MyFollower I know for sure those guys are
> > very supportive
>
> > Tracking MyFollower's response is not a big deal, even if the "first
> > fetch them all, then select those you need" may not be the most
> > efficient to implement for large volumes of tweets -- think to the
> > power-users with thousands, if not millions, of followers -- since
> > above certain limits, API usage caps (especially about number of
> > tweets that can be retrieved at once) start becoming an issue.
>
> > The real problem comes when you want to show in the threaded
> > conversation AFollowerOf_MyFollower's tweet, too. Sure thing, you can
> > use the same strategy as above (Search "to:MyFollower", fetch all,
> > filter by in_reply_to_status_id), but now instead of having to do a
> > single query (to:Me) to retrieve the replies to your posts, you have
> > to perform a fetching and filtering cycle for every person who took
> > part to the conversation: the growth is exponential.
>
> > A solution may be to allow searches by in_reply_to_status_id
> > (something like "reply:")... this would greatly lower the
> > cost of looking for replies to your posts. Would it be possible to
> > have such a feature exposed in future? Are there other, more efficient
> > solutions, anybody can suggest to solve my problem efficiently?
>
> > Thank you for the support. I apologize for the long post and my bad
> > English, but I'm not a native English speaker and I tried to expose my
> > problem as clearly as I could.
> > -- Michele
>
> --
> Abraham Williams |http://the.hackerconundrum.com
> Hacker |http://abrah.am|http://twitter.com/abraham
> Web608 | Community Evangelist |http://web608.org
> This email is: [ ] blogable [x] ask first [ ] private.
> Sent from Madison, Wisconsin, United States


[twitter-dev] Re: Search by in_reply_to_status_id

2009-04-18 Thread Abraham Williams
http://code.google.com/p/twitter-api/issues/detail?id=142

On Sat, Apr 18, 2009 at 12:32, lordofthelake  wrote:

>
> Hello.
> I started a project whose goal is to allow users to track the reaction
> of the crowd to their posts. This includes showing all the replies and
> retweets born as reaction to the original message, organizing the data
> in a threaded schema. While finding retweets of a particular message
> is fairly easy using the Search API (Query: "RT @user  the message>"), finding and filtering all the replies can become a non-
> trivial work quite fast.
>
> While tracking the replies given directly to you isn't particularly
> hard, though not very efficient (find posts directed to you via search
> API -- "to:user since_id:" -- and then filter by
> in_reply_to_status_id), it becomes a nightmare when you want to track
> what your followers' friends have answered to the replies you got from
> your own followers.
>
> Example of conversation:
> Me: any idea about how to track the whole conversation originated from
> this tweet?
> MyFollower: @Me try posting in the twitter dev talk, maybe they can
> help you
> AFollowerOf_MyFollower: @MyFollower I know for sure those guys are
> very supportive
>
> Tracking MyFollower's response is not a big deal, even if the "first
> fetch them all, then select those you need" may not be the most
> efficient to implement for large volumes of tweets -- think to the
> power-users with thousands, if not millions, of followers -- since
> above certain limits, API usage caps (especially about number of
> tweets that can be retrieved at once) start becoming an issue.
>
> The real problem comes when you want to show in the threaded
> conversation AFollowerOf_MyFollower's tweet, too. Sure thing, you can
> use the same strategy as above (Search "to:MyFollower", fetch all,
> filter by in_reply_to_status_id), but now instead of having to do a
> single query (to:Me) to retrieve the replies to your posts, you have
> to perform a fetching and filtering cycle for every person who took
> part to the conversation: the growth is exponential.
>
> A solution may be to allow searches by in_reply_to_status_id
> (something like "reply:")... this would greatly lower the
> cost of looking for replies to your posts. Would it be possible to
> have such a feature exposed in future? Are there other, more efficient
> solutions, anybody can suggest to solve my problem efficiently?
>
> Thank you for the support. I apologize for the long post and my bad
> English, but I'm not a native English speaker and I tried to expose my
> problem as clearly as I could.
> -- Michele
>



-- 
Abraham Williams | http://the.hackerconundrum.com
Hacker | http://abrah.am | http://twitter.com/abraham
Web608 | Community Evangelist | http://web608.org
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Madison, Wisconsin, United States


[twitter-dev] Re: Search API throwing 404's

2009-04-17 Thread dean....@googlemail.com

Hi,

I've experienced a few 404's on search.json this morning.

Sometimes it works, sometimes it doesn't; I can't seem to pinpoint any
particular pattern to when it happens.

--
Leu

On Apr 17, 5:11 am, Chad Etzel  wrote:
> Just a quick update:
>
> The problem has popped up again. Doug is aware of this problem, and he
> says the servers are all stretched pretty thin (understandable).  Just
> curious if anyone else is seeing this as well?
>
> -Chad
>
> On Thu, Apr 16, 2009 at 11:30 PM, Chad Etzel  wrote:
> > Ok, dunno what was happening... I gave my server a swift kick with my
> > steel-toed boot and all seems well again... weird.
> > -Chad
>
> > On Thu, Apr 16, 2009 at 10:27 PM, Doug Williams  wrote:
> >> I just sent 200 queries through without seeing the 404. Are you still 
> >> seeing
> >> this?
>
> >> Doug Williams
> >> Twitter API Support
> >>http://twitter.com/dougw
>
> >> On Thu, Apr 16, 2009 at 6:32 PM, Chad Etzel  wrote:
>
> >>> Search is throwing 404's for search.json about every 7 or 8 requests...
>
> >>> 
> >>> 
> >>> 404 Not Found
> >>> 
> >>> Not Found
> >>> The requested URL /search.json was not found on this server.
> >>> 
>
> >>> Also got a "Forbidden" return when trying to connect to
> >>>http://search.twitter.com/about 10 minutes ago.
>
> >>> -Chad


[twitter-dev] Re: Search API throwing 404's

2009-04-16 Thread Chad Etzel

Just a quick update:

The problem has popped up again. Doug is aware of this problem, and he
says the servers are all stretched pretty thin (understandable).  Just
curious if anyone else is seeing this as well?

-Chad

On Thu, Apr 16, 2009 at 11:30 PM, Chad Etzel  wrote:
> Ok, dunno what was happening... I gave my server a swift kick with my
> steel-toed boot and all seems well again... weird.
> -Chad
>
> On Thu, Apr 16, 2009 at 10:27 PM, Doug Williams  wrote:
>> I just sent 200 queries through without seeing the 404. Are you still seeing
>> this?
>>
>> Doug Williams
>> Twitter API Support
>> http://twitter.com/dougw
>>
>>
>> On Thu, Apr 16, 2009 at 6:32 PM, Chad Etzel  wrote:
>>>
>>> Search is throwing 404's for search.json about every 7 or 8 requests...
>>>
>>> 
>>> 
>>> 404 Not Found
>>> 
>>> Not Found
>>> The requested URL /search.json was not found on this server.
>>> 
>>>
>>> Also got a "Forbidden" return when trying to connect to
>>> http://search.twitter.com/ about 10 minutes ago.
>>>
>>> -Chad
>>
>>
>


[twitter-dev] Re: Search API throwing 404's

2009-04-16 Thread Chad Etzel

Ok, dunno what was happening... I gave my server a swift kick with my
steel-toed boot and all seems well again... weird.
-Chad

On Thu, Apr 16, 2009 at 10:27 PM, Doug Williams  wrote:
> I just sent 200 queries through without seeing the 404. Are you still seeing
> this?
>
> Doug Williams
> Twitter API Support
> http://twitter.com/dougw
>
>
> On Thu, Apr 16, 2009 at 6:32 PM, Chad Etzel  wrote:
>>
>> Search is throwing 404's for search.json about every 7 or 8 requests...
>>
>> 
>> 
>> 404 Not Found
>> 
>> Not Found
>> The requested URL /search.json was not found on this server.
>> 
>>
>> Also got a "Forbidden" return when trying to connect to
>> http://search.twitter.com/ about 10 minutes ago.
>>
>> -Chad
>
>


[twitter-dev] Re: Search result pagination bugs

2009-04-16 Thread Chad Etzel

I can't speak for Twitter on the "permission to do that" side, but
that technique will work just fine, so you should be good to go
technically.
-chad

On Thu, Apr 16, 2009 at 9:34 PM, stevenic  wrote:
>
> Matt...  Another thought I just had...
>
> As Chad points out, with my particular query being high volume its
> realistic to think that I'm always going to risk seeing duplicates if
> I try to query for results in real time due to replication lag between
> your servers.  But I see how your using max_id in the paging stuff and
> I don't really need real time results so it seems like I should be
> able to use an ID that's 30 - 60 minutes old and do all of my queries
> using max_id instead of since_id.  In theory this would have me
> trailing the edge of new results coming into the index by 30 - 60
> minutes but it would give the servers more time to replicate so it
> seems like there'd be less of a chance I'd encounter dupes or missing
> entries.
>
> If that approach would work (and you would know) I'd just want to make
> sure you'd be ok with me using max_id instead of since_id given that
> max_id isn't documented
>
> -steve
>
> On Apr 16, 7:58 am, Matt Sanford  wrote:
>> Hi all,
>>
>>     There was a problem yesterday with several of the search back-ends
>> falling behind. This meant that if your page=1 and page=2 queries hit
>> different hosts they could return results that don't line up. If your
>> page=2 query hit a host with more lag you would miss results, and if
>> it hit a host that was more up-to-date you would see duplicates. We're
>> working on fixing this issues and trying to find a way to prevent
>> incorrect pagination in the future. Sorry for the delay in replying
>> but I was focusing all of my attention on fixing the issue and had to
>> let email wait.
>>
>> Thanks;
>>    — Matt Sanford / @mzsanford
>>
>> On Apr 15, 2009, at 09:29 PM, stevenic wrote:
>>
>>
>>
>>
>>
>> > Ok... So I think I know what's going on.  Well I don't know what's
>> > causing the bug obviously but I think I've narrowed down where it
>> > is...
>>
>> > I just issued the Page 1 or "previous" query for the above example and
>> > the ID's don't match the ID's from the original query.  There are
>> > extra rows that come back... 3 to be exact.  So the pagination queries
>> > are working fine.  It's the initial query that's busted.  It looks
>> > like that when you do a pagenation query you get back all rows
>> > matching the filter but a query without max_id sometimes drops rows.
>> > Well in my case it seems to drop rows everytime... This should get
>> > fixed...
>>
>> > *
>> > for:  http://search.twitter.com/search.atom?max_id=1530963910&page=1&q=http
>>
>> > http://base.google.com/ns/1.0"; xml:lang="en-US"
>> > xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/"; xmlns="http://
>> >www.w3.org/2005/Atom" xmlns:twitter="http://api.twitter.com/";>
>> >  
>> >  adjusted since_id, it was older than allowed> > twitter:warning>
>> >  2009-04-16T03:25:30Z
>> >  15
>> >  en
>> >  
>>
>> >   ...Removed...
>>
>> > 
>> >  tag:search.twitter.com,2005:1530963910
>> >  2009-04-16T03:25:30Z
>> > 
>> > 
>> >  tag:search.twitter.com,2005:1530963908
>> >  2009-04-16T03:25:32Z
>>
>> >  ...Where Did This Come From?...
>>
>> > 
>> > 
>> >  tag:search.twitter.com,2005:1530963898
>> >  2009-04-16T03:25:30Z
>>
>> >  ...And This?...
>>
>> > 
>> >  tag:search.twitter.com,2005:1530963896
>> >  tag:search.twitter.com,2005:1530963895
>> >  tag:search.twitter.com,2005:1530963894
>> > 
>> >  tag:search.twitter.com,2005:1530963892
>> >  2009-04-16T03:25:32Z
>>
>> >  ...And This?...
>>
>> > 
>> >  tag:search.twitter.com,2005:1530963881
>> >  tag:search.twitter.com,2005:1530963865
>> >  tag:search.twitter.com,2005:1530963860
>> >  tag:search.twitter.com,2005:1530963834
>> >  tag:search.twitter.com,2005:1530963833
>> >  tag:search.twitter.com,2005:1530963829
>> >  tag:search.twitter.com,2005:1530963827
>> >  tag:search.twitter.com,2005:1530963812
>


[twitter-dev] Re: Search result pagination bugs

2009-04-16 Thread stevenic

Matt...  Another thought I just had...

As Chad points out, with my particular query being high volume it's
realistic to think that I'm always going to risk seeing duplicates if
I try to query for results in real time due to replication lag between
your servers.  But I see how you're using max_id in the paging stuff and
I don't really need real-time results, so it seems like I should be
able to use an ID that's 30 - 60 minutes old and do all of my queries
using max_id instead of since_id.  In theory this would have me
trailing the edge of new results coming into the index by 30 - 60
minutes, but it would give the servers more time to replicate, so it
seems like there'd be less of a chance I'd encounter dupes or missing
entries.

If that approach would work (and you would know), I'd just want to make
sure you'd be ok with me using max_id instead of since_id, given that
max_id isn't documented.

-steve
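
The trailing-window idea above (pin max_id to an ID that is 30-60 minutes old and page with max_id instead of since_id) can be simulated offline. This sketch models the index as a newest-first list of IDs; `search_page` and its signature are hypothetical stand-ins for the real API, used only to show why a pinned max_id keeps pages stable while new tweets arrive:

```python
RPP = 5  # results per page in this simulation

def search_page(corpus, max_id, page, rpp=RPP):
    """Simulated Search API call: `corpus` is tweet IDs, newest first.
    Pinning max_id fixes the result window, so later pages don't shift
    as new tweets are prepended to the index.
    """
    window = [i for i in corpus if i <= max_id]
    start = (page - 1) * rpp
    return window[start:start + rpp]

def collect(corpus, max_id, pages, rpp=RPP):
    """Walk `pages` pages backwards from the pinned max_id."""
    out = []
    for page in range(1, pages + 1):
        out.extend(search_page(corpus, max_id, page, rpp))
    return out
```

With since_id-based paging, new arrivals push older tweets onto later pages between requests, which is one source of the dupes discussed in this thread; a pinned max_id sidesteps that (though, as Matt notes, replication lag between back-ends can still cause discrepancies).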

On Apr 16, 7:58 am, Matt Sanford  wrote:
> Hi all,
>
>     There was a problem yesterday with several of the search back-ends  
> falling behind. This meant that if your page=1 and page=2 queries hit  
> different hosts they could return results that don't line up. If your  
> page=2 query hit a host with more lag you would miss results, and if  
> it hit a host that was more up-to-date you would see duplicates. We're  
> working on fixing this issues and trying to find a way to prevent  
> incorrect pagination in the future. Sorry for the delay in replying  
> but I was focusing all of my attention on fixing the issue and had to  
> let email wait.
>
> Thanks;
>    — Matt Sanford / @mzsanford
>
> On Apr 15, 2009, at 09:29 PM, stevenic wrote:
>
>
>
>
>
> > Ok... So I think I know what's going on.  Well I don't know what's
> > causing the bug obviously but I think I've narrowed down where it
> > is...
>
> > I just issued the Page 1 or "previous" query for the above example and
> > the ID's don't match the ID's from the original query.  There are
> > extra rows that come back... 3 to be exact.  So the pagination queries
> > are working fine.  It's the initial query that's busted.  It looks
> > like that when you do a pagenation query you get back all rows
> > matching the filter but a query without max_id sometimes drops rows.
> > Well in my case it seems to drop rows everytime... This should get
> > fixed...
>
> > *
> > for:  http://search.twitter.com/search.atom?max_id=1530963910&page=1&q=http
>
> > http://base.google.com/ns/1.0"; xml:lang="en-US"
> > xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/"; xmlns="http://
> >www.w3.org/2005/Atom" xmlns:twitter="http://api.twitter.com/";>
> >  
> >  adjusted since_id, it was older than allowed > twitter:warning>
> >  2009-04-16T03:25:30Z
> >  15
> >  en
> >  
>
> >   ...Removed...
>
> > 
> >  tag:search.twitter.com,2005:1530963910
> >  2009-04-16T03:25:30Z
> > 
> > 
> >  tag:search.twitter.com,2005:1530963908
> >  2009-04-16T03:25:32Z
>
> >  ...Where Did This Come From?...
>
> > 
> > 
> >  tag:search.twitter.com,2005:1530963898
> >  2009-04-16T03:25:30Z
>
> >  ...And This?...
>
> > 
> >  tag:search.twitter.com,2005:1530963896
> >  tag:search.twitter.com,2005:1530963895
> >  tag:search.twitter.com,2005:1530963894
> > 
> >  tag:search.twitter.com,2005:1530963892
> >  2009-04-16T03:25:32Z
>
> >  ...And This?...
>
> > 
> >  tag:search.twitter.com,2005:1530963881
> >  tag:search.twitter.com,2005:1530963865
> >  tag:search.twitter.com,2005:1530963860
> >  tag:search.twitter.com,2005:1530963834
> >  tag:search.twitter.com,2005:1530963833
> >  tag:search.twitter.com,2005:1530963829
> >  tag:search.twitter.com,2005:1530963827
> >  tag:search.twitter.com,2005:1530963812


[twitter-dev] Re: Search API throwing 404's

2009-04-16 Thread Doug Williams
I just sent 200 queries through without seeing the 404. Are you still seeing
this?

Doug Williams
Twitter API Support
http://twitter.com/dougw


On Thu, Apr 16, 2009 at 6:32 PM, Chad Etzel  wrote:

>
> Search is throwing 404's for search.json about every 7 or 8 requests...
>
> 
> 
> 404 Not Found
> 
> Not Found
> The requested URL /search.json was not found on this server.
> 
>
> Also got a "Forbidden" return when trying to connect to
> http://search.twitter.com/ about 10 minutes ago.
>
> -Chad
>


[twitter-dev] Re: Search result pagination bugs

2009-04-16 Thread stevenic

So my project is a sort of tweetmeme or twitturly type thing where I'm
looking to collect a sample of the links being shared through
Twitter.  Unlike those projects I don't have a firehose so I have to
rely on search.  Fortunately, I don't really need to see every link for
my project, just a representative sample.

The actual query I'm using is "http OR www filter:links" where the
"filter:links" constraint helps make sure I exclude tweets like "can't
get http GET to work"  I don't really care about those.

Agreed this is a high-volume query, so maybe it'll never
be in sync, but that's ok... Now I'm just ignoring the dupes.  And to
be clear, I have no intention of trying to keep up and use search as a
poor man's firehose.  Whatever rate you guys are comfortable with me
hitting you at is what I'll do.  If that's one request/minute, so be
it.  Just wanted to get the pagination working so that I could better
control things, and that's when I noticed the dupes.

-steve
(Microsoft Research)
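
"Ignoring the dupes" amounts to carrying a set of already-seen IDs across requests and dropping anything in it. A minimal sketch; the dict-with-"id" shape mirrors the JSON search results, and the helper name is illustrative:

```python
def dedupe(results, seen_ids):
    """Drop tweets already seen on earlier pages or earlier polls.

    `seen_ids` is a set carried across requests and updated in place,
    so overlapping pages (as described in this thread) yield each
    tweet exactly once.
    """
    fresh = []
    for tweet in results:
        if tweet["id"] not in seen_ids:
            seen_ids.add(tweet["id"])
            fresh.append(tweet)
    return fresh
```

For a long-running poller the set should be bounded (e.g. evict IDs older than the current since_id floor) so memory doesn't grow without limit.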


[twitter-dev] Re: Search result pagination bugs

2009-04-16 Thread Chad Etzel

The query "http filter:links" (which is a bit redundant) is such a
high volume query that I would doubt that the search servers would
ever be able to keep in sync even when things were running up to
speed.

Try with a less traffic'd query like "twitter"

-Chad

On Thu, Apr 16, 2009 at 6:55 PM, stevenic  wrote:
>
> Thanks for the reply Matt...
>
> Just as an FYI...
>
> I updated my code to track duplicates and then did a sample run over a
> 5 minute period that once a minute paged in new results for the query
> "http filter:links"  This resulted in about 11 pages of results each
> minute and over the 11 pages I saw anywhere from 60 - 150 duplicates
> so it's not just 3 or 4.  My concern isn't really around the extra
> updates it's the fact that sometimes updates are missing.
>
> Anyway... It sounds like you guys are working on it and I just thought
> I'd share that data point with you.
>
> -steve
>


[twitter-dev] Re: Search result pagination bugs

2009-04-16 Thread stevenic

Thanks for the reply Matt...

Just as an FYI...

I updated my code to track duplicates and then did a sample run over a
5 minute period that once a minute paged in new results for the query
"http filter:links"  This resulted in about 11 pages of results each
minute and over the 11 pages I saw anywhere from 60 - 150 duplicates
so it's not just 3 or 4.  My concern isn't really around the extra
updates it's the fact that sometimes updates are missing.

Anyway... It sounds like you guys are working on it and I just thought
I'd share that data point with you.

-steve


[twitter-dev] Re: Search result pagination bugs

2009-04-16 Thread Matt Sanford

Hi all,

   There was a problem yesterday with several of the search back-ends  
falling behind. This meant that if your page=1 and page=2 queries hit  
different hosts they could return results that don't line up. If your  
page=2 query hit a host with more lag you would miss results, and if  
it hit a host that was more up-to-date you would see duplicates. We're  
working on fixing these issues and trying to find a way to prevent  
incorrect pagination in the future. Sorry for the delay in replying  
but I was focusing all of my attention on fixing the issue and had to  
let email wait.


Thanks;
  — Matt Sanford / @mzsanford

On Apr 15, 2009, at 09:29 PM, stevenic wrote:



Ok... So I think I know what's going on.  Well I don't know what's
causing the bug obviously but I think I've narrowed down where it
is...

I just issued the Page 1 or "previous" query for the above example and
the ID's don't match the ID's from the original query.  There are
extra rows that come back... 3 to be exact.  So the pagination queries
are working fine.  It's the initial query that's busted.  It looks
like that when you do a pagenation query you get back all rows
matching the filter but a query without max_id sometimes drops rows.
Well in my case it seems to drop rows everytime... This should get
fixed...


*
for:  http://search.twitter.com/search.atom?max_id=1530963910&page=1&q=http

http://base.google.com/ns/1.0"; xml:lang="en-US"
xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/"; xmlns="http://
www.w3.org/2005/Atom" xmlns:twitter="http://api.twitter.com/";>
 
 adjusted since_id, it was older than allowed
 2009-04-16T03:25:30Z
 15
 en
 

  ...Removed...


 tag:search.twitter.com,2005:1530963910
 2009-04-16T03:25:30Z


 tag:search.twitter.com,2005:1530963908
 2009-04-16T03:25:32Z

 ...Where Did This Come From?...



 tag:search.twitter.com,2005:1530963898
 2009-04-16T03:25:30Z

 ...And This?...


 tag:search.twitter.com,2005:1530963896
 tag:search.twitter.com,2005:1530963895
 tag:search.twitter.com,2005:1530963894

 tag:search.twitter.com,2005:1530963892
 2009-04-16T03:25:32Z

 ...And This?...


 tag:search.twitter.com,2005:1530963881
 tag:search.twitter.com,2005:1530963865
 tag:search.twitter.com,2005:1530963860
 tag:search.twitter.com,2005:1530963834
 tag:search.twitter.com,2005:1530963833
 tag:search.twitter.com,2005:1530963829
 tag:search.twitter.com,2005:1530963827
 tag:search.twitter.com,2005:1530963812






[twitter-dev] Re: Search result pagination bugs

2009-04-15 Thread stevenic

Ok... So I think I know what's going on.  Well I don't know what's
causing the bug obviously but I think I've narrowed down where it
is...

I just issued the Page 1 or "previous" query for the above example and
the ID's don't match the ID's from the original query.  There are
extra rows that come back... 3 to be exact.  So the pagination queries
are working fine.  It's the initial query that's busted.  It looks
like when you do a pagination query you get back all rows
matching the filter, but a query without max_id sometimes drops rows.
Well in my case it seems to drop rows every time... This should get
fixed...


*
for:  http://search.twitter.com/search.atom?max_id=1530963910&page=1&q=http

http://base.google.com/ns/1.0"; xml:lang="en-US"
xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/"; xmlns="http://
www.w3.org/2005/Atom" xmlns:twitter="http://api.twitter.com/";>
  
  adjusted since_id, it was older than allowed
  2009-04-16T03:25:30Z
  15
  en
  

   ...Removed...


  tag:search.twitter.com,2005:1530963910
  2009-04-16T03:25:30Z


  tag:search.twitter.com,2005:1530963908
  2009-04-16T03:25:32Z

  ...Where Did This Come From?...



  tag:search.twitter.com,2005:1530963898
  2009-04-16T03:25:30Z

  ...And This?...


  tag:search.twitter.com,2005:1530963896
  tag:search.twitter.com,2005:1530963895
  tag:search.twitter.com,2005:1530963894

  tag:search.twitter.com,2005:1530963892
  2009-04-16T03:25:32Z

  ...And This?...


  tag:search.twitter.com,2005:1530963881
  tag:search.twitter.com,2005:1530963865
  tag:search.twitter.com,2005:1530963860
  tag:search.twitter.com,2005:1530963834
  tag:search.twitter.com,2005:1530963833
  tag:search.twitter.com,2005:1530963829
  tag:search.twitter.com,2005:1530963827
  tag:search.twitter.com,2005:1530963812




[twitter-dev] Re: Search result pagination bugs

2009-04-15 Thread stevenic

Sure...  It repros for me every time in IE using the steps I outlined
above.  Do a query for "lang=en&q=http".  Open the "next" link in a
new tab of your browser and compare the ID's.

So I just did this from my home PC and here's the condensed output.
Notice that on Page 2 not only do I get 3 dupes but I even get a
result that should have been on Page 1... I hadn't seen that one
before but I'll assume that maybe a different server serviced each
request and they're not synced.


*
for: http://search.twitter.com/search.atom?lang=en&q=http


http://base.google.com/ns/1.0"; xml:lang="en-US"
xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/"; xmlns="http://
www.w3.org/2005/Atom" xmlns:twitter="http://api.twitter.com/";>
  
  adjusted since_id, it was older than allowed
  2009-04-16T03:25:30Z
  15
  en
  

  ...removed...


  tag:search.twitter.com,2005:1530963910
  2009-04-16T03:25:30Z

  ...removed...



  tag:search.twitter.com,2005:1530963896

  tag:search.twitter.com,2005:1530963895
  tag:search.twitter.com,2005:1530963894
  tag:search.twitter.com,2005:1530963881
  tag:search.twitter.com,2005:1530963865
  tag:search.twitter.com,2005:1530963860
  tag:search.twitter.com,2005:1530963834
  tag:search.twitter.com,2005:1530963833
  tag:search.twitter.com,2005:1530963829
  tag:search.twitter.com,2005:1530963827
  tag:search.twitter.com,2005:1530963812
  tag:search.twitter.com,2005:1530963811
  tag:search.twitter.com,2005:1530963796
  tag:search.twitter.com,2005:1530963786



*
for:  http://search.twitter.com/search.atom?max_id=1530963910&page=2&q=http

http://base.google.com/ns/1.0"; xml:lang="en-US"
xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/"; xmlns="http://
www.w3.org/2005/Atom" xmlns:twitter="http://api.twitter.com/";>
  
  2009-04-16T03:25:31Z
  15
  en
  
  

   ...Removed...


  tag:search.twitter.com,2005:1530963811
  2009-04-16T03:25:31Z

   ...Duplicate 1...



  tag:search.twitter.com,2005:1530963803
  2009-04-16T03:25:29Z
  en

   ...Not Even In Previous Page...



  tag:search.twitter.com,2005:1530963796
  2009-04-16T03:25:29Z

   ...Duplicate 2...



  tag:search.twitter.com,2005:1530963786
  2009-04-16T03:25:31Z

   ...Duplicate 3...



  tag:search.twitter.com,2005:1530963777

   ...First New Result (save the one above)...


  tag:search.twitter.com,2005:1530963755
  tag:search.twitter.com,2005:1530963732
  tag:search.twitter.com,2005:1530963725
  tag:search.twitter.com,2005:1530963718
  tag:search.twitter.com,2005:1530963710
  tag:search.twitter.com,2005:1530963709
  tag:search.twitter.com,2005:1530963706
  tag:search.twitter.com,2005:1530963699
  tag:search.twitter.com,2005:1530963698
  tag:search.twitter.com,2005:1530963690



[twitter-dev] Re: Search result pagination bugs

2009-04-15 Thread Chad Etzel

It would be helpful if you could give some example output/results
where you are seeing duplicates across pages.  I have spent a long
long time with the Search API and haven't ever had this problem (or
maybe I have and never noticed it).

-Chad

On Wed, Apr 15, 2009 at 9:07 PM, steve  wrote:
>
> I've been using the Search API in a project and it's been working very
> reliably.  So today I decided to add support for pagination so I could
> pull in more results and I think I've identified a couple of bugs with
> the pagination code.
>
> Bug 1)
>
> The first few results of Page 2 for a query are sometimes duplicates.
> To verify this do the following:
>
>   1. Execute the query: 
> http://search.twitter.com/search.atom?lang=en&q=http&rpp=100
>   2. Grab the "next" link from the results and execute that.
>   3. Compare the ID's at the end of set one with the ID's at the
> beginning of set 2.  They sometimes overlap.
>
>
> Bug 2)
>
> The second bug may be the cause of the 1st bug.  The link you get for
> "next" in a result set is missing the "lang=en" query param.  So you
> end up getting non-english items in your result set.  You can manually
> add the "lang=en" param to your query and while you still get dupes
> you get less.  If you do this though you then start getting a warning
> in the result set about an adjusted since_id.
>
> What's scarier though is that the result set seemed to get weird on me
> if I added the "lang" param and requested pages too fast.  By that I
> mean I would sometimes get results for Page 2 that were (time wise)
> hours before my original Since ID so my code would just stop
> requesting pages since it assumed it had reached the end of the set.
> The scary part... Adding around a 2 seconds sleep between queries
> seemed to make this issue go away...
>
>
> In general the pagination stuff with the "next" link doesn't seem very
> reliable to me.  You do seem to get less dupes then just calling
> search and incrementing the page number.  But I'm still seeing dupes,
> results for the wrong language, and sometimes totally weird results.
>
> -steve
>
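
Until the "next"-link bug is fixed, a client can re-append the dropped lang parameter itself. A sketch of that workaround (illustrative only; it leaves URLs that already carry lang untouched):

```python
from urllib.parse import urlsplit, parse_qs, urlencode, urlunsplit

def fix_next_link(next_url, lang="en"):
    """Re-append the lang parameter that the Search API's "next" link
    drops (Bug 2 above). If lang is already present, the URL is
    returned unchanged apart from normal re-encoding.
    """
    parts = urlsplit(next_url)
    params = parse_qs(parts.query)
    params.setdefault("lang", [lang])
    # parse_qs returns lists; keep the first value of each parameter.
    query = urlencode({k: v[0] for k, v in params.items()})
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))
```

This only papers over the missing parameter; it does not address the duplicate or out-of-order results, which need the server-side fix.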


[twitter-dev] Re: Search queries not working

2009-04-13 Thread Alex Payne

Yes. Queries are limited to 140 characters.

Basha Shaik wrote:

Hi,

Is there any Length Limit in the query I pass in search API?

Regards,

Mahaboob Basha Shaik
www.netelixir.com 
Making Search Work


On Sat, Apr 4, 2009 at 10:27 AM, Basha Shaik wrote:


Hi Chad,
No duplicates are there with this.
Thank You

Regards,

Mahaboob Basha Shaik
www.netelixir.com 
Making Search Work


On Sat, Apr 4, 2009 at 7:29 AM, Basha Shaik wrote:

Hi chad,

Thank you. I was trying a query which has only 55 tweets and I had kept
100 as rpp, so I was not getting next_page. When I decreased rpp to 20
and tried, I got it now. Thank you very much. I will check if any
duplicates occur with these and let you know.


Regards,

Mahaboob Basha Shaik
www.netelixir.com 
Making Search Work


On Sat, Apr 4, 2009 at 7:06 AM, Chad Etzel wrote:

next_page




--
Alex Payne - API Lead, Twitter, Inc.
http://twitter.com/al3x



[twitter-dev] Re: Search queries not working

2009-04-12 Thread Basha Shaik
Hi,

Is there any Length Limit in the query I pass in search API?

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 10:27 AM, Basha Shaik wrote:

> Hi Chad,
> No duplicates are there with this.
> Thank You
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Sat, Apr 4, 2009 at 7:29 AM, Basha Shaik wrote:
>
>> Hi chad,
>>
>> Thank you. I was trying for a query which has only 55 tweets and i have
>> kept 100 as rpp . so i was not getting next_page. when i decreased rpp to 20
>> and tried i got now. thank you very much. i Will check if any Duplicates
>> occur with these and let you know.
>>
>> Regards,
>>
>> Mahaboob Basha Shaik
>> www.netelixir.com
>> Making Search Work
>>
>>
>> On Sat, Apr 4, 2009 at 7:06 AM, Chad Etzel  wrote:
>>
>>> next_page
>>>
>>
>>
>


[twitter-dev] Re: search by link

2009-04-10 Thread Doug Williams
Search only by source is not supported.

Doug Williams
Twitter API Support
http://twitter.com/dougw


On Fri, Apr 10, 2009 at 10:38 AM, joop23  wrote:

>
> I was hoping to find a way to search for source through the search api
> without having to pass in some text.  Just source through api.
>
> On Apr 9, 11:48 am, Chad Etzel  wrote:
> > It should be noted that you can't just search for a source alone, you
> > must pass in some sort of query with it.  So you can't really get all
> > tweets from a particular source...
> >
> > One interesting way to use the "source" data handed back by the search
> > API is to gauge "market share" for certain keywords/phrases.  I
> > created a tool here to do this:
> >
> > http://tweetgrid.com/sources
> >
> > it's interesting to search for different people (e.g. from:user) to
> > see what clients they are frequently using...
> >
> > -Chad
> >
> > On Thu, Apr 9, 2009 at 2:37 PM, Doug Williams  wrote:
> > > The search "twitter source:tweetdeck" [1] will return any tweet with
> > > 'twitter' from the source with parameter 'tweetdeck'. Add your
> appropriate
> > > format to the URL and you're good to go!
> >
> > > 1. http://search.twitter.com/search?q=twitter+source%3Atweetdeck
> >
> > > Doug Williams
> > > Twitter API Support
> > > http://twitter.com/dougw
> >
> > > On Thu, Apr 9, 2009 at 11:22 AM, joop23  wrote:
> >
> > >> Hello,
> >
> > >> Is there a way to search by link on the status message?  For instance,
> > >> I'd like to pull all statuses submitted by TweetDeck application.
> >
> > >> thank you
>


[twitter-dev] Re: search by link

2009-04-10 Thread joop23

I was hoping to find a way to search by source through the search API
without having to pass in any text: just source alone, through the API.

On Apr 9, 11:48 am, Chad Etzel  wrote:
> It should be noted that you can't just search for a source alone, you
> must pass in some sort of query with it.  So you can't really get all
> tweets from a particular source...
>
> One interesting way to use the "source" data handed back by the search
> API is to gauge "market share" for certain keywords/phrases.  I
> created a tool here to do this:
>
> http://tweetgrid.com/sources
>
> it's interesting to search for different people (e.g. from:user) to
> see what clients they are frequently using...
>
> -Chad
>
> On Thu, Apr 9, 2009 at 2:37 PM, Doug Williams  wrote:
> > The search "twitter source:tweetdeck" [1] will return any tweet with
> > 'twitter' from the source with parameter 'tweetdeck'. Add your appropriate
> > format to the URL and you're good to go!
>
> > 1. http://search.twitter.com/search?q=twitter+source%3Atweetdeck
>
> > Doug Williams
> > Twitter API Support
> > http://twitter.com/dougw
>
> > On Thu, Apr 9, 2009 at 11:22 AM, joop23  wrote:
>
> >> Hello,
>
> >> Is there a way to search by link on the status message?  For instance,
> >> I'd like to pull all statuses submitted by TweetDeck application.
>
> >> thank you


[twitter-dev] Re: search by link

2009-04-10 Thread Carlos Crosetti
Squeak Smalltalk Twitter Client at

http://code.google.com/p/twitter-client/


[twitter-dev] Re: search by link

2009-04-09 Thread Chad Etzel

It should be noted that you can't just search for a source alone, you
must pass in some sort of query with it.  So you can't really get all
tweets from a particular source...

One interesting way to use the "source" data handed back by the search
API is to gauge "market share" for certain keywords/phrases.  I
created a tool here to do this:

http://tweetgrid.com/sources

it's interesting to search for different people (e.g. from:user) to
see what clients they are frequently using...

-Chad

On Thu, Apr 9, 2009 at 2:37 PM, Doug Williams  wrote:
> The search "twitter source:tweetdeck" [1] will return any tweet with
> 'twitter' from the source with parameter 'tweetdeck'. Add your appropriate
> format to the URL and you're good to go!
>
> 1. http://search.twitter.com/search?q=twitter+source%3Atweetdeck
>
>
> Doug Williams
> Twitter API Support
> http://twitter.com/dougw
>
>
> On Thu, Apr 9, 2009 at 11:22 AM, joop23  wrote:
>>
>> Hello,
>>
>> Is there a way to search by link on the status message?  For instance,
>> I'd like to pull all statuses submitted by TweetDeck application.
>>
>> thank you
>
>


[twitter-dev] Re: search by link

2009-04-09 Thread Doug Williams
The search "twitter source:tweetdeck" [1] will return any tweet with
'twitter' from the source with parameter 'tweetdeck'. Add your appropriate
format to the URL and you're good to go!

1. http://search.twitter.com/search?q=twitter+source%3Atweetdeck
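
For illustration, composing such a query programmatically might look like the sketch below. The helper name is mine, not part of any library, and remember (per the thread) that a source: filter needs some query text alongside it.

```javascript
// Sketch: build a search URL combining query text with a source: filter.
function searchUrl(text, source) {
  const q = encodeURIComponent(text + " source:" + source);
  return "http://search.twitter.com/search.json?q=" + q;
}
```

Note this encodes the space as %20 rather than the + used in the example URL above; both are standard query-string encodings.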


Doug Williams
Twitter API Support
http://twitter.com/dougw


On Thu, Apr 9, 2009 at 11:22 AM, joop23  wrote:

>
> Hello,
>
> Is there a way to search by link on the status message?  For instance,
> I'd like to pull all statuses submitted by TweetDeck application.
>
> thank you
>


[twitter-dev] Re: search by link

2009-04-09 Thread Abraham Williams
http://search.twitter.com/operators

On Thu, Apr 9, 2009 at 13:22, joop23  wrote:

>
> Hello,
>
> Is there a way to search by link on the status message?  For instance,
> I'd like to pull all statuses submitted by TweetDeck application.
>
> thank you
>



-- 
Abraham Williams | Hacker | http://abrah.am
@poseurtech | http://the.hackerconundrum.com
Web608 | Community Evangelist | http://web608.org
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Madison, Wisconsin, United States


[twitter-dev] Re: Search API Refresh Rate

2009-04-08 Thread peterhough

Perfect, thanks Matt

On Apr 8, 5:27 pm, Matt Sanford  wrote:
> Hi Pete,
>
>      Every 5 seconds is well below the rate limit and seems like a  
> good rate for reasonably quick responses. It sounds like you're doing  
> the same query each time so that should be fine.
>
>      For people doing requests based on many different queries I  
> recommend that they query less often for searches that have no results  
> than for those that do. By using a back-off you can keep up to date on  
> queries that are hot but not waste cycles requesting queries that very  
> rarely change. Check out the way we do it on search.twitter.com at
> http://search.twitter.com/javascripts/search/refresher.js
>
> Thanks;
>    — Matt Sanford / @mzsanford
>
> On Apr 8, 2009, at 02:30 AM, peterhough wrote:
>
>
>
> > Hello!
>
> > I'm developing an application which needs to constantly request a
> > search API result. I'm pushing through a since_id to try to help
> > minimise the load on the servers. My question is, what is the optimum
> > time limit to loop the API requests? My application will need to act
> > upon the result of the search pretty much instantly.
>
> > I currently have the script requesting a search API result every 5
> > seconds. Will this hammer your servers too much?
>
> > Do you know the average time third party clients reload tweets? Are
> > there any guidelines for this? As this would have a factor in when my
> > applications actions are seen and so the need to request a search
> > result refresh
>
> > Thanks,
> > Pete


[twitter-dev] Re: Search API Refresh Rate

2009-04-08 Thread Matt Sanford


Hi Pete,

Every 5 seconds is well below the rate limit and seems like a  
good rate for reasonably quick responses. It sounds like you're doing  
the same query each time so that should be fine.


For people doing requests based on many different queries I  
recommend that they query less often for searches that have no results  
than for those that do. By using a back-off you can keep up to date on  
queries that are hot but not waste cycles requesting queries that very  
rarely change. Check out the way we do it on search.twitter.com at http://search.twitter.com/javascripts/search/refresher.js
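
The back-off idea can be sketched as a pure function over the polling interval. This is a minimal illustration only; the interval values and function name are mine, not Twitter's actual numbers.

```javascript
// Sketch of per-query back-off: reset to the minimum interval when a
// poll returns results, otherwise double the interval up to a cap.
// Intervals are in milliseconds.
function nextInterval(current, gotResults, min = 5000, max = 300000) {
  if (gotResults) return min;
  return Math.min(current * 2, max);
}
```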


Thanks;
  — Matt Sanford / @mzsanford

On Apr 8, 2009, at 02:30 AM, peterhough wrote:



Hello!

I'm developing an application which needs to constantly request a
search API result. I'm pushing through a since_id to try to help
minimise the load on the servers. My question is, what is the optimum
time limit to loop the API requests? My application will need to act
upon the result of the search pretty much instantly.

I currently have the script requesting a search API result every 5
seconds. Will this hammer your servers too much?

Do you know the average time third-party clients take to reload tweets?
Are there any guidelines for this? This would be a factor in when my
application's actions are seen, and thus in how often I need to request
a search result refresh.

Thanks,
Pete




[twitter-dev] Re: Search API json return

2009-04-06 Thread Matt Sanford

Hi Lakshman,

The search operators did not change but they have always  
supported both the mentions and replies queries. You can search for  
@username for users referenced anywhere, and to:username for just at  
the beginning (the old replies behavior).
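
As a sketch, the two query forms differ only in the operator; "username" below is a placeholder and the variable names are mine.

```javascript
// "@username" finds references anywhere in the tweet; "to:username"
// only matches tweets that begin with the @reference.
const base = "http://search.twitter.com/search.json?q=";
const mentions = base + encodeURIComponent("@username");
const replies = base + encodeURIComponent("to:username");
```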


Thanks;
  — Matt Sanford / @mzsanford

On Apr 6, 2009, at 01:52 AM, Lakshman Prasad wrote:


Hi,

If a tweet has multiple @replies, then, does the search API return  
multiple to_user fields or is it just one.


Does it return a to_user field at all when the @reference is not at
the beginning?


In other words, is there a change in the Search API too similar to  
the recent changes to REST API for replies.


Thanks.

--
Regards,
Lakshman
becomingguru.com
uswaretech.com
lakshmanprasad.com




[twitter-dev] Re: Search api - rpp not working

2009-04-05 Thread Abraham Williams
I see 10 results.

Abraham

On Sun, Apr 5, 2009 at 14:48, Matt  wrote:

>
> Is it just me or is the rpp call not being processed correctly? Here
> is a link from the api documentation:
>
> http://search.twitter.com/search.atom?q=+the+%23sxsw&rpp=10
>



-- 
Abraham Williams | Hacker | http://abrah.am
@poseurtech | http://the.hackerconundrum.com
Web608 | Community Evangelist | http://web608.org
This email is: [ ] blogable [x] ask first [ ] private.
Sent from Chicago, Illinois, United States


[twitter-dev] Re: Search API, Multiple Hashtags

2009-04-05 Thread Matt

Thanks. Wasn't aware I could pass along operators.

On Apr 5, 2:41 pm, Chad Etzel  wrote:
> Yes, this is possible.  Have you actually tried it yet?  Make sure to
> use capital OR between the hashtags.
>
> http://search.twitter.com/search?q=%23followfriday+OR+%23pawpawty+OR+...
>
> -chad
>
> On Sun, Apr 5, 2009 at 2:36 PM, Matt  wrote:
>
> > Is it possible with the current search api to search for multiple
> > hashtags? I'm looking to do an OR search which will look for up to 3
> > hashtags.


[twitter-dev] Re: Search API, Multiple Hashtags

2009-04-05 Thread Chad Etzel

Yes, this is possible.  Have you actually tried it yet?  Make sure to
use capital OR between the hashtags.

http://search.twitter.com/search?q=%23followfriday+OR+%23pawpawty+OR+%23gno

-chad
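
Building that query string for an arbitrary list of hashtags might look like this sketch (the helper name is mine):

```javascript
// Join URL-encoded hashtags with capital OR, as the search requires.
function hashtagOrQuery(tags) {
  return tags.map(t => encodeURIComponent("#" + t)).join("+OR+");
}
```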

On Sun, Apr 5, 2009 at 2:36 PM, Matt  wrote:
>
> Is it possible with the current search api to search for multiple
> hashtags? I'm looking to do an OR search which will look for up to 3
> hashtags.
>


[twitter-dev] Re: Search queries not working

2009-04-04 Thread Basha Shaik
Hi Chad,
There are no duplicates with this.
Thank you.
Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 7:29 AM, Basha Shaik wrote:

> Hi chad,
>
> Thank you. I was trying for a query which has only 55 tweets and i have
> kept 100 as rpp . so i was not getting next_page. when i decreased rpp to 20
> and tried i got now. thank you very much. i Will check if any Duplicates
> occur with these and let you know.
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Sat, Apr 4, 2009 at 7:06 AM, Chad Etzel  wrote:
>
>> next_page
>>
>
>


[twitter-dev] Re: Search queries not working

2009-04-04 Thread Basha Shaik
Hi Chad,

Thank you. I was trying a query which has only 55 tweets and I had kept
100 as rpp, so I was not getting next_page. When I decreased rpp to 20
and tried, I got it now. Thank you very much. I will check if any
duplicates occur with these and let you know.

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 7:06 AM, Chad Etzel  wrote:

> next_page
>


[twitter-dev] Re: Search queries not working

2009-04-04 Thread Chad Etzel

I have not used Java in a long time, but there should be a "next_page"
key in the map you create from the JSON response.  Here is an example
JSON response with rpp=1 for "hello":

{"results":[{"text":"hello","to_user_id":null,"from_user":"fsas1975",
"id":1450457219,"from_user_id":6788389,"source":"web<\/a>",
"profile_image_url":"http:\/\/s3.amazonaws.com\/twitter_production\/profile_images\/117699880\/514HjlKzd1L__AA280__normal.jpg",
"created_at":"Sat, 04 Apr 2009 06:59:57 +"}],
"since_id":0,"max_id":1450457219,
"refresh_url":"?since_id=1450457219&q=hello","results_per_page":1,
"next_page":"?page=2&max_id=1450457219&rpp=1&q=hello",
"completed_in":0.013591,"page":1,"query":"hello"}

The part you are interested in is this:
"next_page":"?page=2&max_id=1450457219&rpp=1&q=hello"

you can construct the next page url by appending this value to:
"http://search.twitter.com/search.json"

-Chad
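
Put together, following next_page from a parsed response might look like this sketch (the helper name is mine; jdata stands for the parsed search.json object):

```javascript
// Build the next-page URL from a parsed search.json response.
// next_page is absent on the last page, so return null in that case.
function nextPageUrl(jdata) {
  if (!jdata.next_page) return null;
  return "http://search.twitter.com/search.json" + jdata.next_page;
}
```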


On Sat, Apr 4, 2009 at 2:55 AM, Basha Shaik  wrote:
> Hi, I am using Java. We parse the JSON response and store the values as
> key-value pairs in a Map.
>
> Nowhere in the response did I find next_url or next_page.
> Can you tell me how we can store all the JSON data in a variable?
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>


[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
Hi, I am using Java. We parse the JSON response and store the values as
key-value pairs in a Map.

Nowhere in the response did I find next_url or next_page.
Can you tell me how we can store all the JSON data in a variable?

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 6:41 AM, Chad Etzel  wrote:

>
> My example was in javascript. How are you retrieving the json data?
> What language are you using?
> -chad
>
> On Sat, Apr 4, 2009 at 2:35 AM, Basha Shaik 
> wrote:
> > Hi Chad,
> > how can we store all json data in a variable "jdata".
> > Can you tell me how to do that?
> > I am using Java for JSON processing
> >
> > Which technology are you using?
> > Regards,
> >
> > Mahaboob Basha Shaik
> > www.netelixir.com
> > Making Search Work
> >
> >
> > On Sat, Apr 4, 2009 at 6:23 AM, Chad Etzel  wrote:
> >>
> >> Sorry, typo previously:
> >>
> >> var next_page_url = "http://search.twitter.com/search.json" +
> >> jdata.next_page;
> >>
> >> On Sat, Apr 4, 2009 at 2:18 AM, Chad Etzel  wrote:
> >> > Assuming you get the json data somehow and store it in a variable
> >> > called "jdata", you can construct the next page url thus:
> >> >
> >> > var next_page_url = "http://search.twitter.com/" + jdata.next_page;
> >> >
> >> > -Chad
> >> >
> >> > On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik <
> basha.neteli...@gmail.com>
> >> > wrote:
> >> >> I am using json
> >> >>
> >> >> Regards,
> >> >>
> >> >> Mahaboob Basha Shaik
> >> >> www.netelixir.com
> >> >> Making Search Work
> >> >>
> >> >>
> >> >> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel 
> wrote:
> >> >>>
> >> >>> Are you using the .atom or .json API feed?  I am only familiar with
> >> >>> the .json feed.
> >> >>> -Chad
> >> >>>
> >> >>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik
> >> >>> 
> >> >>> wrote:
> >> >>> > Hi Chad,
> >> >>> >
> >> >>> > how can we use "next_page" in the url we request. where can we get
> >> >>> > the
> >> >>> > url
> >> >>> > we need to pass.
> >> >>> >
> >> >>> > Regards,
> >> >>> >
> >> >>> > Mahaboob Basha Shaik
> >> >>> > www.netelixir.com
> >> >>> > Making Search Work
> >> >>> >
> >> >>> >
> >> >>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel 
> >> >>> > wrote:
> >> >>> >>
> >> >>> >> I'm not sure of these "next_url" and "prev_url" fields (never
> seen
> >> >>> >> them anywhere), but at least in the json data there is a
> >> >>> >> "next_page"
> >> >>> >> field which uses "?page=_&max_id=__" already prefilled for
> you.
> >> >>> >> This should definitely avoid the duplicate tweet issue.  I've
> never
> >> >>> >> had to do any client-side duplicate filtering when using the
> >> >>> >> correct
> >> >>> >> combination of "page","max_id", and "rpp" values...
> >> >>> >>
> >> >>> >> If you give very specific examples (the actual URL data would be
> >> >>> >> handy) where you are seeing duplicates between pages, we can
> >> >>> >> probably
> >> >>> >> help sort this out.
> >> >>> >>
> >> >>> >> -Chad
> >> >>> >>
> >> >>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams 
> >> >>> >> wrote:
> >> >>> >> >
> >> >>> >> > The use of prev_url and next_url will take care of step 1 from
> >> >>> >> > your
> >> >>> >> > flow described above. Specifically, next_url will give your
> >> >>> >> > application the URI to contact to get the next page of results.
> >> >>> >> >
> >> >>> >> > Combining max_id and next_url usage will not solve the
> duplicate
> >> >>> >> > problem. To overcome that issue, you will have to simply strip
> >> >>> >> > the
> >> >>> >> > duplicate tweets on the client-side.
> >> >>> >> >
> >> >>> >> > Thanks,
> >> >>> >> > Doug Williams
> >> >>> >> > Twitter API Support
> >> >>> >> > http://twitter.com/dougw
> >> >>> >> >
> >> >>> >> >
> >> >>> >> >
> >> >>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
> >> >>> >> > 
> >> >>> >> > wrote:
> >> >>> >> >> HI,
> >> >>> >> >>
> >> >>> >> >> Can you give me an example how i can use prev_url and next_url
> >> >>> >> >> with
> >> >>> >> >> max_id.
> >> >>> >> >>
> >> >>> >> >>
> >> >>> >> >>
> >> >>> >> >> No I am following below process to search
> >> >>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by
> >> >>> >> >> incrementing
> >> >>> >> >> the param 'page'
> >> >>> >> >> 2. Get the id of the last status on page 15 and set that as
> the
> >> >>> >> >> max_id
> >> >>> >> >> for the next query
> >> >>> >> >> 3. If we have more results, go to step 1
> >> >>> >> >>
> >> >>> >> >> here i got duplicate. 100th record in page 1 was same as 1st
> >> >>> >> >> record
> >> >>> >> >> in
> >> >>> >> >> page
> >> >>> >> >> 2.
> >> >>> >> >>
> >> >>> >> >> I understood the reason why i got the duplicates from matts
> >> >>> >> >> previous
> >> >>> >> >> mail.
> >> >>> >> >>
> >> >>> >> >> Will this problem solve if i use max_id with prev_url and
> >> >>> >> >> next_url?
> >> >>> >> >>  How can the duplicate problem be solved
> >> >>> >> >>
> >> >>> >> >>
> >> >>> >> >> Regards,
> >> >>> >> >>
> >> >>> >> >> Mahaboob Basha Shaik
> >> >>> >> >> 

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

My example was in JavaScript. How are you retrieving the JSON data?
What language are you using?
-chad

On Sat, Apr 4, 2009 at 2:35 AM, Basha Shaik  wrote:
> Hi Chad,
> how can we store all json data in a variable "jdata".
> Can you tell me how to do that?
> I am using Java for JSON processing
>
> Which technology are you using?
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Sat, Apr 4, 2009 at 6:23 AM, Chad Etzel  wrote:
>>
>> Sorry, typo previously:
>>
>> var next_page_url = "http://search.twitter.com/search.json" +
>> jdata.next_page;
>>
>> On Sat, Apr 4, 2009 at 2:18 AM, Chad Etzel  wrote:
>> > Assuming you get the json data somehow and store it in a variable
>> > called "jdata", you can construct the next page url thus:
>> >
>> > var next_page_url = "http://search.twitter.com/" + jdata.next_page;
>> >
>> > -Chad
>> >
>> > On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik 
>> > wrote:
>> >> I am using json
>> >>
>> >> Regards,
>> >>
>> >> Mahaboob Basha Shaik
>> >> www.netelixir.com
>> >> Making Search Work
>> >>
>> >>
>> >> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel  wrote:
>> >>>
>> >>> Are you using the .atom or .json API feed?  I am only familiar with
>> >>> the .json feed.
>> >>> -Chad
>> >>>
>> >>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik
>> >>> 
>> >>> wrote:
>> >>> > Hi Chad,
>> >>> >
>> >>> > how can we use "next_page" in the url we request. where can we get
>> >>> > the
>> >>> > url
>> >>> > we need to pass.
>> >>> >
>> >>> > Regards,
>> >>> >
>> >>> > Mahaboob Basha Shaik
>> >>> > www.netelixir.com
>> >>> > Making Search Work
>> >>> >
>> >>> >
>> >>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel 
>> >>> > wrote:
>> >>> >>
>> >>> >> I'm not sure of these "next_url" and "prev_url" fields (never seen
>> >>> >> them anywhere), but at least in the json data there is a
>> >>> >> "next_page"
>> >>> >> field which uses "?page=_&max_id=__" already prefilled for you.
>> >>> >> This should definitely avoid the duplicate tweet issue.  I've never
>> >>> >> had to do any client-side duplicate filtering when using the
>> >>> >> correct
>> >>> >> combination of "page","max_id", and "rpp" values...
>> >>> >>
>> >>> >> If you give very specific examples (the actual URL data would be
>> >>> >> handy) where you are seeing duplicates between pages, we can
>> >>> >> probably
>> >>> >> help sort this out.
>> >>> >>
>> >>> >> -Chad
>> >>> >>
>> >>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams 
>> >>> >> wrote:
>> >>> >> >
>> >>> >> > The use of prev_url and next_url will take care of step 1 from
>> >>> >> > your
>> >>> >> > flow described above. Specifically, next_url will give your
>> >>> >> > application the URI to contact to get the next page of results.
>> >>> >> >
>> >>> >> > Combining max_id and next_url usage will not solve the duplicate
>> >>> >> > problem. To overcome that issue, you will have to simply strip
>> >>> >> > the
>> >>> >> > duplicate tweets on the client-side.
>> >>> >> >
>> >>> >> > Thanks,
>> >>> >> > Doug Williams
>> >>> >> > Twitter API Support
>> >>> >> > http://twitter.com/dougw
>> >>> >> >
>> >>> >> >
>> >>> >> >
>> >>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
>> >>> >> > 
>> >>> >> > wrote:
>> >>> >> >> HI,
>> >>> >> >>
>> >>> >> >> Can you give me an example how i can use prev_url and next_url
>> >>> >> >> with
>> >>> >> >> max_id.
>> >>> >> >>
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> No I am following below process to search
>> >>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by
>> >>> >> >> incrementing
>> >>> >> >> the param 'page'
>> >>> >> >> 2. Get the id of the last status on page 15 and set that as the
>> >>> >> >> max_id
>> >>> >> >> for the next query
>> >>> >> >> 3. If we have more results, go to step 1
>> >>> >> >>
>> >>> >> >> here i got duplicate. 100th record in page 1 was same as 1st
>> >>> >> >> record
>> >>> >> >> in
>> >>> >> >> page
>> >>> >> >> 2.
>> >>> >> >>
>> >>> >> >> I understood the reason why i got the duplicates from matts
>> >>> >> >> previous
>> >>> >> >> mail.
>> >>> >> >>
>> >>> >> >> Will this problem solve if i use max_id with prev_url and
>> >>> >> >> next_url?
>> >>> >> >>  How can the duplicate problem be solved
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> Regards,
>> >>> >> >>
>> >>> >> >> Mahaboob Basha Shaik
>> >>> >> >> www.netelixir.com
>> >>> >> >> Making Search Work
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams 
>> >>> >> >> wrote:
>> >>> >> >>>
>> >>> >> >>> Basha,
>> >>> >> >>> Pagination is defined well here [1].
>> >>> >> >>>
>> >>> >> >>> The next_url and prev_url fields give your client HTTP URIs to
>> >>> >> >>> move
>> >>> >> >>> forward and backward through the result set. You can use them
>> >>> >> >>> to
>> >>> >> >>> page
>> >>> >> >>> through search results.
>> >>> >> >>>
>> >>> >> >>> I have some work to do on the search docs and I'll add field
>> >>> >> >>> definitions then as well.
>> >>> >> >>>
>> >>> >> >>> 1. http://en.wik

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
Hi Chad,
How can we store all the JSON data in a variable "jdata"?
Can you tell me how to do that?
I am using Java for JSON processing.

Which technology are you using?
Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 6:23 AM, Chad Etzel  wrote:

>
> Sorry, typo previously:
>
> var next_page_url = "http://search.twitter.com/search.json" +
> jdata.next_page;
>
> On Sat, Apr 4, 2009 at 2:18 AM, Chad Etzel  wrote:
> > Assuming you get the json data somehow and store it in a variable
> > called "jdata", you can construct the next page url thus:
> >
> > var next_page_url = "http://search.twitter.com/" + jdata.next_page;
> >
> > -Chad
> >
> > On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik 
> wrote:
> >> I am using json
> >>
> >> Regards,
> >>
> >> Mahaboob Basha Shaik
> >> www.netelixir.com
> >> Making Search Work
> >>
> >>
> >> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel  wrote:
> >>>
> >>> Are you using the .atom or .json API feed?  I am only familiar with
> >>> the .json feed.
> >>> -Chad
> >>>
> >>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik  >
> >>> wrote:
> >>> > Hi Chad,
> >>> >
> >>> > how can we use "next_page" in the url we request. where can we get
> the
> >>> > url
> >>> > we need to pass.
> >>> >
> >>> > Regards,
> >>> >
> >>> > Mahaboob Basha Shaik
> >>> > www.netelixir.com
> >>> > Making Search Work
> >>> >
> >>> >
> >>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel 
> wrote:
> >>> >>
> >>> >> I'm not sure of these "next_url" and "prev_url" fields (never seen
> >>> >> them anywhere), but at least in the json data there is a "next_page"
> >>> >> field which uses "?page=_&max_id=__" already prefilled for you.
> >>> >> This should definitely avoid the duplicate tweet issue.  I've never
> >>> >> had to do any client-side duplicate filtering when using the correct
> >>> >> combination of "page","max_id", and "rpp" values...
> >>> >>
> >>> >> If you give very specific examples (the actual URL data would be
> >>> >> handy) where you are seeing duplicates between pages, we can
> probably
> >>> >> help sort this out.
> >>> >>
> >>> >> -Chad
> >>> >>
> >>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams 
> wrote:
> >>> >> >
> >>> >> > The use of prev_url and next_url will take care of step 1 from
> your
> >>> >> > flow described above. Specifically, next_url will give your
> >>> >> > application the URI to contact to get the next page of results.
> >>> >> >
> >>> >> > Combining max_id and next_url usage will not solve the duplicate
> >>> >> > problem. To overcome that issue, you will have to simply strip the
> >>> >> > duplicate tweets on the client-side.
> >>> >> >
> >>> >> > Thanks,
> >>> >> > Doug Williams
> >>> >> > Twitter API Support
> >>> >> > http://twitter.com/dougw
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
> >>> >> > 
> >>> >> > wrote:
> >>> >> >> HI,
> >>> >> >>
> >>> >> >> Can you give me an example how i can use prev_url and next_url
> with
> >>> >> >> max_id.
> >>> >> >>
> >>> >> >>
> >>> >> >>
> >>> >> >> No I am following below process to search
> >>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by
> incrementing
> >>> >> >> the param 'page'
> >>> >> >> 2. Get the id of the last status on page 15 and set that as the
> >>> >> >> max_id
> >>> >> >> for the next query
> >>> >> >> 3. If we have more results, go to step 1
> >>> >> >>
> >>> >> >> here i got duplicate. 100th record in page 1 was same as 1st
> record
> >>> >> >> in
> >>> >> >> page
> >>> >> >> 2.
> >>> >> >>
> >>> >> >> I understood the reason why i got the duplicates from matts
> previous
> >>> >> >> mail.
> >>> >> >>
> >>> >> >> Will this problem solve if i use max_id with prev_url and
> next_url?
> >>> >> >>  How can the duplicate problem be solved
> >>> >> >>
> >>> >> >>
> >>> >> >> Regards,
> >>> >> >>
> >>> >> >> Mahaboob Basha Shaik
> >>> >> >> www.netelixir.com
> >>> >> >> Making Search Work
> >>> >> >>
> >>> >> >>
> >>> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams 
> >>> >> >> wrote:
> >>> >> >>>
> >>> >> >>> Basha,
> >>> >> >>> Pagination is defined well here [1].
> >>> >> >>>
> >>> >> >>> The next_url and prev_url fields give your client HTTP URIs to
> move
> >>> >> >>> forward and backward through the result set. You can use them to
> >>> >> >>> page
> >>> >> >>> through search results.
> >>> >> >>>
> >>> >> >>> I have some work to do on the search docs and I'll add field
> >>> >> >>> definitions then as well.
> >>> >> >>>
> >>> >> >>> 1. 
> >>> >> >>> http://en.wikipedia.org/wiki/Pagination_(web)
> >>> >> >>>
> >>> >> >>> Doug Williams
> >>> >> >>> Twitter API Support
> >>> >> >>> http://twitter.com/dougw
> >>> >> >>>
> >>> >> >>>
> >>> >> >>>
> >>> >> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
> >>> >> >>> 
> >>> >> >>> wrote:
> >>> >> >>> > Hi matt,
> >>> >> >>> >
> >>> >> >>> > Thank You
> >>> >> >>> > What is Pagination? Does it mean that I cannot use 

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

Sorry, typo previously:

var next_page_url = "http://search.twitter.com/search.json" + jdata.next_page;

On Sat, Apr 4, 2009 at 2:18 AM, Chad Etzel  wrote:
> Assuming you get the json data somehow and store it in a variable
> called "jdata", you can construct the next page url thus:
>
> var next_page_url = "http://search.twitter.com/" + jdata.next_page;
>
> -Chad
>
> On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik  wrote:
>> I am using json
>>
>> Regards,
>>
>> Mahaboob Basha Shaik
>> www.netelixir.com
>> Making Search Work
>>
>>
>> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel  wrote:
>>>
>>> Are you using the .atom or .json API feed?  I am only familiar with
>>> the .json feed.
>>> -Chad
>>>
>>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik 
>>> wrote:
>>> > Hi Chad,
>>> >
>>> > how can we use "next_page" in the url we request. where can we get the
>>> > url
>>> > we need to pass.
>>> >
>>> > Regards,
>>> >
>>> > Mahaboob Basha Shaik
>>> > www.netelixir.com
>>> > Making Search Work
>>> >
>>> >
>>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel  wrote:
>>> >>
>>> >> I'm not sure of these "next_url" and "prev_url" fields (never seen
>>> >> them anywhere), but at least in the json data there is a "next_page"
>>> >> field which uses "?page=_&max_id=__" already prefilled for you.
>>> >> This should definitely avoid the duplicate tweet issue.  I've never
>>> >> had to do any client-side duplicate filtering when using the correct
>>> >> combination of "page","max_id", and "rpp" values...
>>> >>
>>> >> If you give very specific examples (the actual URL data would be
>>> >> handy) where you are seeing duplicates between pages, we can probably
>>> >> help sort this out.
>>> >>
>>> >> -Chad
>>> >>
>>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
>>> >> >
>>> >> > The use of prev_url and next_url will take care of step 1 from your
>>> >> > flow described above. Specifically, next_url will give your
>>> >> > application the URI to contact to get the next page of results.
>>> >> >
>>> >> > Combining max_id and next_url usage will not solve the duplicate
>>> >> > problem. To overcome that issue, you will have to simply strip the
>>> >> > duplicate tweets on the client-side.
>>> >> >
>>> >> > Thanks,
>>> >> > Doug Williams
>>> >> > Twitter API Support
>>> >> > http://twitter.com/dougw
>>> >> >
>>> >> >
>>> >> >
>>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
>>> >> > 
>>> >> > wrote:
>>> >> >> HI,
>>> >> >>
>>> >> >> Can you give me an example how i can use prev_url and next_url with
>>> >> >> max_id.
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> No I am following below process to search
>>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
>>> >> >> the param 'page'
>>> >> >> 2. Get the id of the last status on page 15 and set that as the
>>> >> >> max_id
>>> >> >> for the next query
>>> >> >> 3. If we have more results, go to step 1
>>> >> >>
>>> >> >> here i got duplicate. 100th record in page 1 was same as 1st record
>>> >> >> in
>>> >> >> page
>>> >> >> 2.
>>> >> >>
>>> >> >> I understood the reason why i got the duplicates from matts previous
>>> >> >> mail.
>>> >> >>
>>> >> >> Will this problem solve if i use max_id with prev_url and next_url?
>>> >> >>  How can the duplicate problem be solved
>>> >> >>
>>> >> >>
>>> >> >> Regards,
>>> >> >>
>>> >> >> Mahaboob Basha Shaik
>>> >> >> www.netelixir.com
>>> >> >> Making Search Work
>>> >> >>
>>> >> >>
>>> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams 
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> Basha,
>>> >> >>> Pagination is defined well here [1].
>>> >> >>>
>>> >> >>> The next_url and prev_url fields give your client HTTP URIs to move
>>> >> >>> forward and backward through the result set. You can use them to
>>> >> >>> page
>>> >> >>> through search results.
>>> >> >>>
>>> >> >>> I have some work to do on the search docs and I'll add field
>>> >> >>> definitions then as well.
>>> >> >>>
>>> >> >>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>>> >> >>>
>>> >> >>> Doug Williams
>>> >> >>> Twitter API Support
>>> >> >>> http://twitter.com/dougw
>>> >> >>>
>>> >> >>>
>>> >> >>>
>>> >> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
>>> >> >>> 
>>> >> >>> wrote:
>>> >> >>> > Hi matt,
>>> >> >>> >
>>> >> >>> > Thank You
>>> >> >>> > What is Pagination? Does it mean that I cannot use max_id for
>>> >> >>> > searching
>>> >> >>> > tweets. What does next_url and prev_url fields mean. I did not
>>> >> >>> > find
>>> >> >>> > next_url
>>> >> >>> > and prev_url in documentation. how can these two urls be used
>>> >> >>> > with
>>> >> >>> > max_id.
>>> >> >>> > Please explain with example if possible.
>>> >> >>> >
>>> >> >>> >
>>> >> >>> >
>>> >> >>> > Regards,
>>> >> >>> >
>>> >> >>> > Mahaboob Basha Shaik
>>> >> >>> > www.netelixir.com
>>> >> >>> > Making Search Work
>>> >> >>> >
>>> >> >>> >
>>> >> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
>>> >> >>> > wrote:
>>> >> >>> >>
>>> >> >>> >> Hi Basha,
>>> >> >>> >>     The max_id 

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
Hi Doug,
You said we can use next_url and prev_url.

I tried to get next_url, but the response says there is no field called
next_url. Should I pass next_url in the request with max_id? If so, how
can I know what next_url is?

Can you give a clear example of how to use prev_url and next_url?

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Fri, Apr 3, 2009 at 6:57 PM, Doug Williams  wrote:

>
> The use of prev_url and next_url will take care of step 1 from your
> flow described above. Specifically, next_url will give your
> application the URI to contact to get the next page of results.
>
> Combining max_id and next_url usage will not solve the duplicate
> problem. To overcome that issue, you will have to simply strip the
> duplicate tweets on the client-side.
>
> Thanks,
> Doug Williams
> Twitter API Support
> http://twitter.com/dougw
>
>
>
> On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik 
> wrote:
> > HI,
> >
> > Can you give me an example how i can use prev_url and next_url with
> max_id.
> >
> >
> >
> > No I am following below process to search
> > 1. Set rpp=100 and retrieve 15 pages search results by incrementing
> > the param 'page'
> > 2. Get the id of the last status on page 15 and set that as the max_id
> > for the next query
> > 3. If we have more results, go to step 1
> >
> > here i got duplicate. 100th record in page 1 was same as 1st record in
> page
> > 2.
> >
> > I understood the reason why i got the duplicates from matts previous
> mail.
> >
> > Will this problem solve if i use max_id with prev_url and next_url?
> >  How can the duplicate problem be solved
> >
> >
> > Regards,
> >
> > Mahaboob Basha Shaik
> > www.netelixir.com
> > Making Search Work
> >
> >
> > On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:
> >>
> >> Basha,
> >> Pagination is defined well here [1].
> >>
> >> The next_url and prev_url fields give your client HTTP URIs to move
> >> forward and backward through the result set. You can use them to page
> >> through search results.
> >>
> >> I have some work to do on the search docs and I'll add field
> >> definitions then as well.
> >>
> >> 1. 
> >> http://en.wikipedia.org/wiki/Pagination_(web)
> >>
> >> Doug Williams
> >> Twitter API Support
> >> http://twitter.com/dougw
> >>
> >>
> >>
> >> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik  >
> >> wrote:
> >> > Hi matt,
> >> >
> >> > Thank You
> >> > What is Pagination? Does it mean that I cannot use max_id for
> searching
> >> > tweets. What does next_url and prev_url fields mean. I did not find
> >> > next_url
> >> > and prev_url in documentation. how can these two urls be used with
> >> > max_id.
> >> > Please explain with example if possible.
> >> >
> >> >
> >> >
> >> > Regards,
> >> >
> >> > Mahaboob Basha Shaik
> >> > www.netelixir.com
> >> > Making Search Work
> >> >
> >> >
> >> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
> wrote:
> >> >>
> >> >> Hi Basha,
> >> >> The max_id is only intended to be used for pagination via the
> >> >> next_url
> >> >> and prev_url fields and is known not to work with since_id. It is not
> >> >> documented as a valid parameter because it's known to only work in
> the
> >> >> case
> >> >> it was designed for. We added the max_id to prevent the problem where
> >> >> you
> >> >> click on 'Next' and page two starts with duplicates. Here's the
> >> >> scenario:
> >> >>  1. Let's say you search for 'foo'.
> >> >>  2. You wait 10 seconds, during which 5 people send tweets containing
> >> >> 'foo'.
> >> >>  3. You click next and go to page=2 (or call page=2 via the API)
> >> >>3.a. If we displayed results 21-40 the first 5 results would look
> >> >> like
> >> >> duplicates because they were "pushed down" by the 5 new entries.
> >> >>3.b. If we append a max_id from the time you searched we can do an
> >> >> offset from the maximum and the new 5 entries are skipped.
> >> >>   We use option 3.b. (as does twitter.com now) so you don't see
> >> >> duplicates. Since we wanted to provide the same data in the API as
> the
> >> >> UI we
> >> >> added the next_url and prev_url members in our output.
> >> >> Thanks;
> >> >>   — Matt Sanford
> >> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
> >> >>
> >> >> HI Matt,
> >> >>
> >> >> when since_id and max_id are given together, max_id is not working.
> >> >> The query ignores max_id, but with only since_id it works fine. Is
> >> >> there any problem when max_id and since_id are used together?
> >> >>
> >> >> Also please tell me what does max_id exactly mean and also what does
> it
> >> >> return when we send a request.
> >> >> Also tell me what the total returns.
> >> >>
> >> >>
> >> >> Regards,
> >> >>
> >> >> Mahaboob Basha Shaik
> >> >> www.netelixir.com
> >> >> Making Search Work
> >> >>
> >> >>
> >> >> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford 
> wrote:
> >> >>>
> >> >>> Hi there,
> >> >>>
> >> >>>Can you provide an exam

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

Assuming you get the json data somehow and store it in a variable
called "jdata", you can construct the next page url thus:

var next_page_url = "http://search.twitter.com/" + jdata.next_page;

-Chad
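The loop that falls out of this — keep requesting until the response stops carrying `next_page` — can be sketched roughly as below. This is a hypothetical helper: `fetchJson` is an injected function standing in for whatever transport the client uses, so the paging logic stays self-contained.

```javascript
// Page through search results until the API stops returning next_page.
// fetchJson(url) is assumed to return the parsed JSON response object.
function collectAllPages(fetchJson) {
  var base = "http://search.twitter.com/search.json";
  var results = [];
  var url = base + "?q=foo&rpp=100";
  while (url) {
    var jdata = fetchJson(url);
    results = results.concat(jdata.results);
    // next_page already carries page, max_id, and q prefilled.
    url = jdata.next_page ? base + jdata.next_page : null;
  }
  return results;
}
```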

On Sat, Apr 4, 2009 at 2:11 AM, Basha Shaik  wrote:
> I am using json
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel  wrote:
>>
>> Are you using the .atom or .json API feed?  I am only familiar with
>> the .json feed.
>> -Chad
>>
>> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik 
>> wrote:
>> > Hi Chad,
>> >
>> > how can we use "next_page" in the url we request. where can we get the
>> > url
>> > we need to pass.
>> >
>> > Regards,
>> >
>> > Mahaboob Basha Shaik
>> > www.netelixir.com
>> > Making Search Work
>> >
>> >
>> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel  wrote:
>> >>
>> >> I'm not sure of these "next_url" and "prev_url" fields (never seen
>> >> them anywhere), but at least in the json data there is a "next_page"
>> >> field which uses "?page=_&max_id=__" already prefilled for you.
>> >> This should definitely avoid the duplicate tweet issue.  I've never
>> >> had to do any client-side duplicate filtering when using the correct
>> >> combination of "page","max_id", and "rpp" values...
>> >>
>> >> If you give very specific examples (the actual URL data would be
>> >> handy) where you are seeing duplicates between pages, we can probably
>> >> help sort this out.
>> >>
>> >> -Chad
>> >>
>> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
>> >> >
>> >> > The use of prev_url and next_url will take care of step 1 from your
>> >> > flow described above. Specifically, next_url will give your
>> >> > application the URI to contact to get the next page of results.
>> >> >
>> >> > Combining max_id and next_url usage will not solve the duplicate
>> >> > problem. To overcome that issue, you will have to simply strip the
>> >> > duplicate tweets on the client-side.
>> >> >
>> >> > Thanks,
>> >> > Doug Williams
>> >> > Twitter API Support
>> >> > http://twitter.com/dougw
>> >> >
>> >> >
>> >> >
>> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik
>> >> > 
>> >> > wrote:
>> >> >> HI,
>> >> >>
>> >> >> Can you give me an example how i can use prev_url and next_url with
>> >> >> max_id.
>> >> >>
>> >> >>
>> >> >>
>> >> >> No I am following below process to search
>> >> >> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
>> >> >> the param 'page'
>> >> >> 2. Get the id of the last status on page 15 and set that as the
>> >> >> max_id
>> >> >> for the next query
>> >> >> 3. If we have more results, go to step 1
>> >> >>
>> >> >> here i got duplicate. 100th record in page 1 was same as 1st record
>> >> >> in
>> >> >> page
>> >> >> 2.
>> >> >>
>> >> >> I understood the reason why i got the duplicates from matts previous
>> >> >> mail.
>> >> >>
>> >> >> Will this problem solve if i use max_id with prev_url and next_url?
>> >> >>  How can the duplicate problem be solved
>> >> >>
>> >> >>
>> >> >> Regards,
>> >> >>
>> >> >> Mahaboob Basha Shaik
>> >> >> www.netelixir.com
>> >> >> Making Search Work
>> >> >>
>> >> >>
>> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams 
>> >> >> wrote:
>> >> >>>
>> >> >>> Basha,
>> >> >>> Pagination is defined well here [1].
>> >> >>>
>> >> >>> The next_url and prev_url fields give your client HTTP URIs to move
>> >> >>> forward and backward through the result set. You can use them to
>> >> >>> page
>> >> >>> through search results.
>> >> >>>
>> >> >>> I have some work to do on the search docs and I'll add field
>> >> >>> definitions then as well.
>> >> >>>
>> >> >>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>> >> >>>
>> >> >>> Doug Williams
>> >> >>> Twitter API Support
>> >> >>> http://twitter.com/dougw
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
>> >> >>> 
>> >> >>> wrote:
>> >> >>> > Hi matt,
>> >> >>> >
>> >> >>> > Thank You
>> >> >>> > What is Pagination? Does it mean that I cannot use max_id for
>> >> >>> > searching
>> >> >>> > tweets. What does next_url and prev_url fields mean. I did not
>> >> >>> > find
>> >> >>> > next_url
>> >> >>> > and prev_url in documentation. how can these two urls be used
>> >> >>> > with
>> >> >>> > max_id.
>> >> >>> > Please explain with example if possible.
>> >> >>> >
>> >> >>> >
>> >> >>> >
>> >> >>> > Regards,
>> >> >>> >
>> >> >>> > Mahaboob Basha Shaik
>> >> >>> > www.netelixir.com
>> >> >>> > Making Search Work
>> >> >>> >
>> >> >>> >
>> >> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
>> >> >>> > wrote:
>> >> >>> >>
>> >> >>> >> Hi Basha,
>> >> >>> >>     The max_id is only intended to be used for pagination via
>> >> >>> >> the
>> >> >>> >> next_url
>> >> >>> >> and prev_url fields and is known not to work with since_id. It
>> >> >>> >> is
>> >> >>> >> not
>> >> >>> >> documented as a valid parameter because it's known to only work
>> >> >>> >> in
>> >> >>> >> the
>> >> >>> >> cas

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
I am using json

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Sat, Apr 4, 2009 at 6:07 AM, Chad Etzel  wrote:

>
> Are you using the .atom or .json API feed?  I am only familiar with
> the .json feed.
> -Chad
>
> On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik 
> wrote:
> > Hi Chad,
> >
> > how can we use "next_page" in the url we request. where can we get the
> url
> > we need to pass.
> >
> > Regards,
> >
> > Mahaboob Basha Shaik
> > www.netelixir.com
> > Making Search Work
> >
> >
> > On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel  wrote:
> >>
> >> I'm not sure of these "next_url" and "prev_url" fields (never seen
> >> them anywhere), but at least in the json data there is a "next_page"
> >> field which uses "?page=_&max_id=__" already prefilled for you.
> >> This should definitely avoid the duplicate tweet issue.  I've never
> >> had to do any client-side duplicate filtering when using the correct
> >> combination of "page","max_id", and "rpp" values...
> >>
> >> If you give very specific examples (the actual URL data would be
> >> handy) where you are seeing duplicates between pages, we can probably
> >> help sort this out.
> >>
> >> -Chad
> >>
> >> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
> >> >
> >> > The use of prev_url and next_url will take care of step 1 from your
> >> > flow described above. Specifically, next_url will give your
> >> > application the URI to contact to get the next page of results.
> >> >
> >> > Combining max_id and next_url usage will not solve the duplicate
> >> > problem. To overcome that issue, you will have to simply strip the
> >> > duplicate tweets on the client-side.
> >> >
> >> > Thanks,
> >> > Doug Williams
> >> > Twitter API Support
> >> > http://twitter.com/dougw
> >> >
> >> >
> >> >
> >> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik <
> basha.neteli...@gmail.com>
> >> > wrote:
> >> >> HI,
> >> >>
> >> >> Can you give me an example how i can use prev_url and next_url with
> >> >> max_id.
> >> >>
> >> >>
> >> >>
> >> >> No I am following below process to search
> >> >> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
> >> >> the param 'page'
> >> >> 2. Get the id of the last status on page 15 and set that as the
> max_id
> >> >> for the next query
> >> >> 3. If we have more results, go to step 1
> >> >>
> >> >> here i got duplicate. 100th record in page 1 was same as 1st record
> in
> >> >> page
> >> >> 2.
> >> >>
> >> >> I understood the reason why i got the duplicates from matts previous
> >> >> mail.
> >> >>
> >> >> Will this problem solve if i use max_id with prev_url and next_url?
> >> >>  How can the duplicate problem be solved
> >> >>
> >> >>
> >> >> Regards,
> >> >>
> >> >> Mahaboob Basha Shaik
> >> >> www.netelixir.com
> >> >> Making Search Work
> >> >>
> >> >>
> >> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams 
> wrote:
> >> >>>
> >> >>> Basha,
> >> >>> Pagination is defined well here [1].
> >> >>>
> >> >>> The next_url and prev_url fields give your client HTTP URIs to move
> >> >>> forward and backward through the result set. You can use them to
> page
> >> >>> through search results.
> >> >>>
> >> >>> I have some work to do on the search docs and I'll add field
> >> >>> definitions then as well.
> >> >>>
> >> >>> 1. 
> >> >>> http://en.wikipedia.org/wiki/Pagination_(web)
> >> >>>
> >> >>> Doug Williams
> >> >>> Twitter API Support
> >> >>> http://twitter.com/dougw
> >> >>>
> >> >>>
> >> >>>
> >> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
> >> >>> 
> >> >>> wrote:
> >> >>> > Hi matt,
> >> >>> >
> >> >>> > Thank You
> >> >>> > What is Pagination? Does it mean that I cannot use max_id for
> >> >>> > searching
> >> >>> > tweets. What does next_url and prev_url fields mean. I did not
> find
> >> >>> > next_url
> >> >>> > and prev_url in documentation. how can these two urls be used with
> >> >>> > max_id.
> >> >>> > Please explain with example if possible.
> >> >>> >
> >> >>> >
> >> >>> >
> >> >>> > Regards,
> >> >>> >
> >> >>> > Mahaboob Basha Shaik
> >> >>> > www.netelixir.com
> >> >>> > Making Search Work
> >> >>> >
> >> >>> >
> >> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
> >> >>> > wrote:
> >> >>> >>
> >> >>> >> Hi Basha,
> >> >>> >> The max_id is only intended to be used for pagination via the
> >> >>> >> next_url
> >> >>> >> and prev_url fields and is known not to work with since_id. It is
> >> >>> >> not
> >> >>> >> documented as a valid parameter because it's known to only work
> in
> >> >>> >> the
> >> >>> >> case
> >> >>> >> it was designed for. We added the max_id to prevent the problem
> >> >>> >> where
> >> >>> >> you
> >> >>> >> click on 'Next' and page two starts with duplicates. Here's the
> >> >>> >> scenario:
> >> >>> >>  1. Let's say you search for 'foo'.
> >> >>> >>  2. You wait 10 seconds, during which 5 people send tweets
> >> >>> >> containing
> >> >>> >> 'foo'.
> >> >>> >>  3. You click next and go to page=

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

Are you using the .atom or .json API feed?  I am only familiar with
the .json feed.
-Chad

On Sat, Apr 4, 2009 at 2:01 AM, Basha Shaik  wrote:
> Hi Chad,
>
> how can we use "next_page" in the url we request. where can we get the url
> we need to pass.
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel  wrote:
>>
>> I'm not sure of these "next_url" and "prev_url" fields (never seen
>> them anywhere), but at least in the json data there is a "next_page"
>> field which uses "?page=_&max_id=__" already prefilled for you.
>> This should definitely avoid the duplicate tweet issue.  I've never
>> had to do any client-side duplicate filtering when using the correct
>> combination of "page","max_id", and "rpp" values...
>>
>> If you give very specific examples (the actual URL data would be
>> handy) where you are seeing duplicates between pages, we can probably
>> help sort this out.
>>
>> -Chad
>>
>> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
>> >
>> > The use of prev_url and next_url will take care of step 1 from your
>> > flow described above. Specifically, next_url will give your
>> > application the URI to contact to get the next page of results.
>> >
>> > Combining max_id and next_url usage will not solve the duplicate
>> > problem. To overcome that issue, you will have to simply strip the
>> > duplicate tweets on the client-side.
>> >
>> > Thanks,
>> > Doug Williams
>> > Twitter API Support
>> > http://twitter.com/dougw
>> >
>> >
>> >
>> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik 
>> > wrote:
>> >> HI,
>> >>
>> >> Can you give me an example how i can use prev_url and next_url with
>> >> max_id.
>> >>
>> >>
>> >>
>> >> No I am following below process to search
>> >> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
>> >> the param 'page'
>> >> 2. Get the id of the last status on page 15 and set that as the max_id
>> >> for the next query
>> >> 3. If we have more results, go to step 1
>> >>
>> >> here i got duplicate. 100th record in page 1 was same as 1st record in
>> >> page
>> >> 2.
>> >>
>> >> I understood the reason why i got the duplicates from matts previous
>> >> mail.
>> >>
>> >> Will this problem solve if i use max_id with prev_url and next_url?
>> >>  How can the duplicate problem be solved
>> >>
>> >>
>> >> Regards,
>> >>
>> >> Mahaboob Basha Shaik
>> >> www.netelixir.com
>> >> Making Search Work
>> >>
>> >>
>> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:
>> >>>
>> >>> Basha,
>> >>> Pagination is defined well here [1].
>> >>>
>> >>> The next_url and prev_url fields give your client HTTP URIs to move
>> >>> forward and backward through the result set. You can use them to page
>> >>> through search results.
>> >>>
>> >>> I have some work to do on the search docs and I'll add field
>> >>> definitions then as well.
>> >>>
>> >>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>> >>>
>> >>> Doug Williams
>> >>> Twitter API Support
>> >>> http://twitter.com/dougw
>> >>>
>> >>>
>> >>>
>> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik
>> >>> 
>> >>> wrote:
>> >>> > Hi matt,
>> >>> >
>> >>> > Thank You
>> >>> > What is Pagination? Does it mean that I cannot use max_id for
>> >>> > searching
>> >>> > tweets. What does next_url and prev_url fields mean. I did not find
>> >>> > next_url
>> >>> > and prev_url in documentation. how can these two urls be used with
>> >>> > max_id.
>> >>> > Please explain with example if possible.
>> >>> >
>> >>> >
>> >>> >
>> >>> > Regards,
>> >>> >
>> >>> > Mahaboob Basha Shaik
>> >>> > www.netelixir.com
>> >>> > Making Search Work
>> >>> >
>> >>> >
>> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
>> >>> > wrote:
>> >>> >>
>> >>> >> Hi Basha,
>> >>> >>     The max_id is only intended to be used for pagination via the
>> >>> >> next_url
>> >>> >> and prev_url fields and is known not to work with since_id. It is
>> >>> >> not
>> >>> >> documented as a valid parameter because it's known to only work in
>> >>> >> the
>> >>> >> case
>> >>> >> it was designed for. We added the max_id to prevent the problem
>> >>> >> where
>> >>> >> you
>> >>> >> click on 'Next' and page two starts with duplicates. Here's the
>> >>> >> scenario:
>> >>> >>  1. Let's say you search for 'foo'.
>> >>> >>  2. You wait 10 seconds, during which 5 people send tweets
>> >>> >> containing
>> >>> >> 'foo'.
>> >>> >>  3. You click next and go to page=2 (or call page=2 via the API)
>> >>> >>    3.a. If we displayed results 21-40 the first 5 results would
>> >>> >> look
>> >>> >> like
>> >>> >> duplicates because they were "pushed down" by the 5 new entries.
>> >>> >>    3.b. If we append a max_id from the time you searched we can do an
>> >>> >> offset from the maximum and the new 5 entries are skipped.
>> >>> >>   We use option 3.b. (as does twitter.com now) so you don't see
>> >>> >> duplicates. Since we wanted to provide the same data in the API as
>> >>> >> the
>> >>

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Basha Shaik
Hi Chad,

How can we use "next_page" in the URL we request? Where can we get the URL
we need to pass?

Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Fri, Apr 3, 2009 at 7:14 PM, Chad Etzel  wrote:

>
> I'm not sure of these "next_url" and "prev_url" fields (never seen
> them anywhere), but at least in the json data there is a "next_page"
> field which uses "?page=_&max_id=__" already prefilled for you.
> This should definitely avoid the duplicate tweet issue.  I've never
> had to do any client-side duplicate filtering when using the correct
> combination of "page","max_id", and "rpp" values...
>
> If you give very specific examples (the actual URL data would be
> handy) where you are seeing duplicates between pages, we can probably
> help sort this out.
>
> -Chad
>
> On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
> >
> > The use of prev_url and next_url will take care of step 1 from your
> > flow described above. Specifically, next_url will give your
> > application the URI to contact to get the next page of results.
> >
> > Combining max_id and next_url usage will not solve the duplicate
> > problem. To overcome that issue, you will have to simply strip the
> > duplicate tweets on the client-side.
> >
> > Thanks,
> > Doug Williams
> > Twitter API Support
> > http://twitter.com/dougw
> >
> >
> >
> > On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik 
> wrote:
> >> HI,
> >>
> >> Can you give me an example how i can use prev_url and next_url with
> max_id.
> >>
> >>
> >>
> >> No I am following below process to search
> >> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
> >> the param 'page'
> >> 2. Get the id of the last status on page 15 and set that as the max_id
> >> for the next query
> >> 3. If we have more results, go to step 1
> >>
> >> here i got duplicate. 100th record in page 1 was same as 1st record in
> page
> >> 2.
> >>
> >> I understood the reason why i got the duplicates from matts previous
> mail.
> >>
> >> Will this problem solve if i use max_id with prev_url and next_url?
> >>  How can the duplicate problem be solved
> >>
> >>
> >> Regards,
> >>
> >> Mahaboob Basha Shaik
> >> www.netelixir.com
> >> Making Search Work
> >>
> >>
> >> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:
> >>>
> >>> Basha,
> >>> Pagination is defined well here [1].
> >>>
> >>> The next_url and prev_url fields give your client HTTP URIs to move
> >>> forward and backward through the result set. You can use them to page
> >>> through search results.
> >>>
> >>> I have some work to do on the search docs and I'll add field
> >>> definitions then as well.
> >>>
> >>> 1. 
> >>> http://en.wikipedia.org/wiki/Pagination_(web)
> >>>
> >>> Doug Williams
> >>> Twitter API Support
> >>> http://twitter.com/dougw
> >>>
> >>>
> >>>
> >>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik <
> basha.neteli...@gmail.com>
> >>> wrote:
> >>> > Hi matt,
> >>> >
> >>> > Thank You
> >>> > What is Pagination? Does it mean that I cannot use max_id for
> searching
> >>> > tweets. What does next_url and prev_url fields mean. I did not find
> >>> > next_url
> >>> > and prev_url in documentation. how can these two urls be used with
> >>> > max_id.
> >>> > Please explain with example if possible.
> >>> >
> >>> >
> >>> >
> >>> > Regards,
> >>> >
> >>> > Mahaboob Basha Shaik
> >>> > www.netelixir.com
> >>> > Making Search Work
> >>> >
> >>> >
> >>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford 
> wrote:
> >>> >>
> >>> >> Hi Basha,
> >>> >> The max_id is only intended to be used for pagination via the
> >>> >> next_url
> >>> >> and prev_url fields and is known not to work with since_id. It is
> not
> >>> >> documented as a valid parameter because it's known to only work in
> the
> >>> >> case
> >>> >> it was designed for. We added the max_id to prevent the problem
> where
> >>> >> you
> >>> >> click on 'Next' and page two starts with duplicates. Here's the
> >>> >> scenario:
> >>> >>  1. Let's say you search for 'foo'.
> >>> >>  2. You wait 10 seconds, during which 5 people send tweets
> containing
> >>> >> 'foo'.
> >>> >>  3. You click next and go to page=2 (or call page=2 via the API)
> >>> >>3.a. If we displayed results 21-40 the first 5 results would look
> >>> >> like
> >>> >> duplicates because they were "pushed down" by the 5 new entries.
> >>> >>3.b. If we append a max_id from the time you searched we can do an
> >>> >> offset from the maximum and the new 5 entries are skipped.
> >>> >>   We use option 3.b. (as does twitter.com now) so you don't see
> >>> >> duplicates. Since we wanted to provide the same data in the API as
> the
> >>> >> UI we
> >>> >> added the next_url and prev_url members in our output.
> >>> >> Thanks;
> >>> >>   — Matt Sanford
> >>> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
> >>> >>
> >>> >> HI Matt,
> >>> >>
> >>> >> when Since_id and Max_id are given together, max_id is not working.
> >>> >

[twitter-dev] Re: search fine time interval

2009-04-03 Thread Doug Williams

There is a technique to work around the 1500 tweet paging limit but we
don't officially support it so I'd rather not link you directly. It is
available through a search of this group's archives.

Regards,
Doug Williams
Twitter API Support
http://twitter.com/dougw



On Fri, Apr 3, 2009 at 12:27 PM, Cestino  wrote:
>
> Many thanks Doug,
>
> I tried client side filtering but run into the 1500 tweet limit so I
> cannot get to tweets in the middle of the day. Is there an alternative
> solution? Thanks for your patience; I'm new to APIs.
>
> Cestino
>
> On Apr 1, 3:25 pm, Doug Williams  wrote:
>> Cestino,
>> Search only allows dates to be specified down to the day. We don't allow the
>> granularity to be more specific than that. If you are only looking for a
>> specific hour, our current recommendation is to do client-side filtering.
>>
>> Thanks,
>> Doug Williams
>> Twitter API Support
>> http://twitter.com/dougw
>>
>> On Wed, Apr 1, 2009 at 1:58 PM, Cestino  wrote:
>>
>> > Hi All,
>>
>> > Is it possible to search a finer time interval than a day? For example
>> > search between 12:00 and 1:00 on a specific day. I have tried numerous
>> > formats to extend the "since" and "until" operators to include
>> > hour:minute:second with no luck.
>>
>> > Many thanks,
>> > Cestino
>
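Since the `since`/`until` operators only resolve to whole dates, the client-side filtering Doug recommends can be sketched like this. A hedged example: the `filterByHour` helper is made up for illustration, it compares in UTC, and it assumes each result's `created_at` is a string `new Date` can parse (ISO timestamps are used in the example; real Search API timestamps are RFC-822-style but also parse in most engines).

```javascript
// Keep only tweets whose created_at falls inside [startHour, endHour)
// in UTC. Adjust to local time if your application requires it.
function filterByHour(tweets, startHour, endHour) {
  return tweets.filter(function (t) {
    var h = new Date(t.created_at).getUTCHours();
    return h >= startHour && h < endHour;
  });
}
```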


[twitter-dev] Re: search fine time interval

2009-04-03 Thread Cestino

Many thanks Doug,

I tried client side filtering but run into the 1500 tweet limit so I
cannot get to tweets in the middle of the day. Is there an alternative
solution? Thanks for your patience; I'm new to APIs.

Cestino

On Apr 1, 3:25 pm, Doug Williams  wrote:
> Cestino,
> Search only allows dates to be specified down to the day. We don't allow the
> granularity to be more specific than that. If you are only looking for a
> specific hour, our current recommendation is to do client-side filtering.
>
> Thanks,
> Doug Williams
> Twitter API Support
> http://twitter.com/dougw
>
> On Wed, Apr 1, 2009 at 1:58 PM, Cestino  wrote:
>
> > Hi All,
>
> > Is it possible to search a finer time interval than a day? For example
> > search between 12:00 and 1:00 on a specific day. I have tried numerous
> > formats to extend the "since" and "until" operators to include
> > hour:minute:second with no luck.
>
> > Many thanks,
> > Cestino


[twitter-dev] Re: Search queries not working

2009-04-03 Thread Chad Etzel

I'm not sure of these "next_url" and "prev_url" fields (never seen
them anywhere), but at least in the json data there is a "next_page"
field which uses "?page=_&max_id=__" already prefilled for you.
This should definitely avoid the duplicate tweet issue.  I've never
had to do any client-side duplicate filtering when using the correct
combination of "page","max_id", and "rpp" values...

If you give very specific examples (the actual URL data would be
handy) where you are seeing duplicates between pages, we can probably
help sort this out.

-Chad
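The page/max_id/rpp combination described above can be sketched as a URL builder. The parameter names (`q`, `rpp`, `page`, `max_id`) are the real Search API ones; the `buildSearchUrl` helper itself is hypothetical:

```javascript
// First request: just q and rpp. Subsequent requests: pin max_id to the
// highest id seen on page 1, so tweets that arrive between requests
// don't push earlier results down and reappear as duplicates.
function buildSearchUrl(query, rpp, page, maxId) {
  var url = "http://search.twitter.com/search.json?q=" +
    encodeURIComponent(query) + "&rpp=" + rpp;
  if (page > 1) {
    url += "&page=" + page + "&max_id=" + maxId;
  }
  return url;
}
```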

On Fri, Apr 3, 2009 at 2:57 PM, Doug Williams  wrote:
>
> The use of prev_url and next_url will take care of step 1 from your
> flow described above. Specifically, next_url will give your
> application the URI to contact to get the next page of results.
>
> Combining max_id and next_url usage will not solve the duplicate
> problem. To overcome that issue, you will have to simply strip the
> duplicate tweets on the client-side.
>
> Thanks,
> Doug Williams
> Twitter API Support
> http://twitter.com/dougw
>
>
>
> On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik  
> wrote:
>> HI,
>>
>> Can you give me an example how i can use prev_url and next_url with max_id.
>>
>>
>>
>> No I am following below process to search
>> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
>> the param 'page'
>> 2. Get the id of the last status on page 15 and set that as the max_id
>> for the next query
>> 3. If we have more results, go to step 1
>>
>> here i got duplicate. 100th record in page 1 was same as 1st record in page
>> 2.
>>
>> I understood the reason why i got the duplicates from matts previous mail.
>>
>> Will this problem solve if i use max_id with prev_url and next_url?
>>  How can the duplicate problem be solved
>>
>>
>> Regards,
>>
>> Mahaboob Basha Shaik
>> www.netelixir.com
>> Making Search Work
>>
>>
>> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:
>>>
>>> Basha,
>>> Pagination is defined well here [1].
>>>
>>> The next_url and prev_url fields give your client HTTP URIs to move
>>> forward and backward through the result set. You can use them to page
>>> through search results.
>>>
>>> I have some work to do on the search docs and I'll add field
>>> definitions then as well.
>>>
>>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>>>
>>> Doug Williams
>>> Twitter API Support
>>> http://twitter.com/dougw
>>>
>>>
>>>
>>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik 
>>> wrote:
>>> > Hi matt,
>>> >
>>> > Thank You
>>> > What is Pagination? Does it mean that I cannot use max_id for searching
>>> > tweets. What does next_url and prev_url fields mean. I did not find
>>> > next_url
>>> > and prev_url in documentation. how can these two urls be used with
>>> > max_id.
>>> > Please explain with example if possible.
>>> >
>>> >
>>> >
>>> > Regards,
>>> >
>>> > Mahaboob Basha Shaik
>>> > www.netelixir.com
>>> > Making Search Work
>>> >
>>> >
>>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford  wrote:
>>> >>
>>> >> Hi Basha,
>>> >> The max_id is only intended to be used for pagination via the
>>> >> next_url
>>> >> and prev_url fields and is known not to work with since_id. It is not
>>> >> documented as a valid parameter because it's known to only work in the
>>> >> case
>>> >> it was designed for. We added the max_id to prevent the problem where
>>> >> you
>>> >> click on 'Next' and page two starts with duplicates. Here's the
>>> >> scenario:
>>> >>  1. Let's say you search for 'foo'.
>>> >>  2. You wait 10 seconds, during which 5 people send tweets containing
>>> >> 'foo'.
>>> >>  3. You click next and go to page=2 (or call page=2 via the API)
>>> >>3.a. If we displayed results 21-40 the first 5 results would look
>>> >> like
>>> >> duplicates because they were "pushed down" by the 5 new entries.
>>> >>3.b. If we append a max_id from the time you searched we can do an
>>> >> offset from the maximum and the new 5 entries are skipped.
>>> >>   We use option 3.b. (as does twitter.com now) so you don't see
>>> >> duplicates. Since we wanted to provide the same data in the API as the
>>> >> UI we
>>> >> added the next_url and prev_url members in our output.
>>> >> Thanks;
>>> >>   — Matt Sanford
>>> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
>>> >>
>>> >> HI Matt,
>>> >>
>>> >> when Since_id and Max_id are given together, max_id is not working.
>>> >> This
>>> >> query is ignoring max_id. But with only since _id its working fine. Is
>>> >> there
>>> >> any problem when max_id and since_id are used together.
>>> >>
>>> >> Also please tell me what does max_id exactly mean and also what does it
>>> >> return when we send a request.
>>> >> Also tell me what the total returns.
>>> >>
>>> >>
>>> >> Regards,
>>> >>
>>> >> Mahaboob Basha Shaik
>>> >> www.netelixir.com
>>> >> Making Search Work
>>> >>
>>> >>
>>> >> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:
>>> >>>
>>> >>> Hi there,
>>> >>>
>>> >>>Can you provide an example URL wher

[twitter-dev] Re: Search queries not working

2009-04-03 Thread Doug Williams

The use of prev_url and next_url will take care of step 1 from your
flow described above. Specifically, next_url will give your
application the URI to contact to get the next page of results.

Combining max_id and next_url usage will not solve the duplicate
problem. To overcome that issue, you will have to simply strip the
duplicate tweets on the client-side.

Thanks,
Doug Williams
Twitter API Support
http://twitter.com/dougw
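
Doug's "strip the duplicate tweets on the client-side" advice boils down to
keeping the first occurrence of each status id across pages, e.g. (a minimal
sketch; it assumes each result dict carries an "id" field, as the Search API's
JSON does):

```python
def dedupe(tweets):
    # drop repeats by status id, preserving order across page boundaries
    seen = set()
    unique = []
    for t in tweets:
        if t["id"] not in seen:
            seen.add(t["id"])
            unique.append(t)
    return unique
```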



On Thu, Apr 2, 2009 at 11:09 PM, Basha Shaik  wrote:
> HI,
>
> Can you give me an example how i can use prev_url and next_url with max_id.
>
>
>
> No I am following below process to search
> 1. Set rpp=100 and retrieve 15 pages search results by incrementing
> the param 'page'
> 2. Get the id of the last status on page 15 and set that as the max_id
> for the next query
> 3. If we have more results, go to step 1
>
> here i got duplicate. 100th record in page 1 was same as 1st record in page
> 2.
>
> I understood the reason why i got the duplicates from matts previous mail.
>
> Will this problem solve if i use max_id with prev_url and next_url?
>  How can the duplicate problem be solved
>
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:
>>
>> Basha,
>> Pagination is defined well here [1].
>>
>> The next_url and prev_url fields give your client HTTP URIs to move
>> forward and backward through the result set. You can use them to page
>> through search results.
>>
>> I have some work to do on the search docs and I'll add field
>> definitions then as well.
>>
>> 1. http://en.wikipedia.org/wiki/Pagination_(web)
>>
>> Doug Williams
>> Twitter API Support
>> http://twitter.com/dougw
>>
>>
>>
>> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik 
>> wrote:
>> > Hi matt,
>> >
>> > Thank You
>> > What is Pagination? Does it mean that I cannot use max_id for searching
>> > tweets. What does next_url and prev_url fields mean. I did not find
>> > next_url
>> > and prev_url in documentation. how can these two urls be used with
>> > max_id.
>> > Please explain with example if possible.
>> >
>> >
>> >
>> > Regards,
>> >
>> > Mahaboob Basha Shaik
>> > www.netelixir.com
>> > Making Search Work
>> >
>> >
>> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford  wrote:
>> >>
>> >> Hi Basha,
>> >>     The max_id is only intended to be used for pagination via the
>> >> next_url
>> >> and prev_url fields and is known not to work with since_id. It is not
>> >> documented as a valid parameter because it's known to only work in the
>> >> case
>> >> it was designed for. We added the max_id to prevent the problem where
>> >> you
>> >> click on 'Next' and page two starts with duplicates. Here's the
>> >> scenario:
>> >>  1. Let's say you search for 'foo'.
>> >>  2. You wait 10 seconds, during which 5 people send tweets containing
>> >> 'foo'.
>> >>  3. You click next and go to page=2 (or call page=2 via the API)
>> >>    3.a. If we displayed results 21-40 the first 5 results would look
>> >> like
>> >> duplicates because they were "pushed down" by the 5 new entries.
>> >>    3.b. If we append a max_id from the time you searched we can do an
>> >> offset from the maximum and the new 5 entries are skipped.
>> >>   We use option 3.b. (as does twitter.com now) so you don't see
>> >> duplicates. Since we wanted to provide the same data in the API as the
>> >> UI we
>> >> added the next_url and prev_url members in our output.
>> >> Thanks;
>> >>   — Matt Sanford
>> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
>> >>
>> >> HI Matt,
>> >>
>> >> when Since_id and Max_id are given together, max_id is not working.
>> >> This
>> >> query is ignoring max_id. But with only since _id its working fine. Is
>> >> there
>> >> any problem when max_id and since_id are used together.
>> >>
>> >> Also please tell me what does max_id exactly mean and also what does it
>> >> return when we send a request.
>> >> Also tell me what the total returns.
>> >>
>> >>
>> >> Regards,
>> >>
>> >> Mahaboob Basha Shaik
>> >> www.netelixir.com
>> >> Making Search Work
>> >>
>> >>
>> >> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:
>> >>>
>> >>> Hi there,
>> >>>
>> >>>    Can you provide an example URL where since_id isn't working so I
>> >>> can
>> >>> try and reproduce the issue? As for language, the language identifier
>> >>> is not
>> >>> a 100% and sometimes makes mistakes. Hopefully not too many mistakes
>> >>> but it
>> >>> definitely does.
>> >>>
>> >>> Thanks;
>> >>>  — Matt Sanford / @mzsanford
>> >>>
>> >>> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
>> >>>
>> 
>>  Hi all;
>> 
>>  I see a few people complaining about the since_id not working.  I too
>>  have the same issue - I am currently storing the last executed id and
>>  having to check new tweets to make sure their id is greater than my
>>  last processed id as a temporary workaround.
>> 
>>  I have also noticed that the filter by language param also does

[twitter-dev] Re: Search queries not working

2009-04-02 Thread Basha Shaik
Hi,

Can you give me an example of how I can use prev_url and next_url with max_id?


No. Currently I am following the process below to search:
1. Set rpp=100 and retrieve 15 pages of search results by incrementing
the 'page' param
2. Get the id of the last status on page 15 and set that as the max_id
for the next query
3. If we have more results, go to step 1

Here I got a duplicate: the 100th record on page 1 was the same as the 1st
record on page 2.

I understood the reason why I got the duplicates from Matt's previous mail.

Will this problem be solved if I use max_id with prev_url and next_url?
How can the duplicate problem be solved?


Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Fri, Apr 3, 2009 at 5:59 AM, Doug Williams  wrote:

>
> Basha,
> Pagination is defined well here [1].
>
> The next_url and prev_url fields give your client HTTP URIs to move
> forward and backward through the result set. You can use them to page
> through search results.
>
> I have some work to do on the search docs and I'll add field
> definitions then as well.
>
> 1. 
> http://en.wikipedia.org/wiki/Pagination_(web)
>
> Doug Williams
> Twitter API Support
> http://twitter.com/dougw
>
>
>
> On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik 
> wrote:
> > Hi matt,
> >
> > Thank You
> > What is Pagination? Does it mean that I cannot use max_id for searching
> > tweets. What does next_url and prev_url fields mean. I did not find
> next_url
> > and prev_url in documentation. how can these two urls be used with
> max_id.
> > Please explain with example if possible.
> >
> >
> >
> > Regards,
> >
> > Mahaboob Basha Shaik
> > www.netelixir.com
> > Making Search Work
> >
> >
> > On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford  wrote:
> >>
> >> Hi Basha,
> >> The max_id is only intended to be used for pagination via the
> next_url
> >> and prev_url fields and is known not to work with since_id. It is not
> >> documented as a valid parameter because it's known to only work in the
> case
> >> it was designed for. We added the max_id to prevent the problem where
> you
> >> click on 'Next' and page two starts with duplicates. Here's the
> scenario:
> >>  1. Let's say you search for 'foo'.
> >>  2. You wait 10 seconds, during which 5 people send tweets containing
> >> 'foo'.
> >>  3. You click next and go to page=2 (or call page=2 via the API)
> >>3.a. If we displayed results 21-40 the first 5 results would look
> like
> >> duplicates because they were "pushed down" by the 5 new entries.
> >>3.b. If we append a max_id from the time you searched we can do an
> >> offset from the maximum and the new 5 entries are skipped.
> >>   We use option 3.b. (as does twitter.com now) so you don't see
> >> duplicates. Since we wanted to provide the same data in the API as the
> UI we
> >> added the next_url and prev_url members in our output.
> >> Thanks;
> >>   — Matt Sanford
> >> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
> >>
> >> HI Matt,
> >>
> >> when Since_id and Max_id are given together, max_id is not working. This
> >> query is ignoring max_id. But with only since _id its working fine. Is
> there
> >> any problem when max_id and since_id are used together.
> >>
> >> Also please tell me what does max_id exactly mean and also what does it
> >> return when we send a request.
> >> Also tell me what the total returns.
> >>
> >>
> >> Regards,
> >>
> >> Mahaboob Basha Shaik
> >> www.netelixir.com
> >> Making Search Work
> >>
> >>
> >> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:
> >>>
> >>> Hi there,
> >>>
> >>>Can you provide an example URL where since_id isn't working so I can
> >>> try and reproduce the issue? As for language, the language identifier
> is not
> >>> a 100% and sometimes makes mistakes. Hopefully not too many mistakes
> but it
> >>> definitely does.
> >>>
> >>> Thanks;
> >>>  — Matt Sanford / @mzsanford
> >>>
> >>> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
> >>>
> 
>  Hi all;
> 
>  I see a few people complaining about the since_id not working.  I too
>  have the same issue - I am currently storing the last executed id and
>  having to check new tweets to make sure their id is greater than my
>  last processed id as a temporary workaround.
> 
>  I have also noticed that the filter by language param also doesn't
>  seem to be working 100% - I notice a few chinese tweets, as well as
>  tweets having a null value for language...
> 
> >>>
> >>
> >>
> >
> >
>


[twitter-dev] Re: Search queries not working

2009-04-02 Thread Doug Williams

Basha,
Pagination is defined well here [1].

The next_url and prev_url fields give your client HTTP URIs to move
forward and backward through the result set. You can use them to page
through search results.

I have some work to do on the search docs and I'll add field
definitions then as well.

1. http://en.wikipedia.org/wiki/Pagination_(web)

Doug Williams
Twitter API Support
http://twitter.com/dougw



On Thu, Apr 2, 2009 at 10:03 PM, Basha Shaik  wrote:
> Hi matt,
>
> Thank You
> What is Pagination? Does it mean that I cannot use max_id for searching
> tweets. What does next_url and prev_url fields mean. I did not find next_url
> and prev_url in documentation. how can these two urls be used with max_id.
> Please explain with example if possible.
>
>
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford  wrote:
>>
>> Hi Basha,
>>     The max_id is only intended to be used for pagination via the next_url
>> and prev_url fields and is known not to work with since_id. It is not
>> documented as a valid parameter because it's known to only work in the case
>> it was designed for. We added the max_id to prevent the problem where you
>> click on 'Next' and page two starts with duplicates. Here's the scenario:
>>  1. Let's say you search for 'foo'.
>>  2. You wait 10 seconds, during which 5 people send tweets containing
>> 'foo'.
>>  3. You click next and go to page=2 (or call page=2 via the API)
>>    3.a. If we displayed results 21-40 the first 5 results would look like
>> duplicates because they were "pushed down" by the 5 new entries.
>>    3.b. If we append a max_id from the time you searched we can do an
>> offset from the maximum and the new 5 entries are skipped.
>>   We use option 3.b. (as does twitter.com now) so you don't see
>> duplicates. Since we wanted to provide the same data in the API as the UI we
>> added the next_url and prev_url members in our output.
>> Thanks;
>>   — Matt Sanford
>> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
>>
>> HI Matt,
>>
>> when Since_id and Max_id are given together, max_id is not working. This
>> query is ignoring max_id. But with only since _id its working fine. Is there
>> any problem when max_id and since_id are used together.
>>
>> Also please tell me what does max_id exactly mean and also what does it
>> return when we send a request.
>> Also tell me what the total returns.
>>
>>
>> Regards,
>>
>> Mahaboob Basha Shaik
>> www.netelixir.com
>> Making Search Work
>>
>>
>> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:
>>>
>>> Hi there,
>>>
>>>    Can you provide an example URL where since_id isn't working so I can
>>> try and reproduce the issue? As for language, the language identifier is not
>>> a 100% and sometimes makes mistakes. Hopefully not too many mistakes but it
>>> definitely does.
>>>
>>> Thanks;
>>>  — Matt Sanford / @mzsanford
>>>
>>> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
>>>

 Hi all;

 I see a few people complaining about the since_id not working.  I too
 have the same issue - I am currently storing the last executed id and
 having to check new tweets to make sure their id is greater than my
 last processed id as a temporary workaround.

 I have also noticed that the filter by language param also doesn't
 seem to be working 100% - I notice a few chinese tweets, as well as
 tweets having a null value for language...

>>>
>>
>>
>
>


[twitter-dev] Re: Search queries not working

2009-04-02 Thread Basha Shaik
Hi Matt,

Thank you.
What is pagination? Does it mean that I cannot use max_id for searching
tweets? What do the next_url and prev_url fields mean? I did not find next_url
and prev_url in the documentation. How can these two URLs be used with max_id?
Please explain with an example if possible.



Regards,

Mahaboob Basha Shaik
www.netelixir.com
Making Search Work


On Wed, Apr 1, 2009 at 4:23 PM, Matt Sanford  wrote:

> Hi Basha,
> The max_id is only intended to be used for pagination via the next_url
> and prev_url fields and is known not to work with since_id. It is not
> documented as a valid parameter because it's known to only work in the case
> it was designed for. We added the max_id to prevent the problem where you
> click on 'Next' and page two starts with duplicates. Here's the scenario:
>
>  1. Let's say you search for 'foo'.
>  2. You wait 10 seconds, during which 5 people send tweets containing
> 'foo'.
>  3. You click next and go to page=2 (or call page=2 via the API)
>3.a. If we displayed results 21-40 the first 5 results would look like
> duplicates because they were "pushed down" by the 5 new entries.
>3.b. If we append a max_id from the time you searched we can do an
> offset from the maximum and the new 5 entries are skipped.
>
>   We use option 3.b. (as does twitter.com now) so you don't see
> duplicates. Since we wanted to provide the same data in the API as the UI we
> added the next_url and prev_url members in our output.
>
> Thanks;
>   — Matt Sanford
>
> On Mar 31, 2009, at 08:42 PM, Basha Shaik wrote:
>
> HI Matt,
>
> when since_id and max_id are given together, max_id is not working. The
> query is ignoring max_id. But with only since_id it's working fine. Is there
> any problem when max_id and since_id are used together?
>
> Also please tell me what does max_id exactly mean and also what does it
> return when we send a request.
> Also tell me what the total returns.
>
>
> Regards,
>
> Mahaboob Basha Shaik
> www.netelixir.com
> Making Search Work
>
>
> On Tue, Mar 31, 2009 at 3:22 PM, Matt Sanford  wrote:
>
>>
>> Hi there,
>>
>>Can you provide an example URL where since_id isn't working so I can
>> try and reproduce the issue? As for language, the language identifier is not
>> a 100% and sometimes makes mistakes. Hopefully not too many mistakes but it
>> definitely does.
>>
>> Thanks;
>>  — Matt Sanford / @mzsanford
>>
>>
>> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
>>
>>
>>> Hi all;
>>>
>>> I see a few people complaining about the since_id not working.  I too
>>> have the same issue - I am currently storing the last executed id and
>>> having to check new tweets to make sure their id is greater than my
>>> last processed id as a temporary workaround.
>>>
>>> I have also noticed that the filter by language param also doesn't
>>> seem to be working 100% - I notice a few chinese tweets, as well as
>>> tweets having a null value for language...
>>>
>>>
>>
>
>
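
Matt's scenario above can be simulated to see why the max_id anchor matters (a
toy model over integer status ids, newest first; not real API calls):

```python
def page(tweets, page_no, rpp, max_id=None):
    # tweets are newest-first; max_id freezes the result set at search time
    pool = [t for t in tweets if max_id is None or t <= max_id]
    start = (page_no - 1) * rpp
    return pool[start:start + rpp]

timeline = list(range(40, 0, -1))            # ids 40..1, newest first
p1 = page(timeline, 1, 20)                   # user searches: ids 40..21
timeline = list(range(45, 0, -1))            # 5 new tweets arrive (41..45)
naive = page(timeline, 2, 20)                # 3.a: page 2 repeats ids 25..21
anchored = page(timeline, 2, 20, max_id=40)  # 3.b: clean ids 20..1
```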


[twitter-dev] Re: Search queries not working

2009-04-02 Thread feedbackmine

Hi Matt,

I have tried to use the language parameter of Twitter search and found
the results very unreliable. For example:
http://search.twitter.com/search?lang=all&q=tweetjobsearch returns 10
results (all in english), but
http://search.twitter.com/search?lang=en&q=tweetjobsearch only returns
3.

I googled this list and it seems you are using an n-gram-based algorithm
(http://groups.google.com/group/twitter-development-talk/msg/
565313d7b36e8d65). I have found that the n-gram algorithm works very well for
language detection, but the quality of the training data can make a big
difference.

Recently I developed a language detector (in Ruby) myself:
http://github.com/feedbackmine/language_detector/tree/master
It uses Wikipedia's data for training, and based on my limited
experience it works well. Actually, using Wikipedia's data is not my
idea; all credit should go to Kevin Burton (http://feedblog.org/
2005/08/19/ngram-language-categorization-source/ ).

Just thought you may be interested.

@feedbackmine
http://twitter.com/feedbackmine
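
The n-gram rank-profile idea the linked posts describe (Cavnar and Trenkle's
out-of-place measure) can be sketched as follows; the tiny training strings
here are stand-ins for real Wikipedia corpora:

```python
from collections import Counter

def ngrams(text, n=3):
    # pad so word boundaries contribute grams too
    text = " " + text.lower() + " "
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def profile(corpus, top=300):
    # map each of the most frequent grams to its rank
    counts = Counter(ngrams(corpus))
    return {g: rank for rank, (g, _) in enumerate(counts.most_common(top))}

def score(text, prof):
    # lower is better: sum of rank displacements; unknown grams get the
    # maximum penalty, so text full of unseen grams scores badly
    out = len(prof)
    return sum(abs(prof.get(g, out) - i) for i, g in enumerate(ngrams(text)))

def detect(text, profiles):
    return min(profiles, key=lambda lang: score(text, profiles[lang]))
```

As feedbackmine notes, the quality of the training corpora dominates accuracy;
with short tweets the measure is noisy, which matches the unreliable lang
results people are seeing.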

On Mar 31, 11:22 am, Matt Sanford  wrote:
> Hi there,
>
>      Can you provide an example URL where since_id isn't working so I
> can try and reproduce the issue? As for language, the language
> identifier is not 100% accurate and sometimes makes mistakes. Hopefully not
> too many mistakes but it definitely does.
>
> Thanks;
>    — Matt Sanford / @mzsanford
>
> On Mar 31, 2009, at 08:14 AM, codepuke wrote:
>
>
>
>
>
> > Hi all;
>
> > I see a few people complaining about the since_id not working.  I too
> > have the same issue - I am currently storing the last executed id and
> > having to check new tweets to make sure their id is greater than my
> > last processed id as a temporary workaround.
>
> > I have also noticed that the filter by language param also doesn't
> > seem to be working 100% - I notice a few Chinese tweets, as well as
> > tweets having a null value for language...

