be much simpler. Many
fewer requests to the server. Less data storage. And given
that Twitter is supposed to be simple, this seems like a goal
worth pursuing, at least from my point of view.
Owkaye
When I request friends (or followers) from the Twitter
API I want to get
You've just made a perfect argument for my suggestion that
Twitter use ONLY unchangeable screen names (no more ids) for
the whole system.
:)
Owkaye
I know there's been a ton of requests for a
followers/screen_names API, or a friends/screen_names one
for that matter. Right now the only way
of a curl search
that requires a phrase match?
Owkaye
How can I retrieve the maximum number of tweets in a search?
Can rpp be set to more than 100?
What if I do not send an rpp value? Does Twitter default to
returning more than 100 per page?
Owkaye
when trying to
retrieve more than 100 tweets. Am I wrong about this?
Owkaye
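For what it's worth, the answer implied later in the thread is that rpp caps at 100 and you have to page. A minimal sketch of building the paged requests, assuming the old search.twitter.com Atom endpoint and its documented rpp/page parameters (the URL and cap values come from the docs quoted further down; everything else here is illustrative):

```python
# Build the request URLs needed to page through search results,
# assuming the documented caps: rpp max 100, 1,500 statuses total.
from urllib.parse import urlencode

BASE = "http://search.twitter.com/search.atom"
MAX_RPP = 100       # documented maximum results per page
MAX_TOTAL = 1500    # documented cap across all pages

def search_urls(query, total=MAX_TOTAL):
    """Return one URL per page needed to fetch up to `total` results."""
    pages = (min(total, MAX_TOTAL) + MAX_RPP - 1) // MAX_RPP
    return [
        BASE + "?" + urlencode({"q": query, "rpp": MAX_RPP, "page": p})
        for p in range(1, pages + 1)
    ]

urls = search_urls("harry potter")
# 15 requests; asking for rpp > 100 just gets clamped by the server.
```

So "the maximum number of tweets" means 15 separate requests of 100, not one big one.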
Hi Peter,
I got it working already, that was easy ... and FAST thanks
to your help!
Owkaye
friends/ids:
http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-friends%C2%A0ids
followers/ids:
http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-followers%C2%A0ids
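Since the thread keeps circling back to these two methods, here is a minimal sketch of consuming their XML form. The sample payload below is made up for illustration; real friends/ids and followers/ids responses wrap the same kind of <id> elements:

```python
# Parse the XML returned by friends/ids or followers/ids,
# assuming a response shaped like <ids><id>…</id>…</ids>.
import xml.etree.ElementTree as ET

SAMPLE = "<ids><id>101</id><id>202</id><id>303</id></ids>"  # illustrative

def parse_ids(xml_text):
    """Return the numeric user IDs from an ids-style response."""
    root = ET.fromstring(xml_text)
    return [int(node.text) for node in root.findall("id")]

print(parse_ids(SAMPLE))  # → [101, 202, 303]
```

Mapping those IDs back to screen names is the part that still takes extra requests, which is what the screen_names API requests above are about.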
I surely hope people would not judge
me based on who is following me.
They won't unless they are stupid. After all, Twitter gives
you no way to control who follows you, and most people
understand this.
Followers do no, zero, nada harm.
Just let them be.
Agreed.
Owkaye
for commercial messages
to be sent to you, and in this case clearly you are asking
for it.
Owkaye
good to know it's available to others I guess.
Owkaye
It would be very helpful to know the definition of quick
as it relates to following-churn suspensions.
As Cameron pointed out earlier, as soon as they do that,
the following churners will adjust their methods to be
just inside that definition of OK.
This seems like a really short-sighted
If users did due diligence on those they follow and only
followed people who demonstrate some value to them,
follower churn would not exist. Period.
Obviously they won't, so maybe it's time to deal with reality
rather than dreaming of a perfect world.
Owkaye
-- and the support they need from within the
company of course.
Then again, if these people are already working on it (as
you may have suggested) then it's going to happen one of
these days anyways ... :)
Owkaye
I don't think that adding more people to the staff at
Twitter
their existing cached server data because
this historical data would exist on separate data storage
servers ... theoretically anyways.
Owkaye
I am a bit concerned. I remember at one point it being
between 30 and 45 days. Now it seems to be shrinking by
about one day per month. Last month
it happens I'll focus on building my own
little space in the Twitter universe and continue to hope
for the best.
:)
Owkaye
I would do anything (including paying good amounts of
money) to be able to purchase access to older datasets
that I could transfer to my database through non-rest-api
of these users.
Then when the visitor tries to login, your code can check to
see if the private id the visitor has entered is in your own
database. If so the person is allowed to login, and if not
they get an error.
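The login check described above can be sketched like this. A set stands in for the database lookup, and the ids and names are illustrative, not anything the API defines:

```python
# Allow login only if the visitor's private id is one we issued
# and stored ourselves. A set stands in for the database here.
registered_ids = {"k3j9x2", "p8m4q1"}   # ids we handed out (illustrative)

def can_log_in(entered_id):
    """True if the entered private id exists in our own records."""
    return entered_id in registered_ids

print(can_log_in("k3j9x2"))  # True: id is in our database, let them in
print(can_log_in("zzzzzz"))  # False: unknown id, show an error
```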
Would this work to solve the problem, or am I missing
something here?
Owkaye
The Streaming API docs say we should avoid opening new
connections with the same user:pass when that user
already has a connection open. But I'm hoping it is
okay to do this every hour or so ...
If you're only doing this every hour, that's fine by us.
Great, thanks for the ...
JSON is a much better format to use.
Not for me it isn't. My software has built-in XML parsing
capabilities but it doesn't know how to deal with JSON data
so XML is clearly the best way for me to go.
Owkaye
and therefore useless.
Not a major problem on Twitter because of the typical
transience of data, but when you run a company like mine
that needs to reference historic data, it will definitely
create future problems when these companies fail.
Just something for folks to consider ...
Owkaye
username:password -d
track=harry potter,
Owkaye
-u
username:password -d track=harry potter,
I think the problem is missing quotes and URL
encoding. Try curl … -d track=harry+potter
Thanks for the suggestion Matt but that doesn't work either.
Any other ideas?
Owkaye
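For anyone hitting the same wall: the phrase fails for two separate reasons, and Matt's suggestion only covers one of them. The space splits the curl argument at the shell unless the body is quoted, and the HTTP body itself must be form-encoded. A small sketch of the encoding side (the curl line in the comment is illustrative):

```python
# Show the form-encoding that curl's -d body needs for a phrase
# containing a space, i.e. why "track=harry potter" fails raw.
from urllib.parse import urlencode

body = urlencode({"track": "harry potter"})
print(body)  # → track=harry+potter

# Equivalent curl invocation, with the body quoted so the shell
# passes it through as a single argument (illustrative):
#   curl ... -u username:password -d "track=harry+potter"
```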
Twitter this
way, but I guess there's no way around it -- and for me the
end result is the same anyways -- so it looks like I can
proceed successfully now.
Thanks again for everyone's help, I'll be back when I have
new questions ... :)
Owkaye
or
denial of service issues? I mean, is this an acceptable way
to close a connection ... by opening a new one in order to
force the old connection to close?
Any info you can provide that will clarify this issue is
greatly appreciated, thanks!
Owkaye
the ones in this
email.
So who should I contact at Twitter to see if they can raise
the search limits for me? Are you the man? If not, please
let me know who I should contact and how.
Thanks!
Owkaye
what I'll use, especially if some helpful soul can post some
code to help me get started. Thanks.
Owkaye
You may be correct, but to plan for the possibility that
this may be bigger than expected is simply the way I do
business. It doesn't make sense for me to launch a promo
like this until I'm prepared for the possibilities, right?
Owkaye
I'm building an app that uses the atom search API to retrieve recent
posts which contain a specific keyword. The API docs say:
Clients may request up to 1,500 statuses via the page and rpp
parameters for the search method.
But this 1500 hits per search cannot be done in a single request
Thanks Chad, that's what I was afraid of. I wonder if you
know about this next question:
The Twitter API docs say search is rate limited to something
higher than REST's 150 requests per hour, but for the
sake of argument let's say the search rate limit is actually
150 hits per hour ...
Since
You are correct, you have to do 15 requests. However,
you can cache the results on your end, so when you come
back, you are only getting the new stuff.
Thanks Scott. I'm storing the results in a database on my server but
that doesn't stop the search from retrieving the same results
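Scott's caching idea works if you also track the highest status id you've stored and hand it back on the next search (the Search API accepts a since_id parameter for exactly this). A minimal sketch, with made-up tweet dicts and ids:

```python
# Keep only statuses newer than the last one we cached, then
# advance the high-water mark for the next search's since_id.
def new_statuses(results, since_id):
    """Filter out statuses we have already stored."""
    return [s for s in results if s["id"] > since_id]

cached_max = 500                                   # highest id stored so far
fetched = [{"id": 498}, {"id": 501}, {"id": 502}]  # illustrative results
fresh = new_statuses(fetched, cached_max)
cached_max = max([cached_max] + [s["id"] for s in fresh])

print([s["id"] for s in fresh])  # → [501, 502]
print(cached_max)                # → 502
```

Passing the stored maximum as since_id means the server never re-sends the old results in the first place, so the duplicate filtering is just a safety net.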