Re: [delicious-discuss] Py-Delicious - get_urlposts

2005-08-10 Thread joshua schachter

That API is screen-scraping. We don't support that.

Joshua

On Aug 10, 2005, at 3:52 AM, Michael Foord wrote:


Hello,

Any of you Delicious'ers using the Python interface to the API? (The mailing list over at belios.de is pretty quiet.)


It looks like the ``get_urlposts`` function has stopped working.

The facility still exists on del.icio.us:

http://del.icio.us/url/ + md5.md5(the_url).hexdigest()

still returns the page of posts - but HtmlToPosts isn't extracting them anymore.

Anyone got any ideas? (Short of fetching the RSS version and parsing it myself :-)
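(For reference, a minimal sketch of the URL construction described above, using the modern ``hashlib`` spelling of the old ``md5`` module; the page itself belongs to del.icio.us and its format may change:)

```python
import hashlib

def delicious_url_page(the_url):
    # del.icio.us keys its per-URL history page on the MD5 hex digest
    # of the URL string, as described in the message above. The old
    # ``md5`` module is spelled ``hashlib`` in modern Python.
    digest = hashlib.md5(the_url.encode("utf-8")).hexdigest()
    return "http://del.icio.us/url/" + digest

print(delicious_url_page("http://www.python.org/"))
```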


Best Regards,

Fuzzy
http://www.voidspace.org.uk/python



___
discuss mailing list
discuss@del.icio.us
http://lists.del.icio.us/cgi-bin/mailman/listinfo/discuss




--
joshua schachter
[EMAIL PROTECTED]




Re: [delicious-discuss] Py-Delicious - get_urlposts

2005-08-10 Thread joshua schachter
Not yet. My main worry here is that if we provide the API, people will just hammer away at it for every URL they know about. (The same problem exists on /url itself.)

I'm still looking for a good way to throttle these requests.

Joshua

On Aug 10, 2005, at 8:06 AM, Michael Foord wrote:


joshua schachter wrote:


That API is screen-scraping. We don't support that.




Is there a way to obtain this information via the REST API?
I would like to know what categories users have put specific URLs in.

Regards,

Fuzzyman




--
joshua schachter
[EMAIL PROTECTED]




[delicious-discuss] IT Conversations Podcast: Folksonomy - How I Learned to Stop Worrying and Love the Mess

2005-08-10 Thread Nicola Paolucci
Hi All,

I'm just dropping a quick note to tell the mailing list that IT Conversations
has just published a podcast with a panel on folksonomy:

In this dynamic panel from ETech 2005, Joshua Schachter
(del.icio.us), Stewart Butterfield (Flickr), Jimmy Wales (Wikipedia)
and Clay Shirky discuss several topics important to folksonomies.
Surprising aspects of the implementation of tagging in various
environments and approaches to balancing the needs of the system to
the desires of the user are discussed from various viewpoints.

Here's the link:
http://feeds.feedburner.com/ITConversations-EverythingMP3?m=349

ciao,
-- Nick


Re: [delicious-discuss] Py-Delicious - get_urlposts

2005-08-10 Thread Richard Cameron


On 10 Aug 2005, at 16:17, Pete Freitag wrote:

Another way to implement it would be to allow X number of connections per IP per day (Yahoo!'s APIs typically allow 5,000 requests per day). Then just keep a database table with the IP and number of connections for the day. If they exceed the limit, delay the request and return a 503. This would probably perform OK because you could wipe the table clean every day.


That would still entail a fairly hefty performance overhead of making  
one update to a database table per API request. My guess is that this  
contention would rapidly become a performance bottleneck.


An alternative way of implementing the same idea might be to use LiveJournal's memcached software (http://www.danga.com/memcached/). It acts as a very large in-memory hash table which can be queried by sending requests over a socket. It supports expiration times on data, and has an atomic incr operation which could be used to keep a running total of the number of requests made in a finite time window. There's a Perl client, so it should be reasonably straightforward to hack some code into the start of each API request which incrs the count for that username (and/or IP address), and conditionally throws a 503 response code, with an appropriate Retry-After header, if the client is being unreasonable.
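(Richard's scheme amounts to a fixed-window counter. A minimal sketch in Python, with an in-process dict standing in for memcached's atomic incr and key expiry; the class name and limits are illustrative, not from the thread:)

```python
import time

class FixedWindowThrottle:
    """Fixed-window request counter, in the spirit of the memcached
    incr-with-expiry scheme described above. A plain dict stands in
    for memcached here, so this sketch is single-process only."""

    def __init__(self, limit=5000, window_seconds=86400):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # key -> (window_start, count)

    def allow(self, key, now=None):
        """Return (allowed, retry_after_seconds) for one request."""
        now = time.time() if now is None else now
        start, count = self.counts.get(key, (now, 0))
        if now - start >= self.window:
            # Window expired: reset, like memcached dropping the key.
            start, count = now, 0
        count += 1
        self.counts[key] = (start, count)
        if count > self.limit:
            # Caller should answer 503 with a Retry-After header.
            retry_after = int(start + self.window - now)
            return False, retry_after
        return True, 0

throttle = FixedWindowThrottle(limit=3, window_seconds=60)
results = [throttle.allow("203.0.113.7", now=1000.0)[0] for _ in range(4)]
print(results)  # → [True, True, True, False]
```

Keying on username and/or IP, as Richard suggests, is just a choice of `key`; memcached's atomic incr makes the same counter safe across many web processes.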


Richard.


Re: [delicious-discuss] Py-Delicious - get_urlposts

2005-08-10 Thread joshua schachter




Or you could set up one or more dedicated servers for the API with a
throttled connection to the del.icio.us database. It can be done with:
http://sqlrelay.sourceforge.net/



This seems neat. Any actual war stories?

Joshua

--
joshua schachter
[EMAIL PROTECTED]
http://del.icio.us/joshua
