I've read an idea about using a 'ticket' system: each session gets X tickets, 
and tickets regenerate at a fixed rate. Normal users would never run out of 
tickets.
Each operation has a fixed ticket cost; inserts might cost double what 
selects do.
You don't need a background process to handle regeneration. When an operation 
is about to be performed, you look up the time of the session's last request 
and its ticket count at that moment, compute how many tickets have 
regenerated since, and if the session has enough tickets, you perform the 
request. If not, you return a 503 error, or perhaps a friendly message 
saying "swiper no swiping" (to quote Dora).
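A minimal sketch of that lazy-regeneration idea in Python (the class name, costs, and parameters are my own assumptions; in web2py you'd likely store one of these per session, e.g. in `session` or `cache.ram`):

```python
import time

class TicketBucket:
    """Lazy token-bucket rate limiter: tickets regenerate over time,
    but regeneration is only computed when a request actually arrives,
    so no background timer is needed."""

    def __init__(self, capacity=100, regen_per_sec=1.0):
        self.capacity = capacity            # max tickets a session can hold
        self.regen_per_sec = regen_per_sec  # tickets regained per second
        self.tickets = float(capacity)      # start with a full bucket
        self.last_seen = time.time()        # time of the last request

    def spend(self, cost):
        """Deduct `cost` tickets and return True if the session can afford
        the operation; return False otherwise (caller sends the 503)."""
        now = time.time()
        # Lazy regeneration: credit tickets for the time elapsed since
        # the last request, capped at the bucket's capacity.
        elapsed = now - self.last_seen
        self.tickets = min(self.capacity,
                           self.tickets + elapsed * self.regen_per_sec)
        self.last_seen = now
        if self.tickets >= cost:
            self.tickets -= cost
            return True
        return False

# Per the scheme above: an insert costs double a select (illustrative values).
SELECT_COST, INSERT_COST = 1, 2
```

A controller could then call `bucket.spend(SELECT_COST)` before running the query and raise `HTTP(503)` when it returns False.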

On Thursday, May 9, 2013 11:58:43 AM UTC-7, Alex Glaros wrote:
>
> What techniques can be used in a Web2py site to prevent data mining by 
> harvester bots?
>
> In my day job, if the Oracle database slows down, I go to the Unix OS, see 
> if the same IP address is doing a-lot-faster-than-a-human-could-type 
> queries, and then block that IP address in the firewall.
>
> Are there any ideas that I could use with a Web2py website?
>
> Thanks,
>
> Alex Glaros
>
>
