On 23 Jul 2008, at 21:20, [EMAIL PROTECTED] wrote:

This is actually a pretty "hard" problem -- there is no right answer. What if the user clears her browser state while using the site? Leaves the computer and browser on at work, then tries to log in at home on a different computer?

I agree that it's a hard problem if you do it the way round that was originally suggested.

However, in all the apps which do that (Facebook being an example), logging into a new session kills your old one, and this is pretty easy to do (given that you force users to have a cookie, etc.).
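
For what it's worth, a minimal sketch of that in a Catalyst app might look something like the below. It assumes the usual Session + Authentication plugins with the DBIC user store, plus a hypothetical current_session_id column on your users table; untested, just the shape of the idea:

    package MyApp::Controller::Root;
    use strict;
    use warnings;
    use base 'Catalyst::Controller';

    __PACKAGE__->config( namespace => '' );

    # After a successful authenticate(), remember this session as the
    # only valid one for the user (current_session_id is a made-up
    # column on the users table):
    sub login : Local {
        my ( $self, $c ) = @_;
        if ( $c->authenticate( { username => $c->req->params->{username},
                                 password => $c->req->params->{password} } ) ) {
            $c->user->obj->update({ current_session_id => $c->sessionid });
            return $c->res->redirect( $c->uri_for('/') );
        }
        # re-display the login form on failure...
    }

    # On every request, kick out any session that is no longer the
    # user's most recent one:
    sub auto : Private {
        my ( $self, $c ) = @_;
        return 1 unless $c->user_exists;
        if ( ( $c->user->obj->current_session_id || '' ) ne $c->sessionid ) {
            $c->logout;
            $c->delete_session('logged in elsewhere');
            $c->res->redirect( $c->uri_for('/login') );
            return 0;    # stop processing this request
        }
        return 1;
    }

    1;

The old session isn't destroyed on the spot; it just stops being accepted the next time that browser turns up, which is usually all you need.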

I think you can gain most of the lockdown of one session per user if you just track user activity over an X-minute period. For instance, every time a user hits your app, add a record attached to the user account in the db (src ip, session number, other relevant info). Then run a check on those entries (either inline or, if that's too costly, via cron) that looks for multiple IPs/sessions or whatever you define as multiple users (given that HTTP is stateless, there really is no _safe_ definition). If that process detects usage over your threshold, disable the account (temporarily or permanently). The same process can clean up entries that are outside of the time window you want to look at.
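
To make that concrete, it boils down to roughly this: plain DBI against a hypothetical user_hits table, with made-up thresholds. A sketch of the idea, not a recommendation, for the reasons below:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:mysql:myapp', 'myapp_user', 'secret',
                            { RaiseError => 1 } );

    # Called on every request: record who hit the app, from where,
    # under which session.
    sub log_hit {
        my ( $user_id, $session_id, $src_ip ) = @_;
        $dbh->do(
            'INSERT INTO user_hits (user_id, session_id, src_ip, hit_at)
             VALUES (?, ?, ?, NOW())',
            undef, $user_id, $session_id, $src_ip,
        );
    }

    # Called from cron: disable accounts with "too many" distinct
    # sessions or IPs inside the window, then purge old rows.
    sub sweep {
        my $offenders = $dbh->selectcol_arrayref(q{
            SELECT user_id
              FROM user_hits
             WHERE hit_at > NOW() - INTERVAL 15 MINUTE
             GROUP BY user_id
            HAVING COUNT(DISTINCT session_id) > 1
                OR COUNT(DISTINCT src_ip)     > 2
        });
        if (@$offenders) {
            my $in = join ',', ('?') x @$offenders;
            $dbh->do( "UPDATE users SET disabled = 1 WHERE id IN ($in)",
                      undef, @$offenders );
        }
        $dbh->do(
            'DELETE FROM user_hits WHERE hit_at < NOW() - INTERVAL 15 MINUTE');
    }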

If you in any way expect your site to scale to high volume, I'd highly recommend avoiding any approach like this.

Your 'hit logging' table is going to be really, really slow (and have massive contention) if you're doing high volume. Also, these lookups for multiple IPs you're talking about aren't going to be indexed, right? (Adding even a *single* index to a table can cut your maximum insert rate by 40-60%.)

Even if you're cleaning out rows pretty regularly, you're going to seriously grind the IO on the DB. The general IO subsystem on your database server (not to mention any replication / binlogs / hot backups etc. that you need) is going to be doing significantly more work than it needs to, meaning the headroom you have in the system is that much reduced...

Your session storage could be criticized for the same reasons, but it's actually significantly less work for the DB (you're updating, not inserting and deleting: roughly half the IO, and no index overhead since you're re-writing rows without changing keys), and it can _trivially_ be pushed into memcache at a later date when you need to. The approach recommended above requires a relational model, so moving it to memcache wouldn't work...
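
To illustrate the 'trivially pushed into memcache' bit: a session is one value per session id, so it maps straight onto a key/value store. A hand-rolled sketch with Cache::Memcached (server address, key scheme and expiry all made up) looks like:

    use strict;
    use warnings;
    use Cache::Memcached;
    use Storable qw(freeze thaw);

    my $memd = Cache::Memcached->new({
        servers => ['127.0.0.1:11211'],    # made-up address
    });

    # One key per session: no inserts/deletes, no indexes, and the
    # expiry time does the cleanup for you.
    sub store_session {
        my ( $sid, $session ) = @_;
        $memd->set( "session:$sid", freeze($session), 3600 );
    }

    sub fetch_session {
        my ($sid) = @_;
        my $frozen = $memd->get("session:$sid");
        return $frozen ? thaw($frozen) : undef;
    }

In a Catalyst app you'd most likely just swap the session store plugin for something like Catalyst::Plugin::Session::Store::Memcached rather than rolling this by hand.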

All of that said, unless you're looking to scale to multiple Mb/s of traffic, I'm probably being too paranoid - as jay said already, 'right' here is use-case specific :)

Cheers
t0m

