Write to a logfile that gets gathered up and aggregated into a database periodically.
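A minimal sketch of that logfile approach on the recording side (the file name and line format here are assumptions, not anything from the thread):

```python
# Record one profile-view event per line; a periodic job aggregates later.
import time

LOG_PATH = "profile_views.log"  # hypothetical path

def record_view(profile_id):
    # Open in append mode and write a timestamped line per event.
    with open(LOG_PATH, "a") as f:
        f.write("%d\t%s\n" % (int(time.time()), profile_id))

record_view("profile_42")
record_view("profile_42")
record_view("profile_7")
```

On POSIX systems, small writes to a file opened in append mode land atomically at the end of the file, so multiple web processes can share one log without locking.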

Memcached is not an ideal solution for this problem, for a couple of reasons. First, it's unreliable (in a theoretical sense; the software itself is quite solid). If the memcached server is restarted, you will lose all your data; if your data grows beyond the capacity of your memcached instances, old data will be silently expired to make room for the new stuff.

Second, it's not queryable in the ways you'd probably want to query this data, unless you anticipate all your possible queries and store data redundantly. As you say, the best you could hope for would be to use it as a temporary staging area before writing the real data to a database.

A text file, or even a binary log of whatever format you choose, is a lot easier to deal with on both the event recording side and on the aggregation side. It's also almost certain to be significantly faster than memcached given that no interprocess (and possibly network) communication is required to log an event.

-Steve


On Nov 3, 2007, at 6:29 PM, <[EMAIL PROTECTED]> wrote:

I am trying to figure out a problem: how to effectively track click statistics, like user profile views, without hitting the database or writing statistics to a flatfile and processing it by cron. It would be best to save them directly into memcache so they are globally available, then run database updates periodically. The problem is that there could be thousands of different profiles to count stats for,
so using the increment function is not an option.
It would be best if memcache supported "append", so I could save all hit IDs under one memcache key and then process the list to count the frequency of IDs and issue a db update. So my question is: how do I handle such cases so that statistics are written into a shared place for a later DB update?

Goodwill
