On Thu, Sep 17, 2015 at 10:40 AM, Lennon Day-Reynolds <[email protected]> wrote:
> Log-based rollups are slower to converge, but potentially much cheaper
> to maintain (esp. if you have any existing async job infrastructure in
> place). Depending on your access rates and how "bursty" traffic can be
> they might be a totally acceptable option. You also didn't mention
> your underlying web stack, but assuming you're going to run on
> multiple hosts you might have to work a bit to gather + order logs in
> one place to get accurate rollups.

Yeah, this is a fair point. There are services these days, Papertrail for
example, that aggregate logs, so it can certainly be worked around, but you
need to integrate with or build something to do it. Timeliness will
definitely vary.

> There's also a middle ground where you use memcached or redis to store
> your counts. In-place increments in those systems will be cheaper than
> a transactional DB write, though you can't do a simple filter on event
> timestamp as you can in SQL. I've commonly used a simple bucketed
> storage model with TTLs on each bucket to store a sliding window of
> recent access counts.

+1 - totally valid option, too.

Jeff
