Maybe we need an auto-responder for emails that contain "unsubscribe"
On May 2, 2012, at 9:14 AM, Eric Evans wrote:
On Tue, May 1, 2012 at 9:05 AM, Gmail matthewapet...@gmail.com wrote:
unsubscribe
http://qkme.me/35w46c
--
Eric Evans
Acunu | http://www.acunu.com | @acunu
How much data do you think you will need ad hoc query ability for?
On Fri, Jan 20, 2012 at 11:28 AM, Brian O'Neill b...@alumni.brown.edu wrote:
I can't remember if I asked this question before, but we're using Cassandra
as our transactional system, and building up quite a library of
Many articles suggest modeling TimeUUIDs in columns instead of rows, but since
only one node can serve a single row, won't this lead to hot-spot problems?
It won't cause hotspots as long as you are sharding by a small enough
time period, like hour, day, or week.
I.e., the key is the hour, day, or
How many total ranges do you expect to have long term?
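The sharding advice above can be sketched in a few lines: bucket the row key by a fixed time period so writes for one logical series spread across many rows (and thus many nodes). The `sensor_id` name and the `id:bucket` key layout are illustrative assumptions, not from the thread.

```python
from datetime import datetime, timezone

def bucketed_row_key(series_id: str, ts: datetime, bucket: str = "day") -> str:
    """Build a row key sharded by a time bucket (hour, day, or week).

    Each bucket becomes its own row, so no single node serves the
    entire history of one series.
    """
    fmt = {"hour": "%Y%m%d%H", "day": "%Y%m%d", "week": "%Y%W"}[bucket]
    return f"{series_id}:{ts.strftime(fmt)}"

ts = datetime(2012, 5, 2, 9, 14, tzinfo=timezone.utc)
print(bucketed_row_key("sensor-42", ts, "day"))   # sensor-42:20120502
print(bucketed_row_key("sensor-42", ts, "hour"))  # sensor-42:2012050209
```

Smaller buckets spread load more evenly but mean more rows to read back per ad hoc query, which is why the total number of ranges matters.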
On Tue, Nov 1, 2011 at 11:17 AM, Tamas Marki tma...@gmail.com wrote:
Hello,
I'm new to the list and also to Cassandra. I found it when I was searching
for something to replace our busy mysql server.
One of the things we use the server
Ed,
I could be completely wrong about this working--I haven't specifically
looked at how the counts are executed, but I think this makes sense.
You could potentially shard across several rows, based on a hash of
the username combined with the time period as the row key. Run a
count across each
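A minimal sketch of that sharding idea, with plain dicts standing in for a Cassandra counter column family: the row key combines the username, the time period, and a hash-derived shard number, and a read sums the count from every shard row. The shard count, key layout, and helper names are assumptions for illustration.

```python
import hashlib
from collections import defaultdict

NUM_SHARDS = 8  # assumed tuning knob, not from the original thread

# stand-in for a counter column family: row key -> count
counters: dict[str, int] = defaultdict(int)

def shard_key(username: str, period: str, shard: int) -> str:
    return f"{username}:{period}:{shard}"

def pick_shard(username: str, event_id: str) -> int:
    # Hash the event onto one shard so increments spread across rows.
    digest = hashlib.md5(f"{username}:{event_id}".encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def increment(username: str, period: str, event_id: str) -> None:
    counters[shard_key(username, period, pick_shard(username, event_id))] += 1

def total(username: str, period: str) -> int:
    # Run a count across each shard row and sum the results.
    return sum(counters[shard_key(username, period, s)] for s in range(NUM_SHARDS))

for i in range(100):
    increment("ed", "2012-05", f"event-{i}")
print(total("ed", "2012-05"))  # 100
```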
Aditya,
Depending on how often you have to write to the database, you could
perform dual writes to two different column families, one that has
summary + details in it, and one that only has the summary.
This way you can get everything with one query, or the summary with
one query; this should
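The dual-write scheme can be sketched with two dicts standing in for the two column families: every write goes to both, one row holding summary plus details and one holding only the summary. The record layout and names here are illustrative assumptions.

```python
# Stand-ins for two column families; in Cassandra these would be
# separate column families written on every update.
full_cf: dict[str, dict[str, str]] = {}     # summary + details
summary_cf: dict[str, dict[str, str]] = {}  # summary only

def write_record(key: str, summary: dict, details: dict) -> None:
    # Dual write: one row with everything, one with just the summary,
    # so each read pattern is served by a single query.
    full_cf[key] = {**summary, **details}
    summary_cf[key] = dict(summary)

write_record("order-1",
             summary={"status": "shipped", "total": "19.99"},
             details={"items": "3", "carrier": "UPS"})

print(summary_cf["order-1"])  # cheap summary-only read
print(full_cf["order-1"])     # full read, still one query
```

The trade-off is doubled write volume in exchange for never over-fetching details on summary reads.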
Nice idea!
And what about looking at, maybe, some custom caching solutions, leaving
aside Cassandra caching?
On Sun, Oct 30, 2011 at 2:00 AM, Zach Richardson
j.zach.richard...@gmail.com wrote:
Aditya,
Depending on how often you have to write to the database, you could
perform
Matthias,
This is an interesting problem.
I would consider using longs as the column type, where your column
names are evenly distributed longs in sort order when you first write
your list out. So if you have items A and C with the long column
names 1000 and 2000, and then you have to insert
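The spacing trick above can be sketched with a plain dict standing in for a row's columns: write the list with evenly spaced long names, then insert a new item at the midpoint of its two neighbors. The initial gap of 1000 and the rebalancing error are assumptions for illustration.

```python
GAP = 1000  # assumed initial spacing between column names

def initial_positions(items: list[str]) -> dict[int, str]:
    # Write the list out with evenly distributed long column names.
    return {(i + 1) * GAP: item for i, item in enumerate(items)}

def insert_between(columns: dict[int, str], lo: int, hi: int, item: str) -> int:
    # Place the new item at the midpoint of the neighboring names,
    # keeping sort order without rewriting existing columns.
    mid = (lo + hi) // 2
    if mid in (lo, hi):
        raise ValueError("no room between neighbors; rebalance the names")
    columns[mid] = item
    return mid

cols = initial_positions(["A", "C"])        # names 1000 and 2000
pos = insert_between(cols, 1000, 2000, "B")
print(sorted(cols.items()))  # [(1000, 'A'), (1500, 'B'), (2000, 'C')]
```

Repeated inserts in the same gap eventually exhaust the space between two names, so occasionally rewriting the list with fresh even spacing is part of the design.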