On Mon, Dec 29, 2014 at 5:20 PM, Sam Klock skl...@akamai.com wrote:
Our investigation led us to logic in Cassandra used to paginate scans
of rows in indexes on composites. In short, the issue seems to be the
algorithm Cassandra uses to select the size of the pages for the scan,
partially given
Hi!
Yes, since all the writes for a partition (or row, if you speak Thrift) always
go to the same replicas, you will need to design your schema to avoid hotspots - a pure
day-based row will cause all the writes for a single day to go to the same replicas, so
those nodes will have to work really hard for a day,
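A common way to avoid that hotspot is to add a synthetic bucket to the partition key so one day's writes are spread over several partitions. The table and column names below are hypothetical, just a sketch of the idea:

```sql
-- Hypothetical schema: each day's writes are spread across N buckets,
-- so no single set of replicas owns the whole day.
CREATE TABLE events_by_day (
    day text,            -- e.g. '2014-12-29'
    bucket int,          -- e.g. chosen at write time, 0..9
    event_time timeuuid,
    payload text,
    PRIMARY KEY ((day, bucket), event_time)
);

-- Writers pick a bucket (randomly or by hashing a client id);
-- readers must query all buckets for a day and merge.
INSERT INTO events_by_day (day, bucket, event_time, payload)
VALUES ('2014-12-29', 3, now(), 'some event');
```

The trade-off is on the read side: a query for one day now has to fan out over all N buckets.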
Hi there
I was facing a similar requirement recently (i.e. UPDATE IF EXISTS), and
I found a workaround.
CREATE TABLE my_table (
    partition_key int,
    duplicate_partition_key int,
    value text,
    PRIMARY KEY (partition_key)
);
At the beginning, I tried to query with: UPDATE
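If I read the schema above correctly, the workaround is to duplicate the partition key into a regular column and then use a conditional update on that column; the condition can only match if the row was previously inserted, so it behaves like IF EXISTS. A sketch, assuming the row is always inserted with the duplicate column set:

```sql
-- Insert always sets the duplicated key column:
INSERT INTO my_table (partition_key, duplicate_partition_key, value)
VALUES (1, 1, 'initial');

-- This lightweight-transaction update is applied only when the row
-- already exists; for a missing row the IF condition cannot match.
UPDATE my_table SET value = 'updated'
WHERE partition_key = 1
IF duplicate_partition_key = 1;
```

Like any lightweight transaction, this pays the Paxos round-trip cost, so it is noticeably slower than a plain UPDATE.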
On Mon, Dec 29, 2014 at 3:24 PM, mck m...@apache.org wrote:
Especially in CASSANDRA-6285 i see some scary stuff went down.
But there are no outstanding bugs that we know of, are there?
Right, the question is whether you believe that 6285 has actually been
fully resolved.
It's relatively
Thanks Rob.
On Tue, Dec 30, 2014 at 1:38 PM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Dec 30, 2014 at 9:42 AM, Phil Burress philtburr...@gmail.com
wrote:
We are having a lot of problems with release 2.1.2. It was suggested here
we should downgrade to 2.1.1 if possible.
For the
On Mon, Dec 29, 2014 at 6:05 AM, Ajay ajay.ga...@gmail.com wrote:
In my case, Cassandra is the only storage. If the counters become incorrect,
they can't be corrected.
Cassandra counters are not appropriate for this use case, if correctness is
a requirement.
=Rob
Hi,
We have a table in our production Cassandra that is spread across 11369
SSTables. The average SSTable count for the other tables is around 15, and
their read latency is much lower.
I tried to run a manual compaction (nodetool compact my_keyspace my_table),
but then the node starts spending
We also suffer some problems with 2.1.2, but I think we can deal with them.
First, we don't use incremental repair.
Second, we restart the node after repair; this releases the tmplink SSTables.
Third, we don't use the stop COMPACTION command.
If you read the 2.1.2 release notes, you'll find it solves some issues
Thanks Janne and Rob.
The idea is like this: store the user clicks in Cassandra, and have a
scheduler count/aggregate the clicks per link or ad
hourly/daily/monthly and store the results in MySQL (or maybe in Cassandra itself).
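One way to lay out the raw clicks for that scheme is to bucket partitions by link and hour, and let a TTL expire the raw events once they have been aggregated. All names here are hypothetical, just a sketch:

```sql
-- Hypothetical raw-clicks table: one partition per (link, hour) keeps
-- partitions bounded, and the hourly job reads exactly one partition.
CREATE TABLE raw_clicks (
    link_id text,
    hour text,            -- e.g. '2014-12-29-17'
    click_time timeuuid,
    user_id text,
    PRIMARY KEY ((link_id, hour), click_time)
);

-- Raw events expire after 7 days, once aggregates are safely stored:
INSERT INTO raw_clicks (link_id, hour, click_time, user_id)
VALUES ('ad-42', '2014-12-29-17', now(), 'user-7')
USING TTL 604800;

-- The hourly scheduler aggregates a single partition:
SELECT COUNT(*) FROM raw_clicks
WHERE link_id = 'ad-42' AND hour = '2014-12-29-17';
```

Expiring whole partitions this way also limits how many tombstones a read has to scan past, since the scheduler never queries buckets older than the TTL window.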
Since tombstones will be deleted only after some days (as per
configuration),