Even with a low volume of updates, it's possible that you're hitting a contention
bug. A simple test would be to run multiple Cassandra nodes on the same
physical machine (for example, split your 20 cores across 5 Cassandra instances). If
you get much higher throughput, then you have your answer.
I don't think a
Hi,
You could try something like the following for your scheduling (I tried to
clean up the code from other stuff in the gist editor, so it might not
compile directly):
https://gist.github.com/burmanm/230c306f88c69c62dfe73799fc01
That should prevent the pool from getting full, instead using the
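A minimal sketch of that idea (this is not the gist's code — the names WriteThrottle, MAX_IN_FLIGHT and executeWrite are illustrative, and a plain CompletableFuture stands in for the driver's async execute): acquire a Semaphore permit before each submission and release it on completion, so the number of in-flight requests can never exceed the cap and the request pool can't fill up.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class WriteThrottle {
    static final int MAX_IN_FLIGHT = 4;
    static final Semaphore permits = new Semaphore(MAX_IN_FLIGHT);
    static final AtomicInteger inFlight = new AtomicInteger();
    static final AtomicInteger peak = new AtomicInteger();

    // Stand-in for an async session.executeAsync(...) call.
    static CompletableFuture<Void> executeWrite(ExecutorService pool) {
        return CompletableFuture.runAsync(() -> {
            int now = inFlight.incrementAndGet();
            peak.accumulateAndGet(now, Math::max);   // record highest concurrency seen
            try { Thread.sleep(5); } catch (InterruptedException ignored) {}
            inFlight.decrementAndGet();
        }, pool);
    }

    public static int run(int totalWrites) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        CompletableFuture<?>[] futures = new CompletableFuture<?>[totalWrites];
        for (int i = 0; i < totalWrites; i++) {
            permits.acquire();                       // block until a slot frees up
            futures[i] = executeWrite(pool)
                    .whenComplete((v, t) -> permits.release());
        }
        CompletableFuture.allOf(futures).join();
        pool.shutdown();
        return peak.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("peak in-flight: " + run(50));
    }
}
```

Because a permit is acquired before each submission, the peak in-flight count stays at or below MAX_IN_FLIGHT regardless of how fast the caller loops.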
Hi,
How about taking it from the BoundStatement directly?

ByteBuffer routingKey =
    b.getRoutingKey(ProtocolVersion.NEWEST_SUPPORTED, codecRegistry);
Token token = metadata.newToken(routingKey);

Here b is the BoundStatement. Replace codecRegistry &
ProtocolVersion with what you
may run into some trouble.
Jon
On Fri, Aug 5, 2016 at 6:14 AM Michael Burman <mibur...@redhat.com> wrote:
> Hi,
>
> Spark is an example of something I really don't want. It's resource-heavy,
> it involves copying data, and it means managing yet another
> distributed sys
some sort of ETL on your C* data, why not use
Spark to compress that data into blobs and a User-Defined Function to
explode them when reading?
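A hedged sketch of the pack-then-explode idea, outside any Spark or Cassandra API (the class and method names are illustrative only): serialize sorted (timestamp, value) pairs into one binary blob the way a batch job might before writing a single blob cell, then unpack them the way a read-time UDF could.

```java
import java.nio.ByteBuffer;
import java.util.*;

public class BlobCodec {
    // Pack parallel arrays of (millis, value) pairs into a single blob:
    // 8 bytes of timestamp followed by 8 bytes of double, per sample.
    static ByteBuffer pack(long[] times, double[] values) {
        ByteBuffer buf = ByteBuffer.allocate(times.length * 16);
        for (int i = 0; i < times.length; i++) {
            buf.putLong(times[i]).putDouble(values[i]);
        }
        buf.flip();                      // prepare the buffer for reading
        return buf;
    }

    // Explode the blob back into individual (millis -> value) entries.
    static SortedMap<Long, Double> explode(ByteBuffer blob) {
        SortedMap<Long, Double> out = new TreeMap<>();
        while (blob.remaining() >= 16) {
            out.put(blob.getLong(), blob.getDouble());
        }
        return out;
    }

    public static void main(String[] args) {
        long[] t = {1000L, 2000L, 3000L};
        double[] v = {1.5, 2.5, 3.5};
        System.out.println(explode(pack(t, v)));   // {1000=1.5, 2000=2.5, 3000=3.5}
    }
}
```

The win is that N cells collapse into one, trading per-cell overhead for a decode step at read time.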
On Thu, Aug 4, 2016 at 10:08 PM, Michael Burman <mibur...@redhat.com> wrote:
> Hi,
>
> No, I don't want to lose precision (if that's what
Sent: Thursday, August 4, 2016 10:26:30 PM
Subject: Re: Merging cells in compaction / compression?
When you say merge cells, do you mean re-aggregating the data into coarser
time buckets?
On Thu, Aug 4, 2016 at 5:59 AM Michael Burman <mibur...@redhat.com> wrote:
> Hi,
>
> Consid
Hi,
Considering the following example structure:
CREATE TABLE data (
    metric text,
    value double,
    time timestamp,
    PRIMARY KEY ((metric), time)
) WITH CLUSTERING ORDER BY (time DESC);
The natural insert order is (metric, value, timestamp) tuples, for example one
metric/value pair per second. That
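The "merge cells into coarser time buckets" idea from earlier in the thread can be sketched in plain Java against this shape of data (the bucket size and the choice of averaging are my assumptions, not something the thread specifies): per-second samples for one metric are rolled up into one averaged value per bucket.

```java
import java.util.*;

public class Downsample {
    // Roll (millis -> value) samples up into one average per bucket.
    // bucketMillis is an assumed parameter, e.g. 60_000 for minute buckets.
    static SortedMap<Long, Double> rollUp(SortedMap<Long, Double> samples,
                                          long bucketMillis) {
        SortedMap<Long, double[]> acc = new TreeMap<>();  // bucket -> [sum, count]
        for (Map.Entry<Long, Double> e : samples.entrySet()) {
            long bucket = (e.getKey() / bucketMillis) * bucketMillis;
            double[] a = acc.computeIfAbsent(bucket, k -> new double[2]);
            a[0] += e.getValue();
            a[1] += 1;
        }
        SortedMap<Long, Double> out = new TreeMap<>();
        acc.forEach((b, a) -> out.put(b, a[0] / a[1]));
        return out;
    }

    public static void main(String[] args) {
        SortedMap<Long, Double> s = new TreeMap<>();
        s.put(0L, 1.0);        // second 0
        s.put(1000L, 3.0);     // second 1, same minute bucket
        s.put(60_000L, 5.0);   // next minute bucket
        System.out.println(rollUp(s, 60_000L));   // {0=2.0, 60000=5.0}
    }
}
```

Doing this transform outside the read path (and rewriting the coarser rows) is the effect the thread is asking compaction to produce internally.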