On 08/22/2017 05:39 PM, Thakrar, Jayesh wrote:
Surbhi and Fay,
I agree we have plenty of RAM to spare.
Hi
At the very beginning of system.log there is a
INFO [CompactionExecutor:487] 2017-08-21 23:21:01,684
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
allocate
Hi,
I use cassandra-count (github:
https://github.com/brianmhess/cassandra-count) to count records in a
table, but I get wrong results.
When I export data with cqlsh COPY to CSV, I have 1M records in my test
table; when I use cassandra-count I get different results on each node:
On 24/03/2017 01:00, Eric Stevens wrote:
Assuming an even distribution of data in your cluster, and an even
distribution across those keys by your readers, you would not need to
increase RF with cluster size to increase read performance. If you have
3 nodes with RF=3, and do 3 million reads,
KairosDB (an OpenTSDB clone for Cassandra) is a TSDB that does this.
Maybe you could have a look at it?
It has a daemon process that collects and groups data points into blobs
before writing to cassandra.
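A hypothetical sketch of the kind of table such a daemon could write to (table and column names are my assumptions for illustration, not KairosDB's actual schema): one row per metric and time bucket, holding a blob of pre-grouped data points.

```cql
-- Illustrative only; KairosDB's real schema differs.
-- One blob of pre-grouped data points per metric and time bucket.
CREATE TABLE metric_blobs (
    metric_name  text,
    bucket_start timestamp,   -- e.g. the start of a 1-hour bucket
    points       blob,        -- serialized batch of (timestamp, value) pairs
    PRIMARY KEY (metric_name, bucket_start)
) WITH CLUSTERING ORDER BY (bucket_start DESC);
```

Grouping many small points into one blob per bucket cuts the number of individual writes and cells Cassandra has to manage.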
--
best,
Alain
On 20/03/2017 22:05, Michael Wojcikiewicz wrote:
Not sure if someone has suggested this, but I believe it's not
sufficient to simply add nodes to a cluster to increase read
performance: you also need to alter the ReplicationFactor of the
keyspace to a larger value as you increase your cluster
On 20/03/2017 02:35, S G wrote:
2)
https://docs.datastax.com/en/developer/java-driver/3.1/manual/statements/prepared/
tells me to avoid preparing select queries if I expect a change of
columns in my table down the road.
The problem is also related to SELECT *, which is considered bad practice.
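One way to follow that advice is to prepare statements with an explicit column list instead of SELECT *, so a later schema change does not silently alter what the prepared statement returns. The column names below are illustrative, borrowed from the `recent` table quoted in another message in this list:

```cql
-- Explicit columns: the prepared statement keeps returning exactly
-- these, even if new columns are added to the table later.
SELECT user_name, vedio_id, position, last_time
FROM recent
WHERE user_name = ?;
```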
On 19/03/2017 02:54, S G wrote:
Forgot to mention that this vmstat picture is for the client-cluster
reading from Cassandra.
Hi SG,
Your numbers are low: 15k req/sec would be OK for a single node; for a
12-node cluster, something is going wrong... how do you measure the
throughput?
As
On 01/05/2017 07:59 PM, Edward Capriolo wrote:
Good troubleshooting. I would open a Jira. It seems like a good solution
would be to replace '..' with '.' somehow. It seems like no one would
ever want '..'
In my case with logstash it was not really a problem; in the case of the
OP, maybe it is a
On 01/04/2017 11:12 PM, Edward Capriolo wrote:
The metric-reporter is actually leveraged from another project.
https://github.com/addthis/metrics-reporter-config
Check the version of metric-reporter (in cassandra/lib) and see if it
has changed from your old version to your new version.
On
On 11/08/2016 08:52 PM, Alain Rastoul wrote:
For example, if you had to track the position of a lot of objects,
instead of updating the object records, each second you could insert a
new event with: (object: object_id, event_type: position_move,
position: x, y)
and add a timestamp.
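A minimal sketch of that event-style table (all names are illustrative, not from the original message):

```cql
-- Append-only position events; the latest position is the first row
-- per object thanks to the DESC clustering order.
CREATE TABLE object_events (
    object_id  text,
    event_time timestamp,
    event_type text,          -- e.g. 'position_move'
    x          double,
    y          double,
    PRIMARY KEY (object_id, event_time)
) WITH CLUSTERING ORDER BY (event_time DESC);

INSERT INTO object_events (object_id, event_time, event_type, x, y)
VALUES ('obj-42', toTimestamp(now()), 'position_move', 1.5, 2.5);
```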
On 11/08/2016 11:05 AM, DuyHai Doan wrote:
Are you sure Cassandra is a good fit for this kind of heavy update &
delete scenario ?
+1
this sounds like a relational-thinking scenario... (no offense, I like
relational systems)
As if you want to maintain the state of a lot of entities with updates
On 11/08/2016 03:54 AM, ben ben wrote:
Hi guys,
CREATE TABLE recent (
user_name text,
vedio_id text,
position int,
last_time timestamp,
PRIMARY KEY (user_name, vedio_id)
)
Hi Ben,
Maybe a clustering column order would help:
CREATE TABLE recent (
...
) WITH
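One possible shape of that suggestion (an assumption on my part — the original message is truncated here): make last_time a clustering column ordered DESC, so the most recently watched videos come back first.

```cql
-- Sketch: reorder the clustering columns so a plain
-- SELECT ... WHERE user_name = ? returns the most recent rows first.
CREATE TABLE recent (
    user_name text,
    vedio_id  text,
    position  int,
    last_time timestamp,
    PRIMARY KEY (user_name, last_time, vedio_id)
) WITH CLUSTERING ORDER BY (last_time DESC, vedio_id ASC);
```

Note this changes the primary key: there is no longer one row per (user, video), so the old entry would need deleting on each new watch, or the table maintained differently.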
On 15/06/2016 06:40, linbo liao wrote:
I am not sure, but it looks like it will cause an update rather than an
insert. If that is true, is the only way for the request to include
IF NOT EXISTS, and inform the client it failed?
Thanks,
Linbo
Hi Linbo,
+1 with what Ben said, timestamp has a millisecond precision and is
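For reference, the IF NOT EXISTS variant Linbo mentions is a lightweight transaction: it turns Cassandra's usual upsert into a conditional insert and reports whether it was applied. The values below are made up for illustration, reusing Ben's `recent` table from the message above:

```cql
-- If the row already exists, this returns [applied] = false (plus the
-- existing row) instead of silently overwriting, at the cost of a
-- Paxos round-trip.
INSERT INTO recent (user_name, vedio_id, position, last_time)
VALUES ('alice', 'v1', 0, toTimestamp(now()))
IF NOT EXISTS;
```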
On 25/05/2016 17:56, bastien dine wrote:
Hi,
I'm running a 3-node Cassandra 2.1.x cluster. Each node has 8 vCPUs and
30 GB RAM.
Replication factor = 3 for my keyspace.
...
Is there a problem with the Java Driver? The load balancing is not
Hi Bastien,
A replication factor of 3 for a 3