Clint, did you find anything?
I just noticed it happens to us too on only one node in our CI cluster.
I don't think there is any special usage before it happens... The last line
in the log before the shutdown lines is at least an hour earlier...
We're using C* 2.0.9.
On Thu, Aug 7, 2014 at 12:49
Hello,
I have a cluster running and I'm trying to change the schema on it. Although it
succeeds on one cluster (a test one), on another it keeps creating two separate
schema versions (both are 2-DC configurations; the cluster where it goes wrong
ends up with a different schema version on each DC).
I use
Hello all,
I have altered a table in Cassandra and on one node it somehow got corrupted;
the changes did not propagate correctly. Ran repair keyspace columnfamily... nothing
changed...
Is there a way to repair this?
In the datastax documentation there is a description how to replace a dead
node
(http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_replace_node_t.html).
Is the replace_address option required even if the IP address of the new
node is the same as the original one (I read
After a lot of investigation, it seems that the clocks were desynchronized
across the cluster (although we did not check that resyncing them resolves the
problem; we modified the schema with one node up and restarted all other nodes
afterwards).
From: Demeyer
Hi,
Without more information (Cassandra version, setup, topology, schema,
queries performed) this list won't be able to assist you. If you can
provide a more detailed explanation of the steps you took to reach your
current state that would be great.
Mark
On Tue, Aug 12, 2014 at 12:21 PM,
Hello Ian
So that way each index entry *will* have quite a few entries and the index
as a whole won't grow too big. Is my thinking correct here? -- In this
case yes. Do not forget that for each date value, there will be 1
corresponding index value + 10 updates. If you have an approximate count
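The index-growth arithmetic above can be sketched as a quick back-of-the-envelope. The cardinality and update counts below are assumed for illustration, not taken from the thread:

```python
# Rough estimate of secondary-index growth. Each distinct indexed date
# value gets one index entry, and every update appends another (stale
# entries linger until compaction), so entries = distinct * (1 + updates).
distinct_dates = 10_000    # assumed cardinality of the indexed column
updates_per_date = 10      # assumed overwrites per value, as in the reply

index_entries = distinct_dates * (1 + updates_per_date)
print(index_entries)  # 110000
```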
Still having issues with node bootstrapping. The new node just died
because it full GCed; the nodes it had active streams with noticed it was
down. After the full GC finished, the new node printed this log:
ERROR 02:52:36,259 Stream failed because /10.10.20.35 died or was
restarted/removed (streams
Makes sense - thanks again!
On Tue, Aug 12, 2014 at 9:45 AM, DuyHai Doan doanduy...@gmail.com wrote:
Hello Ian
So that way each index entry *will* have quite a few entries and the
index as a whole won't grow too big. Is my thinking correct here? -- In
this case yes. Do not forget that for
Hi Or,
For now I removed the test that was failing like this from our suite
and made a note to revisit it in a couple of weeks. Unfortunately I
still don't know what the issue is. I'll post here if I figure out it
(please do the same!). My working hypothesis now is that we had some
kind of OOM
Hi all,
We have a node with a commit log directory of ~4G. During start-up of the node,
while replaying the commit log, the used heap space grows constantly, ending in an
OOM error.
The heap size and new heap size properties are 1G and 256M. We are using the
default settings for commitlog_sync,
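For scale, here is the mismatch in rough numbers (illustrative arithmetic only, assuming the figures quoted above):

```python
# Illustrative only: ~4 GB of commit log segments replayed into a 1 GB
# heap. During replay, mutations are materialized into memtables; if
# flushes cannot keep pace with replay, live heap grows toward the
# replayed volume, which here exceeds the heap several times over.
commitlog_bytes = 4 * 1024**3   # ~4G commit log directory, as reported
heap_bytes = 1 * 1024**3        # 1G max heap, as reported

ratio = commitlog_bytes / heap_bytes
print(ratio)  # 4.0
```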
Hi everyone,
I'm confused about the number of columns in a row of Cassandra; as far as I know
there can be 2 billion columns per row. So if I have a composite column
name in each row, for example (timestamp, userid), is the number of columns per
row the number of distinct 'timestamp' values, or each distinct
On Tue, Aug 12, 2014 at 9:34 AM, jivko donev jivko_...@yahoo.com wrote:
We have a node with a commit log directory of ~4G. During start-up of the node,
while replaying the commit log, the used heap space grows constantly, ending
in an OOM error.
The heap size and new heap size properties are - 1G and
On Tue, Aug 12, 2014 at 4:33 AM, tsi thorsten.s...@t-systems.com wrote:
In the datastax documentation there is a description how to replace a dead
node
(
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_replace_node_t.html
).
Is the replace_address option
On Mon, Aug 11, 2014 at 4:17 PM, Ian Rose ianr...@fullstory.com wrote:
You'd be better off creating a manual reverse index to track modification date,
something like this -- I had considered an approach like this, but my
concern is that for any given minute *all* of the updates will be handled
by a
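The hot-spot concern above (every write in a given minute landing on one partition) is commonly addressed by adding a shard component to the partition key. A minimal sketch, with hypothetical names and an assumed shard count of 10:

```python
import hashlib

def reverse_index_key(minute_bucket: str, item_id: str, shards: int = 10) -> tuple:
    """Partition key for a hypothetical sharded reverse index.

    Writers hash their item id to pick one of `shards` partitions for the
    minute bucket, spreading the per-minute write load across nodes;
    readers fan out over all `shards` partitions for that minute.
    (Names and shard count are illustrative, not from the thread.)
    """
    shard = int(hashlib.md5(item_id.encode()).hexdigest(), 16) % shards
    return (minute_bucket, shard)

key = reverse_index_key("2014-08-12T09:45", "user-42")
print(key)
```

The trade-off is read amplification: a query for one minute now touches `shards` partitions instead of one.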
Hi Robert,
Thanks for your reply. The Cassandra version is 2.0.7. Is there a commonly
used rule for determining the commitlog and memtable sizes depending on the
heap size? What would be the main disadvantage of having a smaller commitlog?
On Tuesday, August 12, 2014 8:32 PM, Robert Coli
Your question is a little too tangled for me... Are you asking about rows in a
partition (some people call that a “storage row”) or columns per row? The
latter is simply the number of columns that you have declared in your table.
The total number of columns – or more properly, “cells” – in a
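The cell-count arithmetic in that explanation can be illustrated with assumed numbers. With clustering columns such as (timestamp, userid), the ~2 billion limit applies to cells per partition, not to declared columns:

```python
# Illustrative cell-count arithmetic for the wide-row question above.
# Total cells in a partition = clustering rows * regular (non-key) columns.
rows_per_partition = 1_000_000   # distinct (timestamp, userid) clusterings (assumed)
non_key_columns = 3              # regular columns declared in the table (assumed)

cells = rows_per_partition * non_key_columns
print(cells)                       # 3000000
assert cells < 2_000_000_000       # still well under the per-partition cell limit
```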
Some questions on nodetool repair.
1. This tool repairs inconsistencies across replicas of the row. Since
latest update always wins, I don't see inconsistencies other than ones
resulting from the combination of deletes, tombstones, and crashed nodes.
Technically, if data is never deleted from
Hi Vish,
1. This tool repairs inconsistencies across replicas of the row. Since
latest update always wins, I don't see inconsistencies other than ones
resulting from the combination of deletes, tombstones, and crashed nodes.
Technically, if data is never deleted from cassandra, then nodetool
Agreed, we need more details; and just start by increasing the heap, because that may
well solve the problem.
I have just observed (which makes sense when you think about it), while testing a
fix for https://issues.apache.org/jira/browse/CASSANDRA-7546, that if you are
replaying a commit log which has a
1. You don't have to repair if you use QUORUM consistency and you don't
delete data.
2. Performance depends on the size of data each node has. It's very difficult to
predict. It may take days.
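The reasoning behind point 1 is the quorum-overlap property, which is easy to check arithmetically:

```python
# Why QUORUM writes + QUORUM reads stay consistent without repair
# (absent deletes): a quorum is floor(RF/2) + 1, so any read quorum and
# any write quorum must overlap in at least one replica, and that
# replica returns the latest write.
def quorum(rf: int) -> int:
    return rf // 2 + 1

rf = 3
assert quorum(rf) + quorum(rf) > rf   # 2 + 2 > 3: overlap guaranteed
print(quorum(rf))  # 2
```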
Thank you,
Andrey
On Tue, Aug 12, 2014 at 2:06 PM, Viswanathan Ramachandran
vish.ramachand...@gmail.com
Thanks Mark,
Since we have replicas in each data center, addition of a new data center
(and new replicas) has a performance implication on nodetool repair.
I do understand that adding nodes without increasing number of replicas may
improve repair performance, but in this case we are adding new
Andrey, QUORUM consistency and no deletes makes perfect sense.
I believe we could modify that to EACH_QUORUM or QUORUM consistency and no
deletes - isn't that right?
Thanks
On Tue, Aug 12, 2014 at 3:10 PM, Andrey Ilinykh ailin...@gmail.com wrote:
1. You don't have to repair if you use QUORUM
Hi -
I am currently running a single Cassandra node on my local dev machine.
Here is my (test) schema (which is meaningless, I created it just to
demonstrate the issue I am running into):
CREATE TABLE foo (
    foo_name ascii,
    foo_shard bigint,
    int_val bigint,
    PRIMARY KEY ((foo_name,
On Tue, Aug 12, 2014 at 4:46 PM, Viswanathan Ramachandran
vish.ramachand...@gmail.com wrote:
Andrey, QUORUM consistency and no deletes makes perfect sense.
I believe we could modify that to EACH_QUORUM or QUORUM consistency and no
deletes - isnt that right?
yes.