Are you frequently updating the same rows? What is the memtable flush size?
Can you post the table create query here please?
On Thu, Mar 26, 2015 at 1:21 PM, Dave Galbraith david92galbra...@gmail.com
wrote:
Hey! So I'm running Cassandra 2.1.2 and using the
SizeTieredCompactionStrategy. I'm
you may be seeing
https://issues.apache.org/jira/browse/CASSANDRA-8860
https://issues.apache.org/jira/browse/CASSANDRA-8635
or related issues (which end up with excessive numbers of
I have a cluster which stores tree structures. I keep several hundred unrelated
trees. The largest has about 180 million nodes, and the smallest has 1 node.
The largest fanout is almost 400K. Depth is arbitrary, but in practice is
probably less than 10. I am able to page through children and
Interesting thought; that should indeed work. I'll evaluate both options
and provide an update here once I have results.
Best regards,
Robin Verlangen
*Chief Data Architect*
W http://www.robinverlangen.nl
E ro...@us2.nl
http://goo.gl/Lt7BC
*What is CloudPelican? http://goo.gl/HkB3D*
Hey all,
In certain cases it would be useful for us to find out which node(s) have
the data for a given token/partition key.
The only solution I'm aware of is to select from system.local and/or
system.peers to grab the host_id and tokens, then do `SELECT token(thing) FROM
myks.mytable WHERE thing =
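The manual approach described above (read the ring from system.local/system.peers, then compare token(key) against the node tokens) boils down to a clockwise walk of the token ring. A toy sketch of that lookup, with made-up node names and tokens, and MD5 standing in for Cassandra's Murmur3Partitioner purely for illustration:

```python
import bisect
import hashlib

# Toy token ring: maps a partition key to its replica nodes, mimicking what
# system.peers + token() lets you reconstruct by hand. The node names, token
# values, and replication factor below are made up for illustration.
RING = sorted([
    (0x20000000000000000000000000000000, "node1"),
    (0x60000000000000000000000000000000, "node2"),
    (0xA0000000000000000000000000000000, "node3"),
    (0xE0000000000000000000000000000000, "node4"),
])
REPLICATION_FACTOR = 3

def token_for(key: str) -> int:
    # MD5 stands in for Murmur3Partitioner here; only the ring walk matters.
    return int.from_bytes(hashlib.md5(key.encode()).digest(), "big")

def endpoints_for(key: str) -> list:
    """Walk the ring clockwise from the key's token, collecting RF distinct nodes."""
    tokens = [t for t, _ in RING]
    start = bisect.bisect_right(tokens, token_for(key)) % len(RING)
    owners = []
    for i in range(len(RING)):
        node = RING[(start + i) % len(RING)][1]
        if node not in owners:
            owners.append(node)
        if len(owners) == REPLICATION_FACTOR:
            break
    return owners

print(endpoints_for("thing"))
```

This is only the SimpleStrategy-style walk; NetworkTopologyStrategy additionally skips replicas to satisfy per-datacenter/rack constraints.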
It looks like it was CASSANDRA-8860; setting the cold_reads_to_omit option
down to zero took my SSTable count from 641 to 1 and made all my queries
work. Thank you!!
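For a rough sense of why 641 cold SSTables can wreck point reads: even with bloom filters, every extra SSTable adds a chance of a disk seek per read, and overwritten partitions spread across SSTables must legitimately be merged from each one. A back-of-the-envelope sketch, where the 1% false-positive rate is an assumed bloom_filter_fp_chance:

```python
# Rough estimate of SSTables touched per point read: the one SSTable that
# actually holds the row, plus bloom-filter false positives from the rest.
# Ignores overwrites (fragments of the same partition in several SSTables),
# which make the real number worse. The 0.01 rate is an assumption.
def expected_sstables_hit(sstable_count: int, bloom_fp: float = 0.01) -> float:
    return 1 + (sstable_count - 1) * bloom_fp

print(expected_sstables_hit(641))  # before the fix
print(expected_sstables_hit(1))    # after compacting down to one SSTable
```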
On Thu, Mar 26, 2015 at 4:55 AM, graham sanderson gra...@vast.com wrote:
you may be seeing
Hi all,
I encountered an issue by removing and adding back a node.
Here is how this issue came out:
(1) We have a four-node cluster running, but there was a hard disk failure
on one of the nodes.
Since we needed to replace the hard disk, I chose to use *removenode* to
remove the failed node.
(2) few
Thanks guys, I think both of these answer my question. I guess I had overlooked
nodetool getendpoints. Hopefully it's findable by future googlers now.
On Thu, Mar 26, 2015 at 2:37 PM, Adam Holmberg adam.holmb...@datastax.com
wrote:
Dan,
Depending on your context, many of the DataStax drivers have the
Not sure if this is the right place to ask, but we are trying to model a
user-generated tree hierarchy in which they create child objects of a
root node, and can create an arbitrary number of children (and children
of children, and on and on). So far we have looked at storing each tree
Dan,
Depending on your context, many of the DataStax drivers have the token ring
exposed client-side.
For example,
Python:
http://datastax.github.io/python-driver/api/cassandra/metadata.html#tokens-and-ring-topology
Java:
Hi Dan,
Have you tried using nodetool getendpoints? It shows you the nodes that
currently own a specific key.
Roman
On Thu, Mar 26, 2015 at 1:21 PM, Dan Kinder dkin...@turnitin.com wrote:
Hey all,
In certain cases it would be useful for us to find out which node(s) have
the data for a given
On Thu, Mar 26, 2015 at 11:31 AM, Shiwen Cheng cheng.shiwen...@gmail.com
wrote:
I encountered an issue by removing and adding back a node.
You are encountering a failed/hung bootstrap, which probably has nothing to
do with the node having been previously removenoded.
Stop the node, wipe all
On Wed, Mar 25, 2015 at 7:16 PM, Jonathan Haddad j...@jonhaddad.com wrote:
There's no downside to running upgradesstables. I recommend always doing
it on upgrade just to be safe.
For the record, and just my opinion: I recommend against paying this fixed
cost when you don't need to.
It is
On Wed, Mar 25, 2015 at 6:53 PM, Roman Tkachenko ro...@mailgunhq.com
wrote:
Yup, I increased in_memory_compaction_limit_in_mb to 512MB so the row in
question fits into it and ran repair on a couple of nodes owning its key.
The log entries about this particular row went away and those columns
On Thu, Mar 26, 2015 at 12:51 AM, Dave Galbraith david92galbra...@gmail.com
wrote:
Hey! So I'm running Cassandra 2.1.2 and using the
SizeTieredCompactionStrategy. I'm doing about 3k writes/sec on a single
node. My read performance is terrible, all my queries just time out. So I
do nodetool
Yep, good point: https://issues.apache.org/jira/browse/CASSANDRA-9045.
On Thu, Mar 26, 2015 at 4:23 PM, Robert Coli rc...@eventbrite.com wrote:
On Wed, Mar 25, 2015 at 6:53 PM, Roman Tkachenko ro...@mailgunhq.com
wrote:
Yup, I increased in_memory_compaction_limit_in_mb to 512MB so the row
Would it help here to not actually issue a delete statement, but instead use
date-based compaction and a dynamically calculated TTL that is some safe
distance in the future from your key?
Just a thought.
-Thunder
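Thunder's suggestion (write each row with a computed TTL instead of issuing a DELETE later) could be sketched like this; the expiry date and the seven-day safety margin are illustrative values, not anything from the thread:

```python
from datetime import datetime, timedelta, timezone

# Instead of issuing DELETEs later, write each row with a TTL that expires it
# some safe distance past the date you would otherwise delete it.
# planned_delete_at and the 7-day safety margin are made-up example values.
def ttl_seconds(planned_delete_at: datetime,
                safety_margin: timedelta = timedelta(days=7)) -> int:
    remaining = planned_delete_at + safety_margin - datetime.now(timezone.utc)
    # Cassandra requires a positive TTL, so clamp at 1 second.
    return max(1, int(remaining.total_seconds()))

# Hypothetical usage with a driver session:
# session.execute("INSERT INTO t (k, v) VALUES (%s, %s) USING TTL %s",
#                 (k, v, ttl_seconds(expiry)))
```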
On Mar 25, 2015 11:07 AM, Robert Coli rc...@eventbrite.com wrote:
On Wed, Mar 25,
Hey! So I'm running Cassandra 2.1.2 and using the
SizeTieredCompactionStrategy. I'm doing about 3k writes/sec on a single
node. My read performance is terrible, all my queries just time out. So I
do nodetool cfstats:
Read Count: 42071
Read Latency: 67.47804242827601 ms.
Write Count: