We are also seeing this in our development environment. We have a 3 node
Cassandra 2.0.5 cluster running on Ubuntu 12.04 and are connecting from a
Tomcat based application running on Windows using the 2.0.0 Cassandra Java
Driver. We have setKeepAlive(true) when building the cluster in the
Is it possible to list all running queries on a Cassandra cluster?
Is it possible to cancel a running query on a Cassandra cluster?
Regards
Hi Rob,
we need this for the worst case scenario, so our intention is to restore the
entire cluster, not a single node.
I am really not sure what the correct procedure would be. I think we have
configured everything properly so the nodes are archiving the commitlogs (even
though I am not sure
Thanks Tim,
A significant number of writes / second - possibly a good use case for
Cassandra.
What is a significant number for you?
Hi,
I am new to Cassandra and, not being familiar with the
implementation and architecture of Cassandra, I struggle with how to best
design the schema.
We have an application where we need to store huge amounts of data. It's a
per-user store where we store a lot of data for each user
Hi Prem,
Also, I have heard that Cassandra doesn't perform well with high read
ops. How true is that?
I think that it isn't true. Cassandra has very good read performance.
For more details you can look at the benchmark
http://planetcassandra.org/nosql-performance-benchmarks/#EndPoint.
How many
I'm wondering what will clear tombstoned rows? nodetool cleanup, nodetool
repair, or time (as in just wait)?
I had a CF that was more or less storing session information. After some
time, we decided that one piece of this information was pointless to track
(and was 90%+ of the columns, and in
compaction should take care of it; for me it never worked, so I run nodetool
compact on every node; that does it.
2014-04-11 16:05 GMT+02:00 William Oberman ober...@civicscience.com:
I'm wondering what will clear tombstoned rows? nodetool cleanup, nodetool
repair, or time (as in just
Thanks.
For the use case, what should I be thinking about schema-wise?
Thanks,
Prem
On Fri, Apr 11, 2014 at 2:16 PM, Sergey Murylev sergeymury...@gmail.com wrote:
Hi Prem,
Also, I have heard that Cassandra doesn't perform well with high read ops.
How true is that?
I think that it
I'm seeing a lot of articles about a dependency between removing tombstones
and GCGraceSeconds, which might be my problem (I just checked, and this CF
has GCGraceSeconds of 10 days).
On Fri, Apr 11, 2014 at 10:10 AM, tommaso barbugli tbarbu...@gmail.com wrote:
compaction should take care of it;
TCP keep-alives (set by setKeepAlive) are notoriously useless... The default
of 2 hours is generally far longer than any timeout in NAT translation tables
(generally ~5 min), and even if you decrease the keep-alive to a sane value,
a lot of networks actually throw away TCP keep-alive packets. You see
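If you do want kernel keep-alives to fire before NAT state expires, the intervals can be lowered via sysctl on Linux. A sketch (the values here are illustrative, not tuned recommendations; pick an interval well below your NAT idle timeout):

```shell
# Send the first keep-alive probe after 60s of idle (kernel default is 7200s),
# then re-probe every 10s, dropping the connection after 5 failed probes.
sudo sysctl -w net.ipv4.tcp_keepalive_time=60
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=10
sudo sysctl -w net.ipv4.tcp_keepalive_probes=5
```

To persist across reboots, add the same keys to /etc/sysctl.conf. Note this only helps if keep-alives are enabled on the socket (e.g. setKeepAlive(true) in the driver) and the network actually forwards the probes.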
Actually if you want to use Cassandra you should store all user-related
data in a single row with user ID as primary key.
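As a sketch of what that can look like in CQL3 (keyspace, table, and column names here are made up for illustration): a partition key of user ID plus a clustering column keeps all of a user's data in one wide row, so reads for one user hit a single partition.

```shell
# Hypothetical schema: one partition per user, one clustering key per item.
cqlsh -e "
CREATE TABLE myapp.user_data (
    user_id uuid,
    item_id timeuuid,
    payload text,
    PRIMARY KEY (user_id, item_id)
);"
```

Queries like "all data for user X" then become a single-partition slice: SELECT * FROM myapp.user_data WHERE user_id = ?.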
On 11/04/14 18:14, Prem Yadav wrote:
Thanks.
For the use case, what should I be thinking about schema-wise?
Thanks,
Prem
On Fri, Apr 11, 2014 at 2:16 PM, Sergey
Correct, a tombstone will only be removed after gc_grace period has
elapsed. The default value is set to 10 days which allows a great deal of
time for consistency to be achieved prior to deletion. If you are
operationally confident that you can achieve consistency via anti-entropy
repairs within a
In my experience even after the gc_grace period tombstones remain stored
on disk (at least using Cassandra 2.0.5); only a full compaction clears
them. Perhaps that is because my application never reads tombstones?
2014-04-11 16:31 GMT+02:00 Mark Reddy mark.re...@boxever.com:
Correct, a
So, if I was impatient and just wanted to make this happen now, I could:
1.) Change GCGraceSeconds of the CF to 0
2.) run nodetool compact (*)
3.) Change GCGraceSeconds of the CF back to 10 days
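The three steps above sketched as commands (keyspace/table names are hypothetical; 10 days = 10 * 24 * 3600 = 864000 seconds). Keep in mind the caveats elsewhere in this thread about major compaction under SizeTiered:

```shell
# 1) Drop gc_grace so existing tombstones become purgeable immediately.
cqlsh -e "ALTER TABLE myks.mycf WITH gc_grace_seconds = 0;"
# 2) Force a major compaction of that table (run on each node).
nodetool compact myks mycf
# 3) Restore the default 10-day grace period.
cqlsh -e "ALTER TABLE myks.mycf WITH gc_grace_seconds = 864000;"
```

While gc_grace_seconds is 0, any delete that a replica misses can never be repaired away, so only do this when writes/deletes to the table are quiesced.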
Since I have ~900M tombstones, even if I miss a few due to impatience, I
don't care *that* much as I
No. This is not possible today
On Apr 11, 2014, at 1:19 AM, Richard Jennings richardjenni...@gmail.com
wrote:
Is it possible to list all running queries on a Cassandra cluster?
Is it possible to cancel a running query on a Cassandra cluster?
Regards
Answered my own question. Good writeup here of the pros/cons of compact:
http://www.datastax.com/documentation/cassandra/1.2/cassandra/operations/ops_about_config_compact_c.html
And I was thinking of bad information that used to float in this forum
about major compactions (with respect to the
Yes, running nodetool compact (major compaction) creates one large SSTable.
This will mess up the heuristics of the SizeTiered strategy (is this the
compaction strategy you are using?) leading to multiple 'small' SSTables
alongside the single large SSTable, which results in increased read
latency.
We have considered this but wondered how well it would work, as the Cassandra
Java Driver opens multiple connections internally to each Cassandra node. I
suppose it depends on how those connections are used internally; if it's round
robin then it should work. Perhaps we just need to try it.
I have a similar problem here. I deleted about 30% of a very large CF using
LCS (about 80GB per node), but still my data hasn't shrunk, even though I
used 1 day for gc_grace_seconds. Would nodetool scrub help? Does nodetool
scrub force a minor compaction?
Cheers,
Paulo
On Fri, Apr 11, 2014 at
Yes, I'm using SizeTiered.
I totally understand the "mess up the heuristics" issue. But I don't
understand "You will incur the operational overhead of having to manage
compactions if you wish to compact these smaller SSTables." My
understanding is the small tables will still compact. The problem
I have played with this quite a bit and recommend you set gc_grace_seconds
to 0 and use 'nodetool compact [keyspace] [cfname]' on your table.
A caveat I have is that we use C* 2.0.6 - but the space we expect to
recover is in fact recovered.
Actually, since we never delete explicitly (just ttl)
To clarify, you would want to manage compactions only if you were concerned
about read latency. If you update rows, those rows may become spread across
an increasing number of SSTables leading to increased read latency.
Thanks for providing some insight into your use case as it does differ from
That's great Will; if you could update the thread with the actions you
decide to take and the results, that would be great.
Mark
On Fri, Apr 11, 2014 at 5:53 PM, William Oberman
ober...@civicscience.com wrote:
I've learned a *lot* from this thread. My thanks to all of the
contributors!
Out of curiosity, any folks seeing backups in the send or receive queues
via netstat while this is happening? (netstat -tulpn for example)
I feel like I had this happen once and it ended up being a sysconfig tuning
issue (net.core.* and net.ipv4.* stuff specifically).
Can't seem to find anything
This thread is really informative, thanks for the good feedback.
My question is: is there a way to force tombstones to be cleared with LCS?
Does scrub help in any case? Or would the only solution be to create a new
CF and migrate all the data if you intend to do a large CF cleanup?
Cheers,
On
Hello to all
I ran nodetool compact today on a specific node.
It created a single file (g-1155) at 18:08.
Currently all clients are down, therefore no new data is written.
However, while running compact on other nodes I found that new
SSTables appeared
on this node:
-rw-r--r-- 1 root
It’s a fairly standard relational-like CF. Description is the only field
that’s potentially big (can be up to 1k).
CREATE COLUMN FAMILY 'Event' WITH
key_validation_class = 'UTF8Type' AND
comparator = 'UTF8Type' AND
default_validation_class = 'UTF8Type' AND
bloom_filter_fp_chance = 0.1
On Fri, Apr 11, 2014 at 10:44 AM, Yulian Oifa oifa.yul...@gmail.com wrote:
Currently all clients are down, therefore no new data is written
Hinted handoff delivery.
=Rob
At the cost of really quite a lot of compaction, you can temporarily switch
to SizeTiered, and when that is completely done (check each node), switch
back to Leveled.
it's like doing the laundry twice :)
I've done this on CFs that were about 5GB but I don't see why it wouldn't
work on larger
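The switch itself is just two ALTERs (table name hypothetical). Each one triggers a full rewrite of the table's SSTables on every node, hence the "laundry twice" cost, so do it off-peak and verify compactions have drained before switching back:

```shell
# Temporarily switch to SizeTiered and let it do one big merge...
cqlsh -e "ALTER TABLE myks.mycf WITH compaction =
    {'class': 'SizeTieredCompactionStrategy'};"
# ...wait for pending compactions to reach zero on every node, then revert.
nodetool compactionstats
cqlsh -e "ALTER TABLE myks.mycf WITH compaction =
    {'class': 'LeveledCompactionStrategy'};"
```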
(probably should have read downthread before writing my reply.. briefly, +1
most of the thread's commentary regarding major compaction, but don't
listen to the FUD about major compaction; unless you have a really large
amount of data, you'll probably be fine..)
On Fri, Apr 11, 2014 at 7:05 AM,
For sanity, I ran the same python script with the same row ids again today and
it was 10x faster. Must be something going wrong intermittently in my cluster.
-Allan
On April 11, 2014 at 11:02:11 AM, Allan C (alla...@gmail.com) wrote:
It’s a fairly standard relational-like CF. Description is
On Fri, Apr 11, 2014 at 1:18 AM, Richard Jennings richardjenni...@gmail.com
wrote:
Is it possible to list all running queries on a Cassandra cluster?
No, but you can get a count of them on a per node basis :
https://issues.apache.org/jira/browse/CASSANDRA-5084
=Rob
I was wondering, to remove the tombstones from SSTables created by LCS, why
don't we just set the tombstone_threshold table property to a very small
value (say 0.01)?
As the doc says (
www.datastax.com/documentation/cql/3.0/cql/cql_reference/compactSubprop.html)
this will force compaction on
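For reference, the subproperty is set alongside the compaction class in the same map (table name hypothetical), though as noted downthread its effect can be hard to observe in practice:

```shell
# Mark any SSTable whose estimated droppable-tombstone ratio exceeds 1%
# as a candidate for a single-SSTable tombstone compaction.
cqlsh -e "ALTER TABLE myks.mycf WITH compaction =
    {'class': 'LeveledCompactionStrategy', 'tombstone_threshold': 0.01};"
```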
The situation I am seeing is this:
To access my company's development environment I need to VPN.
I do some development on the application, and for some reason my VPN drops,
but I had established connections to my development cassandra server.
When I reconnect and check netstat I see the
I've never noticed that the tombstone_threshold setting has any effect...
at least in 2.0.6.
What gets written to the log?
On Fri, Apr 11, 2014 at 3:31 PM, DuyHai Doan doanduy...@gmail.com wrote:
I was wondering, to remove the tombstones from Sstables created by LCS,
why don't we just set
Hey,
Some months ago (last year!!) during our previous major upgrade from 1.1 to
1.2 I started writing a blog post with some tips for a smooth rolling
upgrade, but for some reason I forgot to finish the post. I found it
recently and decided to publish it anyway, as some of the info may be
helpful
I've got classical eventual consistency symptoms (read after write returns
empty result) but there is a surprising twist. The keyspace has replication
factor 1 (it's used as a cache) so how can I get a stale result?
Cassandra version 1.2.15.
Consistency settings (although I think they should not