e logs? It *could* be
https://issues.apache.org/jira/browse/CASSANDRA-13873
On Tue, Oct 24, 2017 at 11:18 PM, Sotirios Delimanolis
<sotodel...@yahoo.com.invalid> wrote:
On a Cassandra 2.2.11 cluster, I noticed estimated compactions accumulating on
one node. nodetool compactionstats showed the following:
compaction type   keyspace   table        completed   total   unit   progress
Compaction        ks1        some_table
These guesses will have to do. I thought something was wrong with such old
SSTables.
Thanks for your help investigating!
On Wednesday, August 23, 2017, 3:09:34 AM PDT, kurt greaves
wrote:
Ignore me, I was getting the major compaction for LCS mixed up with STCS.
I issued another major compaction just now and a brand new SSTable in Level 2
has an Estimated droppable tombstone value of 0.64. I don't know how accurate
that is.
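For intuition, the metric can be approximated as the share of tombstones already past gc_grace_seconds, relative to the SSTable's column count. This is a rough sketch with illustrative names, not Cassandra's actual implementation:

```python
def estimated_droppable_ratio(tombstone_deletion_times, column_count,
                              gc_grace_seconds, now):
    """Sketch of what 'Estimated droppable tombstones' approximates:
    tombstones older than gc_grace divided by the SSTable's column count."""
    droppable = sum(1 for t in tombstone_deletion_times
                    if t + gc_grace_seconds <= now)
    return droppable / column_count

# e.g. 64 of 100 columns are tombstones already past the 864000s default grace:
now = 1_503_500_000
old = [now - 900_000] * 64
ratio = estimated_droppable_ratio(old, 100, 864_000, now)
assert abs(ratio - 0.64) < 1e-9
```

A value like 0.64 on a freshly written SSTable would mean roughly two thirds of its cells are purgeable tombstones, under this reading.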
On Tuesday, August 22, 2017, 9:33:34 PM PDT, Sotirios Delimanolis
<sotodel...@yahoo.com.INVALID> wrote:
What do you mean by "a single SSTable"? SSTable size is set to 200MB and there
are ~ 100 SSTables in that previous example in Level 3.
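For context on those numbers (200 MB SSTables, ~100 files in Level 3), a rough sketch of LCS level capacity assuming the usual 10x fanout; names and the sizing rule here are illustrative, not Cassandra's exact implementation:

```python
def lcs_level_capacity_mb(level, sstable_size_mb=200, fanout=10):
    """Target max data per level in MB; L(n) holds about fanout**n SSTables."""
    if level == 0:
        return None  # L0 is the landing area for flushes, not sized this way
    return sstable_size_mb * fanout ** level

# L3 targets about 10**3 = 1000 SSTables of 200 MB (~200 GB), so ~100
# SSTables in Level 3 is well under that target.
assert lcs_level_capacity_mb(3) == 200_000
assert lcs_level_capacity_mb(1) == 2_000
```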
This previous example table doesn't have a TTL, but we do delete rows. I've
since compacted the table, so I can't provide the previous "Estimated droppable
tombstones" value.
Ignore the files missing those other components, that was confirmation bias :(
I was sorting by date instead of by name and just assumed that something was
wrong with Cassandra.
Here's an example table's SSTables, sorted by level, then by repaired status:
SSTable [name=lb-432055-big-Data.db,
See my other email to this list that you replied to (I most recently replied
late last week), titled "Cassandra isn't compacting old files". It's not just a
few. It's tens/hundreds. I'm worried there's some "starvation" going on and
disk is being filled with data that could be compacted away.
We are on 2.2.11 so we're all right on that front. The advice is difficult to
implement unfortunately, with so many nodes.
Thanks for the information!
On Sunday, August 20, 2017, 4:28:36 PM PDT, kurt greaves wrote:
Correction: Full repairs do mark SSTables as repaired in
That's the only way to get this done then, break writes and fix them with
incremental repairs?
On Friday, August 18, 2017, 5:17:38 PM PDT, kurt greaves
wrote:
You need to run an incremental repair for sstables to be marked repaired.
However only if all of the data in
On Wednesday, August 2, 2017, 2:35:02 PM PDT, Sotirios
Delimanolis <sotodel...@yahoo.com.INVALID> wrote:
I have a table that uses LeveledCompactionStrategy on Cassandra 2.2. At the
moment, it has two SSTables, both in level 1, one that's repaired and one that
isn't.
$ sstablemetadata lb-135366-big-Data.db | head
SSTable: /home/cassandra/data/my_keyspace/my_table/lb-135366-big
Partitioner:
Turns out there are already logs for this in Tracker.java. I enabled those and
clearly saw the old files are being tracked.
What else can I look at for hints about whether these files are later
invalidated/filtered out somehow?
On Tuesday, August 1, 2017, 3:29:38 PM PDT, Sotirios Delimanolis
f your version, but it may be (
https://issues.apache.org/jira/browse/CASSANDRA-13620 ) , or it may be
something else.
I wouldn't expect compaction to touch them if they're invalid. The handle may
be a leftover from trying to load them.
On Tue, Aug 1, 2017 at 10:01 AM, Sotirios Delimanolis
@Jeff, why does compaction clear them and why does Cassandra keep a handle to
them? Shouldn't they be ignored entirely? Is there an error log I can enable to
detect them?
@kurt, there are no such logs for any of these tables. We have a custom log in
our build of Cassandra that does show that
I don't want to go down the TTL path because this behaviour is also occurring
for tables without a TTL. I don't have hard numbers about the amount of writes,
but there's definitely been enough to trigger compaction in the ~year since.
We've never changed the topology of this cluster. Ranges have
On Cassandra 2.2.11, I have a table that uses LeveledCompactionStrategy and
that gets written to continuously. If I list the files in its data directory, I
see something like this
-rw-r--r-- 1 acassy agroup 161733811 Jul 31 18:46 lb-135346-big-Data.db
-rw-r--r-- 1 acassy agroup 159626222 Jul 31
A deployment of mine is hitting the
cassandra.max_queued_native_transport_requests limit quite often. I'd like to
trace which batch of requests caused Cassandra to go over the limit, by adding
some logs to Cassandra 2.2.
I was considering setting some thread local flag when we block and
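The thread-local idea can be sketched generically (in Python here rather than Cassandra's Java; all names are illustrative): mark the thread when it blocks on the queue, and emit the extra log line only while the flag is set.

```python
import threading

# Per-thread flag: set when this thread blocked on the request queue.
_state = threading.local()

def mark_blocked():
    _state.blocked = True

def clear_blocked():
    _state.blocked = False

def maybe_log(request):
    """Return a log line only for requests processed while the flag is set."""
    if getattr(_state, "blocked", False):
        return f"queued-over-limit request: {request}"
    return None

mark_blocked()
assert maybe_log("batch-42") is not None
clear_blocked()
assert maybe_log("batch-43") is None
```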
:45 PM, Sotirios Delimanolis <sotodel...@yahoo.com>
wrote:
I forgot to check nodetool gossipinfo. Still, why does the first check think
that the address exists, but the second doesn't?
On Friday, January 6, 2017 1:11 PM, David Berry <dbe...@blackberry.com>
wrote:
We're using Cassandra 2.2.
This document lists a number of CQL limits. I'm particularly interested in the
Collection limits for Set and List. If I've interpreted it correctly, the
document states that values in Sets are limited to 65535 bytes.
This limit, as far as I know, exists because the
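If the reason is the one usually cited (an assumption here, not confirmed by the thread): older native-protocol versions (v2 and earlier) length-prefixed each collection element with an unsigned 16-bit short, which caps an element at 2**16 - 1 bytes:

```python
import struct

# Assumption: element lengths were encoded as an unsigned 16-bit short.
MAX_V2_ELEMENT = 2 ** 16 - 1
assert MAX_V2_ELEMENT == 65535

value = b"x" * MAX_V2_ELEMENT
prefix = struct.pack(">H", len(value))      # 65535 fits exactly
overflowed = False
try:
    struct.pack(">H", len(value) + 1)       # 65536 overflows the short
except struct.error:
    overflowed = True
assert overflowed
```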
0e6184
TOKENS:15
Converting it from epoch:
local@img2116saturn101:~$ date -d @$((1483995662276/1000))
Mon Jan 9 21:01:02 UTC 2017
At the time we waited the 72 hour period before reusing the IP. I've not used
replace_address previously.
From: Sotirios Delimanolis [mailto:s
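The shell one-liner can be mirrored in Python (gossip timestamps are epoch milliseconds; the helper name is illustrative):

```python
from datetime import datetime, timezone

def gossip_ms_to_utc(epoch_ms):
    """Convert an epoch-milliseconds gossip timestamp to a UTC datetime."""
    return datetime.fromtimestamp(epoch_ms // 1000, tz=timezone.utc)

# Same value as the shell one-liner: date -d @$((1483995662276/1000))
assert gossip_ms_to_utc(1483995662276).strftime(
    "%a %b %d %H:%M:%S %Z %Y") == "Mon Jan 09 21:01:02 UTC 2017"
```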
We had a node go down in our cluster and its disk had to be wiped. During that
time, all nodes in the cluster have restarted at least once.
We want to add the bad node back to the ring. It has the same IP/hostname. I
follow the steps here for "Adding nodes to an existing cluster."
When the
: 2147483647
SSTable Level: 0
There's a mix of level 0 and level 1, but the 0 are definitely the bigger ones.
On Thursday, December 8, 2016 9:01 AM, Eric Evans
<john.eric.ev...@gmail.com> wrote:
On Wed, Dec 7, 2016 at 6:35 PM, Sotirios Delimanolis
<sotodel...@yahoo.com> wrote:
>
This can happen as part of node bootstrap, repair, or rebuild.
From: Sotirios Delimanolis <sotodel...@yahoo.com>
Sent: Wednesday, December 7, 2016 4:35:45 PM
To: User
Subject: Huge files in level 1
I have a couple of SSTables that are humongous
-rw-r--r-- 1 user group 138933736915 Dec 1 03:41 lb-29677471-big-Data.db
-rw-r--r-- 1 user group  78444316655 Dec 1 03:58 lb-29677495-big-Data.db
-rw-r--r-- 1 user group 212429252597 Dec 1 08:20 lb-29678145-big-Data.db
sstablemetadata reports
Hey C*,
During startup of a Cassandra 2.2.7 node, some part of the process fails.
Here's a snippet of logs with the stack trace
TRACE [main] 2016-07-28 19:06:25,381 SliceQueryFilter.java:269 - collecting 6
of 2147483647: custom_table:ep_str:component_index:false:4@1441163636178000
TRACE [main]
We're running G1 at the moment, both young and mixed collections.
On Thursday, April 21, 2016 11:07 AM, Jake Luciani <jak...@gmail.com> wrote:
What kind of collection? If it's ParNew I wouldn't worry.
On Thu, Apr 21, 2016 at 2:02 PM, Sotirios Delimanolis <sotodel...@yahoo.co
wrote:
It's only used by the Snappy and LZ4 Compressors
On Thu, Apr 21, 2016 at 1:54 PM, Sotirios Delimanolis <sotodel...@yahoo.com>
wrote:
According to this Oracle document, GCLocker Initiated GC is triggered when a
JNI critical region was released. GC is blocked when any thread is in the JNI
critical region. If GC was requested during that period, that GC is invoked
after all the threads come out of the JNI critical
It was the driver after all. The C# driver (and I'm guessing others) query this
table as part of their heartbeat for idle connections. We have a lot of
clients. This adds up.
I don't believe this is the cause of the increasing network traffic.
On Wednesday, April 6, 2016 2:22 PM, Sotirios
Hey,
I'm investigating an issue where the network traffic on a Cassandra 2.1 node
increases over time, regardless of the load our clients are under.
I tried enabling TRACE logging for org.apache.cassandra.transport.Message and
got bombarded with logs like these
DEBUG [SharedPool-Worker-2]
native, not thrift.
...
Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872
On Fri, Feb 19, 2016 at 10:12 AM, Sotirios Delimanolis <sotodel...@yahoo.com>
wrote:
Does your cluster contain 24+ nodes or fewer?
We did the same upgrade on a smaller cluster of 5
to do very interesting stuff. Updating to native now that
you are using 2.1 is something you might want to do soon enough :-).
C*heers,
Alain Rodriguez, France
The Last Pickle
http://www.thelastpickle.com
2016-02-19 3:07 GMT+01:00 Sotirios Delimanolis <sotodel...@yahoo.com>
We have a Cassandra cluster with 24 nodes. These nodes were running 2.0.16.
While the nodes are in the ring and handling queries, we perform the upgrade to
2.1.12 as follows (more or less) one node at a time:
- Stop the Cassandra process
- Deploy jars, scripts, binaries, etc.
-
Hey,
I wanted to ask here before I opened an issue for the driver.
I'm using version 2.7.3 of the driver.
The PoolingOptions define core connections and max connections that limit the
number of connections that should be opened to each host.
However, each Session object you retrieve through
!forum/csharp-driver-user
-Carl
On Thu, Dec 17, 2015 at 3:01 PM, Sotirios Delimanolis <sotodel...@yahoo.com>
wrote:
Similarly, should we send multiple SELECT requests or a single one with a
SELECT...IN ?
On Wednesday, June 10, 2015 11:27 AM, Sotirios Delimanolis
sotodel...@yahoo.com wrote:
Will this "eventually they will all go through" behavior apply to the IN? How
is this query written
Hi,
When executing a DELETE statement with an IN clause, where the list contains
partition keys, what is the underlying behaviour with regards to atomicity?
DELETE FROM MastersOfTheUniverse WHERE mastersID IN ('Man-At-Arms', 'Teela');
Is it going to act like an atomic batch where if one fails,
they will all go through.
Do not use IN(), use a whole bunch of prepared statements asynchronously.
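That fan-out pattern can be sketched as follows. This is a hedged illustration: delete_key is a stand-in function, and with the DataStax Python driver you would instead prepare "DELETE FROM MastersOfTheUniverse WHERE mastersID = ?" once and call session.execute_async(prepared, (key,)) per key.

```python
from concurrent.futures import ThreadPoolExecutor

def delete_key(key):
    # Placeholder for the driver call; returns what was "deleted".
    return ("deleted", key)

def delete_all(keys, parallelism=8):
    # One single-partition delete per key, issued concurrently,
    # instead of one multi-partition IN() query.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(delete_key, keys))

results = delete_all(["Man-At-Arms", "Teela"])
assert len(results) == 2
```

Each delete then targets exactly one partition and one coordinator-to-replica path, rather than making a single coordinator fan out to every partition in the IN list.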
On Wed, Jun 10, 2015 at 9:26 AM Sotirios Delimanolis sotodel...@yahoo.com
wrote:
Hey all,
Assuming a cluster with X > 1 application nodes backed by Y > 1 Cassandra
nodes, how do you best apply a schema modification?
Typically, such a schema modification is going to be done in parallel with code
changes (for querying the table) so all application nodes have to be restarted.
script and the code deploy
happens on that first single node before the code goes to all the other nodes.
Does that sound right?
Soto
On Monday, January 12, 2015 6:10 PM, Robert Coli rc...@eventbrite.com
wrote:
On Mon, Jan 12, 2015 at 5:46 PM, Sotirios Delimanolis sotodel