Check Cassandra logs for tombstone threshold error
On Aug 3, 2015 7:32 PM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Aug 3, 2015 at 2:48 PM, Sid Tantia sid.tan...@baseboxsoftware.com
wrote:
Any SELECT * or SELECT COUNT(*) query on a particular table is timing out
with
There's your problem: you're using the DataStax Java driver :) I just ran
into this issue in the last week and it was incredibly frustrating. If you
are doing a simple loop over a SELECT * result set, the DataStax Java
driver will only process up to 2^31 - 1 rows (Java's Integer.MAX_VALUE,
2,147,483,647).
It could be the Linux kernel killing Cassandra because of memory usage (the
OOM killer). When this happens, nothing is logged by Cassandra itself. Check
the system logs in /var/log/messages and look for a message like
"Out of memory: Kill process ...".
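A quick way to do that check, sketched in Python (the exact kernel message wording varies by distribution, so the markers below are assumptions, and the sample lines are illustrative):

```python
# Sketch: scan syslog lines for OOM-killer activity. Kernel message
# formats differ slightly between distributions, so these substring
# markers are assumptions, not an exhaustive list.

def is_oom_kill(line):
    """Return True if a kernel log line looks like OOM-killer activity."""
    markers = ("Out of memory", "oom-killer", "oom_kill")
    return any(m in line for m in markers)

# In practice you would iterate over open('/var/log/messages');
# sample lines stand in for a real syslog here.
sample = [
    "kernel: java invoked oom-killer: gfp_mask=0x201da, order=0",
    "kernel: Out of memory: Kill process 12345 (java) score 901 or sacrifice child",
    "kernel: eth0: link up",
]
hits = [l for l in sample if is_oom_kill(l)]
print(len(hits))  # 2 of the 3 sample lines match
```

If the Cassandra PID shows up in one of those "Kill process" lines, the JVM was killed from outside, which is consistent with Cassandra's own logs ending abruptly.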
On Mon, Jun 8, 2015 at 1:37 PM, Paulo Motta pauloricard...@gmail.com
wrote:
Try breaking it up into smaller chunks using multiple threads and token
ranges. 86400 is pretty large; I found ~1000 results per query is a good
size. This spreads the load across all the servers a little more evenly.
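The chunking idea above can be sketched like this, assuming the Murmur3 partitioner's token range of -2^63 to 2^63 - 1; the keyspace, table, and column names in the comment are hypothetical:

```python
# Sketch: split the full Murmur3 token ring into contiguous sub-ranges so
# each SELECT touches only a small slice of the table (assumes the default
# Murmur3Partitioner token range).

MIN_TOKEN = -2**63          # Murmur3 partitioner minimum token
MAX_TOKEN = 2**63 - 1       # Murmur3 partitioner maximum token

def token_ranges(n_chunks):
    """Yield (start, end] token ranges that together cover the whole ring."""
    step = 2**64 // n_chunks
    start = MIN_TOKEN
    for i in range(n_chunks):
        # Last range absorbs the rounding remainder so coverage is exact.
        end = MAX_TOKEN if i == n_chunks - 1 else start + step
        yield (start, end)
        start = end

# Each range then becomes one small query, run from its own thread, e.g.
# (keyspace/table/key names are assumptions):
#   SELECT * FROM my_ks.my_table WHERE token(pk) > %s AND token(pk) <= %s
ranges = list(token_ranges(100))
print(len(ranges), ranges[0][0], ranges[-1][1])
```

Sizing n_chunks so each range returns on the order of ~1000 rows keeps individual queries cheap and lets the coordinator work stay spread across the cluster.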
On Thu, May 7, 2015 at 4:27 AM, Alprema alpr...@alprema.com wrote:
Hi,
I am
What other storage-impacting commands or nuances do you have to consider
when you switch to leveled compaction? For instance, the nodetool cleanup
documentation says: "Running the nodetool cleanup command causes a temporary
increase in disk space usage proportional to the size of your largest
SSTable."
Are sstables
I have one DC that was originally 3 nodes, each set with a single token:
'-9223372036854775808', '-3074457345618258603', '3074457345618258602'.
I added two more nodes and ran nodetool move and nodetool cleanup one
server at a time with these tokens: '-9223372036854775808',
'-5534023222112865485',
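As a sanity check on tokens like these, evenly spaced Murmur3 tokens for an n-node cluster can be computed like this (a sketch; it assumes the Murmur3 partitioner's range of -2^63 to 2^63 - 1):

```python
# Sketch: compute evenly spaced single tokens for n nodes on the
# Murmur3 partitioner ring (assumed range: -2**63 .. 2**63 - 1).

def balanced_tokens(n):
    step = 2**64 // n                      # full ring divided by node count
    return [-2**63 + i * step for i in range(n)]

print(balanced_tokens(3))
# [-9223372036854775808, -3074457345618258603, 3074457345618258602]
```

For n = 3 this reproduces the original three tokens exactly, so you can use the same calculation to double-check the targets you pass to nodetool move after adding nodes.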