Hello, I have a curious behaviour occurring.
- 7-node cluster
- RF on the Keyspace is 3
- Latest version of everything (C* and Python Drivers)
- All queries are at QUORUM level
Some of my larger queries are timing out, which is ok, it can happen. But
looking at the log, I see the following:
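For context on why one slow replica can time out a query here: QUORUM needs a majority of the RF replicas to respond. A quick sketch of the arithmetic (plain Python, nothing cluster-specific):

```python
# QUORUM needs a majority of the RF replicas to respond.
def quorum(rf: int) -> int:
    return rf // 2 + 1

# With RF=3 (as on this keyspace), every QUORUM query needs 2 of the 3
# replicas, so a single slow or overloaded replica is enough to cause
# a timeout even though 7 nodes are up.
print(quorum(3))  # -> 2
```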
1) Turn on DEBUG logging on the joining node to see in detail what is going
on with the stream of 1500 files
2) Check the stream ID to see whether it's a new stream or an old one still
pending
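Assuming the joining node is on 2.1 (which logs via logback), a minimal, hypothetical addition to conf/logback.xml to get that DEBUG detail for streaming only:

```xml
<!-- Hypothetical snippet for conf/logback.xml: DEBUG for the streaming
     package only, so the rest of the log stays at INFO. -->
<logger name="org.apache.cassandra.streaming" level="DEBUG"/>
```

On 2.0.x, which still uses log4j, the equivalent line would go in conf/log4j-server.properties: `log4j.logger.org.apache.cassandra.streaming=DEBUG`.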
On Wed, Oct 29, 2014 at 2:21 AM, Maxime maxim...@gmail.com wrote:
Doan, thanks for the tip, I just read
ea8dfb47177bd40f46aac4fe41d3cfea3316cf35451ace0825f46b6e0fa9e3ef in ColumnFamily(loc.loc_id_idx [66652e312e31332e3830:0:false:0@1414696815262000!63072000,])
This is a sample of the 'Enqueuing flush' events in the storm.
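For what it's worth, the pieces of that flush log line can be decoded with nothing but the standard library; my reading of the cell format (worth double-checking) is that the column name is hex-encoded ASCII, the @… part is a microsecond write timestamp, and the !… part is a TTL in seconds:

```python
import datetime

# The column name in the log line is hex-encoded ASCII.
name = bytes.fromhex("66652e312e31332e3830").decode("ascii")
print(name)  # -> fe.1.13.80

# @1414696815262000 is the cell write timestamp in microseconds since the epoch.
ts = datetime.datetime.fromtimestamp(1414696815262000 / 1_000_000,
                                     tz=datetime.timezone.utc)
print(ts.date())  # -> 2014-10-30

# !63072000 is the cell TTL in seconds: 730 days, i.e. two years.
print(63072000 // 86400)  # -> 730
```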
On Thu, Oct 30, 2014 at 12:20 PM, Maxime maxim...@gmail.com wrote:
I will give it a shot, adding the logging
Well, the answer was secondary indexes. I am guessing they were corrupted
somehow. I dropped all of them, ran a cleanup, and now the nodes are
bootstrapping fine.
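Spelled out as a sketch in CQL (the index name is the one from the flush log line above; list yours with DESCRIBE before dropping anything):

```
-- Drop the suspect secondary index; repeat for each index on the keyspace.
DROP INDEX loc.loc_id_idx;
```

followed by `nodetool cleanup` on each node once the drops have propagated.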
On Thu, Oct 30, 2014 at 3:50 PM, Maxime maxim...@gmail.com wrote:
I've been trying to go through the logs but I can't say I understand very
Maxime maxim...@gmail.com wrote:
Hmm, thanks for the reading.
I initially followed some (perhaps too old) maintenance scripts, which
included a weekly 'nodetool compact'. Is there a way for me to undo the
damage? Tombstones will be a very important issue for me since the dataset
is very much
the queue_size to any value (deprecated now?) and boosting the threads
does not seem to help since even at 20 we're an order of magnitude off.
Suggestions? Comments?
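One reason the weekly major compactions matter for tombstones, sketched below under the assumption of default settings: a deleted cell can only be purged by a compaction that runs after its gc_grace window, and a single giant SSTable produced by 'nodetool compact' rarely gets picked up by further size-tiered compactions, so its tombstones tend to linger.

```python
# Tombstone lifecycle sketch: a deleted cell may only be dropped by a
# compaction that runs after deletion_time + gc_grace_seconds
# (default 864000 s = 10 days).
GC_GRACE_DEFAULT = 864_000  # seconds

def purgeable_at(deletion_time_s: int, gc_grace_s: int = GC_GRACE_DEFAULT) -> int:
    # Earliest epoch time at which a compaction may drop the tombstone.
    return deletion_time_s + gc_grace_s

print(GC_GRACE_DEFAULT // 86_400)  # -> 10 (days)
```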
On Sun, Oct 26, 2014 at 2:26 AM, DuyHai Doan doanduy...@gmail.com wrote:
Hello Maxime
Can you put the complete logs and config somewhere
DuyHai Doan doanduy...@gmail.com wrote:
Hello Maxime
Increasing the flush writers won't help if your disk I/O is not keeping up.
I've had a look into the log file, below are some remarks:
1) There are a lot of SSTables on disk for some tables (events for example,
but not only). I've seen
The higher those numbers are, the more overwhelmed your disk is.
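A quick way to get those per-table SSTable counts from a data directory listing (the file names below are made up; real ones come from os.listdir() under /var/lib/cassandra/data/<keyspace>/, and in the 2.0.x 'jb' format look like <keyspace>-<table>-jb-<generation>-Data.db):

```python
from collections import Counter

# Hypothetical directory listing standing in for a real os.listdir() result.
files = [
    "ks1-events-jb-101-Data.db",
    "ks1-events-jb-102-Data.db",
    "ks1-events-jb-103-Data.db",
    "ks1-loc-jb-7-Data.db",
]

# One -Data.db component per live SSTable, counted per table name
# (the second dash-separated field in this naming scheme).
counts = Counter(f.split("-")[1] for f in files if f.endswith("-Data.db"))
print(counts)  # events -> 3, loc -> 1
```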
Hello, I've been trying to add a new node to my cluster (4 nodes) for a
few days now.
I started by adding a node similar to my current configuration, 4 GB of RAM
+ 2 cores on DigitalOcean. However, every time I would end up getting OOM
errors after many log entries of the type:
INFO
not encounter problems - it just works so I dig into
other stuff.
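The OOMs are less surprising given how small the default heap ends up on these droplets. For reference, cassandra-env.sh in the 2.x line sizes the heap roughly as below (my transcription of the shell heuristic; double-check against your version):

```python
# Rough Python transcription of the heap heuristic in cassandra-env.sh (2.x):
# MAX_HEAP_SIZE = max(min(1/2 * RAM, 1024 MB), min(1/4 * RAM, 8192 MB)).
def max_heap_mb(system_memory_mb: int) -> int:
    half = min(system_memory_mb // 2, 1024)
    quarter = min(system_memory_mb // 4, 8192)
    return max(half, quarter)

print(max_heap_mb(2048))  # 2 GB droplet -> 1024 MB heap
print(max_heap_mb(4096))  # 4 GB droplet -> 1024 MB heap
print(max_heap_mb(8192))  # 8 GB droplet -> 2048 MB heap
```

So both the 2 GB and 4 GB machines bootstrap on a 1 GB heap, which is tight for streaming a dense node.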
On Sat, Oct 25, 2014 at 5:22 PM, Maxime maxim...@gmail.com wrote:
Is there some unwritten wisdom with regards to the use of 'nodetool compact'
before bootstrapping new nodes and decommissioning old ones?
TL;DR:
I've been spending the last few days trying to move a cluster on
DigitalOcean from 2GB machines to 4GB machines (same provider). To do so I
wanted to create the
. But with Cassandra
2.0.7 and the addition of DataStax's Java driver to the dependencies, I am
getting this error.
Any idea how I could fix this?
Thanks!
Maxime