Hi - has anyone made any progress with this issue? We are having the same
problem with our Cassandra nodes in production. At some point a node (and
sometimes all 3) will jump to 100% CPU usage and stay there for hours until
restarted. Stack traces reveal several threads in a seemingly endless loop
I'm encountering the same problem. Has anyone solved it?
On Tue, Apr 20, 2010 at 11:03 AM, Ingram Chen ingramc...@gmail.com wrote:
I checked system.log on both nodes, but there are no exceptions logged.
On Tue, Apr 20, 2010 at 10:40, Jonathan Ellis jbel...@gmail.com wrote:
I don't see
We had this problem initially, but it disappeared after several days of
operation, so we had no chance to investigate further.
2010/5/10 Даниел Симеонов dsimeo...@gmail.com
Hi,
I've experienced the same problem, two nodes got stuck with CPU at 99% and
the following source code from IncomingStreamReader class:
while (bytesRead < pendingFile.getExpectedBytes()) {
    bytesRead += fc.transferFrom(socketChannel, bytesRead,
                                 FileStreamTask.CHUNK_SIZE);
}
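The hazard in that loop is that FileChannel.transferFrom can legitimately return 0 (for example once the remote side hits EOF or the socket dies), and the loop above never checks the return value, so it can spin forever at 100% CPU without bytesRead ever reaching the expected count. A minimal standalone sketch of the failure mode and a guard (CHUNK_SIZE, the payload, and the inflated expected-byte count are all made up for illustration; this is not the actual Cassandra fix):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TransferGuard {
    // Hypothetical chunk size, standing in for FileStreamTask.CHUNK_SIZE.
    static final int CHUNK_SIZE = 8;

    public static void main(String[] args) throws IOException {
        byte[] payload = "hello stream".getBytes();          // 12 bytes actually sent
        long expectedBytes = payload.length + 100;           // peer claims more

        // An in-memory channel stands in for the stream's socketChannel.
        ReadableByteChannel src =
                Channels.newChannel(new ByteArrayInputStream(payload));
        Path tmp = Files.createTempFile("transfer", ".db");
        try (FileChannel fc = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            long bytesRead = 0;
            while (bytesRead < expectedBytes) {
                long n = fc.transferFrom(src, bytesRead, CHUNK_SIZE);
                if (n == 0) {
                    // Source is at EOF: transferFrom will return 0 forever.
                    // Without this check the loop busy-spins at 100% CPU.
                    break;
                }
                bytesRead += n;
            }
            System.out.println("transferred " + bytesRead + " bytes");
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```

Running it transfers the 12 real bytes and then exits cleanly instead of spinning, which matches the symptom described in the thread: threads stuck in that loop while the file never appears complete.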
On 4/17/10 6:47 PM, Ingram Chen wrote:
after upgrading the JDK from 1.6.0_16 to 1.6.0_20, the problem was solved.
FYI, this sounds like it might be :
https://issues.apache.org/jira/browse/CASSANDRA-896
http://bugs.sun.com/view_bug.do?bug_id=6805775
Where
Ouch! I spoke too soon!
We still suffer the same problems after upgrading to 1.6.0_20.
In JMX StreamingService, I see several weird incoming/outgoing transfers:
In Host A, 192.168.2.87
StreamingService Status:
Done with transfer to /192.168.2.88
StreamingService StreamSources:
[/192.168.2.88]
I don't see csArena-tmp-6-Index.db in the incoming files list. If
it's not there, that means that it did break out of that while loop.
Did you check both logs for exceptions?
On Mon, Apr 19, 2010 at 9:36 PM, Ingram Chen ingramc...@gmail.com wrote:
Ouch! I spoke too soon!
We still suffer
Hi all,
We set up two nodes and simply set replication factor=2 for a test run.
After both nodes, say node A and node B, had served for several hours, we found
that node A always stays at 300% CPU usage.
(the other node is under 100% CPU, which is normal)
A thread dump on node A shows that there are 3 busy