[ 
https://issues.apache.org/jira/browse/CASSANDRA-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833612#comment-16833612
 ] 

Yap Sok Ann commented on CASSANDRA-11460:
-----------------------------------------

On a 10-node cluster running 3.11.3 with fairly even data distribution, we 
are seeing 1 or 2 random nodes hit OutOfMemoryError when handling writes 
from Spark.

From the resulting heap dump, there would be an instance of the MutationStage 
SEPExecutor occupying 60+ GB of heap (we use G1GC with a 64 GB heap), 
consisting of 11+ million mutations in its task queue.
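As a rough sanity check on those numbers (an illustrative calculation, not taken from the dump itself), 60 GB retained across 11 million queued mutations works out to roughly 5-6 KiB per mutation, which is plausible for bulk-loaded rows:

```python
heap_gib = 60           # heap retained by the SEPExecutor in the dump
mutations = 11_000_000  # tasks observed in its queue

bytes_per_mutation = heap_gib * 1024**3 / mutations
print(f"~{bytes_per_mutation / 1024:.1f} KiB retained per queued mutation")
```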

For now, we will try setting `spark.cassandra.output.throughput_mb_per_sec` to 
a low value and see how it goes.
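In case it helps anyone else, the connector setting can be applied when the Spark session is built. A minimal sketch, assuming the Spark Cassandra Connector is on the classpath; the app name and the 1 MB/s-per-core value are illustrative, not a recommendation:

```python
from pyspark.sql import SparkSession  # requires pyspark + spark-cassandra-connector

spark = (
    SparkSession.builder
    .appName("throttled-cassandra-writer")  # illustrative name
    # Throttle the connector's per-core write throughput to Cassandra.
    .config("spark.cassandra.output.throughput_mb_per_sec", "1")
    .getOrCreate()
)
```

The same setting can also be passed on the command line via `--conf spark.cassandra.output.throughput_mb_per_sec=1`.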

> memory leak
> -----------
>
>                 Key: CASSANDRA-11460
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11460
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: stone
>            Priority: Urgent
>         Attachments: aaa.jpg
>
>
> env:
> Cassandra 3.3
> JDK 8
> 8 GB RAM
> so we set
> MAX_HEAP_SIZE="2G"
> HEAP_NEWSIZE="400M"
> 1. We met the same problem as this:
> https://issues.apache.org/jira/browse/CASSANDRA-9549
> I am confused, because that issue was fixed in release 3.3 according to this page:
> https://github.com/apache/cassandra/blob/trunk/CHANGES.txt
> So I changed to 3.4, and found this problem again.
> I think this fix should be included in 3.3/3.4.
> Can you explain this?
> 2. Our write rate exceeds what our Cassandra environment can support, but I 
> think Cassandra should decrease the write rate, or block and consume the 
> written data to keep memory down before accepting more writes, rather than 
> causing an out-of-memory error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
