- Are you using a key cache?  How many keys do you have?  Across how
many column families?
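
If it helps as a reference point, the key cache is set per column
family.  Assuming you are on 0.6 with storage-conf.xml (you mention the
RowsCached attribute below), the relevant bit looks roughly like this;
the CF name and the sizes are placeholders, not a recommendation:

    <ColumnFamily Name="Example"
                  KeysCached="200000"
                  RowsCached="0"/>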

Your configuration is unusual, both in not setting min heap == max heap
and in the percentage of available RAM given to the heap.  Did you
change the heap size in response to errors, or for another reason?
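
For what it's worth, the usual practice is to pin the heap by giving
-Xms and -Xmx the same value in the startup script.  Something along
these lines (assuming the stock bin/cassandra.in.sh; 4G is only an
illustration, not a sizing recommendation for your 8 GB boxes):

    JVM_OPTS="$JVM_OPTS -Xms4G -Xmx4G"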

On 03/04/2011 03:25 PM, Mark wrote:
> This happens during compaction and we are not using the RowsCached
> attribute.
> 
> Our initial/max heap sizes are 2 GB and 6 GB respectively, and we
> have 8 gigs of RAM in these machines.
> 
> Thanks
> 
> On 3/4/11 12:05 PM, Chris Burroughs wrote:
>> - Does this occur only during compaction or at seemingly random times?
>> - How large is your heap?  What jvm settings are you using? How much
>> physical RAM do you have?
>> - Do you have the row and/or key cache enabled?  How are they
>> configured?  How large are they when the OOM is thrown?
>>
>> On 03/04/2011 02:38 PM, Mark Miller wrote:
>>> Other than adding more memory to the machine, is there a way to
>>> solve this?  Please help.  Thanks
>>>
>>> ERROR [COMPACTION-POOL:1] 2011-03-04 11:11:44,891 CassandraDaemon.java
>>> (line org.apache.cassandra.thrift.CassandraDaemon$1) Uncaught exception
>>> in thread Thread[COMPACTION-POOL:1,5,main]
>>> java.lang.OutOfMemoryError: Java heap space
>>>      at java.util.Arrays.copyOf(Arrays.java:2798)
>>>      at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:111)
>>>      at java.io.DataOutputStream.write(DataOutputStream.java:107)
>>>      at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
>>>      at org.apache.cassandra.utils.FBUtilities.writeByteArray(FBUtilities.java:298)
>>>      at org.apache.cassandra.db.ColumnSerializer.serialize(ColumnSerializer.java:66)
>>>      at org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:311)
>>>      at org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:284)
>>>      at org.apache.cassandra.db.ColumnFamilySerializer.serializeForSSTable(ColumnFamilySerializer.java:87)
>>>      at org.apache.cassandra.db.ColumnFamilySerializer.serializeWithIndexes(ColumnFamilySerializer.java:99)
>>>      at org.apache.cassandra.io.CompactionIterator.getReduced(CompactionIterator.java:140)
>>>      at org.apache.cassandra.io.CompactionIterator.getReduced(CompactionIterator.java:43)
>>>      at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:73)
>>>      at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:135)
>>>      at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:130)
>>>      at org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
>>>      at org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
>>>      at org.apache.cassandra.db.CompactionManager.doCompaction(CompactionManager.java:294)
>>>      at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:101)
>>>      at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:82)
>>>      at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>      at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>>      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>      at java.lang.Thread.run(Thread.java:636)
>>>
> 
