[ https://issues.apache.org/jira/browse/KAFKA-682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13545344#comment-13545344 ]

Ricky Ng-Adam commented on KAFKA-682:
-------------------------------------

After filing the bug initially, I switched to these settings (and then added 
the HeapDump directive):

In bin/kafka-run-class.sh:

KAFKA_OPTS="-server -Xms1024m -Xmx1024m -XX:NewSize=256m -XX:MaxNewSize=256m 
-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution 
-Xloggc:logs/gc.log -Djava.awt.headless=true 
-Dlog4j.configuration=file:$base_dir/config/log4j.properties 
-XX:+HeapDumpOnOutOfMemoryError"

Shouldn't these defaults be set more aggressively, as per the operational 
suggestions? It's probably better to let users lower them than to force them 
to raise them.
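To confirm which heap settings actually took effect in the running broker (rather than trusting the script), a quick check from inside the JVM works; this is just a generic sketch, not anything Kafka-specific:

```java
// Prints the effective max heap, which should roughly match -Xmx.
public class HeapCheck {
    public static void main(String[] args) {
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("max heap MB: " + maxMb);
    }
}
```

Running this with `java -Xmx1024m HeapCheck` should report a value close to 1024 (the JVM may reserve a little for itself).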

I downloaded the Eclipse Memory Analyzer (MAT) and ran it on the hprof heap 
dump. It points out two issues, of which this is the more noticeable:

One instance of "java.nio.HeapByteBuffer" loaded by "<system class loader>" 
occupies 8,404,016 (58.22%) bytes. The instance is referenced by 
kafka.network.BoundedByteBufferReceive @ 0x7ad6a038 , loaded by 
"sun.misc.Launcher$AppClassLoader @ 0x7ad00d40". The memory is accumulated in 
one instance of "byte[]" loaded by "<system class loader>"
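The MAT finding suggests the dominant byte[] is the payload buffer that BoundedByteBufferReceive allocates after reading a size header off the socket. A minimal sketch of that general pattern follows; the class name, cap constant, and logic here are hypothetical illustrations, not Kafka's actual code. It shows why a bogus size header (e.g. from a client speaking a different protocol) can trigger a huge allocation and an OOM unless the size is bounded first:

```java
import java.nio.ByteBuffer;

public class SizePrefixedReceive {
    // Hypothetical cap; a real server would make this configurable.
    static final int MAX_REQUEST_SIZE = 100 * 1024 * 1024;

    // Reads a 4-byte size header, then allocates a buffer for the payload.
    // Rejecting absurd sizes up front avoids letting allocate() blow the heap.
    static ByteBuffer receive(ByteBuffer wire) {
        int size = wire.getInt();
        if (size < 0 || size > MAX_REQUEST_SIZE) {
            throw new IllegalStateException("bogus request size: " + size);
        }
        return ByteBuffer.allocate(size);
    }

    public static void main(String[] args) {
        // A well-formed frame: size header of 16.
        ByteBuffer ok = ByteBuffer.allocate(4).putInt(16);
        ok.flip();
        System.out.println("allocated: " + receive(ok).capacity());

        // First 4 bytes from a misbehaving client decode to a huge "size";
        // without the bound, allocate() here would be the OOM site.
        ByteBuffer bad = ByteBuffer.allocate(4).putInt(Integer.MAX_VALUE);
        bad.flip();
        try {
            receive(bad);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

If the OOM really does come from an unbounded size-driven allocation (here it seems to be per-message buffers in ProducerRequest.readFrom), raising -Xmx only moves the threshold; validating the size header is the structural fix.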

> java.lang.OutOfMemoryError: Java heap space
> -------------------------------------------
>
>                 Key: KAFKA-682
>                 URL: https://issues.apache.org/jira/browse/KAFKA-682
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.8
>         Environment: $ uname -a
> Linux rngadam-think 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:32:08 UTC 
> 2012 i686 i686 i686 GNU/Linux
> $ java -version
> java version "1.7.0_09"
> OpenJDK Runtime Environment (IcedTea7 2.3.3) (7u9-2.3.3-0ubuntu1~12.04.1)
> OpenJDK Server VM (build 23.2-b09, mixed mode)
>            Reporter: Ricky Ng-Adam
>
> git pull (commit 32dae955d5e2e2dd45bddb628cb07c874241d856)
> ...build...
> ./sbt update
> ./sbt package
> ...run...
> bin/zookeeper-server-start.sh config/zookeeper.properties
> bin/kafka-server-start.sh config/server.properties
> ...then configured fluentd with kafka plugin...
> gem install fluentd --no-ri --no-rdoc
> gem install fluent-plugin-kafka
> fluentd -c ./fluent/fluent.conf -vv
> ...then flood fluentd with messages input from syslog and output to 
> kafka.
> results in (after about 10000 messages of 1K each in 3s):
> [2013-01-05 02:00:52,087] ERROR Closing socket for /127.0.0.1 because of 
> error (kafka.network.Processor)
> java.lang.OutOfMemoryError: Java heap space
>     at 
> kafka.api.ProducerRequest$$anonfun$1$$anonfun$apply$1.apply(ProducerRequest.scala:45)
>     at 
> kafka.api.ProducerRequest$$anonfun$1$$anonfun$apply$1.apply(ProducerRequest.scala:42)
>     at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>     at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
>     at scala.collection.immutable.Range$ByOne$class.foreach(Range.scala:282)
>     at scala.collection.immutable.Range$$anon$1.foreach(Range.scala:274)
>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
>     at scala.collection.immutable.Range.map(Range.scala:39)
>     at kafka.api.ProducerRequest$$anonfun$1.apply(ProducerRequest.scala:42)
>     at kafka.api.ProducerRequest$$anonfun$1.apply(ProducerRequest.scala:38)
>     at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:227)
>     at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:227)
>     at scala.collection.immutable.Range$ByOne$class.foreach(Range.scala:282)
>     at scala.collection.immutable.Range$$anon$1.foreach(Range.scala:274)
>     at 
> scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:227)
>     at scala.collection.immutable.Range.flatMap(Range.scala:39)
>     at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:38)
>     at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:32)
>     at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:32)
>     at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:47)
>     at kafka.network.Processor.read(SocketServer.scala:298)
>     at kafka.network.Processor.run(SocketServer.scala:209)
>     at java.lang.Thread.run(Thread.java:722)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira