These are the current configuration values:

*drpc.childopts*
-Xmx768m

*storm.zookeeper.connection.timeout*
15000

*nimbus.childopts*
-Xmx1024m -Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf
-javaagent:/usr/lib/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8649,wireformat31x=true,mode=multicast,config=/usr/lib/storm/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Nimbus_JVM

*storm.zookeeper.session.timeout*
20000

*supervisor.childopts*
-Xmx256m -Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.port=56431
-javaagent:/usr/lib/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/lib/storm/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Supervisor_JVM

*topology.max.spout.pending*
50

*worker.childopts*
-Xmx2048m
-javaagent:/usr/lib/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/lib/storm/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Worker_%ID%_JVM

*logviewer.childopts*
-Xmx128m
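
For reference, the settings above can be collected into storm.yaml roughly as below. This is a trimmed sketch: the jmxetric javaagent options are omitted for brevity, and the heap-dump flags discussed later in the thread are shown added to worker.childopts with an assumed, illustrative dump path.

```yaml
# Sketch of the relevant storm.yaml entries (values as listed above).
drpc.childopts: "-Xmx768m"
storm.zookeeper.connection.timeout: 15000
storm.zookeeper.session.timeout: 20000
topology.max.spout.pending: 50
logviewer.childopts: "-Xmx128m"
# Worker heap at 2 GB; dump the heap on OutOfMemoryError so retained
# objects can be inspected afterwards. The dump path is an assumed example;
# the jmxetric -javaagent option from the original config is omitted here.
worker.childopts: "-Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/storm/heapdumps"
```

A heap dump written by these flags can then be opened in a tool such as Eclipse MAT or `jhat` to see which objects the worker was holding on to.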

On Tue, Nov 24, 2015 at 9:44 PM, Paul Poulosky <[email protected]>
wrote:

> You might want to see if tuples are backing up and eating up memory, or if
> one of your components in the worker is holding on to stale references, and
> preventing them from being reaped.
>
> You can add the -XX:+HeapDumpOnOutOfMemoryError
> -XX:HeapDumpPath=/path/to/heapdumps arguments to your worker childopts so
> that you can see what your worker was holding on to.
>
>
>
> On Tuesday, November 24, 2015 10:09 AM, Fan Jiang <[email protected]>
> wrote:
>
>
> Look at "worker.childopts" and "worker.heap.size.mb" in storm.yaml on the
> nimbus node. Try increasing worker's heap size by specifying JVM opt "-Xmx
> <heap size>".
>
> 2015-11-24 10:59 GMT-05:00 prakash a <[email protected]>:
>
> We are getting the below error from the Kafka spout on our Storm cluster.
> Please let us know which configuration needs to be updated.
>
>
> ------------------------------------------------------------------------------------------------------------------
>
> java.lang.OutOfMemoryError: Java heap space
>       at java.util.Arrays.copyOf(Arrays.java:2271)
>       at java.io.ByteArrayOutputStream.toByteArray(ByteArrayOutputStream.java:178)
>       at kafka.message.ByteBufferMessageSet$.decompress(ByteBufferMessageSet.scala:75)
>       at kafka.message.ByteBufferMessageSet$$anon$1.makeNextOuter(ByteBufferMessageSet.scala:178)
>       at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:191)
>       at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:145)
>       at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
>       at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
>       at kafka.javaapi.message.ByteBufferMessageSet$$anon$1.hasNext(ByteBufferMessageSet.scala:42)
>       at storm.kafka.PartitionManager.fill(PartitionManager.java:177)
>       at storm.kafka.PartitionManager.next(PartitionManager.java:124)
>       at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:141)
>       at backtype.storm.daemon.executor$fn__3231$fn__3246$fn__3275.invoke(executor.clj:562)
>       at backtype.storm.util$async_loop$fn__442.invoke(util.clj:436)
>       at clojure.lang.AFn.run(AFn.java:24)
>       at java.lang.Thread.run(Thread.java:744)
>
>
> Kafka version 0.8.1.1
> Storm 0.9.1.2.1.11.0-891
>
> --
> Regards,
> Prakash.
> ---------------------------------------------------------
> Doing your best is more important than being the best.
> ----------------------------------------------------------
>
>
>
>
>
>
>
>
> --
> Sincerely,
> Fan Jiang
>
>
>
>


-- 
Regards,
Prakash.
---------------------------------------------------------
Doing your best is more important than being the best.
----------------------------------------------------------
