[
https://issues.apache.org/jira/browse/FLINK-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403461#comment-16403461
]
Pankaj commented on FLINK-9009:
-------------------------------
Stack trace:
Sink: Cassandra Sink, Filter -> Flat Map -> Sink: Unnamed, Filter -> Flat Map -> Sink: Cassandra Sink, Filter -> Flat Map -> Sink: Unnamed, Filter -> Flat Map -> Sink: Cassandra Sink) (1/10) switched to RUNNING
17.03.2018 14:33:59.740 [OUT] [ERROR] [ ] [ ] io.netty.util.ResourceLeakDetector LEAK: You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the JVM, so that only a few instances are created.
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="/opt/tomcat/bin/shutdown.sh 5"
#   Executing /bin/sh -c "/opt/tomcat/bin/shutdown.sh 5"...
Mar 17, 2018 2:34:28 PM org.apache.catalina.startup.Catalina stopServer
SEVERE: Could not contact localhost:8005. Tomcat may not be running.
Mar 17, 2018 2:34:28 PM org.apache.catalina.startup.Catalina stopServer
SEVERE: Catalina.stop:
java.net.ConnectException: Connection refused (Connection refused)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at java.net.Socket.connect(Socket.java:538)
-----------------------
"cluster16-nio-worker-1" #142 prio=5 os_prio=0 tid=0x00007f97fc386000 nid=0xbe
waiting for monitor entry [0x00007f9782e86000] java.lang.Thread.State: BLOCKED
(on object monitor) at
*com.datastax.driver.core.Connection$10.operationComplete(Connection.java:547)
at*
com.datastax.driver.core.Connection$10.operationComplete(Connection.java:534)
at
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at
io.netty.util.concurrent.DefaultPromise.notifyLateListener(DefaultPromise.java:621)
at
io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:138) at
io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:93)
at
io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:28)
at com.datastax.driver.core.Connection$Flusher.run(Connection.java:870) at
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:748)
----
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:643)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:700)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:636)
    at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:284)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:643)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:700)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:636)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:622)
    at io.netty.channel.DefaultChannelPipeline.write(DefaultChannelPipeline.java:939)
    at io.netty.channel.AbstractChannel.write(AbstractChannel.java:234)
    at com.datastax.driver.core.Connection$Flusher.run(Connection.java:870)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
    at java.lang.Thread.run(Thread.java:748)
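
For context (not from the original report): the LEAK warning above refers to io.netty.util.HashedWheelTimer, which starts its own background thread, so constructing many instances leaks threads and heap. A minimal sketch of the reuse pattern the message asks for, with purely illustrative class and field names, might look like this:

{code:java}
import io.netty.util.HashedWheelTimer;
import io.netty.util.Timeout;
import java.util.concurrent.TimeUnit;

// Illustrative only: one HashedWheelTimer shared across the JVM,
// instead of constructing a new timer per connection or task.
public final class SharedTimer {

    // A single timer instance for the whole application.
    private static final HashedWheelTimer TIMER = new HashedWheelTimer();

    private SharedTimer() {
    }

    // Schedule a task on the shared timer.
    public static Timeout schedule(Runnable task, long delayMillis) {
        return TIMER.newTimeout(t -> task.run(), delayMillis, TimeUnit.MILLISECONDS);
    }

    // Call once on shutdown to release the timer thread.
    public static void shutdown() {
        TIMER.stop();
    }
}
{code}

In this job the timers are presumably created inside the Cassandra driver's Netty layer (see the com.datastax.driver.core.Connection frames above) rather than in user code, so the sketch only illustrates what the warning means by "reused", not a fix for this issue.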
> Error| You are creating too many HashedWheelTimer instances.
> HashedWheelTimer is a shared resource that must be reused across the
> application, so that only a few instances are created.
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: FLINK-9009
> URL: https://issues.apache.org/jira/browse/FLINK-9009
> Project: Flink
> Issue Type: Bug
> Environment: PaaS platform: OpenShift
> Reporter: Pankaj
> Priority: Blocker
>
> Steps to reproduce:
> 1- Flink with Kafka as a consumer -> writing the stream to Cassandra using the
> Flink Cassandra sink (see the sketch after this quoted description).
> 2- In-memory JobManager and TaskManager with checkpointing every 5000 ms.
> 3- env.setParallelism(10) -> as the Kafka topic has 10 partitions.
> 4- There are around 13 distinct streams in a single Flink runtime environment,
> each reading from Kafka -> processing and writing to Cassandra.
> Hardware: CPU: 200 millicores. It is deployed on a PaaS platform on one node.
> Memory: 526 MB.
>
> When I start the server, it starts Flink and then all of a sudden stops with the
> above error. It also shows an out-of-memory error.
>
> It would be nice if anybody could suggest whether something is wrong.
>
> Maven:
> flink-connector-cassandra_2.11: 1.3.2
> flink-streaming-java_2.11: 1.4.0
> flink-connector-kafka-0.11_2.11: 1.4.0
>
>
>
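
For reference, a minimal sketch of the kind of job described in the quoted reproduction steps (Kafka source, parallelism 10, checkpointing every 5000 ms, Cassandra sink). This is not the reporter's code: the topic, keyspace, table, and host names are hypothetical, and the exact connector classes depend on the Flink version actually in use:

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

// Illustrative only: one of the ~13 Kafka -> Cassandra streams described above.
public class KafkaToCassandraJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(5000);   // step 2: checkpoint every 5000 ms
        env.setParallelism(10);          // step 3: Kafka topic has 10 partitions

        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", "kafka:9092"); // hypothetical broker
        kafkaProps.setProperty("group.id", "example-group");       // hypothetical group

        DataStream<String> input = env.addSource(
                new FlinkKafkaConsumer011<>("example-topic", new SimpleStringSchema(), kafkaProps));

        // The real job chains Filter -> Flat Map; a simple map keeps the sketch short.
        DataStream<Tuple2<String, String>> records = input
                .map(new MapFunction<String, Tuple2<String, String>>() {
                    @Override
                    public Tuple2<String, String> map(String value) {
                        return Tuple2.of("example-key", value);
                    }
                });

        CassandraSink.addSink(records)
                .setQuery("INSERT INTO example_ks.example_table (k, v) VALUES (?, ?);")
                .setHost("cassandra-host") // hypothetical host
                .build();

        env.execute("kafka-to-cassandra");
    }
}
{code}

Each such stream opens its own Cassandra sink, which matches the number of driver connections visible in the thread dumps above.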
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)