[ https://issues.apache.org/jira/browse/TINKERPOP3-746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stephen mallette closed TINKERPOP3-746.
---------------------------------------
    Resolution: Fixed

Fixed this one via:

https://github.com/apache/incubator-tinkerpop/commit/cb7db410ff3d22e20b57313c33f5886cad87c7ca

The logs are much clearer without that nagging error. Basically, background 
threads in the driver were still trying to reconnect after calls to 
{{Cluster.close()}}.
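
The failure mode can be sketched in isolation: a reconnect task submitted to an executor that has already been shut down is rejected. This is a minimal illustration, not the driver's actual code; {{ReconnectAfterClose}} and its method names are hypothetical stand-ins.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the race behind the WARN: a scheduled reconnect task is handed
// to an executor that close() has already shut down, so it is rejected.
public class ReconnectAfterClose {

    static boolean reconnectRejected() {
        // Hypothetical stand-in for the driver's background reconnect scheduler.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.shutdownNow(); // roughly what closing the cluster does to its pools
        try {
            // A lingering reconnect task fires after close...
            scheduler.schedule(() -> { /* would open a new connection here */ },
                    100, TimeUnit.MILLISECONDS);
            return false;
        } catch (RejectedExecutionException expected) {
            // ...and is rejected; Netty surfaces this as the force-closed channel WARN.
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("reconnect rejected after close: " + reconnectRejected());
    }
}
```

The fix in the linked commit addresses this class of problem by preventing reconnect attempts once the cluster has been closed.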

> Investigate WARN Message in Driver
> ----------------------------------
>
>                 Key: TINKERPOP3-746
>                 URL: https://issues.apache.org/jira/browse/TINKERPOP3-746
>             Project: TinkerPop 3
>          Issue Type: Improvement
>          Components: driver
>            Reporter: stephen mallette
>            Assignee: stephen mallette
>            Priority: Minor
>             Fix For: 3.0.0.GA
>
>
> Seeing this message in the logs when running integration tests:
> {code}
> [WARN] Slf4JLogger - Force-closing a channel whose registration task was not accepted by an event loop: [id: 0x9bec2f52]
> java.util.concurrent.RejectedExecutionException: event executor terminated
>       at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:707)
>       at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:299)
>       at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:690)
>       at io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:421)
>       at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:60)
>       at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:48)
>       at io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:64)
>       at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:315)
>       at io.netty.bootstrap.Bootstrap.doConnect(Bootstrap.java:134)
>       at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:116)
>       at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:97)
>       at org.apache.tinkerpop.gremlin.driver.Connection.<init>(Connection.java:89)
>       at org.apache.tinkerpop.gremlin.driver.ConnectionPool.tryReconnect(ConnectionPool.java:382)
>       at org.apache.tinkerpop.gremlin.driver.ConnectionPool$$Lambda$164/955416092.apply(Unknown Source)
>       at org.apache.tinkerpop.gremlin.driver.Host.lambda$makeUnavailable$13(Host.java:75)
>       at org.apache.tinkerpop.gremlin.driver.Host$$Lambda$165/134537957.run(Unknown Source)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> {code}
> Don't think it is doing anything "bad" at the moment, but it should be cleaned up if it can be.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)