[ https://issues.apache.org/jira/browse/TINKERPOP-2390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152504#comment-17152504 ]

Carlos commented on TINKERPOP-2390:
-----------------------------------

From what I've read so far, it might be related to TINKERPOP-2369.

> Connections not released when closed abruptly in the server side
> ----------------------------------------------------------------
>
>                 Key: TINKERPOP-2390
>                 URL: https://issues.apache.org/jira/browse/TINKERPOP-2390
>             Project: TinkerPop
>          Issue Type: Bug
>          Components: dotnet
>    Affects Versions: 3.4.7
>         Environment: TinkerPop 3.4.7 + JanusGraph 0.5.1 (optional openCypher 1.0.0)
>            Reporter: Carlos
>            Priority: Major
>
> We have developed a web service that queries Gremlin Server (JanusGraph 
> 0.5.1) using the .NET driver. With the openCypher plugin we have observed a 
> behaviour where the server becomes completely blocked after a timeout on 
> the server side. We thought this might be related to 
> https://issues.apache.org/jira/browse/TINKERPOP-2288, so we have moved our 
> driver to the master version (3.4-dev, which includes the PR that fixes 
> that issue). However, when a timeout occurs (always on the server side, 
> which is the one raising the exception), a large number of connections get 
> stuck in CLOSE_WAIT status and the server becomes unusable. 
> I've been digging through other bugs and issues, and from what I've read, 
> similar behaviour was reported against Cosmos DB (although in that case it 
> seemed to be caused by connection leaks, whereas here it is triggered by 
> the timeout). We have traced the problem down to the driver itself after 
> isolating all the components involved: optimizing the Cypher query so that 
> it no longer times out leaves everything working fine, while forcing the 
> timeout from pure Gremlin reproduces the behaviour. 
> We have set the connection pool params to 16 / 4096, since we expect quite 
> a high concurrency load. 
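
Below is a minimal sketch (not part of the original report) of how a pool like 
the one described above might be configured with Gremlin.Net, and how a 
server-side timeout can be forced to reproduce the CLOSE_WAIT build-up. It 
assumes the "16 / 4096" values map to PoolSize and MaxInProcessPerConnection; 
the host, port and the deliberately slow Thread.sleep script are illustrative 
assumptions, not details taken from the report.

using System;
using System.Threading.Tasks;
using Gremlin.Net.Driver;

public static class TimeoutRepro
{
    public static async Task Main()
    {
        // Assumed endpoint; the report does not state host or port.
        var server = new GremlinServer("localhost", 8182);

        // Assumed mapping of the reported "16 / 4096" pool parameters.
        var poolSettings = new ConnectionPoolSettings
        {
            PoolSize = 16,
            MaxInProcessPerConnection = 4096
        };

        using (var client = new GremlinClient(server, connectionPoolSettings: poolSettings))
        {
            try
            {
                // A deliberately slow script so the server's evaluationTimeout fires
                // and the exception is raised on the server side, not on the client.
                await client.SubmitAsync<object>("Thread.sleep(10000); g.V().count()");
            }
            catch (Exception e)
            {
                // After the server-side timeout, inspect the sockets on the server
                // (e.g. with netstat) for connections left in CLOSE_WAIT.
                Console.WriteLine("Server-side failure: " + e.Message);
            }
        }
    }
}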



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
