[
https://issues.apache.org/jira/browse/TINKERPOP-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17065772#comment-17065772
]
Florian Hockmann commented on TINKERPOP-2288:
---------------------------------------------
I got a first version working that should solve the issue described here. It
mainly includes these two improvements:
* Closed connections are repaired automatically in the background.
* A retry policy with Polly for the case that no connection is available.
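To illustrate the idea behind the second improvement, here is a minimal sketch of such a retry policy in Python (the branch itself uses Polly in C#; the exception name, attempt count, and backoff delays below are assumptions for illustration only, not Gremlin.Net API):

```python
import time

class NoConnectionAvailableException(Exception):
    """Hypothetical stand-in for the driver's 'no connection available' error."""

def submit_with_retry(submit, max_attempts=4, base_delay=0.1):
    """Call submit(), retrying with exponential backoff while the pool is exhausted."""
    for attempt in range(max_attempts):
        try:
            return submit()
        except NoConnectionAvailableException:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Wait base_delay, 2*base_delay, 4*base_delay, ... before retrying,
            # giving the background repair a chance to restore connections.
            time.sleep(base_delay * 2 ** attempt)
```

This is the same wait-and-retry pattern a Polly policy would express declaratively; making max_attempts and base_delay configurable matches the open point about configurability below.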
It would be really helpful if someone who encountered the problem could test
whether this solves the problem with Cosmos DB. [~dzmitry.lahoda]
[~SomeOneElse] [~jmondal] maybe?
The version can be found in [this feature
branch|https://github.com/apache/tinkerpop/commits/TINKERPOP-2288].
I would still consider the feature branch work in progress, as the retry policy
should be configurable and we also need some docs for it. I am also not sure
yet whether we want to add a dependency on Polly just for this retry policy or
whether we should rather implement something ourselves. Nevertheless, the
branch should already include the basic functionality for this, so it would be
great to get some feedback.
> Get ConnectionPoolBusyException and then ServerUnavailableExceptions
> --------------------------------------------------------------------
>
> Key: TINKERPOP-2288
> URL: https://issues.apache.org/jira/browse/TINKERPOP-2288
> Project: TinkerPop
> Issue Type: Bug
> Components: dotnet
> Affects Versions: 3.4.1
> Environment: Gremlin.Net 3.4.1
> Microsoft.NetCore.App 2.2
> Azure Cosmos DB
> Reporter: patrice huot
> Priority: Critical
>
> I am using the .NET Core Gremlin API to query Cosmos DB.
> From time to time we get an error saying that no connection is available, and
> then the server becomes unavailable. When this occurs, we need to restart the
> server. It looks like the connections are not released properly and become
> unavailable forever.
> We have configured the pool size to 50 and the MaxInProcessPerConnection to
> 32 (which I guess should be sufficient).
> To diagnose the issue, is there a way to access diagnostic information on the
> connection pool in order to know how many connections are open and how many
> processes are running in each connection?
> I would like to monitor the connection usage to see if the connections are
> about to be exhausted, and whether the number of used connections keeps
> increasing or the connection lease is released when the queries complete.
> As a workaround, is there a way we can access this information from the code
> so that I can catch those scenarios and create logic that re-initializes the
> connection pool?
>
>
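The workaround asked about at the end of the issue could look roughly like the following Python sketch: wrap the client, catch the pool-exhaustion error, and rebuild the client (and thus its pool) before retrying once. The Client class, its factory, and the exception name are all hypothetical stand-ins for illustration, not the Gremlin.Net API:

```python
class ConnectionPoolBusyException(Exception):
    """Hypothetical stand-in for the error raised when the pool is exhausted."""

class Client:
    """Hypothetical stand-in for a Gremlin client that owns a connection pool."""
    def __init__(self, healthy=True):
        self.healthy = healthy
    def submit(self, query):
        if not self.healthy:
            raise ConnectionPoolBusyException()
        return f"result of {query}"

class ReconnectingClient:
    """Wrap a client and rebuild it once when the pool appears exhausted."""
    def __init__(self, factory):
        self._factory = factory
        self._client = factory()
    def submit(self, query):
        try:
            return self._client.submit(query)
        except ConnectionPoolBusyException:
            # Re-initialize the connection pool by creating a fresh client,
            # then retry the query a single time.
            self._client = self._factory()
            return self._client.submit(query)
```

The automatic background repair in the feature branch is meant to make exactly this kind of application-level pool rebuilding unnecessary.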
--
This message was sent by Atlassian Jira
(v8.3.4#803005)