[
https://issues.apache.org/jira/browse/PHOENIX-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15447291#comment-15447291
]
ASF GitHub Bot commented on PHOENIX-3189:
-----------------------------------------
Github user joshelser commented on the issue:
https://github.com/apache/phoenix/pull/191
> This solution is not thread safe and will not allow multiple instances of a
> driver to be safely created on different threads in the JVM.
Yes, that's why I directed you over here, bud. That wasn't an initial goal
of these changes.
> With that said, I am not sure that you can support multiple users and
> renewals with the way UGI works.
Right, you're catching on to what I was pointing out. This is something that
you should be managing inside of Storm; we cannot do it effectively inside of
Phoenix. We can only put a band-aid on top.
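To make that concrete, here is a minimal sketch (not Phoenix or Storm code; the
principal, keytab path, and ZooKeeper quorum are placeholders) of what managing
the login inside the application could look like: the client does its own keytab
login via UGI and opens Phoenix connections inside {{doAs()}}, keeping renewal in
its own hands instead of putting principal/keytab in the JDBC URL.
{code:java}
import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.hadoop.security.UserGroupInformation;

// Illustrative only: an application-managed Kerberos login for Phoenix access.
// Assumes a Hadoop Configuration with Kerberos auth has already been applied
// via UserGroupInformation.setConfiguration(conf). The principal, keytab, and
// quorum values below are placeholders.
public class AppManagedLogin {
    public static Connection connect() throws Exception {
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                "storm-user@EXAMPLE.COM", "/etc/security/keytabs/storm.keytab");
        // Renewal also stays in the application's hands, e.g. by periodically
        // calling ugi.checkTGTAndReloginFromKeytab() from a background thread.
        return ugi.doAs((PrivilegedExceptionAction<Connection>) () ->
                DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181:/hbase-secure"));
    }
}
{code}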
> Do we want the Phoenix driver to allow multiple instances to be instantiated
> in the same JVM, each with a different logged-in user?
The only change I think we can make here is to prevent multiple clients from
doing what you're suggesting, and hope they don't shoot themselves in the foot.
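For illustration only, a hedged sketch of what such a guard might look like
(this is not the actual Phoenix patch; the class and method names are made up):
before doing a keytab login for a new connection, check whether a different
Kerberos user is already logged in to the JVM and fail fast rather than silently
replacing it.
{code:java}
import java.io.IOException;
import java.sql.SQLException;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical guard (illustrative, not the committed fix): refuse a keytab
// login when a different Kerberos user already holds the JVM-wide UGI state.
public final class SingleUserGuard {
    private SingleUserGuard() {}

    public static void checkPrincipal(String requestedPrincipal) throws SQLException {
        try {
            UserGroupInformation current = UserGroupInformation.getCurrentUser();
            if (current.hasKerberosCredentials()
                    && !current.getUserName().equals(requestedPrincipal)) {
                throw new SQLException("Already logged in as " + current.getUserName()
                        + "; refusing to re-login as " + requestedPrincipal
                        + " in the same JVM");
            }
        } catch (IOException e) {
            throw new SQLException("Unable to determine the current Kerberos user", e);
        }
    }
}
{code}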
> HBase/ZooKeeper connection leaks when providing principal/keytab in JDBC url
> ----------------------------------------------------------------------------
>
> Key: PHOENIX-3189
> URL: https://issues.apache.org/jira/browse/PHOENIX-3189
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.8.0
> Reporter: Josh Elser
> Assignee: Josh Elser
> Priority: Blocker
> Fix For: 4.9.0, 4.8.1
>
>
> We've been doing some more testing after PHOENIX-3126 and, with the help of
> [~arpitgupta] and [~harsha_ch], we've found an issue in a test between Storm
> and Phoenix.
> Storm was configured to create a JDBC Bolt, specifying the principal and
> keytab in the JDBC URL, relying on PhoenixDriver to do the Kerberos login for
> them. After PHOENIX-3126, a ZK server blacklisted the host running the bolt,
> and we observed that there were over 140 active ZK threads in the JVM.
> This results in a subtle change: every time the client asks for a new
> Connection, we end up with a new UGI instance (because
> {{ConnectionQueryServicesImpl#openConnection()}} always does a new login).
> If users are correctly caching Connections, there isn't an issue (as best as
> I can presently tell); a minimal caching sketch follows this description.
> However, if users rely on getting the same connection every time (the
> pre-PHOENIX-3126 behavior), they will saturate their local JVM with
> connections and crash.
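A minimal sketch of the caching pattern the description above assumes (the
quorum, principal, and keytab path are placeholders): log in once via the
principal/keytab in the JDBC URL, then cache and reuse a single Phoenix
Connection instead of asking the driver for a new one per operation.
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Illustrative client-side mitigation: hand out one cached Connection rather
// than creating a new one per bolt tuple, since each new Connection created
// from a principal/keytab URL triggers a fresh login (and, per this issue,
// new ZK/HBase resources).
public final class CachedPhoenixConnection {
    private static final String URL =
            "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase-secure"
            + ":client@EXAMPLE.COM:/etc/security/keytabs/client.keytab";

    private static Connection connection;

    public static synchronized Connection get() throws SQLException {
        if (connection == null || connection.isClosed()) {
            connection = DriverManager.getConnection(URL);
        }
        return connection;
    }
}
{code}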