[jira] [Commented] (PHOENIX-1647) Correctly return that Phoenix supports schema name references in DatabaseMetaData

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447691#comment-15447691
 ] 

ASF GitHub Bot commented on PHOENIX-1647:
-

Github user kliewkliew closed the pull request at:

https://github.com/apache/phoenix/pull/204


> Correctly return that Phoenix supports schema name references in 
> DatabaseMetaData
> -
>
> Key: PHOENIX-1647
> URL: https://issues.apache.org/jira/browse/PHOENIX-1647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.1
> Environment: Phoenix driver 4.1.1
> HBase 98.9
> Hadoop 2
>Reporter: suraj misra
>Assignee: Kevin Liew
>  Labels: Newbie
> Fix For: 4.9.0, 4.8.1
>
>
> I am able to execute queries using fully qualified table names. For example:
> UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')
> But when I look at the Phoenix driver implementation, I can see that the 
> DatabaseMetaData.supportsSchemasInDataManipulation method always returns false.
> As per the JDBC documentation, this method retrieves whether a schema name can 
> be used in a data manipulation statement. But as the example above shows, I can 
> execute DML statements with schema names, along with other statements.
> Could someone please let me know if there is any specific reason to keep it 
> as false?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1647) Correctly return that Phoenix supports schema name references in DatabaseMetaData

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447692#comment-15447692
 ] 

ASF GitHub Bot commented on PHOENIX-1647:
-

Github user kliewkliew commented on the issue:

https://github.com/apache/phoenix/pull/204
  

https://github.com/apache/phoenix/commit/d873c2ffd1e539ecd56858c82f0ba2d23e877cf9


> Correctly return that Phoenix supports schema name references in 
> DatabaseMetaData
> -
>
> Key: PHOENIX-1647
> URL: https://issues.apache.org/jira/browse/PHOENIX-1647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.1
> Environment: Phoenix driver 4.1.1
> HBase 98.9
> Hadoop 2
>Reporter: suraj misra
>Assignee: Kevin Liew
>  Labels: Newbie
> Fix For: 4.9.0, 4.8.1
>
>
> I am able to execute queries using fully qualified table names. For example:
> UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')
> But when I look at the Phoenix driver implementation, I can see that the 
> DatabaseMetaData.supportsSchemasInDataManipulation method always returns false.
> As per the JDBC documentation, this method retrieves whether a schema name can 
> be used in a data manipulation statement. But as the example above shows, I can 
> execute DML statements with schema names, along with other statements.
> Could someone please let me know if there is any specific reason to keep it 
> as false?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #204: PHOENIX-1647 Fully qualified tablename query supp...

2016-08-29 Thread kliewkliew
Github user kliewkliew closed the pull request at:

https://github.com/apache/phoenix/pull/204


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix issue #204: PHOENIX-1647 Fully qualified tablename query support in ...

2016-08-29 Thread kliewkliew
Github user kliewkliew commented on the issue:

https://github.com/apache/phoenix/pull/204
  

https://github.com/apache/phoenix/commit/d873c2ffd1e539ecd56858c82f0ba2d23e877cf9


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3193) Tracing UI cleanup - final tasks before GSoC pull request

2016-08-29 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447668#comment-15447668
 ] 

Nishani  commented on PHOENIX-3193:
---

Hi,
When Jetty version 7 is used, the issue is resolved. I am working on a
better solution.

Thanks.





-- 
Best Regards,
Ayola Jayamaha
http://ayolajayamaha.blogspot.com/


> Tracing UI cleanup - final tasks before GSoC pull request
> -
>
> Key: PHOENIX-3193
> URL: https://issues.apache.org/jira/browse/PHOENIX-3193
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Mujtaba Chohan
>Assignee: Nishani 
>
> Points from the GSoC presentation on tracing improvements:
> *Tracing UI*
> * Remove line chart 
> * In the list page, run a query with description, start_time, and 
> (end_time-start_time) duration from T where trace_id = ? (see the SQL sketch 
> below)
> * More space for descriptions on bar chart. Wrap if necessary
> * Labels for the X axis on the timeline sometimes start again from 0; if the 
> X axis is in seconds, it should not roll over after 60 seconds unless minutes 
> are also shown
> * X-axis labeled as Node on various charts, but should be Percentage
> *Zipkin*
> * Flip zipkin chart on vertical axis with arrows going other way. So start 
> from the top level root on the leftmost side and work toward children on the 
> right.
> * Ask zipkin community if there's a way to tell it that date/time is in 
> milliseconds.
> *Overall*
> * Please put together a pull request to the Phoenix project to add the zipkin 
> work you've done to the open source project. Ideally, include the zipkin work 
> in the phoenix-tracing module and call it phoenix-tracing. Only if there is 
> some major hurdle, create a new module.
> * Test with trace_ids that have multiple spans with duration (end_time - 
> start_time) > 5 ms and verify that the UI and Zipkin output show the correct 
> corresponding timeline
>   
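A sketch of the list-page query described above, in the document's SQL code style; T stands in for the tracing stats table, whose actual name is not given in this thread:

{code:sql}
-- Sketch only: T is the placeholder used above for the tracing stats table,
-- and ? is bound to the selected trace_id.
SELECT description,
       start_time,
       (end_time - start_time) AS duration
FROM T
WHERE trace_id = ?;
{code}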



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-08-29 Thread prakul agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447653#comment-15447653
 ] 

prakul agarwal commented on PHOENIX-3210:
-

[~jamestaylor] I tried both suggestions: DECIMAL without scale and precision, and 
using a constant like 123e2. Neither reproduces the error. I'm trying to trace 
which values are parsed as Double. Otherwise I'll submit a patch with the fixes 
you suggested.

> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447610#comment-15447610
 ] 

James Taylor commented on PHOENIX-3210:
---

Actually, to get a double literal, you need to use e notation. Try a constant 
like 123e2.
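A sketch of the suggested repro attempt; the table T and its columns K (INTEGER PRIMARY KEY) and V (DECIMAL) are hypothetical, mirroring the shapes tried elsewhere in this thread:

{code:sql}
-- Per the comment above, e-notation such as 123e2 yields a double literal;
-- upserting it into a DECIMAL column is the scenario being probed.
UPSERT INTO T (K, V) VALUES (1, 123e2);
{code}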


> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1647) Correctly return that Phoenix supports schema name references in DatabaseMetaData

2016-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447416#comment-15447416
 ] 

Hudson commented on PHOENIX-1647:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1374 (See 
[https://builds.apache.org/job/Phoenix-master/1374/])
PHOENIX-1647 Correctly return that Phoenix supports schema name (mujtaba: rev 
d873c2ffd1e539ecd56858c82f0ba2d23e877cf9)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java


> Correctly return that Phoenix supports schema name references in 
> DatabaseMetaData
> -
>
> Key: PHOENIX-1647
> URL: https://issues.apache.org/jira/browse/PHOENIX-1647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.1
> Environment: Phoenix driver 4.1.1
> HBase 98.9
> Hadoop 2
>Reporter: suraj misra
>Assignee: Kevin Liew
>  Labels: Newbie
> Fix For: 4.9.0, 4.8.1
>
>
> I am able to execute queries using fully qualified table names. For example:
> UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')
> But when I look at the Phoenix driver implementation, I can see that the 
> DatabaseMetaData.supportsSchemasInDataManipulation method always returns false.
> As per the JDBC documentation, this method retrieves whether a schema name can 
> be used in a data manipulation statement. But as the example above shows, I can 
> execute DML statements with schema names, along with other statements.
> Could someone please let me know if there is any specific reason to keep it 
> as false?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3219) RuntimeExceptions should be caught and thrown as SQLException

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447364#comment-15447364
 ] 

James Taylor commented on PHOENIX-3219:
---

For the most part they do. If you have a specific case where that's not 
happening, sharing it would be much appreciated.

> RuntimeExceptions should be caught and thrown as SQLException
> -
>
> Key: PHOENIX-3219
> URL: https://issues.apache.org/jira/browse/PHOENIX-3219
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>
> Guarding against SQLExceptions is how one usually defends against unexpected 
> issues in a JDBC call. It would be nice if all exceptions thrown by JDBC 
> calls (e.g. PreparedStatement.execute) would actually get caught and then 
> rethrown as SQLExceptions.
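A minimal sketch of the wrapping pattern being requested, as it might look at the JDBC boundary; the class and method names are illustrative, not Phoenix's actual internals:

{code:java}
import java.sql.SQLException;

// Illustrative only: catch unexpected RuntimeExceptions at the JDBC boundary
// and surface them as SQLExceptions so callers guard against one exception type.
public class ExceptionWrappingExample {
    public boolean execute(String sql) throws SQLException {
        try {
            return doExecute(sql);
        } catch (SQLException e) {
            throw e;                // already the expected JDBC type
        } catch (RuntimeException e) {
            throw new SQLException("Unexpected error executing: " + sql, e);
        }
    }

    private boolean doExecute(String sql) {
        // Stand-in for the real execution path.
        return true;
    }
}
{code}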



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3189) HBase/ZooKeeper connection leaks when providing principal/keytab in JDBC url

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447350#comment-15447350
 ] 

ASF GitHub Bot commented on PHOENIX-3189:
-

Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/191
  
@dbahir I believe b5be8d8 would address your concerns. WDYT?


> HBase/ZooKeeper connection leaks when providing principal/keytab in JDBC url
> 
>
> Key: PHOENIX-3189
> URL: https://issues.apache.org/jira/browse/PHOENIX-3189
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 4.9.0, 4.8.1
>
>
> We've been doing some more testing after PHOENIX-3126 and, with the help of 
> [~arpitgupta] and [~harsha_ch], we've found an issue in a test between Storm 
> and Phoenix.
> Storm was configured to create a JDBC Bolt, specifying the principal and 
> keytab in the JDBC URL, relying on PhoenixDriver to do the Kerberos login for 
> them. After PHOENIX-3126, a ZK server blacklisted the host running the bolt, 
> and we observed that there were over 140 active ZK threads in the JVM.
> This results in a subtle change where every time the client tries to get a 
> new Connection, we end up getting a new UGI instance (because the 
> {{ConnectionQueryServicesImpl#openConnection()}} always does a new login).
> If users are correctly caching Connections, there isn't an issue (best as I 
> can presently tell). However, if users rely on getting the same 
> connection every time (the pre-PHOENIX-3126 behavior), they will saturate their local 
> JVM with connections and crash.
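A sketch of the client-side caching pattern the description refers to; the kerberized JDBC URL follows the quorum:port:rootNode:principal:keytab form with placeholder values, and the table T is hypothetical:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CachedConnectionExample {
    public static void main(String[] args) throws Exception {
        // Placeholder kerberized URL: quorum, port, root node, principal, keytab.
        String url = "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase:user@EXAMPLE.COM:/etc/security/keytabs/user.keytab";
        // Open one Connection and reuse it for the life of the worker, rather
        // than calling getConnection per operation, which (per this issue)
        // performed a fresh Kerberos login each time.
        try (Connection conn = DriverManager.getConnection(url)) {
            try (PreparedStatement ps = conn.prepareStatement("UPSERT INTO T VALUES (?, ?)")) {
                ps.setInt(1, 1);
                ps.setString(2, "value");
                ps.executeUpdate();
            }
            conn.commit();
        }
    }
}
{code}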



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #191: PHOENIX-3189 Perform Kerberos login before ConnectionInf...

2016-08-29 Thread joshelser
Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/191
  
@dbahir I believe b5be8d8 would address your concerns. WDYT?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (PHOENIX-3219) RuntimeExceptions should be caught and thrown as SQLException

2016-08-29 Thread Shehzaad Nakhoda (JIRA)
Shehzaad Nakhoda created PHOENIX-3219:
-

 Summary: RuntimeExceptions should be caught and thrown as 
SQLException
 Key: PHOENIX-3219
 URL: https://issues.apache.org/jira/browse/PHOENIX-3219
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.7.0
Reporter: Shehzaad Nakhoda


Guarding against SQLExceptions is how one usually defends against unexpected 
issues in a JDBC call. It would be nice if all exceptions thrown by JDBC calls 
(e.g. PreparedStatement.execute) would actually get caught and then rethrown as 
SQLExceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447318#comment-15447318
 ] 

James Taylor commented on PHOENIX-3210:
---

That's a good clue that it's through a view. Try DECIMAL without specifying 
precision or scale. If we can't repro it, the fix I indicated would still be 
valid to commit, IMO.

> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-08-29 Thread Shehzaad Nakhoda (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447299#comment-15447299
 ] 

Shehzaad Nakhoda commented on PHOENIX-3210:
---

I was trying to go through UpsertCompiler (and related code) to see what kind 
of literal value in an upsert statement would be parsed as a Double, but I 
wasn't successful on the first try.

> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-08-29 Thread Shehzaad Nakhoda (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447293#comment-15447293
 ] 

Shehzaad Nakhoda commented on PHOENIX-3210:
---

It's actually a view over a table that has just a couple of columns.

The columns in the view are:

("OPPORTUNITY_NAME" VARCHAR, "o.ID" VARCHAR, "TYPE" VARCHAR, "LEAD_SOURCE" 
VARCHAR, "AMOUNT" DECIMAL, "o.ISOCODE" VARCHAR, "EXP_AMOUNT" DECIMAL, 
"CLOSE_DATE" DATE, "NEXT_STEP" VARCHAR, "STAGE_NAME" VARCHAR, "PROBABILITY" 
DECIMAL, "FISCAL_QUARTER" VARCHAR, "AGE" DECIMAL, "AGE.ID" VARCHAR, 
"CREATED_DATE" TIME, "FULL_NAME" VARCHAR, "u.ID" VARCHAR, "ROLLUP_DESCRIPTION" 
VARCHAR, "ACCOUNT_NAME" VARCHAR, "a.ID" VARCHAR, "OWNER_ID" VARCHAR)

Unfortunately I still don't have the actual upsert statement that fails.


> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3189) HBase/ZooKeeper connection leaks when providing principal/keytab in JDBC url

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447291#comment-15447291
 ] 

ASF GitHub Bot commented on PHOENIX-3189:
-

Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/191
  
> This solution is not thread safe and will not allow multiple instances of a 
driver to be safely created on different threads in the JVM. 

Yes, that's why I directed you over here, bud. That wasn't an initial goal 
of these changes.

> With that said I am not sure that you can support multiple users and 
support renewals with the way the UGI works.

Right.. you're catching on to what I was pointing out. This is something 
that you should be managing inside of Storm. We cannot do this effectively 
inside of Phoenix. We can only put a bandaid on top.

> Do we want the Phoenix driver to allow multiple instances, each instantiated 
with a different logged-in user, in the same JVM?

The only change I think we can do here is to prevent multiple clients from 
doing what you're suggesting and hope they don't shoot themselves in the foot.


> HBase/ZooKeeper connection leaks when providing principal/keytab in JDBC url
> 
>
> Key: PHOENIX-3189
> URL: https://issues.apache.org/jira/browse/PHOENIX-3189
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 4.9.0, 4.8.1
>
>
> We've been doing some more testing after PHOENIX-3126 and, with the help of 
> [~arpitgupta] and [~harsha_ch], we've found an issue in a test between Storm 
> and Phoenix.
> Storm was configured to create a JDBC Bolt, specifying the principal and 
> keytab in the JDBC URL, relying on PhoenixDriver to do the Kerberos login for 
> them. After PHOENIX-3126, a ZK server blacklisted the host running the bolt, 
> and we observed that there were over 140 active ZK threads in the JVM.
> This results in a subtle change where every time the client tries to get a 
> new Connection, we end up getting a new UGI instance (because the 
> {{ConnectionQueryServicesImpl#openConnection()}} always does a new login).
> If users are correctly caching Connections, there isn't an issue (best as I 
> can presently tell). However, if users rely on getting the same 
> connection every time (the pre-PHOENIX-3126 behavior), they will saturate their local 
> JVM with connections and crash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #191: PHOENIX-3189 Perform Kerberos login before ConnectionInf...

2016-08-29 Thread joshelser
Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/191
  
> This solution is not thread safe and will not allow multiple instances of a 
driver to be safely created on different threads in the JVM. 

Yes, that's why I directed you over here, bud. That wasn't an initial goal 
of these changes.

> With that said I am not sure that you can support multiple users and 
support renewals with the way the UGI works.

Right.. you're catching on to what I was pointing out. This is something 
that you should be managing inside of Storm. We cannot do this effectively 
inside of Phoenix. We can only put a bandaid on top.

> Do we want the Phoenix driver to allow multiple instances, each instantiated 
with a different logged-in user, in the same JVM?

The only change I think we can do here is to prevent multiple clients from 
doing what you're suggesting and hope they don't shoot themselves in the foot.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (PHOENIX-1647) Correctly return that Phoenix supports schema name references in DatabaseMetaData

2016-08-29 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan resolved PHOENIX-1647.
-
Resolution: Fixed

> Correctly return that Phoenix supports schema name references in 
> DatabaseMetaData
> -
>
> Key: PHOENIX-1647
> URL: https://issues.apache.org/jira/browse/PHOENIX-1647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.1
> Environment: Phoenix driver 4.1.1
> HBase 98.9
> Hadoop 2
>Reporter: suraj misra
>Assignee: Kevin Liew
>  Labels: Newbie
> Fix For: 4.9.0, 4.8.1
>
>
> I am able to execute queries using fully qualified table names. For example:
> UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')
> But when I look at the Phoenix driver implementation, I can see that the 
> DatabaseMetaData.supportsSchemasInDataManipulation method always returns false.
> As per the JDBC documentation, this method retrieves whether a schema name can 
> be used in a data manipulation statement. But as the example above shows, I can 
> execute DML statements with schema names, along with other statements.
> Could someone please let me know if there is any specific reason to keep it 
> as false?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2645) Wildcard characters do not match newline characters

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447217#comment-15447217
 ] 

ASF GitHub Bot commented on PHOENIX-2645:
-

Github user kliewkliew commented on the issue:

https://github.com/apache/phoenix/pull/199
  
Sure, I'll pick up that JIRA. The unit tests evaluate both byte- and string-based 
regex (and compare the two results), but the integration tests only evaluate 
string-based regex.


> Wildcard characters do not match newline characters
> ---
>
> Key: PHOENIX-2645
> URL: https://issues.apache.org/jira/browse/PHOENIX-2645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix 4.7.0 on Calcite 1.5
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: newbie
> Fix For: 4.9.0, 4.8.1
>
>
> Wildcard characters do not match newline characters
> {code:sql}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table testnewline (pk 
> varchar(10) primary key)
> . . . . . . . . . . . . . . . . . . . . . . .> ;
> No rows affected (2.643 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into testnewline values 
> ('AA\nA');
> 1 row affected (0.079 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA%'
> . . . . . . . . . . . . . . . . . . . . . . .> ;
> ++
> | PK |
> ++
> | AA
> A   |
> ++
> 1 row selected (0.086 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA_A';
> ++
> | PK |
> ++
> ++
> No rows selected (0.053 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA%A';
> ++
> | PK |
> ++
> ++
> No rows selected (0.032 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-1874) Add tests for both JONI and Java regex usage

2016-08-29 Thread Kevin Liew (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Liew reassigned PHOENIX-1874:
---

Assignee: Kevin Liew

> Add tests for both JONI and Java regex usage
> 
>
> Key: PHOENIX-1874
> URL: https://issues.apache.org/jira/browse/PHOENIX-1874
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Kevin Liew
>  Labels: newbie
> Fix For: 4.9.0
>
>
> We should have tests that use both the JONI regex library and the Java regex 
> library. One easy way would be to do the following:
> - Pull out the regex related tests from VariableLengthPKIT into a new 
> abstract RegExIT test class
> - Derive two concrete classes from RegExIT: JoniRegExIT and JavaRegExIT
> - Set QueryServices.USE_BYTE_BASED_REGEX_ATTRIB to true in one and false in 
> the other. You'd do this by each having a static doSetup() method like this:
> {code}
> @BeforeClass
> @Shadower(classBeingShadowed = BaseHBaseManagedTimeIT.class)
> public static void doSetup() throws Exception {
> Map<String, String> props = Maps.newHashMapWithExpectedSize(3);
> 
> props.put(QueryServices.USE_BYTE_BASED_REGEX_ATTRIB,
> Boolean.toString(true));
> setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
> }
> {code}
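Following on from the snippet above, a sketch of the second concrete class with the flag flipped; everything except the property value mirrors the quoted doSetup():

{code:java}
@BeforeClass
@Shadower(classBeingShadowed = BaseHBaseManagedTimeIT.class)
public static void doSetup() throws Exception {
    Map<String, String> props = Maps.newHashMapWithExpectedSize(3);
    // JavaRegExIT variant: byte-based (JONI) regex disabled, so the Java
    // regex implementation is exercised instead.
    props.put(QueryServices.USE_BYTE_BASED_REGEX_ATTRIB, Boolean.toString(false));
    setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
}
{code}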



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[GitHub] phoenix issue #199: PHOENIX-2645 Wildcard characters do not match newline ch...

2016-08-29 Thread kliewkliew
Github user kliewkliew commented on the issue:

https://github.com/apache/phoenix/pull/199
  
Sure, I'll pick up that JIRA. The unit tests evaluate both byte- and string-based 
regex (and compare the two results), but the integration tests only evaluate 
string-based regex.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3201) Implement DAYOFWEEK and DAYOFYEAR built-in functions

2016-08-29 Thread prakul agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447199#comment-15447199
 ] 

prakul agarwal commented on PHOENIX-3201:
-

[~jamestaylor] Submitted a rebased patch.

> Implement DAYOFWEEK and DAYOFYEAR built-in functions
> 
>
> Key: PHOENIX-3201
> URL: https://issues.apache.org/jira/browse/PHOENIX-3201
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: prakul agarwal
>  Labels: newbie
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3201.patch
>
>
> DAYOFWEEK() as documented here: 
> https://docs.oracle.com/cd/B19188_01/doc/B15917/sqfunc.htm#i1005645
> DAYOFYEAR() as documented here: 
> https://docs.oracle.com/cd/B19188_01/doc/B15917/sqfunc.htm#i1005676
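A usage sketch of the two proposed functions; the table and column names are hypothetical, and the expected ranges follow the Oracle-style semantics referenced above (day of week as 1-7, day of year as 1-366):

{code:sql}
-- Hypothetical table EVENTS(ID INTEGER PRIMARY KEY, CREATED DATE).
SELECT ID,
       DAYOFWEEK(CREATED) AS dow,   -- 1..7
       DAYOFYEAR(CREATED) AS doy    -- 1..366
FROM EVENTS;
{code}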



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3201) Implement DAYOFWEEK and DAYOFYEAR built-in functions

2016-08-29 Thread prakul agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

prakul agarwal updated PHOENIX-3201:

Attachment: PHOENIX-3201.patch

> Implement DAYOFWEEK and DAYOFYEAR built-in functions
> 
>
> Key: PHOENIX-3201
> URL: https://issues.apache.org/jira/browse/PHOENIX-3201
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: prakul agarwal
>  Labels: newbie
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3201.patch
>
>
> DAYOFWEEK() as documented here: 
> https://docs.oracle.com/cd/B19188_01/doc/B15917/sqfunc.htm#i1005645
> DAYOFYEAR() as documented here: 
> https://docs.oracle.com/cd/B19188_01/doc/B15917/sqfunc.htm#i1005676



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3201) Implement DAYOFWEEK and DAYOFYEAR built-in functions

2016-08-29 Thread prakul agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

prakul agarwal updated PHOENIX-3201:

Attachment: (was: PHOENIX-3201.patch)

> Implement DAYOFWEEK and DAYOFYEAR built-in functions
> 
>
> Key: PHOENIX-3201
> URL: https://issues.apache.org/jira/browse/PHOENIX-3201
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: prakul agarwal
>  Labels: newbie
> Fix For: 4.9.0
>
>
> DAYOFWEEK() as documented here: 
> https://docs.oracle.com/cd/B19188_01/doc/B15917/sqfunc.htm#i1005645
> DAYOFYEAR() as documented here: 
> https://docs.oracle.com/cd/B19188_01/doc/B15917/sqfunc.htm#i1005676



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] which 4.8 branches will be active?

2016-08-29 Thread James Taylor
The "why" would because no one needs. If there are users consuming it, then
I agree.

On Mon, Aug 29, 2016 at 2:55 PM, Enis Söztutar  wrote:

> We have explicitly decided to do the 4.8 release supporting all 4 of 0.98,
> 1.0, 1.1 and 1.2.
>
> Why would we even consider dropping support for patch releases? Having a
> policy for keeping the supported hbase branches supported throughout the
> patch releases makes the process simpler. Preventing upgrade to 4.8.1 for
> HBase-1.0 and HBase-1.1 users will not look good at all.
>
> I think we can decide the base hbase versions on major version boundaries.
>
> Enis
>
> On Thu, Aug 25, 2016 at 1:39 AM, Andrew Purtell 
> wrote:
>
> > According to the responses received so far on our version usage survey,
> > HBase 1.1 is the most popular, followed by 1.2, then 0.98.
> >
> > > On Aug 25, 2016, at 12:43 AM, James Taylor 
> > wrote:
> > >
> > > I propose we only continue releases for HBase 0.98 and 1.2, both for
> 4.8
> > > patch releases as well as 4.9 minor releases.
> > >
> > > Thoughts?
> >
>


Re: [DISCUSS] which 4.8 branches will be active?

2016-08-29 Thread Enis Söztutar
We have explicitly decided to do the 4.8 release supporting all 4 of 0.98,
1.0, 1.1 and 1.2.

Why would we even consider dropping support for patch releases? Having a
policy for keeping the supported hbase branches supported throughout the
patch releases makes the process simpler. Preventing upgrade to 4.8.1 for
HBase-1.0 and HBase-1.1 users will not look good at all.

I think we can decide the base hbase versions on major version boundaries.

Enis

On Thu, Aug 25, 2016 at 1:39 AM, Andrew Purtell 
wrote:

> According to the responses received so far on our version usage survey,
> HBase 1.1 is the most popular, followed by 1.2, then 0.98.
>
> > On Aug 25, 2016, at 12:43 AM, James Taylor 
> wrote:
> >
> > I propose we only continue releases for HBase 0.98 and 1.2, both for 4.8
> > patch releases as well as 4.9 minor releases.
> >
> > Thoughts?
>


[jira] [Commented] (PHOENIX-2645) Wildcard characters do not match newline characters

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447145#comment-15447145
 ] 

ASF GitHub Bot commented on PHOENIX-2645:
-

Github user twdsilva commented on the issue:

https://github.com/apache/phoenix/pull/199
  
@kliewkliew +1. I will get this committed. Any chance you can pick up 
PHOENIX-1874 also? None of our tests set 
QueryServices.USE_BYTE_BASED_REGEX_ATTRIB to true currently.


> Wildcard characters do not match newline characters
> ---
>
> Key: PHOENIX-2645
> URL: https://issues.apache.org/jira/browse/PHOENIX-2645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix 4.7.0 on Calcite 1.5
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: newbie
> Fix For: 4.9.0, 4.8.1
>
>
> Wildcard characters do not match newline characters
> {code:sql}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table testnewline (pk 
> varchar(10) primary key)
> . . . . . . . . . . . . . . . . . . . . . . .> ;
> No rows affected (2.643 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into testnewline values 
> ('AA\nA');
> 1 row affected (0.079 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA%'
> . . . . . . . . . . . . . . . . . . . . . . .> ;
> ++
> | PK |
> ++
> | AA
> A   |
> ++
> 1 row selected (0.086 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA_A';
> ++
> | PK |
> ++
> ++
> No rows selected (0.053 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA%A';
> ++
> | PK |
> ++
> ++
> No rows selected (0.032 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #199: PHOENIX-2645 Wildcard characters do not match newline ch...

2016-08-29 Thread twdsilva
Github user twdsilva commented on the issue:

https://github.com/apache/phoenix/pull/199
  
@kliewkliew +1. I will get this committed. Any chance you can pick up 
PHOENIX-1874 also? None of our tests set 
QueryServices.USE_BYTE_BASED_REGEX_ATTRIB to true currently.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-08-29 Thread prakul agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447135#comment-15447135
 ] 

prakul agarwal edited comment on PHOENIX-3210 at 8/29/16 9:42 PM:
--

I tried creating tables with
{code}
CREATE TABLE T(K INTEGER PRIMARY KEY, V DECIMAL(10,2));
or CREATE TABLE T(K INTEGER PRIMARY KEY, V DOUBLE);
{code}
and upserting a bunch of possibilities like 
{code}
ps = UPSERT INTO T VALUES(1,?);
ps.setDouble(1,222.333);
or ps.setBigDecimal(1,BigDecimal.valueOf(100.333));
{code}


was (Author: prakul):
I tried creating tables with
{code}
CREATE TABLE T(K INTEGER PRIMARY KEY, V DECIMAL(10,2));
or CREATE TABLE T(K INTEGER PRIMARY KEY, V DOUBLE);
{code}
and upserting a bunch of possibilities like 
{code}
ps = UPSERT INTO T VALUES(1,?);
ps.setDouble(222.333);
or ps.setBigDecimal(1,BigDecimal.valueOf(100.333));
{code}

> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-08-29 Thread prakul agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447068#comment-15447068
 ] 

prakul agarwal commented on PHOENIX-3210:
-

[~shehzaadn] I have been unable to reproduce the issue. Can you find the table on 
which the query was executed so I can see how its columns are defined?

> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2474) Cannot round to a negative precision (to the left of the decimal)

2016-08-29 Thread Kevin Liew (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446897#comment-15446897
 ] 

Kevin Liew commented on PHOENIX-2474:
-

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify 
(ClientManagedTimeTests) on project phoenix-core: There was a timeout or other 
error in the fork
{noformat}

I'll resubmit the patch to re-run the tests.

> Cannot round to a negative precision (to the left of the decimal)
> -
>
> Key: PHOENIX-2474
> URL: https://issues.apache.org/jira/browse/PHOENIX-2474
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: function, newbie, phoenix
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2474.patch
>
>
> Query:
> {noformat}select ROUND(444.44, -2){noformat}
> Expected result:
> {noformat}400{noformat}
> Actual result:
> {noformat}444.44{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #191: PHOENIX-3189 Perform Kerberos login before ConnectionInf...

2016-08-29 Thread dbahir
Github user dbahir commented on the issue:

https://github.com/apache/phoenix/pull/191
  
This solution is not thread safe and will not allow multiple instances of a 
driver to be safely created on different threads in the JVM. 

This area should be protected, 
https://github.com/joshelser/phoenix/blob/d17a8d855dc4a2c8cff578dd26e14c6c2c13cc3a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixEmbeddedDriver.java#L351.

With that said I am not sure that you can support multiple users and 
support renewals with the way the UGI works.

If, in the same JVM, a driver is instantiated for User A and then another 
driver is instantiated for User B, the last call to loginUserFromKeytab will set 
the user information in the UGI.

loginUserFromKeytabAndReturnUGI can be used, which will preserve the 
original user info in the UGI, but I think it will not work correctly with renewal.

Do we want the Phoenix driver to allow multiple instances, each instantiated 
with a different logged-in user, in the same JVM?
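For reference, a sketch of the loginUserFromKeytabAndReturnUGI approach mentioned above, using the Hadoop UGI API; the principal, keytab path, and JDBC URL are placeholders, and as noted, ticket renewal would still need separate handling:

{code:java}
import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.hadoop.security.UserGroupInformation;

public class PerUserLoginSketch {
    public static Connection connectAs(String principal, String keytab) throws Exception {
        // Returns an isolated UGI instead of mutating the static login user,
        // so two drivers for different users do not clobber each other.
        UserGroupInformation ugi =
                UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab);
        return ugi.doAs((PrivilegedExceptionAction<Connection>) () ->
                DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181"));
    }
}
{code}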


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3189) HBase/ZooKeeper connection leaks when providing principal/keytab in JDBC url

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446895#comment-15446895
 ] 

ASF GitHub Bot commented on PHOENIX-3189:
-

Github user dbahir commented on the issue:

https://github.com/apache/phoenix/pull/191
  
This solution is not thread safe and will not allow multiple instances of a 
driver to be safely created on different threads in the JVM. 

This area should be protected, 
https://github.com/joshelser/phoenix/blob/d17a8d855dc4a2c8cff578dd26e14c6c2c13cc3a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixEmbeddedDriver.java#L351.

With that said I am not sure that you can support multiple users and 
support renewals with the way the UGI works.

If, in the same JVM, a driver is instantiated for User A and then another 
driver is instantiated for User B, the last call to loginUserFromKeytab will set 
the user information in the UGI.

loginUserFromKeytabAndReturnUGI can be used, which will preserve the 
original user info in the UGI, but I think it will not work correctly with renewal.

Do we want the Phoenix driver to allow multiple instances, each instantiated 
with a different logged-in user, in the same JVM?


> HBase/ZooKeeper connection leaks when providing principal/keytab in JDBC url
> 
>
> Key: PHOENIX-3189
> URL: https://issues.apache.org/jira/browse/PHOENIX-3189
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 4.9.0, 4.8.1
>
>
> We've been doing some more testing after PHOENIX-3126 and, with the help of 
> [~arpitgupta] and [~harsha_ch], we've found an issue in a test between Storm 
> and Phoenix.
> Storm was configured to create a JDBC Bolt, specifying the principal and 
> keytab in the JDBC URL, relying on PhoenixDriver to do the Kerberos login for 
> them. After PHOENIX-3126, a ZK server blacklisted the host running the bolt, 
> and we observed that there were over 140 active ZK threads in the JVM.
> This results in a subtle change where every time the client tries to get a 
> new Connection, we end up getting a new UGI instance (because the 
> {{ConnectionQueryServicesImpl#openConnection()}} always does a new login).
> If users are correctly caching Connections, there isn't an issue (best as I 
> can presently tell). However, if users rely on getting the same 
> connection every time (the pre-PHOENIX-3126 behavior), they will saturate their local 
> JVM with connections and crash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2641) Implicit wildcard in LIKE predicate search pattern

2016-08-29 Thread Kevin Liew (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446887#comment-15446887
 ] 

Kevin Liew commented on PHOENIX-2641:
-

Hi James, it is ready for review now.

> Implicit wildcard in LIKE predicate search pattern
> --
>
> Key: PHOENIX-2641
> URL: https://issues.apache.org/jira/browse/PHOENIX-2641
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix 4.7.0 on Calcite 1.5.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: newbie
>
> LIKE predicates have an implicit wildcard at the end of the search pattern
> i.e.
> {code:sql}select distinct at1.col2 from at1 group by at1.col2 having at1.col2 
> like '_'{code}
> will match every cell in col2 whereas it should only match single-character 
> cells.
> This affects both VARCHAR and CHAR.
> Note that selecting by pattern '__' (two single-character wildcards) works 
> properly for VARCHAR but not for CHAR (no result).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3189) HBase/ZooKeeper connection leaks when providing principal/keytab in JDBC url

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446848#comment-15446848
 ] 

ASF GitHub Bot commented on PHOENIX-3189:
-

Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/191
  
Thanks for pursuing this tricky issue, @joshelser. I think what you have 
here is definitely an improvement and should be pulled in for 4.8.1, but I do 
think we should look at doing a value-based equality check for User instead. 
There's a fair amount of overhead in what you're doing and Phoenix does not do 
connection pooling but relies on being able to quickly/cheaply get a 
connection. Would you have some cycles to investigate that a bit first?


> HBase/ZooKeeper connection leaks when providing principal/keytab in JDBC url
> 
>
> Key: PHOENIX-3189
> URL: https://issues.apache.org/jira/browse/PHOENIX-3189
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 4.9.0, 4.8.1
>
>
> We've been doing some more testing after PHOENIX-3126 and, with the help of 
> [~arpitgupta] and [~harsha_ch], we've found an issue in a test between Storm 
> and Phoenix.
> Storm was configured to create a JDBC Bolt, specifying the principal and 
> keytab in the JDBC URL, relying on PhoenixDriver to do the Kerberos login for 
> them. After PHOENIX-3126, a ZK server blacklisted the host running the bolt, 
> and we observed that there were over 140 active ZK threads in the JVM.
> This results in a subtle change where every time the client tries to get a 
> new Connection, we end up getting a new UGI instance (because the 
> {{ConnectionQueryServicesImpl#openConnection()}} always does a new login).
> If users are correctly caching Connections, there isn't an issue (best as I 
> can presently tell). However, if users rely on getting the same 
> connection every time (the pre-PHOENIX-3126 behavior), they will saturate their local 
> JVM with connections and crash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #191: PHOENIX-3189 Perform Kerberos login before ConnectionInf...

2016-08-29 Thread JamesRTaylor
Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/191
  
Thanks for pursuing this tricky issue, @joshelser. I think what you have 
here is definitely an improvement and should be pulled in for 4.8.1, but I do 
think we should look at doing a value-based equality check for User instead. 
There's a fair amount of overhead in what you're doing and Phoenix does not do 
connection pooling but relies on being able to quickly/cheaply get a 
connection. Would you have some cycles to investigate that a bit first?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2474) Cannot round to a negative precision (to the left of the decimal)

2016-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446834#comment-15446834
 ] 

Hadoop QA commented on PHOENIX-2474:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12826050/PHOENIX-2474.patch
  against master branch at commit 14dab2f40df0d09f48f9cabbaea897009f635914.
  ATTACHMENT ID: 12826050

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/544//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/544//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/544//console

This message is automatically generated.

> Cannot round to a negative precision (to the left of the decimal)
> -
>
> Key: PHOENIX-2474
> URL: https://issues.apache.org/jira/browse/PHOENIX-2474
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: function, newbie, phoenix
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2474.patch
>
>
> Query:
> {noformat}select ROUND(444.44, -2){noformat}
> Expected result:
> {noformat}400{noformat}
> Actual result:
> {noformat}444.44{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2641) Implicit wildcard in LIKE predicate search pattern

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446828#comment-15446828
 ] 

James Taylor commented on PHOENIX-2641:
---

[~kliew] - thanks for the pull request. Is it ready to be reviewed? I noticed 
you mentioned that it needs more testing.

> Implicit wildcard in LIKE predicate search pattern
> --
>
> Key: PHOENIX-2641
> URL: https://issues.apache.org/jira/browse/PHOENIX-2641
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix 4.7.0 on Calcite 1.5.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: newbie
>
> LIKE predicates have an implicit wildcard at the end of the search pattern
> i.e.
> {code:sql}select distinct at1.col2 from at1 group by at1.col2 having at1.col2 
> like '_'{code}
> will match every cell in col2 whereas it should only match single-character 
> cells.
> This affects both VARCHAR and CHAR.
> Note that selecting by pattern '__' (two single-character wildcards) works 
> properly for VARCHAR but not for CHAR (no result).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1647) Correctly return that Phoenix supports schema name references in DatabaseMetaData

2016-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1647:
--
Summary: Correctly return that Phoenix supports schema name references in 
DatabaseMetaData  (was: Fully qualified tablename query support in Phoenix)

> Correctly return that Phoenix supports schema name references in 
> DatabaseMetaData
> -
>
> Key: PHOENIX-1647
> URL: https://issues.apache.org/jira/browse/PHOENIX-1647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.1
> Environment: Phoenix driver 4.1.1
> HBase 98.9
> Hadoop 2
>Reporter: suraj misra
>Assignee: Kevin Liew
>  Labels: Newbie
> Fix For: 4.9.0, 4.8.1
>
>
> I am able to execute queries that use fully qualified table names. For 
> example:
> UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')
> But when I look at the Phoenix driver implementation, I can see that the 
> implementation of the DatabaseMetaData.supportsSchemasInDataManipulation method 
> always returns false.
> As per the JDBC documentation, this method retrieves whether a schema name can be 
> used in a data manipulation statement. But as you can see in the above example, I 
> can execute DML statements with schema names, along with other 
> statements. 
> Could someone please let me know if there is any specific reason to keep it 
> as false?
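For reference, a small sketch of how a client observes these flags through standard JDBC metadata; the connection URL is a placeholder:

{code:java}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class SchemaSupportCheck {
    public static void main(String[] args) throws Exception {
        // JDBC URL is illustrative; adjust to your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            DatabaseMetaData md = conn.getMetaData();
            // Query builders and BI tools consult these flags before qualifying table names.
            System.out.println("DML:   " + md.supportsSchemasInDataManipulation());
            System.out.println("DDL:   " + md.supportsSchemasInTableDefinitions());
            System.out.println("Index: " + md.supportsSchemasInIndexDefinitions());
        }
    }
}
{code}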



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3210) Exception trying to cast Double to BigDecimal in UpsertCompiler

2016-08-29 Thread Shehzaad Nakhoda (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446815#comment-15446815
 ] 

Shehzaad Nakhoda commented on PHOENIX-3210:
---

[~prakul] any update? 

> Exception trying to cast Double to BigDecimal in UpsertCompiler
> ---
>
> Key: PHOENIX-3210
> URL: https://issues.apache.org/jira/browse/PHOENIX-3210
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Shehzaad Nakhoda
>Assignee: prakul agarwal
>  Labels: SFDC
> Fix For: 4.9.0, 4.8.1
>
>
> We have an UPSERT statement that is resulting in this stack trace. 
> Unfortunately I can't get a hold of the actual Upsert statement since we 
> don't log it. 
> Cause0: java.lang.ClassCastException: java.lang.Double cannot be cast to 
> java.math.BigDecimal
>  Cause0-StackTrace: 
>   at 
> org.apache.phoenix.schema.types.PDecimal.isSizeCompatible(PDecimal.java:312)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:887)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:335)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:321)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1274)
>   at 
> phoenix.connection.ProtectedPhoenixStatement.executeUpdate(ProtectedPhoenixStatement.java:127)
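A minimal illustration (not the actual Phoenix fix) of why the cast in the stack trace fails and how an explicit conversion avoids it:

{code:java}
import java.math.BigDecimal;

public class DoubleToBigDecimal {
    public static void main(String[] args) {
        Object boundValue = 1.5d; // e.g. a parameter bound as java.lang.Double

        // Fails at runtime: java.lang.Double is not a java.math.BigDecimal.
        // BigDecimal bad = (BigDecimal) boundValue; // ClassCastException

        // Convert instead of casting.
        BigDecimal ok = boundValue instanceof BigDecimal
                ? (BigDecimal) boundValue
                : BigDecimal.valueOf(((Number) boundValue).doubleValue());
        System.out.println(ok); // 1.5
    }
}
{code}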



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3193) Tracing UI cleanup - final tasks before GSoC pull request

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446817#comment-15446817
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user onkarkadam7 commented on the issue:

https://github.com/apache/phoenix/pull/202
  
Hi, we are facing an issue when we try to start the Trace server:
#==
Log
#==
[root@dev02-slv-02 phoenix]# ./bin/traceserver.py
16/08/29 19:19:56 DEBUG util.Shell: setsid exited with exit code 0
16/08/29 19:19:56 DEBUG util.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.eclipse.jetty.util.log) via 
org.eclipse.jetty.util.log.Slf4jLog
16/08/29 19:19:56 DEBUG component.Container: Container 
org.eclipse.jetty.server.Server@73ad2d6 + SelectChannelConnector@0.0.0.0:8864 
as connector
16/08/29 19:19:57 DEBUG component.Container: Container 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} + 
org.eclipse.jetty.servlet.ErrorPageErrorHandler@185d8b6 as error
16/08/29 19:19:57 DEBUG component.Container: Container 
org.eclipse.jetty.server.Server@73ad2d6 + 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} as handler
16/08/29 19:19:57 DEBUG component.AbstractLifeCycle: starting 
org.eclipse.jetty.server.Server@73ad2d6
16/08/29 19:19:57 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/08/29 19:19:57 DEBUG component.Container: Container 
org.eclipse.jetty.server.Server@73ad2d6 + qtp1468357786{8<=0<=0/254,-1} as 
threadpool
16/08/29 19:19:57 DEBUG component.AbstractLifeCycle: starting 
o.e.j.w.WebAppContext{/,file:/src/main/webapp}
16/08/29 19:19:57 DEBUG webapp.WebAppContext: Thread Context classloader 
WebAppClassLoader=85777802@51cdd8a
16/08/29 19:19:57 DEBUG webapp.WebAppContext: Parent class loader: 
sun.misc.Launcher$AppClassLoader@4e25154f
16/08/29 19:19:57 DEBUG webapp.WebAppContext: Parent class loader: 
sun.misc.Launcher$ExtClassLoader@d44fc21
16/08/29 19:19:57 DEBUG webapp.WebAppContext: preConfigure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.WebInfConfiguration@23faf8f2
16/08/29 19:19:57 DEBUG webapp.WebInfConfiguration: Set temp dir 
/tmp/jetty-0.0.0.0-8864-webapp-_-any-
16/08/29 19:19:57 DEBUG webapp.WebAppContext: preConfigure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.WebXmlConfiguration@396f6598
16/08/29 19:19:57 DEBUG webapp.WebDescriptor: 
jar:file:/usr/lib/phoenix/phoenix-tracing-webapp-4.7.0-HBase-1.1-guavus-runnable.jar!/org/eclipse/jetty/webapp/webdefault.xml:
 Calculated metadatacomplete = True with version=2.5
16/08/29 19:19:57 DEBUG webapp.WebAppContext: preConfigure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.MetaInfConfiguration@7a765367
16/08/29 19:19:57 DEBUG webapp.WebAppContext: preConfigure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.FragmentConfiguration@76b0bfab
16/08/29 19:19:57 DEBUG webapp.WebAppContext: preConfigure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.JettyWebXmlConfiguration@17d677df
16/08/29 19:19:57 DEBUG webapp.WebAppContext: configure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.WebInfConfiguration@23faf8f2
16/08/29 19:19:57 DEBUG webapp.WebAppContext: configure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.WebXmlConfiguration@396f6598
16/08/29 19:19:57 DEBUG webapp.WebAppContext: configure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.MetaInfConfiguration@7a765367
16/08/29 19:19:57 DEBUG webapp.WebAppContext: configure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.FragmentConfiguration@76b0bfab
16/08/29 19:19:57 DEBUG webapp.WebAppContext: configure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.JettyWebXmlConfiguration@17d677df
16/08/29 19:19:57 DEBUG webapp.JettyWebXmlConfiguration: Configuring 
web-jetty.xml
16/08/29 19:19:57 DEBUG webapp.MetaData: metadata resolve 
o.e.j.w.WebAppContext{/,file:/src/main/webapp}
16/08/29 19:19:57 DEBUG webapp.WebAppClassLoader: loaded class 
org.eclipse.jetty.servlet.listener.ELContextCleaner
16/08/29 19:19:57 DEBUG webapp.WebAppClassLoader: loaded class 
org.eclipse.jetty.servlet.listener.ELContextCleaner from 
sun.misc.Launcher$AppClassLoader@4e25154f
16/08/29 19:19:57 DEBUG webapp.WebAppClassLoader: loaded class 
org.eclipse.jetty.servlet.listener.IntrospectorCleaner
16/08/29 19:19:57 DEBUG webapp.WebAppClassLoader: loaded class 
org.eclipse.jetty.servlet.listener.IntrospectorCleaner from 
sun.misc.Launcher$AppClassLoader@4e25154f
16/08/29 19:19:57 DEBUG servlet.ServletHandler: filterNameMap={}

[GitHub] phoenix issue #202: PHOENIX-3193 Tracing UI cleanup

2016-08-29 Thread onkarkadam7
Github user onkarkadam7 commented on the issue:

https://github.com/apache/phoenix/pull/202
  
Hi, we are facing an issue when we try to start the Trace server:
#==
Log
#==
[root@dev02-slv-02 phoenix]# ./bin/traceserver.py
16/08/29 19:19:56 DEBUG util.Shell: setsid exited with exit code 0
16/08/29 19:19:56 DEBUG util.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.eclipse.jetty.util.log) via 
org.eclipse.jetty.util.log.Slf4jLog
16/08/29 19:19:56 DEBUG component.Container: Container 
org.eclipse.jetty.server.Server@73ad2d6 + SelectChannelConnector@0.0.0.0:8864 
as connector
16/08/29 19:19:57 DEBUG component.Container: Container 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} + 
org.eclipse.jetty.servlet.ErrorPageErrorHandler@185d8b6 as error
16/08/29 19:19:57 DEBUG component.Container: Container 
org.eclipse.jetty.server.Server@73ad2d6 + 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} as handler
16/08/29 19:19:57 DEBUG component.AbstractLifeCycle: starting 
org.eclipse.jetty.server.Server@73ad2d6
16/08/29 19:19:57 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/08/29 19:19:57 DEBUG component.Container: Container 
org.eclipse.jetty.server.Server@73ad2d6 + qtp1468357786{8<=0<=0/254,-1} as 
threadpool
16/08/29 19:19:57 DEBUG component.AbstractLifeCycle: starting 
o.e.j.w.WebAppContext{/,file:/src/main/webapp}
16/08/29 19:19:57 DEBUG webapp.WebAppContext: Thread Context classloader 
WebAppClassLoader=85777802@51cdd8a
16/08/29 19:19:57 DEBUG webapp.WebAppContext: Parent class loader: 
sun.misc.Launcher$AppClassLoader@4e25154f
16/08/29 19:19:57 DEBUG webapp.WebAppContext: Parent class loader: 
sun.misc.Launcher$ExtClassLoader@d44fc21
16/08/29 19:19:57 DEBUG webapp.WebAppContext: preConfigure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.WebInfConfiguration@23faf8f2
16/08/29 19:19:57 DEBUG webapp.WebInfConfiguration: Set temp dir 
/tmp/jetty-0.0.0.0-8864-webapp-_-any-
16/08/29 19:19:57 DEBUG webapp.WebAppContext: preConfigure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.WebXmlConfiguration@396f6598
16/08/29 19:19:57 DEBUG webapp.WebDescriptor: 
jar:file:/usr/lib/phoenix/phoenix-tracing-webapp-4.7.0-HBase-1.1-guavus-runnable.jar!/org/eclipse/jetty/webapp/webdefault.xml:
 Calculated metadatacomplete = True with version=2.5
16/08/29 19:19:57 DEBUG webapp.WebAppContext: preConfigure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.MetaInfConfiguration@7a765367
16/08/29 19:19:57 DEBUG webapp.WebAppContext: preConfigure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.FragmentConfiguration@76b0bfab
16/08/29 19:19:57 DEBUG webapp.WebAppContext: preConfigure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.JettyWebXmlConfiguration@17d677df
16/08/29 19:19:57 DEBUG webapp.WebAppContext: configure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.WebInfConfiguration@23faf8f2
16/08/29 19:19:57 DEBUG webapp.WebAppContext: configure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.WebXmlConfiguration@396f6598
16/08/29 19:19:57 DEBUG webapp.WebAppContext: configure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.MetaInfConfiguration@7a765367
16/08/29 19:19:57 DEBUG webapp.WebAppContext: configure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.FragmentConfiguration@76b0bfab
16/08/29 19:19:57 DEBUG webapp.WebAppContext: configure 
o.e.j.w.WebAppContext{/,file:/src/main/webapp} with 
org.eclipse.jetty.webapp.JettyWebXmlConfiguration@17d677df
16/08/29 19:19:57 DEBUG webapp.JettyWebXmlConfiguration: Configuring 
web-jetty.xml
16/08/29 19:19:57 DEBUG webapp.MetaData: metadata resolve 
o.e.j.w.WebAppContext{/,file:/src/main/webapp}
16/08/29 19:19:57 DEBUG webapp.WebAppClassLoader: loaded class 
org.eclipse.jetty.servlet.listener.ELContextCleaner
16/08/29 19:19:57 DEBUG webapp.WebAppClassLoader: loaded class 
org.eclipse.jetty.servlet.listener.ELContextCleaner from 
sun.misc.Launcher$AppClassLoader@4e25154f
16/08/29 19:19:57 DEBUG webapp.WebAppClassLoader: loaded class 
org.eclipse.jetty.servlet.listener.IntrospectorCleaner
16/08/29 19:19:57 DEBUG webapp.WebAppClassLoader: loaded class 
org.eclipse.jetty.servlet.listener.IntrospectorCleaner from 
sun.misc.Launcher$AppClassLoader@4e25154f
16/08/29 19:19:57 DEBUG servlet.ServletHandler: filterNameMap={}
16/08/29 19:19:57 DEBUG servlet.ServletHandler: pathFilters=null
16/08/29 19:19:57 DEBUG servlet.ServletHandler: servletFilterMap=null
16/08/29 19:19:57 DEBUG servlet.ServletHandler: servletPathMap={/=default}
16/08/29 19:19:57 

[jira] [Updated] (PHOENIX-1647) Fully qualified tablename query support in Phoenix

2016-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1647:
--
Fix Version/s: 4.8.1

> Fully qualified tablename query support in Phoenix
> --
>
> Key: PHOENIX-1647
> URL: https://issues.apache.org/jira/browse/PHOENIX-1647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.1
> Environment: Phoenix driver 4.1.1
> HBase 98.9
> Hadoop 2
>Reporter: suraj misra
>Assignee: Kevin Liew
>  Labels: Newbie
> Fix For: 4.9.0, 4.8.1
>
>
> I am able to execute queries that use fully qualified table names. For 
> example:
> UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')
> But when I look at the Phoenix driver implementation, I can see that the 
> implementation of the DatabaseMetaData.supportsSchemasInDataManipulation method 
> always returns false.
> As per the JDBC documentation, this method retrieves whether a schema name can be 
> used in a data manipulation statement. But as you can see in the above example, I 
> can execute DML statements with schema names, along with other 
> statements. 
> Could someone please let me know if there is any specific reason to keep it 
> as false?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1647) Fully qualified tablename query support in Phoenix

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446814#comment-15446814
 ] 

James Taylor commented on PHOENIX-1647:
---

[~mujtabachohan] - would you mind committing this on behalf of [~kliew]?

> Fully qualified tablename query support in Phoenix
> --
>
> Key: PHOENIX-1647
> URL: https://issues.apache.org/jira/browse/PHOENIX-1647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.1
> Environment: Phoenix driver 4.1.1
> HBase 98.9
> Hadoop 2
>Reporter: suraj misra
>Assignee: Kevin Liew
>  Labels: Newbie
> Fix For: 4.9.0, 4.8.1
>
>
> I am able to execute queries that use fully qualified table names. For 
> example:
> UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')
> But when I look at the Phoenix driver implementation, I can see that the 
> implementation of the DatabaseMetaData.supportsSchemasInDataManipulation method 
> always returns false.
> As per the JDBC documentation, this method retrieves whether a schema name can be 
> used in a data manipulation statement. But as you can see in the above example, I 
> can execute DML statements with schema names, along with other 
> statements. 
> Could someone please let me know if there is any specific reason to keep it 
> as false?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2645) Wildcard characters do not match newline characters

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446805#comment-15446805
 ] 

James Taylor commented on PHOENIX-2645:
---

Would you mind reviewing and committing, [~tdsilva]?

> Wildcard characters do not match newline characters
> ---
>
> Key: PHOENIX-2645
> URL: https://issues.apache.org/jira/browse/PHOENIX-2645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix 4.7.0 on Calcite 1.5
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: newbie
> Fix For: 4.9.0, 4.8.1
>
>
> Wildcard characters do not match newline characters
> {code:sql}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table testnewline (pk 
> varchar(10) primary key)
> . . . . . . . . . . . . . . . . . . . . . . .> ;
> No rows affected (2.643 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into testnewline values 
> ('AA\nA');
> 1 row affected (0.079 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA%'
> . . . . . . . . . . . . . . . . . . . . . . .> ;
> ++
> | PK |
> ++
> | AA
> A   |
> ++
> 1 row selected (0.086 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA_A';
> ++
> | PK |
> ++
> ++
> No rows selected (0.053 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA%A';
> ++
> | PK |
> ++
> ++
> No rows selected (0.032 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> 
> {code}
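As an illustration of the underlying behaviour, assuming the LIKE pattern is evaluated through java.util.regex: '.' does not match line terminators unless DOTALL is set, so an anchored translation of 'AA_A' or 'AA%A' misses values containing newlines. This is a sketch of the general issue, not the Phoenix code path:

{code:java}
import java.util.regex.Pattern;

public class WildcardNewline {
    public static void main(String[] args) {
        String value = "AA\nA";

        // Without DOTALL, '.' does not match '\n', so the pattern misses the row.
        System.out.println(Pattern.compile("AA.A").matcher(value).matches());                 // false
        // With DOTALL, '.' also matches line terminators, which is what LIKE semantics need.
        System.out.println(Pattern.compile("AA.A", Pattern.DOTALL).matcher(value).matches()); // true
        System.out.println(Pattern.compile("AA.*A", Pattern.DOTALL).matcher(value).matches());// true
    }
}
{code}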



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2645) Wildcard characters do not match newline characters

2016-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2645:
--
Fix Version/s: 4.8.1

> Wildcard characters do not match newline characters
> ---
>
> Key: PHOENIX-2645
> URL: https://issues.apache.org/jira/browse/PHOENIX-2645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix 4.7.0 on Calcite 1.5
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: newbie
> Fix For: 4.9.0, 4.8.1
>
>
> Wildcard characters do not match newline characters
> {code:sql}
> 0: jdbc:phoenix:thin:url=http://localhost:876> create table testnewline (pk 
> varchar(10) primary key)
> . . . . . . . . . . . . . . . . . . . . . . .> ;
> No rows affected (2.643 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> upsert into testnewline values 
> ('AA\nA');
> 1 row affected (0.079 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA%'
> . . . . . . . . . . . . . . . . . . . . . . .> ;
> ++
> | PK |
> ++
> | AA
> A   |
> ++
> 1 row selected (0.086 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA_A';
> ++
> | PK |
> ++
> ++
> No rows selected (0.053 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> select * from testnewline 
> where pk like 'AA%A';
> ++
> | PK |
> ++
> ++
> No rows selected (0.032 seconds)
> 0: jdbc:phoenix:thin:url=http://localhost:876> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3201) Implement DAYOFWEEK and DAYOFYEAR built-in functions

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446801#comment-15446801
 ] 

James Taylor commented on PHOENIX-3201:
---

[~prakul] - can you give [~samarthjain] a rebased patch please?

> Implement DAYOFWEEK and DAYOFYEAR built-in functions
> 
>
> Key: PHOENIX-3201
> URL: https://issues.apache.org/jira/browse/PHOENIX-3201
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: prakul agarwal
>  Labels: newbie
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3201.patch
>
>
> DAYOFWEEK() as documented here: 
> https://docs.oracle.com/cd/B19188_01/doc/B15917/sqfunc.htm#i1005645
> DAYOFYEAR() as documented here: 
> https://docs.oracle.com/cd/B19188_01/doc/B15917/sqfunc.htm#i1005676



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446773#comment-15446773
 ] 

ASF GitHub Bot commented on PHOENIX-3216:
-

Github user dbahir commented on the issue:

https://github.com/apache/phoenix/pull/203
  
The HBase renewal implementation is similar to the HDFS one.

https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java#L658

Thanks for your comments; I will look at your changes and see where these 
changes can fit in.




> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
> Fix For: 4.9.0, 4.8.1
>
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #203: [PHOENIX-3216] Kerberos ticket is not renewed when using...

2016-08-29 Thread dbahir
Github user dbahir commented on the issue:

https://github.com/apache/phoenix/pull/203
  
The HBase renewal implementation is similar to the HDFS one.

https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientImpl.java#L658

Thanks for your comments; I will look at your changes and see where these 
changes can fit in.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2474) Cannot round to a negative precision (to the left of the decimal)

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446722#comment-15446722
 ] 

James Taylor commented on PHOENIX-2474:
---

[~samarthjain] - would you mind reviewing and committing this one?

> Cannot round to a negative precision (to the left of the decimal)
> -
>
> Key: PHOENIX-2474
> URL: https://issues.apache.org/jira/browse/PHOENIX-2474
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: function, newbie, phoenix
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2474.patch
>
>
> Query:
> {noformat}select ROUND(444.44, -2){noformat}
> Expected result:
> {noformat}400{noformat}
> Actual result:
> {noformat}444.44{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-2675) Allow stats to be configured on a table-by-table basis

2016-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-2675:
-

Assignee: James Taylor  (was: ramkrishna.s.vasudevan)

> Allow stats to be configured on a table-by-table basis
> --
>
> Key: PHOENIX-2675
> URL: https://issues.apache.org/jira/browse/PHOENIX-2675
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.9.0
>
>
> Currently stats are controlled and collected at a global level. We should 
> allow them to be configured on a table-by-table basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2890) Extend IndexTool to allow incremental index rebuilds

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446713#comment-15446713
 ] 

James Taylor commented on PHOENIX-2890:
---

[~an...@apache.org] - are you still pursuing this one? We'd like to see this in 
4.9.0.

> Extend IndexTool to allow incremental index rebuilds
> 
>
> Key: PHOENIX-2890
> URL: https://issues.apache.org/jira/browse/PHOENIX-2890
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Minor
> Fix For: 4.9.0
>
> Attachments: PHOENIX-2890_wip.patch
>
>
> Currently, IndexTool is used for the initial index build, but I think we should 
> extend it so it can also be used to recover an index from its last disabled timestamp. 
> In general terms, if we run IndexTool on an existing or new index, it 
> should follow the same semantics as the background index rebuilding 
> thread.
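A conceptual sketch of the incremental idea, not the IndexTool implementation: restrict the rebuild to data-table cells written after the index's last-disabled timestamp by using a time-range scan. The table name and the way the timestamp is obtained are illustrative assumptions:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class IncrementalRebuildSketch {
    public static void main(String[] args) throws Exception {
        // e.g. the last-disabled timestamp that Phoenix tracks for the index
        long indexDisableTs = Long.parseLong(args[0]);
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table dataTable = conn.getTable(TableName.valueOf("MY_DATA_TABLE"))) {
            // Only replay data-table cells written since the index was disabled.
            Scan scan = new Scan();
            scan.setTimeRange(indexDisableTs, Long.MAX_VALUE);
            scan.setRaw(true); // include deletes so they can be replayed against the index
            try (ResultScanner scanner = dataTable.getScanner(scan)) {
                for (Result row : scanner) {
                    // build and apply the corresponding index mutations here
                }
            }
        }
    }
}
{code}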



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446694#comment-15446694
 ] 

ASF GitHub Bot commented on PHOENIX-3216:
-

Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/203
  
> Regarding the renewal, I understand from, 
http://stackoverflow.com/questions/34616676/should-i-call-ugi-checktgtandreloginfromkeytab-before-every-action-on-hadoop,
 that the RPC layer takes care of that.

Well, if you're talking to HDFS directly it would take care of it :). But 
we're talking about accessing HBase here. I'm not sure if the same holds true. 
I know there is something similar in the HBase RPC level, but I'd have to find 
it again in code to double check.

> I am trying to fix the scenario in which multiple threads call 
loginUserFromKeytab concurrently and then the renewal process no longer works 
as expected. 
> If only one login happens the renewal works properly.

Is this the same principal over and over again? Are you essentially 
providing the same principal and keytab in the JDBC URL, expecting Phoenix to 
do everything for you instead of doing the login in Storm?

> Your concern regarding security is correct.

Ok. I would like to redirect your efforts to PHOENIX-3189 then. We cannot 
sacrifice security for multi-threading (as you can already handle the Kerberos 
login yourself). Can you take a look at the changes I have staged on #191? If 
this is the above case I outlined, we can add some concurrency control to 
prevent concurrent logins from happening.

> you can see that this class is not thread safe and not designed to have 
different users login in the same JVM as loginUser is defined in this way.

Phoenix itself is not well-designed to support concurrent (different) users 
accessing HBase because of how UGI works. If your application (Storm) needs to 
provide this functionality, Storm should perform logins itself, cache the UGI 
instances, and use {{UGI.doAs(..)}} instead of relying on the static state in 
UGI.
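A hedged sketch of the pattern described above, assuming the application performs the Kerberos login itself, caches the resulting UGI, and wraps JDBC work in doAs; the principal, keytab path, and JDBC URL are placeholders:

{code:java}
import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.hadoop.security.UserGroupInformation;

public class PerUserLogin {
    public static void main(String[] args) throws Exception {
        // Log in explicitly and keep the UGI instance instead of relying on UGI's static loginUser.
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                "bolt-user@EXAMPLE.COM", "/etc/security/keytabs/bolt-user.keytab"); // placeholders

        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            // Re-login from the keytab if the TGT is close to expiring, then work as this user.
            ugi.checkTGTAndReloginFromKeytab();
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.createStatement().executeQuery("SELECT COUNT(*) FROM SYSTEM.CATALOG");
            }
            return null;
        });
    }
}
{code}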


> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
> Fix For: 4.9.0, 4.8.1
>
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #203: [PHOENIX-3216] Kerberos ticket is not renewed when using...

2016-08-29 Thread joshelser
Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/203
  
> Regarding the renewal, I understand from, 
http://stackoverflow.com/questions/34616676/should-i-call-ugi-checktgtandreloginfromkeytab-before-every-action-on-hadoop,
 that the RPC layer takes care of that.

Well, if you're talking to HDFS directly it would take care of it :). But 
we're talking about accessing HBase here. I'm not sure if the same holds true. 
I know there is something similar in the HBase RPC level, but I'd have to find 
it again in code to double check.

> I am trying to fix the scenario in which multiple threads call 
loginUserFromKeytab concurrently and then the renewal process no longer works 
as expected. 
> If only one login happens the renewal works properly.

Is this the same principal over and over again? Are you essentially 
providing the same principal and keytab in the JDBC URL, expecting Phoenix to 
do everything for you instead of doing the login in Storm?

> Your concern regarding security is correct.

Ok. I would like to redirect your efforts to PHOENIX-3189 then. We cannot 
sacrifice security for multi-threading (as you can already handle the Kerberos 
login yourself). Can you take a look at the changes I have staged on #191? If 
this is the above case I outlined, we can add some concurrency control to 
prevent concurrent logins from happening.

> you can see that this class is not thread safe and not designed to have 
different users login in the same JVM as loginUser is defined in this way.

Phoenix itself is not well-designed to support concurrent (different) users 
accessing HBase because of how UGI works. If your application (Storm) needs to 
provide this functionality, Storm should perform logins itself, cache the UGI 
instances, and use {{UGI.doAs(..)}} instead of relying on the static state in 
UGI.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1647) Fully qualified tablename query support in Phoenix

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446636#comment-15446636
 ] 

ASF GitHub Bot commented on PHOENIX-1647:
-

GitHub user kliewkliew opened a pull request:

https://github.com/apache/phoenix/pull/204

PHOENIX-1647 Fully qualified tablename query support in Phoenix



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kliewkliew/phoenix PHOENIX-1647

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/204.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #204






> Fully qualified tablename query support in Phoenix
> --
>
> Key: PHOENIX-1647
> URL: https://issues.apache.org/jira/browse/PHOENIX-1647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.1
> Environment: Phoenix driver 4.1.1
> HBase 98.9
> Hadoop 2
>Reporter: suraj misra
>  Labels: Newbie
> Fix For: 4.9.0
>
>
> I am able to execute queries that use fully qualified table names. For 
> example:
> UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')
> But when I look at the Phoenix driver implementation, I can see that the 
> implementation of the DatabaseMetaData.supportsSchemasInDataManipulation method 
> always returns false.
> As per the JDBC documentation, this method retrieves whether a schema name can be 
> used in a data manipulation statement. But as you can see in the above example, I 
> can execute DML statements with schema names, along with other 
> statements. 
> Could someone please let me know if there is any specific reason to keep it 
> as false?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-1647) Fully qualified tablename query support in Phoenix

2016-08-29 Thread Kevin Liew (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Liew reassigned PHOENIX-1647:
---

Assignee: Kevin Liew

> Fully qualified tablename query support in Phoenix
> --
>
> Key: PHOENIX-1647
> URL: https://issues.apache.org/jira/browse/PHOENIX-1647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1.1
> Environment: Phoenix driver 4.1.1
> HBase 98.9
> Hadoop 2
>Reporter: suraj misra
>Assignee: Kevin Liew
>  Labels: Newbie
> Fix For: 4.9.0
>
>
> I am able to execute queries that use fully qualified table names. For 
> example:
> UPSERT INTO TEST.CUSTOMERS_TEST VALUES(102,'hbase2',20,'del')
> But when I look at the Phoenix driver implementation, I can see that the 
> implementation of the DatabaseMetaData.supportsSchemasInDataManipulation method 
> always returns false.
> As per the JDBC documentation, this method retrieves whether a schema name can be 
> used in a data manipulation statement. But as you can see in the above example, I 
> can execute DML statements with schema names, along with other 
> statements. 
> Could someone please let me know if there is any specific reason to keep it 
> as false?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #204: PHOENIX-1647 Fully qualified tablename query supp...

2016-08-29 Thread kliewkliew
GitHub user kliewkliew opened a pull request:

https://github.com/apache/phoenix/pull/204

PHOENIX-1647 Fully qualified tablename query support in Phoenix



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kliewkliew/phoenix PHOENIX-1647

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/204.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #204






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (PHOENIX-2474) Cannot round to a negative precision (to the left of the decimal)

2016-08-29 Thread Kevin Liew (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Liew updated PHOENIX-2474:

Attachment: PHOENIX-2474.patch

> Cannot round to a negative precision (to the left of the decimal)
> -
>
> Key: PHOENIX-2474
> URL: https://issues.apache.org/jira/browse/PHOENIX-2474
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>  Labels: function, newbie, phoenix
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-2474.patch
>
>
> Query:
> {noformat}select ROUND(444.44, -2){noformat}
> Expected result:
> {noformat}400{noformat}
> Actual result:
> {noformat}444.44{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446611#comment-15446611
 ] 

ASF GitHub Bot commented on PHOENIX-3216:
-

Github user dbahir commented on the issue:

https://github.com/apache/phoenix/pull/203
  
If you look at 
https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/security/UserGroupInformation.java
 you can see that this class is not thread-safe and was not designed to have 
different users log in within the same JVM, since loginUser is defined as:
 private static UserGroupInformation loginUser = null;
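For illustration, one way an application could serialize keytab logins so that concurrent callers do not clobber the static loginUser. This is a sketch under the assumptions above (single principal reused across threads, simplified principal comparison), not the change proposed in the pull request:

{code:java}
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

public class SerializedKeytabLogin {
    private static final Object LOGIN_LOCK = new Object();

    // Only the first caller performs the login; later callers for the same principal reuse it.
    static void ensureLoggedIn(String principal, String keytab) throws IOException {
        synchronized (LOGIN_LOCK) {
            UserGroupInformation current = UserGroupInformation.getLoginUser();
            if (current != null && principal.equals(current.getUserName())) {
                current.checkTGTAndReloginFromKeytab(); // refresh if needed, do not re-login
                return;
            }
            UserGroupInformation.loginUserFromKeytab(principal, keytab);
        }
    }
}
{code}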


> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
> Fix For: 4.9.0, 4.8.1
>
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #203: [PHOENIX-3216] Kerberos ticket is not renewed when using...

2016-08-29 Thread dbahir
Github user dbahir commented on the issue:

https://github.com/apache/phoenix/pull/203
  
If you look at 
https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/security/UserGroupInformation.java
 you can see that this class is not thread-safe and was not designed to have 
different users log in within the same JVM, since loginUser is defined as:
 private static UserGroupInformation loginUser = null;


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446582#comment-15446582
 ] 

ASF GitHub Bot commented on PHOENIX-3216:
-

Github user dbahir commented on the issue:

https://github.com/apache/phoenix/pull/203
  
Regarding the renewal, I understand from, 
http://stackoverflow.com/questions/34616676/should-i-call-ugi-checktgtandreloginfromkeytab-before-every-action-on-hadoop,
 that the RPC layer takes care of that.

I am trying to fix the scenario in which multiple threads call 
loginUserFromKeytab concurrently and then the renewal process no longer works 
as expected. 

An example of that scenario is a Storm topology that has multiple 
HBase/Phoenix/HDFS bolts in the same JVM. When the topology starts, it 
initializes all bolts, each of which performs a login; when that 
happens, the renewal no longer works. If only one login happens, the renewal 
works properly.

Regarding Phoenix, we got into a similar situation with a 
multi-threaded application that caused loginUserFromKeytab to be called 
concurrently. The code change was made to protect against that and it works.

Your concern regarding security is correct.

I looked into PHOENIX-3189, which I was not aware of. The fix can be folded 
into it; however, we would need to handle synchronization of 
loginUserFromKeytab if multiple instances of the driver are created.


> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
> Fix For: 4.9.0, 4.8.1
>
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #203: [PHOENIX-3216] Kerberos ticket is not renewed when using...

2016-08-29 Thread dbahir
Github user dbahir commented on the issue:

https://github.com/apache/phoenix/pull/203
  
Regarding the renewal, I understand from, 
http://stackoverflow.com/questions/34616676/should-i-call-ugi-checktgtandreloginfromkeytab-before-every-action-on-hadoop,
 that the RPC layer takes care of that.

I am trying to fix the scenario in which multiple threads call 
loginUserFromKeytab concurrently and then the renewal process no longer works 
as expected. 

An example of that scenario is a Storm topology that has multiple 
HBase/Phoenix/HDFS bolts in the same JVM. When the topology starts, it 
initializes all bolts, each of which performs a login; when that 
happens, the renewal no longer works. If only one login happens, the renewal 
works properly.

Regarding Phoenix, we got into a similar situation with a 
multi-threaded application that caused loginUserFromKeytab to be called 
concurrently. The code change was made to protect against that and it works.

Your concern regarding security is correct.

I looked into PHOENIX-3189, which I was not aware of. The fix can be folded 
into it; however, we would need to handle synchronization of 
loginUserFromKeytab if multiple instances of the driver are created.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446423#comment-15446423
 ] 

ASF GitHub Bot commented on PHOENIX-3216:
-

Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/203
  
Ignoring the aforementioned issue, I don't think this change is correctly 
handling multiple users.

It would be re-introducing the bug that was talked about in PHOENIX-3126. 
If there was a user that was already logged in and then a different URL was 
provided with different credentials, the old user's credentials would be used 
instead of the new user's credentials. This would be a security vulnerability.


> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
> Fix For: 4.9.0, 4.8.1
>
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #203: [PHOENIX-3216] Kerberos ticket is not renewed when using...

2016-08-29 Thread joshelser
Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/203
  
Ignoring the aforementioned issue, I don't think this change is correctly 
handling multiple users.

It would be re-introducing the bug that was talked about in PHOENIX-3126. 
If there was a user that was already logged in and then a different URL was 
provided with different credentials, the old user's credentials would be used 
instead of the new user's credentials. This would be a security vulnerability.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (PHOENIX-3217) Phoenix sets MAX_FILESIZE table attribute for index tables

2016-08-29 Thread Andrew Purtell (JIRA)
Andrew Purtell created PHOENIX-3217:
---

 Summary: Phoenix sets MAX_FILESIZE table attribute for index tables
 Key: PHOENIX-3217
 URL: https://issues.apache.org/jira/browse/PHOENIX-3217
 Project: Phoenix
  Issue Type: Bug
Reporter: Andrew Purtell


Phoenix appears to set the HBase table attribute MAX_FILESIZE for index tables. 
We should discuss this. This setting is a user tunable that affects splitting 
decisions, and it should be set in conjunction with a split policy, not in 
isolation. Is it necessary to do this for index tables? Because this is an 
important user tunable, overriding the value is likely to lead to surprising and 
unexpected cluster behavior. 
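For operators who want to inspect or undo such an override, a sketch against the HBase 1.x admin API; the table name is illustrative, and the reset behaviour should be verified against your HBase version:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class InspectMaxFileSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName index = TableName.valueOf("MY_INDEX"); // illustrative
            HTableDescriptor desc = admin.getTableDescriptor(index);
            // -1 means "not set on the table"; the cluster-wide hbase.hregion.max.filesize applies.
            System.out.println("MAX_FILESIZE = " + desc.getMaxFileSize());

            // One possible way to restore the cluster default (verify on your version):
            // desc.setMaxFileSize(-1); admin.modifyTable(index, desc);
        }
    }
}
{code}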



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446356#comment-15446356
 ] 

ASF GitHub Bot commented on PHOENIX-3216:
-

Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/203
  
> This fix has been tested and it solves the issue; the same fix has been 
applied to the Storm HDFS and HBase connectors. 

But I still don't understand what you're trying to fix. 
https://github.com/apache/hadoop/blob/94225152399e6e89fa7b4cff6d17d33e544329a3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L957-L958

`UserGroupInformation` does *not* spawn any renewal thread for ticket 
renewal. Can you clarify what doesn't work? Given your description on JIRA, it 
doesn't make sense to me.


> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
> Fix For: 4.9.0, 4.8.1
>
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #203: [PHOENIX-3216] Kerberos ticket is not renewed when using...

2016-08-29 Thread joshelser
Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/203
  
> This fix has been tested and it solves the issue; the same fix has been 
applied to the Storm HDFS and HBase connectors. 

But I still don't understand what you're trying to fix. 
https://github.com/apache/hadoop/blob/94225152399e6e89fa7b4cff6d17d33e544329a3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L957-L958

`UserGroupInformation` does *not* spawn any renewal thread for ticket 
renewal. Can you clarify what doesn't work? Given your description on JIRA, it 
doesn't make sense to me.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2675) Allow stats to be configured on a table-by-table basis

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446351#comment-15446351
 ] 

James Taylor commented on PHOENIX-2675:
---

How would this survive a cluster bounce, [~lhofhansl]? I think we need to 
persist this in our SYSTEM.CATALOG.

> Allow stats to be configured on a table-by-table basis
> --
>
> Key: PHOENIX-2675
> URL: https://issues.apache.org/jira/browse/PHOENIX-2675
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
> Fix For: 4.9.0
>
>
> Currently stats are controlled and collected at a global level. We should 
> allow them to be configured on a table-by-table basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #203: [PHOENIX-3216] Kerberos ticket is not renewed when using...

2016-08-29 Thread dbahir
Github user dbahir commented on the issue:

https://github.com/apache/phoenix/pull/203
  
This fix has been tested and it solves the issue; the same fix has been 
applied to the Storm HDFS and HBase connectors. 
https://issues.apache.org/jira/browse/STORM-1521
https://issues.apache.org/jira/browse/STORM-1535



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446297#comment-15446297
 ] 

ASF GitHub Bot commented on PHOENIX-3216:
-

Github user dbahir commented on the issue:

https://github.com/apache/phoenix/pull/203
  
This fix has been tested and it solves the issue; the same fix has been 
applied to the Storm HDFS and HBase connectors. 
https://issues.apache.org/jira/browse/STORM-1521
https://issues.apache.org/jira/browse/STORM-1535



> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
> Fix For: 4.9.0, 4.8.1
>
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446281#comment-15446281
 ] 

ASF GitHub Bot commented on PHOENIX-3216:
-

Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/203
  
> That is caused by UserGroupInformation loginUserFromKeytab being called 
multiple times from different threads in a multi-threaded environment. 
This fix ensures that there will only be one login per process.

`UGI.loginUserFromKeytab` never spawns a renewal thread as it is. I 
don't think this change has the effect you intend it to have.
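
(For illustration, the "one login per process" idea being described might look roughly like the sketch below; this is an assumption about the approach, not the actual pull-request code, and the class and method names are made up.)

{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.hadoop.security.UserGroupInformation;

// Sketch of a process-wide guard so loginUserFromKeytab runs at most once,
// even when many threads open Phoenix connections concurrently.
// Error handling is intentionally omitted.
public final class SingleKerberosLogin {
    private static final AtomicBoolean LOGGED_IN = new AtomicBoolean(false);

    static void ensureLoggedIn(String principal, String keytab) throws IOException {
        // Only the first caller wins the compare-and-set and performs the login;
        // every later caller reuses the process-wide login user.
        if (LOGGED_IN.compareAndSet(false, true)) {
            UserGroupInformation.loginUserFromKeytab(principal, keytab);
        }
    }
}
{code}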


> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
> Fix For: 4.9.0, 4.8.1
>
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #203: [PHOENIX-3216] Kerberos ticket is not renewed when using...

2016-08-29 Thread joshelser
Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/203
  
> That is caused by UserGroupInformation loginUserFromKeytab being called 
multiple times from different threads in a multi-threaded environment. 
This fix ensures that there will only be one login per process.

`UGI.loginUserFromKeytab` never spawns a renewal thread as it is. I 
don't think this change has the effect you intend it to have.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446273#comment-15446273
 ] 

Josh Elser commented on PHOENIX-3216:
-

BTW, in case you haven't noticed it, [~dbahir]: over in PHOENIX-3189 I have some 
changes which appear mighty similar to what you have here.

> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
> Fix For: 4.9.0, 4.8.1
>
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3216:
--
Fix Version/s: 4.8.1
   4.9.0

> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
> Fix For: 4.9.0, 4.8.1
>
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-3216:

Assignee: Dan Bahir

> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>Assignee: Dan Bahir
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446216#comment-15446216
 ] 

Josh Elser commented on PHOENIX-3216:
-

bq. I already have a fix, will create a pull request shortly

Great! You have my attention. I'll watch for a patch/pull-request from ya.

> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread Dan Bahir (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446186#comment-15446186
 ] 

Dan Bahir commented on PHOENIX-3216:


I already have a fix, will create a pull request shortly

> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446184#comment-15446184
 ] 

James Taylor commented on PHOENIX-3216:
---

[~elserj] would you have spare cycles to pick this up?

> Kerberos ticket is not renewed when using Kerberos authentication with 
> Phoenix JDBC driver
> --
>
> Key: PHOENIX-3216
> URL: https://issues.apache.org/jira/browse/PHOENIX-3216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.5.1, 4.6.0, 4.5.2, 4.8.0
> Environment: Kerberized
>Reporter: Dan Bahir
>
> When using the Phoenix JDBC driver in a Kerberized environment and logging in 
> with a keytab, the Kerberos ticket is not automatically renewed.
> Expected: The ticket will be automatically renewed and the Phoenix driver will 
> be able to write to the database.
> Actual: The ticket is not renewed and the driver loses access to the database.
> 2016-08-15 00:00:59.738 WARN  AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception 
> encountered 
> while connecting to the server : javax.security.sasl.Sa
> slException: GSS initiate failed [Caused by GSSException: No valid 
> credentials 
> provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2016-08-15 00:00:59.739 ERROR AbstractRpcClient 
> [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication 
> failed. The most likely cause is missing or invalid crede
> ntials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: 
> No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java
> :211)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClie
> nt.java:179)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClie
> ntImpl.java:611)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.ja
> va:156)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 7)
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:73
> 4)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.ja



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3216) Kerberos ticket is not renewed when using Kerberos authentication with Phoenix JDBC driver

2016-08-29 Thread Dan Bahir (JIRA)
Dan Bahir created PHOENIX-3216:
--

 Summary: Kerberos ticket is not renewed when using Kerberos 
authentication with Phoenix JDBC driver
 Key: PHOENIX-3216
 URL: https://issues.apache.org/jira/browse/PHOENIX-3216
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.8.0, 4.5.2, 4.6.0, 4.5.1, 4.5.0, 4.4.0
 Environment: Kerberized
Reporter: Dan Bahir


When using the Phoenix JDBC driver in a Kerberized environment and logging in 
with a keytab, the Kerberos ticket is not automatically renewed.

Expected: The ticket will be automatically renewed and the Phoenix driver will 
be able to write to the database.
Actual: The ticket is not renewed and the driver loses access to the database.

2016-08-15 00:00:59.738 WARN  AbstractRpcClient [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
2016-08-15 00:00:59.739 ERROR AbstractRpcClient [hconnection-0x4763c727-metaLookup-shared--pool1-t686] - SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:611)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:737)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:734)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.ja





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [jira] [Commented] (PHOENIX-3193) Tracing UI cleanup - final tasks before GSoC pull request

2016-08-29 Thread Ayola Jayamaha
Hi,
I used the Jetty version 8.1.7.v20120910 from the beginning. So far I was
building only the tracing web app and I didn't get this issue. After I saw
this message I built from the root level and got the above error message.
When I investigated the issue I found the resource below [1]. Sometimes
building from a clean repository can fix this issue. Dropping to a lower
Jetty version would also be fine. If the ObjectMapper from
org.apache.htrace.fasterxml.jackson covers the necessary requirements,
that is fine too, but I haven't used it, so I will check and let you know.
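
A rough sketch of what switching to the htrace-shaded ObjectMapper could look
like, assuming htrace-core4 (which shades Jackson under
org.apache.htrace.fasterxml.jackson) is on the classpath; the class and method
names below are illustrative, not the actual TraceServlet code.

{code:java}
import java.util.List;
import java.util.Map;

import org.apache.htrace.fasterxml.jackson.databind.ObjectMapper;

// Serialize trace query results to JSON with the htrace-shaded Jackson,
// avoiding the unshaded org.codehaus.jackson dependency entirely.
public final class TraceJsonSketch {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    static String toJson(List<Map<String, Object>> rows) throws Exception {
        return MAPPER.writeValueAsString(rows);
    }
}
{code}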

Thanks,
Nishani


[1]
http://stackoverflow.com/questions/7785021/updating-jetty-7-to-jetty-8-java-lang-noclassdeffounderror-javax-servlet-filt

On Mon, Aug 29, 2016 at 5:52 PM, ASF GitHub Bot (JIRA) 
wrote:

>
> [ https://issues.apache.org/jira/browse/PHOENIX-3193?page=
> com.atlassian.jira.plugin.system.issuetabpanels:comment-
> tabpanel=15445707#comment-15445707 ]
>
> ASF GitHub Bot commented on PHOENIX-3193:
> -
>
> Github user chrajeshbabu commented on the issue:
>
> https://github.com/apache/phoenix/pull/202
>
> Here are couple of issues found one while starting traceserver and one
> while getting the results in UI.
> Currently the eclipse jetty version used is 8.1.7.v20120910
> From main pom.xml
> 8.1.7.v20120910
>
> `Exception in thread "main" java.lang.NoClassDefFoundError:
> javax/servlet/FilterRegistration
> at org.eclipse.jetty.servlet.ServletContextHandler.(
> ServletContextHandler.java:134)
> at org.eclipse.jetty.servlet.ServletContextHandler.(
> ServletContextHandler.java:114)
> at org.eclipse.jetty.servlet.ServletContextHandler.(
> ServletContextHandler.java:102)
> at org.eclipse.jetty.webapp.WebAppContext.(
> WebAppContext.java:181)
> at org.apache.phoenix.tracingwebapp.http.Main.run(
> Main.java:72)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.phoenix.tracingwebapp.http.Main.main(
> Main.java:54)
> Caused by: java.lang.ClassNotFoundException: javax.servlet.
> FilterRegistration
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(
> Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 7 more
> `
>
> When I changed the jetty version to 7.6.19.v20160209 it's working
> fine? Aren't you facing it?
> Once I do that again getting below exception and not able to read
> anything from trace table.
>
> `104933 [qtp1157440841-20] WARN org.eclipse.jetty.servlet.ServletHandler
> - Error for /trace/
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at org.apache.phoenix.tracingwebapp.http.
> TraceServlet.getResults(TraceServlet.java:136)
> at org.apache.phoenix.tracingwebapp.http.
> TraceServlet.searchTrace(TraceServlet.java:112)
> at org.apache.phoenix.tracingwebapp.http.TraceServlet.doGet(
> TraceServlet.java:67)
> at javax.servlet.http.HttpServlet.service(
> HttpServlet.java:707)
> at javax.servlet.http.HttpServlet.service(
> HttpServlet.java:820)
> at org.eclipse.jetty.servlet.ServletHolder.handle(
> ServletHolder.java:652)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(
> ServletHandler.java:445)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(
> ScopedHandler.java:137)
> at org.eclipse.jetty.security.SecurityHandler.handle(
> SecurityHandler.java:556)
> at org.eclipse.jetty.server.session.SessionHandler.
> doHandle(SessionHandler.java:227)
> at org.eclipse.jetty.server.handler.ContextHandler.
> doHandle(ContextHandler.java:1044)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(
> ServletHandler.java:372)
> at org.eclipse.jetty.server.session.SessionHandler.
> doScope(SessionHandler.java:189)
> at org.eclipse.jetty.server.handler.ContextHandler.
> doScope(ContextHandler.java:978)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(
> ScopedHandler.java:135)
> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(
> HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:369)
> at org.eclipse.jetty.server.AbstractHttpConnection.
> handleRequest(AbstractHttpConnection.java:464)
> at org.eclipse.jetty.server.AbstractHttpConnection.
> headerComplete(AbstractHttpConnection.java:913)
>  

[jira] [Commented] (PHOENIX-3193) Tracing UI cleanup - final tasks before GSoC pull request

2016-08-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445707#comment-15445707
 ] 

ASF GitHub Bot commented on PHOENIX-3193:
-

Github user chrajeshbabu commented on the issue:

https://github.com/apache/phoenix/pull/202
  
Here are a couple of issues I found: one while starting the trace server and one 
while getting the results in the UI.
Currently the Eclipse Jetty version used is 8.1.7.v20120910; the main pom.xml 
sets the Jetty version property to 8.1.7.v20120910.

`Exception in thread "main" java.lang.NoClassDefFoundError: 
javax/servlet/FilterRegistration
at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:134)
at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:114)
at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:102)
at org.eclipse.jetty.webapp.WebAppContext.<init>(WebAppContext.java:181)
at org.apache.phoenix.tracingwebapp.http.Main.run(Main.java:72)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.phoenix.tracingwebapp.http.Main.main(Main.java:54)
Caused by: java.lang.ClassNotFoundException: 
javax.servlet.FilterRegistration
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 7 more
`

When I changed the Jetty version to 7.6.19.v20160209 it works fine. 
Aren't you facing this?
Once I do that, I again get the exception below and am not able to read anything 
from the trace table. 

`104933 [qtp1157440841-20] WARN org.eclipse.jetty.servlet.ServletHandler  - 
Error for /trace/
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
at 
org.apache.phoenix.tracingwebapp.http.TraceServlet.getResults(TraceServlet.java:136)
at 
org.apache.phoenix.tracingwebapp.http.TraceServlet.searchTrace(TraceServlet.java:112)
at 
org.apache.phoenix.tracingwebapp.http.TraceServlet.doGet(TraceServlet.java:67)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:445)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:556)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:369)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:464)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:913)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:975)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641)
at 
org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231)
at 
org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: 
org.codehaus.jackson.map.ObjectMapper
at 

[GitHub] phoenix issue #202: PHOENIX-3193 Tracing UI cleanup

2016-08-29 Thread chrajeshbabu
Github user chrajeshbabu commented on the issue:

https://github.com/apache/phoenix/pull/202
  
Here are a couple of issues I found: one while starting the trace server and one 
while getting the results in the UI.
Currently the Eclipse Jetty version used is 8.1.7.v20120910; the main pom.xml 
sets the Jetty version property to 8.1.7.v20120910.

`Exception in thread "main" java.lang.NoClassDefFoundError: 
javax/servlet/FilterRegistration
at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:134)
at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:114)
at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:102)
at org.eclipse.jetty.webapp.WebAppContext.<init>(WebAppContext.java:181)
at org.apache.phoenix.tracingwebapp.http.Main.run(Main.java:72)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.phoenix.tracingwebapp.http.Main.main(Main.java:54)
Caused by: java.lang.ClassNotFoundException: 
javax.servlet.FilterRegistration
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 7 more
`

When I changed the Jetty version to 7.6.19.v20160209 it works fine. 
Aren't you facing this?
Once I do that, I again get the exception below and am not able to read anything 
from the trace table. 

`104933 [qtp1157440841-20] WARN org.eclipse.jetty.servlet.ServletHandler  - 
Error for /trace/
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
at 
org.apache.phoenix.tracingwebapp.http.TraceServlet.getResults(TraceServlet.java:136)
at 
org.apache.phoenix.tracingwebapp.http.TraceServlet.searchTrace(TraceServlet.java:112)
at 
org.apache.phoenix.tracingwebapp.http.TraceServlet.doGet(TraceServlet.java:67)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:445)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:556)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:369)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:464)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:913)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:975)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641)
at 
org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231)
at 
org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: 
org.codehaus.jackson.map.ObjectMapper
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at 

[jira] [Assigned] (PHOENIX-3211) Support running UPSERT SELECT asynchronously

2016-08-29 Thread Loknath Priyatham Teja Singamsetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Loknath Priyatham Teja Singamsetty  reassigned PHOENIX-3211:


Assignee: Loknath Priyatham Teja Singamsetty 

> Support running UPSERT SELECT asynchronously
> 
>
> Key: PHOENIX-3211
> URL: https://issues.apache.org/jira/browse/PHOENIX-3211
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Loknath Priyatham Teja Singamsetty 
>
> We have support for creating indexes asynchronously. We should add the 
> ability to run an UPSERT SELECT asynchronously too for very large tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-2566) Support NOT NULL constraint for any column for immutable table

2016-08-29 Thread Loknath Priyatham Teja Singamsetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Loknath Priyatham Teja Singamsetty  reassigned PHOENIX-2566:


Assignee: Loknath Priyatham Teja Singamsetty   (was: Thomas D'Silva)

> Support NOT NULL constraint for any column for immutable table
> --
>
> Key: PHOENIX-2566
> URL: https://issues.apache.org/jira/browse/PHOENIX-2566
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Loknath Priyatham Teja Singamsetty 
>
> Since write-once/append-only tables do not partially update rows, we can 
> support NOT NULL constraints for non-PK columns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3215) Add oracle regexp_like function in phoenix

2016-08-29 Thread Loknath Priyatham Teja Singamsetty (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445188#comment-15445188
 ] 

Loknath Priyatham Teja Singamsetty  commented on PHOENIX-3215:
--

From [~larsh],

Regexes have the potential to be extremely slow and CPU intensive. We need to be 
careful if we're allowing this at scale.

> Add oracle regexp_like function in phoenix
> --
>
> Key: PHOENIX-3215
> URL: https://issues.apache.org/jira/browse/PHOENIX-3215
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Loknath Priyatham Teja Singamsetty 
>Assignee: Loknath Priyatham Teja Singamsetty 
>Priority: Minor
>
> We have regexp_substr today, which returns a substring of a string by applying 
> a regular expression starting from the offset of a one-based position.
> However, when using query-builder frameworks like the JOOQ code generator, which 
> generates Java code from the database and builds type-safe SQL queries out of the 
> box, the lack of regexp_like syntax forces developers to use workarounds to build 
> the equivalent queries.
> Hard-coding the query for regexp_substr, since JOOQ does not support it:
> {quote}
> regex = regex + " AND regexp_substr("+ 
> TestResultEntity.Column.BASELINE_MESSAGE.getName() +", ?)" + matching;
> {quote}
> Here is the Oracle documentation for regexp_like: 
> https://docs.oracle.com/cd/B12037_01/server.101/b10759/conditions018.htm



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3215) Add oracle regexp_like function in phoenix

2016-08-29 Thread Loknath Priyatham Teja Singamsetty (JIRA)
Loknath Priyatham Teja Singamsetty  created PHOENIX-3215:


 Summary: Add oracle regexp_like function in phoenix
 Key: PHOENIX-3215
 URL: https://issues.apache.org/jira/browse/PHOENIX-3215
 Project: Phoenix
  Issue Type: Improvement
Reporter: Loknath Priyatham Teja Singamsetty 
Assignee: Loknath Priyatham Teja Singamsetty 
Priority: Minor


We have regexp_substr today, which returns a substring of a string by applying a 
regular expression starting from the offset of a one-based position.

However, when using query-builder frameworks like the JOOQ code generator, which 
generates Java code from the database and builds type-safe SQL queries out of the 
box, the lack of regexp_like syntax forces developers to use workarounds to build 
the equivalent queries.


Hard-coding the query for regexp_substr, since JOOQ does not support it:

{quote}
regex = regex + " AND regexp_substr("+ 
TestResultEntity.Column.BASELINE_MESSAGE.getName() +", ?)" + matching;
{quote}


Here is the Oracle documentation for regexp_like: 
https://docs.oracle.com/cd/B12037_01/server.101/b10759/conditions018.htm
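
For illustration, the same workaround written as plain JDBC might look like the
sketch below; the table and column names are made up, and it assumes
REGEXP_SUBSTR returns NULL when the pattern does not match.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Emulating a REGEXP_LIKE-style predicate with REGEXP_SUBSTR over Phoenix JDBC.
public final class RegexpLikeWorkaround {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, baseline_message FROM test_result "
                     + "WHERE REGEXP_SUBSTR(baseline_message, ?) IS NOT NULL")) {
            ps.setString(1, "timeout|connection reset");  // the regex parameter
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("baseline_message"));
                }
            }
        }
    }
}
{code}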



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3214) Kafka Phoenix Consumer

2016-08-29 Thread Kalyan (JIRA)
Kalyan created PHOENIX-3214:
---

 Summary: Kafka Phoenix Consumer
 Key: PHOENIX-3214
 URL: https://issues.apache.org/jira/browse/PHOENIX-3214
 Project: Phoenix
  Issue Type: New Feature
Reporter: Kalyan
Assignee: Kalyan


Providing a new feature for Phoenix.

Directly ingest Kafka messages into Phoenix.

Similar to the existing Flume integration for Phoenix.
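
For illustration only, the idea might look roughly like the following sketch
(0.10-era Kafka consumer API, with placeholder topic, table, and connection
settings); this is not the proposed connector itself.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Poll a Kafka topic and UPSERT each message into a Phoenix table over JDBC.
public final class KafkaToPhoenixSketch {
    public static void main(String[] args) throws Exception {
        Properties kafkaProps = new Properties();
        kafkaProps.put("bootstrap.servers", "localhost:9092");
        kafkaProps.put("group.id", "phoenix-ingest");
        kafkaProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(kafkaProps);
             Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             PreparedStatement upsert = conn.prepareStatement(
                     "UPSERT INTO KAFKA_MESSAGES (MSG_KEY, MSG_VALUE) VALUES (?, ?)")) {
            consumer.subscribe(Collections.singletonList("phoenix-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000L);
                for (ConsumerRecord<String, String> record : records) {
                    upsert.setString(1, record.key());
                    upsert.setString(2, record.value());
                    upsert.executeUpdate();
                }
                conn.commit();  // Phoenix batches mutations until commit
            }
        }
    }
}
{code}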



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (PHOENIX-2922) NPE when selecting from transactional table if transactions disabled

2016-08-29 Thread wangweiyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangweiyi updated PHOENIX-2922:
---
Comment: was deleted

(was: how to set phoenix.transactions.enable=true from jdbc client side?)

> NPE when selecting from transactional table if transactions disabled 
> -
>
> Key: PHOENIX-2922
> URL: https://issues.apache.org/jira/browse/PHOENIX-2922
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.9.0
>
>
> 0: jdbc:phoenix:jtaylor-lt9.internal.salesfor> select * from tx1;
> java.lang.NullPointerException
> at 
> org.apache.tephra.TransactionContext.start(TransactionContext.java:91)
> at 
> org.apache.phoenix.execute.MutationState.startTransaction(MutationState.java:419)
> at 
> org.apache.phoenix.execute.MutationState.sendUncommitted(MutationState.java:1308)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:277)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:807)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
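
(For context, enabling transactions from the JDBC client side might look like
the sketch below, assuming the property name is phoenix.transactions.enabled;
the connection URL is a placeholder.)

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Pass the transactions property when opening a Phoenix connection.
public final class TxConnectionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("phoenix.transactions.enabled", "true");
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            System.out.println("autocommit=" + conn.getAutoCommit());
        }
    }
}
{code}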



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)