[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2018-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16314201#comment-16314201
 ] 

Hudson commented on PHOENIX-4466:
---------------------------------

FAILURE: Integrated in Jenkins build Phoenix-master #1908 (See 
[https://builds.apache.org/job/Phoenix-master/1908/])
PHOENIX-4466 Do not relocate hadoop code (addendum) (elserj: rev 
2136b002c37db478ffea11233f9ebb80276d2594)
* (edit) phoenix-queryserver-client/pom.xml


> java.lang.RuntimeException: response code 500 - Executing a spark job to 
> connect to phoenix query server and load data
> ---------------------------------------------------------------------------
>
> Key: PHOENIX-4466
> URL: https://issues.apache.org/jira/browse/PHOENIX-4466
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-2.6.3
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4466-v2.patch, PHOENIX-4466.addendum.patch, 
> PHOENIX-4466.patch
>
>
> Steps to reproduce are as follows:
> 1. Start spark shell with 
> {code}
> spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar 
> {code}
> 2. Run the following to load data
> {code}
> scala> val query = sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<query server hostname>:8765;serialization=PROTOBUF").option("dbtable","<table name>").load
> {code}
> This failed with the following exception 
> {code:java}
> java.sql.SQLException: While closing connection
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>   at org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>   at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:153)
>   at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>   at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>   at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>   at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>   at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>   at $iwC$$iwC$$iwC.<init>(<console>:38)
>   at $iwC$$iwC.<init>(<console>:40)
>   at $iwC.<init>(<console>:42)
>   at <init>(<console>:44)
>   at .<init>(<console>:48)
>   at .<clinit>(<console>)
>   at .<init>(<console>:7)
>   at .<clinit>(<console>)
>   at $print(<console>)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>   at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>   at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>   at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>   at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>   at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
>   at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
>   at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
>   at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>   at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>   at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>   at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>   at org.apache.spark.repl.Main$.main(Main.scala:31)
>   at org.apache.spark.repl.Main.main(Main.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   ...
> {code}

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2018-01-05 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16314108#comment-16314108
 ] 

James Taylor commented on PHOENIX-4466:
---------------------------------------

Belated +1. Thanks [~brfrn169] and [~elserj].

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301978#comment-16301978
 ] 

Hudson commented on PHOENIX-4466:
---------------------------------

SUCCESS: Integrated in Jenkins build Phoenix-master #1901 (See 
[https://builds.apache.org/job/Phoenix-master/1901/])
PHOENIX-4466 Relocate Avatica and hadoop-common in thin-client jar (elserj: rev 
34693843abe4490b54fbd30512bf7d98d0f59c0d)
* (edit) phoenix-queryserver-client/pom.xml


[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301858#comment-16301858
 ] 

Hadoop QA commented on PHOENIX-4466:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903435/PHOENIX-4466-v2.patch
  against master branch at commit 412329a7415302831954891285d291055328c28b.
  ATTACHMENT ID: 12903435

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1686//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1686//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1686//console

This message is automatically generated.

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301749#comment-16301749
 ] 

Josh Elser commented on PHOENIX-4466:
-------------------------------------

Thanks, [~brfrn169]! I'll try to pull this down to double-check and then commit.

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301697#comment-16301697
 ] 

Toshihiro Suzuki commented on PHOENIX-4466:
-------------------------------------------

Thanks [~elserj]. I agree with you. I attached a new patch that adds the hadoop-common relocation.

With this patch, spark-shell runs successfully even when userClassPathFirst is specified. As you mentioned, I think it is better and safer than the previous patch.
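
For readers following along, the change under discussion is a maven-shade-plugin relocation in phoenix-queryserver-client/pom.xml. A minimal sketch of its shape, assuming the usual shade-plugin configuration; the shaded-package prefix and exact patterns here are illustrative, not copied from the attached patches, and the later addendum ("Do not relocate hadoop code") removed the hadoop relocation again:

{code:xml}
<!-- Sketch only: goes inside the maven-shade-plugin <configuration> of
     phoenix-queryserver-client/pom.xml. See the attached patches for the
     relocations actually committed. -->
<relocations>
  <relocation>
    <pattern>org.apache.calcite.avatica</pattern>
    <shadedPattern>org.apache.phoenix.shaded.org.apache.calcite.avatica</shadedPattern>
    <excludes>
      <!-- the Avatica protobuf wire classes must keep their original names -->
      <exclude>org.apache.calcite.avatica.proto.*</exclude>
    </excludes>
  </relocation>
  <relocation>
    <pattern>org.apache.hadoop</pattern>
    <shadedPattern>org.apache.phoenix.shaded.org.apache.hadoop</shadedPattern>
  </relocation>
</relocations>
{code}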

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-20 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16298872#comment-16298872
 ] 

Josh Elser commented on PHOENIX-4466:
-------------------------------------

Thanks for the thorough explanation, Toshi.

bq. So I thought the userClassPathFirst was either insufficient or broken and 
we were not able to use it. And we needed to relocate Avatica to use Avatica 
classes from phoenix-thin-client jar. Also, I thought if we don't use 
userClassPathFirst, the hadoop-common relocation wasn't needed.

Ok, so, this is a "maybe" related to {{userClassPathFirst}}, but we're not 
sure, right? I'm still of the mindset that not shading-relocating hadoop-common 
the first time around was a mistake. Any reason to not proactively do that? Do 
you still have your environment up to make sure that doesn't inadvertently 
break anything?

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-20 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16298652#comment-16298652
 ] 

Toshihiro Suzuki commented on PHOENIX-4466:
-------------------------------------------

[~elserj] I'm sorry for the confusion.

Let me explain it chronologically.

1. We were facing the protocol mismatch error as explained in the description.

2. To avoid the protocol mismatch error, we tried to run spark-shell with spark.{driver,executor}.userClassPathFirst, but it failed with the following error you mentioned:
{code}
# spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar --conf spark.driver.userClassPathFirst=true --conf spark.executor.userClassPathFirst=true
...
scala> val query = sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<query server hostname>:8765;serialization=PROTOBUF").option("dbtable","<table name>").load
...
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
...
{code}

JniBasedUnixGroupsMappingWithFallback and GroupMappingServiceProvider are in both the spark-assembly jar and the phoenix-thin-client jar, and the versions are the same in HDP-2.6.3, but the above error occurred anyway. So we thought there was a classloader problem when specifying spark.{driver,executor}.userClassPathFirst.

3. We built a dev jar (phoenix-4.7.0.2.6.1.0-SNAPSHOT-thin-client.jar) that also relocates hadoop-common and tried with it; the classloader error disappeared, but the first error re-occurred for some reason.

{code}
# spark-shell --jars phoenix-4.7.0.2.6.1.0-SNAPSHOT-thin-client.jar --conf spark.driver.userClassPathFirst=true --conf spark.executor.userClassPathFirst=true
...
scala> val query = sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<query server hostname>:8765;serialization=PROTOBUF").option("dbtable","<table name>").load
...
java.sql.SQLException: While closing connection
  at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
  at org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:153)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
  at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
  at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
  at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
  at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
  at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
  at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
  at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
  at $iwC$$iwC$$iwC.<init>(<console>:38)
  at $iwC$$iwC.<init>(<console>:40)
  at $iwC.<init>(<console>:42)
  at <init>(<console>:44)
  at .<init>(<console>:48)
  at .<clinit>(<console>)
  at .<init>(<console>:7)
  at .<clinit>(<console>)
  at $print(<console>)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
  at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
  at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
  at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
  at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
  at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
  at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
  at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
  at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
  at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
  at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
  at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
  at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
  at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
  at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
  ...
{code}

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-19 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297092#comment-16297092
 ] 

Josh Elser commented on PHOENIX-4466:
-------------------------------------

{quote}
The first problem is Avatica protocol mismatch between queryserver and 
thin-client caused by Avatica jar conflict between phoenix-thin-client jar and 
spark-assembly jar (In HDP-2.6.3, phoenix-thin-client jar depends on 
calcite-avatica-1.8.0.2.6.1.0 and spark-assembly jar depends on 
calcite-avatica-1.2.0-incubating.) And according to the output of spark-shell 
with -verbose:class, it seemed like Avatica classes from spark-assembly jar 
(1.2.0-incubating) were used.

To resolve this issue, I think we need to relocate only Avatica (except 
protobuf package due to a limitation in the Avatica protocol,) and this can 
make Avatica classes from phoenix-thin-client jar be used.

If I'm wrong, please correct me.
{quote}

Nope, no disagreement about the Avatica relocation change. I'm just confused 
about our offline conversation where we had agreed that there was also an issue 
with hadoop-common needing to be relocated for a different error message we 
saw. [~pbhardwaj] was the one who reported this, actually.

{code}
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
  at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2273)
  at org.apache.hadoop.security.Groups.<init>(Groups.java:99)
  at org.apache.hadoop.security.Groups.<init>(Groups.java:95)
  at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:420)
  at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:324)
  at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:291)
  at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:846)
  at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:816)
  at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
  at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2214)
  at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2214)
  at scala.Option.getOrElse(Option.scala:120)
  at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2214)
  at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:214)
  at org.apache.spark.repl.SparkIMain.<init>(SparkIMain.scala:118)
  at org.apache.spark.repl.SparkILoop$SparkILoopInterpreter.<init>(SparkILoop.scala:187)
  at org.apache.spark.repl.SparkILoop.createInterpreter(SparkILoop.scala:217)
  at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:949)
  at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
  at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
  at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
  at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
  at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
  at org.apache.spark.repl.Main$.main(Main.scala:31)
  at org.apache.spark.repl.Main.main(Main.scala)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:750)
  at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
  at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
  at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2267)
  ... 33 more
{code}

This was because the environment had loaded some Hadoop classes from the 
spark-assembly jar and others from phoenix-thin-client:

{noformat}
[Loaded org.apache.hadoop.security.GroupMappingServiceProvider from ...]
...
{noformat}

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-19 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296430#comment-16296430
 ] 

Toshihiro Suzuki commented on PHOENIX-4466:
-------------------------------------------

[~elserj]

{quote}
That's strange. I'm just surprised how the Avatica relocation would have 
'fixed' whatever issue we saw with hadoop-common. IMO, I can't think of a 
reason why to not shade hadoop-common.
{quote}

The first problem is Avatica protocol mismatch between queryserver and 
thin-client caused by Avatica jar conflict between phoenix-thin-client jar and 
spark-assembly jar (In HDP-2.6.3, phoenix-thin-client jar depends on 
calcite-avatica-1.8.0.2.6.1.0 and spark-assembly jar depends on 
calcite-avatica-1.2.0-incubating.) And according to the output of spark-shell 
with -verbose:class, it seemed like Avatica classes from spark-assembly jar 
(1.2.0-incubating) were used.

To resolve this issue, I think we need to relocate only Avatica (except 
protobuf package due to a limitation in the Avatica protocol,) and this can 
make Avatica classes from phoenix-thin-client jar be used.

If I'm wrong, please correct me.
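
The -verbose:class check mentioned above can be reproduced roughly like this (a sketch: the grep pattern and option plumbing are assumptions, and flag placement varies by Spark version):

{code}
# Ask the driver JVM to log every class it loads, then keep only the Avatica lines.
# The jar path in each "[Loaded ... from <jar>]" line shows which jar on the
# classpath actually supplied the class.
spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar \
  --driver-java-options "-verbose:class" 2>&1 | grep "org.apache.calcite.avatica"
{code}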

{quote}
My only other concern is "how do we know if this works?". I'm not sure how much 
work it would be to get a spark environment up in an IT to verify that we can 
use PQS+thin-client...
{quote}

Yes, that's what I was thinking. It is hard to write unit test code or 
integration test code for this issue. When I tested the patch in my env, it 
looked good to me.

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16295615#comment-16295615
 ] 

Josh Elser commented on PHOENIX-4466:
-------------------------------------

{quote}
Actually, when I ran spark-shell with 
spark.\{driver,executor\}.userClassPathFirst, it looked like we need to shade 
hadoop-common as well. But in the first place if we shade Avatica, we don't 
need spark.\{driver,executor\}.userClassPathFirst and seems we don't need to 
shade hadoop-common.
{quote}

That's strange. I'm just surprised how the Avatica relocation would have 
'fixed' whatever issue we saw with hadoop-common. IMO, I can't think of a 
reason why to not shade hadoop-common.

My only other concern is "how do we know if this works?". I'm not sure how much 
work it would be to get a spark environment up in an IT to verify that we can 
use PQS+thin-client...

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-18 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16295561#comment-16295561
 ] 

Toshihiro Suzuki commented on PHOENIX-4466:
-------------------------------------------

[~elserj] When I tested the patch in my env, it seemed like we don't need to shade hadoop-common.

Actually, when I ran spark-shell with 
spark.{driver,executor}.userClassPathFirst, it looked like we need to shade 
hadoop-common as well. But in the first place if we shade Avatica, we don't 
need spark.{driver,executor}.userClassPathFirst and seems we don't need to 
shade hadoop-common.

Do you have any concern about this?
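
For context, the expected end state is that the original reproduction works with the Avatica-relocated thin-client jar and without the userClassPathFirst flags. A sketch (the hostname, table name, and the df/printSchema step are placeholders for illustration):

{code}
# spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar

scala> val df = sqlContext.read.format("jdbc").
     |   option("driver", "org.apache.phoenix.queryserver.client.Driver").
     |   option("url", "jdbc:phoenix:thin:url=http://<query server hostname>:8765;serialization=PROTOBUF").
     |   option("dbtable", "<table name>").
     |   load()

scala> df.printSchema()   // schema resolution alone exercises the PQS round trip that used to fail
{code}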

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295216#comment-16295216
 ] 

Josh Elser commented on PHOENIX-4466:
-

[~brfrn169], was the shading of hadoop-common not actually needed after all?


[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294687#comment-16294687
 ] 

Hadoop QA commented on PHOENIX-4466:


{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12902599/PHOENIX-4466.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902599

{color:green}+1 @author{color}.  The patch does not contain any @author tags.

{color:green}+0 tests included{color}.  The patch appears to be a documentation, build, or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release audit warning (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines longer than 100 characters.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixClientRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1675//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1675//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1675//console

This message is automatically generated.


[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-17 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294604#comment-16294604
 ] 

Toshihiro Suzuki commented on PHOENIX-4466:
---

I just attached a patch for this issue.

I added the following relocation config to phoenix-queryserver-client/pom.xml.
{code}
<relocation>
  <pattern>org.apache.calcite.avatica</pattern>
  <shadedPattern>${shaded.package}.org.apache.calcite.avatica</shadedPattern>
  <excludes>
    <exclude>org.apache.calcite.avatica.proto.*</exclude>
  </excludes>
</relocation>
{code}

I think this is a short-term solution.
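For illustration, a minimal way to see the effect of the exclude from the spark-shell, assuming the shaded phoenix-thin-client jar is on the classpath and that ${shaded.package} resolves to org.apache.phoenix.shaded (consistent with the shaded Jetty class names seen in the stack traces on this issue): the protobuf-generated classes keep their original, wire-stable names, while the rest of Avatica is relocated.
{code}
scala> // The excluded proto classes keep their original package, so the class
scala> // names that travel over the wire stay stable...
scala> Class.forName("org.apache.calcite.avatica.proto.Common")

scala> // ...while the rest of Avatica lives under the shaded prefix.
scala> Class.forName("org.apache.phoenix.shaded.org.apache.calcite.avatica.remote.Driver")
{code}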

The long-term solution was mentioned in PHOENIX-3136:
{code}
Long term, I think the protocol itself will have to be updated in Avatica to be 
a little less brittle in this regard (I did not originally consider the 
implications that client/server might have the classes, but with a different 
name than what we had in Avatica).
{code}

CC: [~elserj]

[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-17 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294521#comment-16294521
 ] 

Toshihiro Suzuki commented on PHOENIX-4466:
---

I captured network packets using tcpdump while reproducing the issue. The output is as follows (binary payload bytes omitted; only the packet summaries and the readable payload fragments are shown):
{code}
15:03:50.422898 IP (tos 0x0, ttl 64, id 30259, offset 0, flags [DF], proto TCP (6), length 52)
    172.26.101.78.8765 > 172.26.101.77.48206: Flags [.], cksum 0x22f7 (incorrect -> 0x0d32), seq 3933046844, ack 1437099126, win 118, options [nop,nop,TS val 393018895 ecr 393022983], length 0
    172.26.101.78.8765 > 172.26.101.77.48206: Flags [.], cksum 0x286d (incorrect -> 0xaa7d), seq 3933046844:3933048242, ack 1437099126, win 118, options [nop,nop,TS val 393018901 ecr 393022983], length 1398
    172.26.101.78.8765 > 172.26.101.77.48206: Flags [P.], cksum 0x254b (incorrect -> 0x9526), seq 3933048242:3933048838, ack 1437099126, win 118, options [nop,nop,TS val 393018901 ecr 393022983], length 596
{code}
The readable fragments of the response payload contain a shaded Jetty stack trace and this error:
{code}
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
IllegalArgumentException: Cannot fetch parser for Request with missing class name
{code}

According to the above, it seems the message format was not what the server expected: the server could not even pick a parser for the request, because no class name was present in the message.

Also, one more thing I noticed: the exception in the description indicates that the client actually used JSON serialization:
{code}
at org.apache.calcite.avatica.remote.JsonService.apply(JsonService.java:227)
{code}


I finally found that there is an Avatica jar conflict between the phoenix-thin-client jar and the spark-assembly jar: both depend on calcite-avatica, but on different versions.
(In HDP-2.6.3, the phoenix-thin-client jar depends on calcite-avatica-1.8.0.2.6.1.0 and the spark-assembly jar depends on calcite-avatica-1.2.0-incubating.)

When I ran spark-shell with -verbose:class, the following was shown:
{code}
...
[Loaded org.apache.calcite.avatica.UnregisteredDriver from file:/usr/hdp/2.6.3.0-235/spark/lib/spark-assembly-1.6.3.2.6.3.0-235-hadoop2.7.3.2.6.3.0-235.jar]
[Loaded org.apache.calcite.avatica.remote.Driver from file:/usr/hdp/2.6.3.0-235/spark/lib/spark-assembly-1.6.3.2.6.3.0-235-hadoop2.7.3.2.6.3.0-235.jar]
[Loaded org.apache.calcite.avatica.Handler from file:/usr/hdp/2.6.3.0-235/spark/lib/spark-assembly-1.6.3.2.6.3.0-235-hadoop2.7.3.2.6.3.0-235.jar]
...
{code}
This means the calcite-avatica classes that were actually used came from the spark-assembly jar (1.2.0-incubating), not from the phoenix-thin-client jar.
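
Rather than scanning the full -verbose:class output, one can also ask the JVM directly which jar supplied a given class. A minimal sketch from the spark-shell, using the standard ProtectionDomain/CodeSource API (this check is illustrative and was not part of the original diagnosis):
{code}
scala> // Prints the jar that actually provided the Avatica driver class; on an
scala> // affected classpath this points at the spark-assembly jar rather than
scala> // the phoenix-thin-client jar.
scala> println(classOf[org.apache.calcite.avatica.remote.Driver]
     |   .getProtectionDomain.getCodeSource.getLocation)
{code}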

Since calcite-avatica-1.2.0-incubating doesn't support PROTOBUF serialization and supports only JSON, the client ended up sending JSON requests even though the query server expected PROTOBUF, which is what produced the response code 500.