[ https://issues.apache.org/jira/browse/PHOENIX-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084061#comment-17084061 ]

Christian Danner commented on PHOENIX-5146:
-------------------------------------------

Same issue here:

We upgraded our cluster from Hadoop 2.6.0, HBase 1.2.0, Hive 1.1.0, and Phoenix 
4.14.1 to Hadoop 3.0, HBase 2.1.4, Hive 2.3, and Phoenix 5.0.
Since then our Hive + Phoenix integration fails with an error log that looks 
very similar to the one reported here. The failure occurs whenever we run a 
query against a Hive table that resides in an encryption zone.
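For context, the failing call chain does not go through Hive-specific code at all: any HDFS client that requests delegation tokens for an encryption-zone path on a cluster with a KMS key provider should exercise the same path. A minimal sketch (the renewer principal is a placeholder):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;

// Minimal sketch: on a cluster with a KMS key provider configured, requesting
// delegation tokens forces the KMS round trip
// (DistributedFileSystem.addDelegationTokens -> KMSClientProvider.createURL),
// which is exactly where the shaded URIBuilder fails in the trace below.
public class KmsTokenRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        Credentials creds = new Credentials();
        fs.addDelegationTokens("yarn", creds); // "yarn" is a placeholder renewer
        System.out.println("tokens obtained: " + creds.numberOfTokens());
    }
}
{code}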

What makes it confusing is that this table has nothing to do with Phoenix, yet 
HiveServer2 logs this stack trace:
{code:java}
Exception in thread "HiveServer2-Handler-Pool: Thread-39" java.lang.NoClassDefFoundError: org/apache/phoenix/shaded/org/apache/http/Consts
        at org.apache.phoenix.shaded.org.apache.http.client.utils.URIBuilder.digestURI(URIBuilder.java:179)
        at org.apache.phoenix.shaded.org.apache.http.client.utils.URIBuilder.<init>(URIBuilder.java:80)
        at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createURL(KMSClientProvider.java:435)
        at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1034)
        at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:203)
        at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:200)
        at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:126)
        at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.addDelegationTokens(LoadBalancingKMSClientProvider.java:200)
        at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:110)
        at org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:84)
        at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2604)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:213)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:322)
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextSplits(FetchOperator.java:372)
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:304)
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:459)
        at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:428)
        at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146)
        at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2227)
        at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:491)
        at org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:297)
        at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:869)
        at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:507)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:708)
        at org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1717)
        at org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1702)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:605)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.phoenix.shaded.org.apache.http.Consts
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 36 more
{code}
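One way to confirm what the JVM actually sees is to resolve the two shaded classes directly, run with the same classpath as HiveServer2. A small diagnostic sketch (class names taken from the trace above):
{code:java}
// Diagnostic sketch: check which jar the shaded URIBuilder is loaded from,
// and whether the shaded Consts class it depends on is present at all.
public class ShadedHttpCheck {
    public static void main(String[] args) throws Exception {
        Class<?> builder = Class.forName(
                "org.apache.phoenix.shaded.org.apache.http.client.utils.URIBuilder");
        System.out.println("URIBuilder loaded from: "
                + builder.getProtectionDomain().getCodeSource().getLocation());
        try {
            Class.forName("org.apache.phoenix.shaded.org.apache.http.Consts");
            System.out.println("shaded Consts: present");
        } catch (ClassNotFoundException e) {
            // This branch reproduces the NoClassDefFoundError's root cause.
            System.out.println("shaded Consts: missing");
        }
    }
}
{code}
If URIBuilder resolves but Consts does not, some jar on the classpath is shipping only a partial copy of the relocated httpclient packages.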
The error even persists after we removed the Phoenix Hive integration, dropped 
the aux libs from the Hive server and client configs, and restarted the services.
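Since the error survived removing the aux libs, some other jar on the classpath apparently still carries the relocated classes. A sketch for scanning candidate jars (the jar path is a placeholder; substitute the jars actually found on the HiveServer2 classpath one by one):
{code:java}
import java.util.jar.JarFile;

// Sketch: check a candidate jar for the relocated httpclient entries.
// The path below is hypothetical.
public class JarScan {
    public static void main(String[] args) throws Exception {
        try (JarFile jar = new JarFile("/usr/hdp/current/phoenix-client/phoenix-client.jar")) {
            System.out.println("shaded URIBuilder entry: " + (jar.getEntry(
                "org/apache/phoenix/shaded/org/apache/http/client/utils/URIBuilder.class") != null));
            System.out.println("shaded Consts entry: " + (jar.getEntry(
                "org/apache/phoenix/shaded/org/apache/http/Consts.class") != null));
        }
    }
}
{code}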

 

> Phoenix missing class definition: java.lang.NoClassDefFoundError: org/apache/phoenix/shaded/org/apache/http/Consts
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-5146
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5146
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 5.0.0
>         Environment: 3-node Kerberized cluster.
> HBase 2.0.2
>            Reporter: Narendra Kumar
>            Priority: Major
>
> While running a SparkCompatibility check for Phoenix, we hit this issue:
> {noformat}
> 2019-02-15 09:03:38,470|INFO|MainThread|machine.py:169 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|RUNNING: echo "
>  import org.apache.spark.graphx._;
>  import org.apache.phoenix.spark._;
>  val rdd = sc.phoenixTableAsRDD(\"EMAIL_ENRON\", Seq(\"MAIL_FROM\", \"MAIL_TO\"), zkUrl=Some(\"huaycloud012.l42scl.hortonworks.com:2181:/hbase-secure\"));
>  val rawEdges = rdd.map { e => (e(\"MAIL_FROM\").asInstanceOf[VertexId], e(\"MAIL_TO\").asInstanceOf[VertexId]) };
>  val graph = Graph.fromEdgeTuples(rawEdges, 1.0);
>  val pr = graph.pageRank(0.001);
>  pr.vertices.saveToPhoenix(\"EMAIL_ENRON_PAGERANK\", Seq(\"ID\", \"RANK\"), zkUrl = Some(\"huaycloud012.l42scl.hortonworks.com:2181:/hbase-secure\"));
>  " | spark-shell --master yarn --jars /usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.1.0.0-75.jar --properties-file /grid/0/log/cluster/run_phoenix_secure_ha_all_1/artifacts/spark_defaults.conf 2>&1 | tee /grid/0/log/cluster/run_phoenix_secure_ha_all_1/artifacts/Spark_clientLogs/phoenix-spark.txt
>  2019-02-15 09:03:38,488|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|SPARK_MAJOR_VERSION is set to 2, using Spark2
>  2019-02-15 09:03:39,901|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|SLF4J: Class path contains multiple SLF4J bindings.
>  2019-02-15 09:03:39,902|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-75/phoenix/phoenix-5.0.0.3.1.0.0-75-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  2019-02-15 09:03:39,902|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-75/spark2/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  2019-02-15 09:03:39,902|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an explanation.
>  2019-02-15 09:03:41,400|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|Setting default log level to "WARN".
>  2019-02-15 09:03:41,400|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
>  2019-02-15 09:03:54,837|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84{color:#ff0000}*|java.lang.NoClassDefFoundError: org/apache/phoenix/shaded/org/apache/http/Consts*{color}
>  2019-02-15 09:03:54,838|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.phoenix.shaded.org.apache.http.client.utils.URIBuilder.digestURI(URIBuilder.java:181)
>  2019-02-15 09:03:54,839|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.phoenix.shaded.org.apache.http.client.utils.URIBuilder.<init>(URIBuilder.java:82)
>  2019-02-15 09:03:54,839|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createURL(KMSClientProvider.java:468)
>  2019-02-15 09:03:54,839|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.hadoop.crypto.key.kms.KMSClientProvider.getDelegationToken(KMSClientProvider.java:1023)
>  2019-02-15 09:03:54,840|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:252)
>  2019-02-15 09:03:54,840|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:249)
>  2019-02-15 09:03:54,840|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:172)
>  2019-02-15 09:03:54,841|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.getDelegationToken(LoadBalancingKMSClientProvider.java:249)
>  2019-02-15 09:03:54,841|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.hadoop.security.token.DelegationTokenIssuer.collectDelegationTokens(DelegationTokenIssuer.java:95)
>  2019-02-15 09:03:54,841|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.hadoop.security.token.DelegationTokenIssuer.collectDelegationTokens(DelegationTokenIssuer.java:107)
>  2019-02-15 09:03:54,842|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.hadoop.security.token.DelegationTokenIssuer.addDelegationTokens(DelegationTokenIssuer.java:76)
>  2019-02-15 09:03:54,842|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider$$anonfun$org$apache$spark$deploy$security$HadoopFSDelegationTokenProvider$$fetchDelegationTokens$1.apply(HadoopFSDelegationTokenProvider.scala:98)
>  2019-02-15 09:03:54,842|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider$$anonfun$org$apache$spark$deploy$security$HadoopFSDelegationTokenProvider$$fetchDelegationTokens$1.apply(HadoopFSDelegationTokenProvider.scala:96)
>  2019-02-15 09:03:54,843|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
>  2019-02-15 09:03:54,843|INFO|MainThread|machine.py:184 - run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at
>  {noformat}


