[ https://issues.apache.org/jira/browse/PHOENIX-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Istvan Toth resolved PHOENIX-5941.
----------------------------------
    Resolution: Cannot Reproduce

This is a classpath issue. The NoSuchMethodError suggests that the phoenix-spark connector jar on the classpath was built against a different Phoenix/HBase release than the Phoenix client jar it is running with (the return type of ConnectionQueryServices.getAdmin() changed between major versions). Either this was a setup issue, or it has been fixed since.

We know that the connector works in the current versions if the classpath is set up correctly.
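
For reference, a correctly set up session looks roughly like the sketch below. The jar paths, table name, and ZooKeeper quorum are placeholders, not taken from this report; the point is that the connector and client jars must come from the same release and be on the classpath of both the driver and the executors (--jars distributes them to the executors).

    // Illustrative only: jar names, table name and ZK quorum are placeholders, not from this issue.
    // Launch spark-shell with matching connector and client jars, e.g.:
    //   spark-shell --jars /path/to/phoenix-spark-connector.jar,/path/to/phoenix-client.jar

    // Read the table through the phoenix-spark DataSource; option names may differ
    // slightly between connector releases.
    val df = spark.read
      .format("phoenix")
      .option("table", "TABLE1")
      .option("zkUrl", "zkhost:2181")
      .load()

    // The query from the report; with a consistent classpath it runs without the NoSuchMethodError.
    df.filter(df("COL1") === "test_row_1" && df("ID") === 1L)
      .select(df("ID"))
      .show()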

> org.apache.phoenix.query.ConnectionQueryServices.getAdmin()Lorg/apache/hadoop/hbase/client/HBaseAdmin
> -----------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-5941
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5941
>             Project: Phoenix
>          Issue Type: Bug
>          Components: spark-connector
>    Affects Versions: 5.0.0
>            Reporter: Lunna
>            Priority: Major
>
> Error while trying to execute show() on a DataFrame created from a Phoenix table.
>  
> scala> df.filter(df("COL1") === "test_row_1" && df("ID") === 1L).select(df("ID")).show
> java.lang.NoSuchMethodError: org.apache.phoenix.query.ConnectionQueryServices.getAdmin()Lorg/apache/hadoop/hbase/client/HBaseAdmin;
>   at org.apache.phoenix.spark.datasource.v2.reader.PhoenixDataSourceReader.planInputPartitions(PhoenixDataSourceReader.java:165)
>   at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.partitions$lzycompute(DataSourceV2ScanExec.scala:76)
>   at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.partitions(DataSourceV2ScanExec.scala:75)
>   at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.outputPartitioning(DataSourceV2ScanExec.scala:65)
>   at org.apache.spark.sql.execution.exchange.EnsureRequirements$$anonfun$org$apache$spark$sql$execution$exchange$EnsureRequirements$$ensureDistributionAndOrdering$1.apply(EnsureRequirements.scala:149)
>   at org.apache.spark.sql.execution.exchange.EnsureRequirements$$anonfun$org$apache$spark$sql$execution$exchange$EnsureRequirements$$ensureDistributionAndOrdering$1.apply(EnsureRequirements.scala:148)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.immutable.List.map(List.scala:296)
>   at



--
This message was sent by Atlassian Jira
(v8.20.10#820010)