stoty commented on PR #143:
URL: https://github.com/apache/hbase-connectors/pull/143#issuecomment-2857595211

   > Hi @stoty, what is the error you are seeing? I remember building hbase-connectors with HBase 2.6.0, and I also fixed [8c9de32](https://github.com/apache/hbase-connectors/commit/8c9de329c5feb006798f89d13a7d2f60fc58d87a) for that. Maybe it is something we changed after the 2.6.0 release?
   
```
Driver stacktrace:
25/05/07 10:23:35 INFO DAGScheduler: Job 17 failed: show at PartitionFilterSuite.scala:480, took 0.030877 s
- or *** FAILED ***
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 18.0 failed 1 times, most recent failure: Lost task 0.0 in stage 18.0 (TID 27) (172.30.65.179 executor driver): java.lang.NoSuchMethodError: 'void org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter.makeExtensionsImmutable()'
        at org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter.<init>(SparkFilterProtos.java:894)
        at org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter.<init>(SparkFilterProtos.java:805)
        at org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter$1.parsePartialFrom(SparkFilterProtos.java:915)
        at org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter$1.parsePartialFrom(SparkFilterProtos.java:910)
        at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:135)
        at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:168)
        at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:180)
        at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:185)
        at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:25)
        at org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter.parseFrom(SparkFilterProtos.java:1224)
        at org.apache.hadoop.hbase.spark.SparkSQLPushDownFilter.parseFrom(SparkSQLPushDownFilter.java:172)
        at org.apache.hadoop.hbase.spark.datasources.SerializedFilter$.$anonfun$fromSerializedFilter$1(HBaseTableScanRDD.scala:309)
        at scala.Option.map(Option.scala:230)
        at org.apache.hadoop.hbase.spark.datasources.SerializedFilter$.fromSerializedFilter(HBaseTableScanRDD.scala:309)
        at org.apache.hadoop.hbase.spark.datasources.HBaseTableScanRDD.compute(HBaseTableScanRDD.scala:237)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:136)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
```
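For context: `makeExtensionsImmutable()` is a protected hook that older protoc-generated code calls from its parsing constructors, and it appears to have been removed from newer protobuf-java runtimes, including (presumably) the hbase-thirdparty shaded protobuf that HBase 2.6.x ships. So the checked-in generated `SparkFilterProtos` sources and the shaded runtime have to match. A minimal sketch of how one could probe the classpath for this mismatch; the class and method names come from the stack trace above, while the probe class itself is hypothetical and not part of the connector:

```java
import java.lang.reflect.Method;

/**
 * Hypothetical probe (not part of hbase-connectors): checks whether the shaded
 * protobuf runtime on the classpath still declares the protected hook that the
 * checked-in, protoc-generated SparkFilterProtos constructors link against.
 */
public class ShadedProtobufProbe {
    public static void main(String[] args) {
        // Base class of protobuf-3.x generated messages in hbase-thirdparty
        // (assumption: the hook is declared here and inherited by
        // SQLPredicatePushDownFilter, which is the class named in the error).
        String base = "org.apache.hbase.thirdparty.com.google.protobuf.GeneratedMessageV3";
        try {
            Class<?> cls = Class.forName(base);
            // The exact method named in the NoSuchMethodError above.
            Method hook = cls.getDeclaredMethod("makeExtensionsImmutable");
            System.out.println("Runtime declares " + hook + "; old generated code should still link.");
        } catch (ClassNotFoundException e) {
            System.out.println(base + " not found; the shaded protobuf is newer than the generated sources expect.");
        } catch (NoSuchMethodException e) {
            System.out.println("makeExtensionsImmutable() is gone from " + base
                    + "; the generated SparkFilterProtos sources predate this runtime.");
        }
    }
}
```

If the probe reports the method missing, regenerating `SparkFilterProtos` with a protoc that matches the shaded runtime (or pinning the older hbase-thirdparty version) would presumably resolve the `NoSuchMethodError`.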

