Kristin Cowalcijk created SEDONA-224:
----------------------------------------

             Summary: java.lang.NoSuchMethodError when loading GeoParquet files using Spark 3.0.x ~ 3.2.x
                 Key: SEDONA-224
                 URL: https://issues.apache.org/jira/browse/SEDONA-224
             Project: Apache Sedona
          Issue Type: Bug
            Reporter: Kristin Cowalcijk


{{spark.read.format("geoparquet").load("/path/to/geoparquet.parquet")}} does 
not work on Spark 3.0.x ~ 3.2.x, it raises an {{java.lang.NoSuchMethodError}}:

{code:scala}
spark.read.format("geoparquet").load("/path/to/example1.parquet")
22/12/29 15:53:44 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2)
java.lang.NoSuchMethodError: org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter$.$lessinit$greater$default$3()Z
        at org.apache.spark.sql.execution.datasources.parquet.GeoParquetToSparkSchemaConverter.<init>(GeoParquetSchemaConverter.scala:48)
        at org.apache.spark.sql.execution.datasources.parquet.GeoParquetFileFormat$.$anonfun$mergeSchemasInParallel$1(GeoParquetFileFormat.scala:265)
        at org.apache.spark.sql.execution.datasources.parquet.GeoParquetFileFormat$.$anonfun$mergeSchemasInParallel$1$adapted(GeoParquetFileFormat.scala:261)
        at org.apache.spark.sql.execution.datasources.parquet.GeoSchemaMergeUtils$.$anonfun$mergeSchemasInParallel$2(GeoSchemaMergeUtils.scala:69)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:863)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:863)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:131)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
{code}
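
{{$lessinit$greater$default$3()Z}} is the JVM-mangled name of the synthetic accessor ({{<init>$default$3}}, returning a boolean) that scalac generates on a companion object for the default value of a constructor's third parameter. So the geoparquet reader was most likely compiled against a Spark version whose {{ParquetToSparkSchemaConverter}} constructor has a third Boolean parameter with a default value, while the converter shipped with Spark 3.0.x ~ 3.2.x does not expose that accessor, hence the binary incompatibility at runtime. A minimal sketch (hypothetical class and parameter names, not Spark or Sedona code) of how that accessor comes about:

{code:scala}
// Hypothetical classes, only to illustrate how Scala encodes default
// constructor arguments and why removing one breaks pre-compiled callers.
class Converter(val assumeBinary: Boolean,
                val assumeInt96: Boolean,
                val caseSensitive: Boolean = true)

object Demo extends App {
  // Leaving out the third argument compiles to a call to the synthetic
  // companion-object method Converter$.$lessinit$greater$default$3()Z,
  // which returns the default value (true) for `caseSensitive`.
  // If the Converter on the runtime classpath has only two constructor
  // parameters, that method does not exist and this call site fails with
  // java.lang.NoSuchMethodError, the same failure mode as in the trace above.
  println(new Converter(assumeBinary = true, assumeInt96 = false).caseSensitive)
}
{code}

If that is indeed the cause, the fix would likely need Spark-version-specific handling when constructing the converter (or avoiding reliance on the default argument) rather than a single binary linked against a newer Spark.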



