fmendezlopez commented on issue #1776:
URL: https://github.com/apache/sedona/issues/1776#issuecomment-2647698442

   Hello,
   
   We have tried the following code:
   
   ```python
   df_floods_tile = sedona.sql(f"SELECT RS_TileExplode(content, 2, 2) FROM floods_tif")
   ```
   
   and now the error thrown is this:
   
   ```
   An error was encountered:
   An error occurred while calling o232.showString.
   : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 10) (ip-10-60-253-102.eu-south-2.compute.internal executor 2): java.lang.IllegalArgumentException: Unsupported raster type: 73
       at org.apache.sedona.common.raster.serde.Serde.deserialize(Serde.java:184)
       at org.apache.spark.sql.sedona_sql.expressions.raster.implicits$RasterInputExpressionEnhancer.toRaster(implicits.scala:38)
       at org.apache.spark.sql.sedona_sql.expressions.raster.RS_TileExplode.eval(RasterConstructors.scala:107)
       at org.apache.spark.sql.execution.GenerateExec.$anonfun$doExecute$8(GenerateExec.scala:108)
       at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
       at scala.collection.Iterator$ConcatIterator.hasNext(Iterator.scala:224)
       at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
       at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:35)
       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.hasNext(Unknown Source)
       at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:959)
       at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:407)
       at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:888)
       at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:888)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
       at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
       at org.apache.spark.scheduler.Task.run(Task.scala:141)
       at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
       at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1541)
       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
       at java.lang.Thread.run(Thread.java:750)

   Driver stacktrace:
       at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2974)
       at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2910)
       at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2909)
       at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
       at scala.collection.mutable.ResizableArray.fore
   ```
   
   
   Is there any way to read the file correctly by tuning the arguments passed to `RS_TileExplode`? If not, is there another way to accomplish this with Sedona in general?
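   
   For clarity, this is the variant we are considering trying next. It is an assumption on our part, not something we have confirmed works: if the `content` column holds raw GeoTIFF bytes rather than a Sedona raster, constructing the raster first with `RS_FromGeoTiff` and only then tiling it may be what `RS_TileExplode` expects:
   
   ```python
   # Sketch only (untested): wrap the raw bytes with RS_FromGeoTiff before
   # tiling, instead of passing `content` directly to RS_TileExplode.
   # `floods_tif` is our registered temp view.
   query = (
       "SELECT RS_TileExplode(RS_FromGeoTiff(content), 2, 2) "
       "FROM floods_tif"
   )
   # df_floods_tile = sedona.sql(query)  # requires an active Sedona session
   print(query)
   ```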
   
   Thank you.

