Thanks a lot. I have already solved the problem as you suggested.



邱亮亮 | Liangliang Qiu


Jia Yu <[email protected]> wrote on Tuesday, August 3, 2021 at 2:27 PM:

> Hi Liangliang,
>
> This is a known issue in Sedona 1.0+. You should use ST_FlipCoordinates to
> swap X and Y: https://issues.apache.org/jira/browse/SEDONA-54
> https://sedona.apache.org/api/sql/Function/#st_flipcoordinates
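>
> A minimal sketch of the workaround (assuming your DataFrame is registered
> as a view named df; ST_FlipCoordinates swaps X and Y so the transform sees
> the axis order it expects):
>
> SELECT ST_Transform(ST_FlipCoordinates(polygon), 'epsg:4326', 'epsg:3857')
> FROM df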
>
> Thanks,
> Jia
>
> On Mon, Aug 2, 2021 at 1:53 AM Liangliang Qiu <[email protected]>
> wrote:
>
>> Dear all,
>>     I'm Liangliang Qiu, a data engineer from China.
>>     When I use ST_Transform, it raises an error:
>>
>> ST_Transform(polygon, 'epsg:4326','epsg:3857')
>>
>> The polygon comes from GeoJSON, so its coordinates are ordered
>> [longitude, latitude], for example:
>> [117.48860931396484, 5.879166126251334]
>> But it failed with:
>>
>> Caused by: java.lang.AssertionError: assertion failed
>>         at scala.Predef$.assert(Predef.scala:208)
>>         at org.apache.spark.sql.sedona_sql.expressions.ST_Transform.eval(Functions.scala:244)
>>         at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.writeFields_0_5$(Unknown Source)
>>         at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
>>         at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
>>         at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
>>         at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:346)
>>         at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
>>         at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
>>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
>>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
>>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>>         at org.apache.spark.scheduler.Task.run(Task.scala:131)
>>         at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
>>         at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
>>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>         ... 1 more
>>
>> Could you tell me why this happened?
>>
>> Thanks and best regards,
>>
>> Liangliang Qiu
>>
>
