jiayuasu commented on code in PR #1281:
URL: https://github.com/apache/sedona/pull/1281#discussion_r1534414711
##########
docs/setup/compile.md:
##########
@@ -73,11 +73,20 @@ For example,
export SPARK_HOME=$PWD/spark-3.0.1-bin-hadoop2.7
export PYTHONPATH=$SPARK_HOME/python
```
-2. Compile the Sedona Scala and Java code with `-Dgeotools` and then copy the ==sedona-spark-shaded-{{ sedona.current_version }}.jar== to ==SPARK_HOME/jars/== folder.
+2. Put the JAI jars into the ==SPARK_HOME/jars/== folder.
+```
+export JAI_CORE_VERSION="1.1.3"
```
Review Comment:
Should we put these jars in geotools-wrapper?
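For context on the manual alternative the diff begins to describe, a hedged sketch follows. The codec/imageio jar names, their versions, and the Maven repository layout are assumptions, not verified coordinates; the script only prints the fetch commands so they can be reviewed before running.

```shell
#!/bin/sh
# Hypothetical sketch: jar names, versions, and repo paths are assumptions.
JAI_CORE_VERSION="1.1.3"
JAI_CODEC_VERSION="1.1.3"
JAI_IMAGEIO_VERSION="1.1"
REPO="https://repo1.maven.org/maven2/javax/media"

# Print (rather than execute) the download commands targeting SPARK_HOME/jars/.
for jar in \
    "jai_core/${JAI_CORE_VERSION}/jai_core-${JAI_CORE_VERSION}.jar" \
    "jai_codec/${JAI_CODEC_VERSION}/jai_codec-${JAI_CODEC_VERSION}.jar" \
    "jai_imageio/${JAI_IMAGEIO_VERSION}/jai_imageio-${JAI_IMAGEIO_VERSION}.jar"
do
    echo "curl -fL -o \"\$SPARK_HOME/jars/$(basename "$jar")\" $REPO/$jar"
done
```

Bundling the same jars into geotools-wrapper instead would spare users these manual steps, which is the point of the question above.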
##########
docs/tutorial/raster.md:
##########
@@ -583,6 +583,44 @@ SELECT RS_AsPNG(raster)
Please refer to [Raster writer docs](../../api/sql/Raster-writer) for more details.
+## Collecting raster Dataframes and working with them locally in Python
Review Comment:
Can you add one more section to explain how to write a regular Python User
Defined Function (not a Pandas UDF) that works on the raster type? I understand
that the UDF cannot return a raster type directly since we only have a Python
deserializer, but with `RS_MakeRaster()` and a NumPy array we can still
construct the raster type. It is important to show this workflow. Maybe we can
show this in a separate Doc PR?
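To sketch what such a section might cover: the Spark-side wiring below (the UDF registration and the exact `RS_MakeRaster` signature) is hypothetical and untested, shown only in comments; the runnable part is the pure NumPy band transform that a regular (non-Pandas) UDF would wrap.

```python
import numpy as np

def brighten(band: np.ndarray, offset: float) -> np.ndarray:
    """Pure NumPy band transform a regular Python UDF would wrap;
    returns float64 pixels clipped to the 8-bit range [0, 255]."""
    return np.clip(band.astype(np.float64) + offset, 0.0, 255.0)

# Hypothetical Spark wiring (names and the RS_MakeRaster argument order
# are assumptions about the Sedona API, not verified):
#
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import ArrayType, DoubleType
#
#   @udf(returnType=ArrayType(DoubleType()))
#   def brighten_band(raster):
#       band = np.asarray(raster.as_numpy()[0])      # deserialize band 0
#       return brighten(band, 10.0).ravel().tolist() # plain list back to Spark
#
#   df.createOrReplaceTempView("rasters")
#   spark.sql("""
#       SELECT RS_MakeRaster(rast, 'D', brighten_band(rast)) AS rast
#       FROM rasters
#   """)

print(brighten(np.array([250.0, 100.0]), 10.0))  # [255. 110.]
```

The key point for the doc section: the UDF itself returns a plain array type, and `RS_MakeRaster()` rebuilds the raster on the SQL side, since the Python side only has a deserializer.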
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]