This is an automated email from the ASF dual-hosted git repository.
jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-sedona.git
The following commit(s) were added to refs/heads/master by this push:
new 3dabc4d Update the docs
3dabc4d is described below
commit 3dabc4dc6becbb73089f6fa859d0486f1e24298f
Author: Jia Yu <[email protected]>
AuthorDate: Thu Dec 31 12:40:26 2020 -0800
Update the docs
---
docs/api/sql/GeoSparkSQL-Overview.md | 10 +++++-----
docs/tutorial/geospark-core-python.md | 7 ++-----
docs/tutorial/sql.md | 6 +++---
3 files changed, 10 insertions(+), 13 deletions(-)
diff --git a/docs/api/sql/GeoSparkSQL-Overview.md b/docs/api/sql/GeoSparkSQL-Overview.md
index aebdb52..cf596be 100644
--- a/docs/api/sql/GeoSparkSQL-Overview.md
+++ b/docs/api/sql/GeoSparkSQL-Overview.md
@@ -8,20 +8,20 @@ var myDataFrame = sparkSession.sql("YOUR_SQL")
* Constructor: Construct a Geometry given an input string or coordinates
* Example: ST_GeomFromWKT (string). Create a Geometry from a WKT String.
- * Documentation: [Here](./GeoSparkSQL-Constructor)
+ * Documentation: [Here](../GeoSparkSQL-Constructor)
* Function: Execute a function on the given column or columns
* Example: ST_Distance (A, B). Given two Geometry A and B, return the Euclidean distance of A and B.
- * Documentation: [Here](./GeoSparkSQL-Function)
+ * Documentation: [Here](../GeoSparkSQL-Function)
* Aggregate function: Return the aggregated value on the given column
* Example: ST_Envelope_Aggr (Geometry column). Given a Geometry column, calculate the entire envelope boundary of this column.
- * Documentation: [Here](./GeoSparkSQL-AggregateFunction)
+ * Documentation: [Here](../GeoSparkSQL-AggregateFunction)
* Predicate: Execute a logic judgement on the given columns and return true or false
* Example: ST_Contains (A, B). Check if A fully contains B. Return "True" if yes, else return "False".
- * Documentation: [Here](./GeoSparkSQL-Predicate)
+ * Documentation: [Here](../GeoSparkSQL-Predicate)
Sedona also provides an Adapter to convert SpatialRDD <-> DataFrame. Please read [Adapter Scaladoc](../../javadoc/sql/org/apache/sedona/sql/utils/index.html)
-SedonaSQL supports SparkSQL query optimizer, documentation is [Here](./GeoSparkSQL-Optimizer)
+SedonaSQL supports SparkSQL query optimizer, documentation is [Here](../GeoSparkSQL-Optimizer)
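The function and predicate semantics listed above (Euclidean distance, envelope aggregation, containment) follow standard computational-geometry definitions. A small sketch of the same semantics in plain shapely (assuming the `shapely` Python package; these are not Sedona calls, just the equivalent geometry operations):

```python
from shapely.geometry import MultiPoint, Point, Polygon

# ST_Distance-style Euclidean distance between two geometries
a = Point(0.0, 0.0)
b = Point(3.0, 4.0)
print(a.distance(b))  # 5.0

# ST_Contains-style containment test
poly = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
print(poly.contains(Point(1, 1)))  # True

# ST_Envelope_Aggr-style bounding envelope over a set of points
print(MultiPoint([(0, 0), (3, 4)]).envelope.bounds)  # (0.0, 0.0, 3.0, 4.0)
```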
## Quick start
diff --git a/docs/tutorial/geospark-core-python.md b/docs/tutorial/geospark-core-python.md
index a42ffdc..53f1d3d 100644
--- a/docs/tutorial/geospark-core-python.md
+++ b/docs/tutorial/geospark-core-python.md
@@ -204,9 +204,7 @@ Besides the rectangle (Envelope) type range query window, Apache Sedona range qu
<li> LineString </li>
</br>
-The code to create a point is as follows:
-To create shapely geometries please follow official shapely <a href=""> documentation </a>
-
+To create shapely geometries please follow [Shapely official docs](https://shapely.readthedocs.io/en/stable/manual.html)
### Use spatial indexes
@@ -300,8 +298,7 @@ Besides the Point type, Apache Sedona KNN query center can be
<li> Polygon </li>
<li> LineString </li>
-To create Polygon or Linestring object please follow Shapely official <a href="https://shapely.readthedocs.io/en/stable/manual.html"> documentation </a>
-
+To create Polygon or Linestring object please follow [Shapely official docs](https://shapely.readthedocs.io/en/stable/manual.html)
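The Shapely docs linked in the changed lines cover the constructors in detail; for orientation, a minimal sketch of building the three geometry types the tutorial mentions (assuming the `shapely` Python package; the coordinates are arbitrary example values):

```python
from shapely.geometry import LineString, Point, Polygon

# A point, e.g. as a KNN query center or range query window
point = Point(52.5, 13.4)
print(point.wkt)  # POINT (52.5 13.4)

# A polygon from its exterior ring coordinates
polygon = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
print(polygon.area)  # 4.0

# A line string from an ordered coordinate sequence
line = LineString([(0, 0), (1, 1), (2, 1)])
print(line.length)  # ~2.414 (sqrt(2) + 1)
```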
### Use spatial indexes
To utilize a spatial index in a spatial KNN query, use the following code:
diff --git a/docs/tutorial/sql.md b/docs/tutorial/sql.md
index 43232f9..0939ef2 100644
--- a/docs/tutorial/sql.md
+++ b/docs/tutorial/sql.md
@@ -132,7 +132,7 @@ root
## Load Shapefile and GeoJSON
-Shapefile and GeoJSON must be loaded by SpatialRDD and converted to DataFrame using Adapter. Please read [Load SpatialRDD](rdd/#create-a-generic-spatialrdd) and [DataFrame <-> RDD](sql/#convert-between-dataframe-and-spatialrdd).
+Shapefile and GeoJSON must be loaded by SpatialRDD and converted to DataFrame using Adapter. Please read [Load SpatialRDD](../rdd/#create-a-generic-spatialrdd) and [DataFrame <-> RDD](#convert-between-dataframe-and-spatialrdd).
## Transform the Coordinate Reference System
@@ -261,7 +261,7 @@ Use SedonaSQL DataFrame-RDD Adapter to convert a DataFrame to an SpatialRDD. Ple
var spatialDf = Adapter.toDf(spatialRDD, sparkSession)
```
-All other attributes such as price and age will be also brought to the DataFrame as long as you specify ==carryOtherAttributes== (see [Read other attributes in an SpatialRDD](./rdd#read-other-attributes-in-an-spatialrdd)).
+All other attributes such as price and age will be also brought to the DataFrame as long as you specify ==carryOtherAttributes== (see [Read other attributes in an SpatialRDD](../rdd#read-other-attributes-in-an-spatialrdd)).
### SpatialPairRDD to DataFrame
@@ -271,4 +271,4 @@ PairRDD is the result of a spatial join query or distance join query. SedonaSQL
var joinResultDf = Adapter.toDf(joinResultPairRDD, sparkSession)
```
-All other attributes such as price and age will be also brought to the DataFrame as long as you specify ==carryOtherAttributes== (see [Read other attributes in an SpatialRDD](./rdd#read-other-attributes-in-an-spatialrdd)).
+All other attributes such as price and age will be also brought to the DataFrame as long as you specify ==carryOtherAttributes== (see [Read other attributes in an SpatialRDD](../rdd#read-other-attributes-in-an-spatialrdd)).