This is an automated email from the ASF dual-hosted git repository.

jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/sedona.git


The following commit(s) were added to refs/heads/master by this push:
     new caf090ddb [DOCS] Standardize whitespace in Markdown files (#1602)
caf090ddb is described below

commit caf090ddbd02c72f73bfba3d468a4e8817be8a21
Author: John Bampton <[email protected]>
AuthorDate: Tue Sep 24 21:40:40 2024 +1000

    [DOCS] Standardize whitespace in Markdown files (#1602)
---
 docs/api/sql/Optimizer.md         | 2 +-
 docs/community/contributor.md     | 4 ++--
 docs/community/publication.md     | 2 +-
 docs/community/release-manager.md | 2 +-
 docs/community/rule.md            | 2 +-
 docs/setup/databricks.md          | 2 +-
 docs/tutorial/raster.md           | 2 +-
 docs/tutorial/snowflake/sql.md    | 2 +-
 docs/tutorial/sql-pure-sql.md     | 2 +-
 docs/tutorial/zeppelin.md         | 2 +-
 10 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/docs/api/sql/Optimizer.md b/docs/api/sql/Optimizer.md
index f52258150..6c64c9371 100644
--- a/docs/api/sql/Optimizer.md
+++ b/docs/api/sql/Optimizer.md
@@ -284,7 +284,7 @@ GROUP BY (lcs_geom, rcs_geom)
 
 This also works for distance join. You first need to use `ST_Buffer(geometry, distance)` to wrap one of your original geometry column. If your original geometry column contains points, this `ST_Buffer` will make them become circles with a radius of `distance`.
 
-Since the coordinates are in the longitude and latitude system, so the unit of `distance` should be degree instead of meter or mile. You can get an approximation by performing `METER_DISTANCE/111000.0`, then filter out false-positives.  Note that this might lead to inaccurate results if your data is close to the poles or antimeridian.
+Since the coordinates are in the longitude and latitude system, so the unit of `distance` should be degree instead of meter or mile. You can get an approximation by performing `METER_DISTANCE/111000.0`, then filter out false-positives. Note that this might lead to inaccurate results if your data is close to the poles or antimeridian.
 
 In a nutshell, run this query first on the left table before Step 1. Please replace `METER_DISTANCE` with a meter distance. In Step 1, generate S2 IDs based on the `buffered_geom` column. Then run Step 2, 3, 4 on the original `geom` column.
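
A minimal sketch of the pre-Step-1 buffer query the paragraph above describes, in Spark SQL. The table name `lefts`, the columns `id` and `geom`, and the 1000-meter distance are hypothetical placeholders, not from the docs:

```sql
-- Buffer the left table's geometry before generating S2 IDs (Step 1).
-- 1000.0 m / 111000.0 ≈ 0.009 degrees: a rough equatorial approximation
-- that degrades near the poles or the antimeridian, as noted above.
SELECT id, geom, ST_Buffer(geom, 1000.0 / 111000.0) AS buffered_geom
FROM lefts
```

Step 1 would then generate the S2 IDs from `buffered_geom`, while Steps 2, 3, 4 keep using the original `geom` column, per the text above.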
 
diff --git a/docs/community/contributor.md b/docs/community/contributor.md
index 4eb2583a7..2c29be735 100644
--- a/docs/community/contributor.md
+++ b/docs/community/contributor.md
@@ -130,14 +130,14 @@ to guide the direction of the project.
 
 Being a committer does not require you to
 participate any more than you already do. It does
-tend to make one even more committed.  You will
+tend to make one even more committed. You will
 probably find that you spend more time here.
 
 Of course, you can decline and instead remain as a
 contributor, participating as you do now.
 
 A. This personal invitation is a chance for you to
-accept or decline in private.  Either way, please
+accept or decline in private. Either way, please
 let us know in reply to the [email protected]
 address only.
 
diff --git a/docs/community/publication.md b/docs/community/publication.md
index 02ee04c7a..b23871717 100644
--- a/docs/community/publication.md
+++ b/docs/community/publication.md
@@ -50,4 +50,4 @@ GeoSpark Perspective and Beyond"](https://jiayuasu.github.io/files/paper/GeoSpar
 
 ### A Tutorial about Geospatial Data Management in Spark
 
-["Geospatial Data Management in Apache Spark: A 
Tutorial"](https://jiayuasu.github.io/files/talk/jia-icde19-tutorial.pdf) 
(Tutorial) Jia Yu and Mohamed Sarwat.  In Proceedings of the International 
Conference on Data Engineering, ICDE, 2019
+["Geospatial Data Management in Apache Spark: A 
Tutorial"](https://jiayuasu.github.io/files/talk/jia-icde19-tutorial.pdf) 
(Tutorial) Jia Yu and Mohamed Sarwat. In Proceedings of the International 
Conference on Data Engineering, ICDE, 2019
diff --git a/docs/community/release-manager.md b/docs/community/release-manager.md
index 1702f2167..882d869f3 100644
--- a/docs/community/release-manager.md
+++ b/docs/community/release-manager.md
@@ -19,7 +19,7 @@ If your Maven (`mvn --version`) points to other JDK versions, you must change it
 JAVA_HOME="${JAVA_HOME:-$(/usr/libexec/java_home)}" exec 
"/usr/local/Cellar/maven/3.6.3/libexec/bin/mvn" "$@"
 ```
 
-4. Change `JAVA_HOME:-$(/usr/libexec/java_home)}` to `JAVA_HOME:-$(/usr/libexec/java_home -v 1.8)}`.  The resulting content will be like this:
+4. Change `JAVA_HOME:-$(/usr/libexec/java_home)}` to `JAVA_HOME:-$(/usr/libexec/java_home -v 1.8)}`. The resulting content will be like this:
 
 ```
 #!/bin/bash
diff --git a/docs/community/rule.md b/docs/community/rule.md
index 59446fe3c..77db497fc 100644
--- a/docs/community/rule.md
+++ b/docs/community/rule.md
@@ -13,7 +13,7 @@ It is important to confirm that your contribution is acceptable. You should crea
 Code contributions should include the following:
 
 * Detailed documentations on classes and methods.
-* Unit Tests to demonstrate code correctness and allow this to be maintained going forward.  In the case of bug fixes the unit test should demonstrate the bug in the absence of the fix (if any).  Unit Tests can be JUnit test or Scala test. Some Sedona functions need to be tested in both Scala and Java.
+* Unit Tests to demonstrate code correctness and allow this to be maintained going forward. In the case of bug fixes the unit test should demonstrate the bug in the absence of the fix (if any). Unit Tests can be JUnit test or Scala test. Some Sedona functions need to be tested in both Scala and Java.
 * Updates on corresponding Sedona documentation if necessary.
 
 Code contributions must include an Apache 2.0 license header at the top of each file.
diff --git a/docs/setup/databricks.md b/docs/setup/databricks.md
index 875b590f0..0d32de281 100644
--- a/docs/setup/databricks.md
+++ b/docs/setup/databricks.md
@@ -71,7 +71,7 @@ Of course, you can also do the steps above manually.
 ### Create an init script
 
 !!!warning
-    Starting from December 2023, Databricks has disabled all DBFS based init script (/dbfs/XXX/<script-name>.sh).  So you will have to store the init script from a workspace level (`/Workspace/Users/<user-name>/<script-name>.sh`) or Unity Catalog volume (`/Volumes/<catalog>/<schema>/<volume>/<path-to-script>/<script-name>.sh`). Please see [Databricks init scripts](https://docs.databricks.com/en/init-scripts/cluster-scoped.html#configure-a-cluster-scoped-init-script-using-the-ui) for more [...]
+    Starting from December 2023, Databricks has disabled all DBFS based init script (/dbfs/XXX/<script-name>.sh). So you will have to store the init script from a workspace level (`/Workspace/Users/<user-name>/<script-name>.sh`) or Unity Catalog volume (`/Volumes/<catalog>/<schema>/<volume>/<path-to-script>/<script-name>.sh`). Please see [Databricks init scripts](https://docs.databricks.com/en/init-scripts/cluster-scoped.html#configure-a-cluster-scoped-init-script-using-the-ui) for more  [...]
 
 !!!note
     If you are creating a Shared cluster, you won't be able to use init scripts and jars stored under `Workspace`. Please instead store them in `Volumes`. The overall process should be the same.
diff --git a/docs/tutorial/raster.md b/docs/tutorial/raster.md
index 67069fc3c..053a64175 100644
--- a/docs/tutorial/raster.md
+++ b/docs/tutorial/raster.md
@@ -242,7 +242,7 @@ The output will look like this:
 For multiple raster data files use the following code to load the data [from path](https://github.com/apache/sedona/blob/0eae42576c2588fe278f75cef3b17fee600eac90/spark/common/src/test/resources/raster/) and create raw DataFrame.
 
 !!!note
-    The above code works too for loading multiple raster data files.  if the raster files are in separate directories and the option also makes sure that only `.tif` or `.tiff` files are being loaded.
+    The above code works too for loading multiple raster data files. If the raster files are in separate directories and the option also makes sure that only `.tif` or `.tiff` files are being loaded.
 
 === "Scala"
     ```scala
diff --git a/docs/tutorial/snowflake/sql.md b/docs/tutorial/snowflake/sql.md
index 1f57a6aa6..20824e094 100644
--- a/docs/tutorial/snowflake/sql.md
+++ b/docs/tutorial/snowflake/sql.md
@@ -346,7 +346,7 @@ GROUP BY (lcs_geom, rcs_geom)
 
 This also works for distance join. You first need to use `ST_Buffer(geometry, distance)` to wrap one of your original geometry column. If your original geometry column contains points, this `ST_Buffer` will make them become circles with a radius of `distance`.
 
-Since the coordinates are in the longitude and latitude system, so the unit of `distance` should be degree instead of meter or mile. You can get an approximation by performing `METER_DISTANCE/111000.0`, then filter out false-positives.  Note that this might lead to inaccurate results if your data is close to the poles or antimeridian.
+Since the coordinates are in the longitude and latitude system, so the unit of `distance` should be degree instead of meter or mile. You can get an approximation by performing `METER_DISTANCE/111000.0`, then filter out false-positives. Note that this might lead to inaccurate results if your data is close to the poles or antimeridian.
 
 In a nutshell, run this query first on the left table before Step 1. Please replace `METER_DISTANCE` with a meter distance. In Step 1, generate S2 IDs based on the `buffered_geom` column. Then run Step 2, 3, 4 on the original `geom` column.
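
The Snowflake flavor of the docs follows the same recipe; a hedged sketch, again with a hypothetical `lefts` table and a 500-meter distance (500.0 / 111000.0 ≈ 0.0045 degrees), and assuming Sedona's functions are callable as `ST_Buffer` (qualify with the schema you registered Sedona's UDFs under if your setup requires it):

```sql
-- Hypothetical pre-step: wrap the geometry in an approximate 500 m buffer.
SELECT id, geom, ST_Buffer(geom, 500.0 / 111000.0) AS buffered_geom
FROM lefts;
```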
 
diff --git a/docs/tutorial/sql-pure-sql.md b/docs/tutorial/sql-pure-sql.md
index 833d8e385..b78ab8361 100644
--- a/docs/tutorial/sql-pure-sql.md
+++ b/docs/tutorial/sql-pure-sql.md
@@ -31,7 +31,7 @@ This will register all Sedona types, functions and optimizations in SedonaSQL an
 
 ## Load data
 
-Let use data from `examples/sql`.  To load data from CSV file we need to execute two commands:
+Let use data from `examples/sql`. To load data from CSV file we need to execute two commands:
 
 Use the following code to load the data and create a raw DataFrame:
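
The hunk cuts off before the two commands themselves, so as a hedged illustration only (not necessarily the file's actual snippet), loading a headerless CSV in pure Spark SQL could look like this, with the file name `examples/sql/points.csv` and the x/y column positions assumed:

```sql
-- Hypothetical command 1: expose the raw CSV through a temp view.
CREATE OR REPLACE TEMP VIEW pointraw AS
SELECT * FROM csv.`examples/sql/points.csv`;

-- Hypothetical command 2: build a geometry column from the raw fields.
CREATE OR REPLACE TEMP VIEW pointdf AS
SELECT ST_Point(CAST(_c0 AS DOUBLE), CAST(_c1 AS DOUBLE)) AS geom
FROM pointraw;
```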
 
diff --git a/docs/tutorial/zeppelin.md b/docs/tutorial/zeppelin.md
index 17e5f1abb..cd2a8e9a6 100644
--- a/docs/tutorial/zeppelin.md
+++ b/docs/tutorial/zeppelin.md
@@ -1,4 +1,4 @@
-Sedona provides a Helium visualization plugin tailored for [Apache Zeppelin](https://zeppelin.apache.org/). This finally bridges the gap between Sedona and Zeppelin.  Please read [Install Sedona-Zeppelin](../setup/zeppelin.md) to learn how to install this plugin in Zeppelin.
+Sedona provides a Helium visualization plugin tailored for [Apache Zeppelin](https://zeppelin.apache.org/). This finally bridges the gap between Sedona and Zeppelin. Please read [Install Sedona-Zeppelin](../setup/zeppelin.md) to learn how to install this plugin in Zeppelin.
 
 Sedona-Zeppelin equips two approaches to visualize spatial data in Zeppelin. The first approach uses Zeppelin to plot all spatial objects on the map. The second one leverages SedonaViz to generate map images and overlay them on maps.
 
