This is an automated email from the ASF dual-hosted git repository.
jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-sedona.git
The following commit(s) were added to refs/heads/master by this push:
new cc3efb2 Change the project structure according to the voting result (#505)
cc3efb2 is described below
commit cc3efb282412f368cb4fe6e66e64d66545cc5ffd
Author: Jia Yu <[email protected]>
AuthorDate: Tue Jan 19 18:34:29 2021 -0800
Change the project structure according to the voting result (#505)
* Change the artifact name according to ASF requirement
* Generate MD5, sha1, sha512 checksum by default
* Update artifact name again
* Update the parent pom name and publish it
* Update the parent pom name and publish it
* Ignore idea venv
* Only publish parent pom for spark 3.0 + Scala 2.12
* Update the template project
* Remove GeoTools binaries from Python Adapter binaries
* Update test base
* Disable Geotools dependencies by default. Users have to manually compile
the source code to package geotools jars.
* Update docs
---
.github/workflows/python.yml | 4 +-
DISCLAIMER-WIP | 1 -
core/pom.xml | 2 +-
...oSpark-All-Modules-Maven-Central-Coordinates.md | 50 ++++++++++++++--------
docs/download/compile.md | 11 ++++-
docs/download/overview.md | 17 +++++---
docs/download/scalashell.md | 10 ++---
examples/rdd-colocation-mining/build.sbt | 2 +-
examples/sql/build.sbt | 2 +-
examples/viz/build.sbt | 2 +-
pom.xml | 22 +++++++++-
python-adapter/pom.xml | 21 ++++++++-
python/.gitignore | 2 +
python/sedona/core/jvm/config.py | 2 +-
sql/pom.xml | 2 +-
viz/pom.xml | 2 +-
16 files changed, 107 insertions(+), 45 deletions(-)
diff --git a/.github/workflows/python.yml b/.github/workflows/python.yml
index 1f980f3..26451ae 100644
--- a/.github/workflows/python.yml
+++ b/.github/workflows/python.yml
@@ -49,7 +49,7 @@ jobs:
- env:
SPARK_VERSION: ${{ matrix.spark }}
SCALA_VERSION: ${{ matrix.scala }}
- run: mvn -q clean install -DskipTests -Dscala=${SCALA_VERSION:0:4} -Dspark=${SPARK_VERSION:0:3}
+ run: mvn -q clean install -DskipTests -Dscala=${SCALA_VERSION:0:4} -Dspark=${SPARK_VERSION:0:3} -Dgeotools
- env:
SPARK_VERSION: ${{ matrix.spark }}
run: wget https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz
@@ -70,4 +70,4 @@ jobs:
run: find python-adapter/target -name sedona-* -exec cp {} spark-${SPARK_VERSION}-bin-hadoop2.7/jars/ \;
- env:
SPARK_VERSION: ${{ matrix.spark }}
- run: (export SPARK_HOME=$PWD/spark-${SPARK_VERSION}-bin-hadoop2.7;export PYTHONPATH=$SPARK_HOME/python;cd python;pipenv run pytest tests)
+ run: (export SPARK_HOME=$PWD/spark-${SPARK_VERSION}-bin-hadoop2.7;export PYTHONPATH=$SPARK_HOME/python;cd python;pipenv run pytest tests)
\ No newline at end of file
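The workflow steps above derive short "compat" versions with Bash substring expansion before passing them to Maven. A minimal standalone sketch of that expansion (the version values here are illustrative, not taken from the matrix):

```shell
# Bash ${var:offset:length} expansion, as used in the workflow's run: steps.
SPARK_VERSION="3.0.1"
SCALA_VERSION="2.12.10"
SPARK_COMPAT="${SPARK_VERSION:0:3}"   # first 3 characters -> "3.0"
SCALA_COMPAT="${SCALA_VERSION:0:4}"   # first 4 characters -> "2.12"
echo "-Dscala=${SCALA_COMPAT} -Dspark=${SPARK_COMPAT}"
```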
diff --git a/DISCLAIMER-WIP b/DISCLAIMER-WIP
index 04f8d0a..0df680a 100644
--- a/DISCLAIMER-WIP
+++ b/DISCLAIMER-WIP
@@ -3,6 +3,5 @@ Apache Sedona is an effort undergoing incubation at The Apache Software Foundati
Some of the incubating project’s releases may not be fully compliant with ASF
policy. For example, releases may have incomplete or un-reviewed licensing
conditions. What follows is a list of known issues the project is currently
aware of (note that this list, by definition, is likely to be incomplete):
1. The content of GeoJSONWriterNew is directly copied from jts2geojson library
(MIT License). This is to fix the incompatibility between jts2geojson 1.4.3 and
JTS 1.17+. GeoJSONWriterNew will be removed in the future if the developer of
jts2geojson fixes this issue.
-2. To use Sedona Python, users have to use the Sedona Python Adapter jar which uses GeoTools binaries under LGPL license.
If you are planning to incorporate this work into your product/project, please
be aware that you will need to conduct a thorough licensing review to determine
the overall implications of including this work. For the current status of this
project through the Apache Incubator visit:
https://incubator.apache.org/projects/sedona.html
\ No newline at end of file
diff --git a/core/pom.xml b/core/pom.xml
index 07bd0c5..8fae46b 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-parent</artifactId>
- <version>1.0.1-incubator-SNAPSHOT</version>
+ <version>1.0.0-incubating-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<artifactId>sedona-core-${spark.compat.version}_${scala.compat.version}</artifactId>
diff --git a/docs/download/GeoSpark-All-Modules-Maven-Central-Coordinates.md b/docs/download/GeoSpark-All-Modules-Maven-Central-Coordinates.md
index 92deb99..64a4ae4 100644
--- a/docs/download/GeoSpark-All-Modules-Maven-Central-Coordinates.md
+++ b/docs/download/GeoSpark-All-Modules-Maven-Central-Coordinates.md
@@ -1,9 +1,9 @@
# Maven Coordinates
-Sedona has four modules: `sedona-core, sedona-sql, sedona-viz, sedona-python-adapter`. If you use Scala and Java API, you only need to use `sedona-core, sedona-sql, sedona-viz`. If you use Python API, you only need to use `sedona-python-adapter`
+Sedona has four modules: `sedona-core, sedona-sql, sedona-viz, sedona-python-adapter`. If you use Scala and Java API, you only need to use `sedona-core, sedona-sql, sedona-viz`. If you use Python API, you only need to use `sedona-python-adapter`.
!!!note
-	Sedona Scala and Java API also requires additional dependencies to work (see below). Python API does not need them.
+	Sedona Scala, Java and Python API also requires additional dependencies to work (see below).
## Spark 3.0 + Scala 2.12
@@ -12,17 +12,17 @@ Scala and Java API only
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-core-3.0_2.12</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-sql-3.0_2.12</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-viz-3.0_2.12</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
```
@@ -31,7 +31,7 @@ Python API only
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-python-adapter-3.0_2.12</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
```
@@ -42,17 +42,17 @@ Scala and Java API only
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-core-2.4_2.11</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-sql-2.4_2.11</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-viz-2.4_2.11</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
```
@@ -61,7 +61,7 @@ Python API only
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-python-adapter-2.4_2.11</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
```
@@ -72,17 +72,17 @@ Scala and Java API only
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-core-2.4_2.12</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-sql-2.4_2.12</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-viz-2.4_2.12</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
```
@@ -91,16 +91,20 @@ Python API only
<dependency>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-python-adapter-2.4_2.12</artifactId>
- <version>1.0.0-incubator</version>
+ <version>1.0.0-incubating</version>
</dependency>
```
## Additional dependencies
-To avoid conflicts in downstream projects and solve the copyright issue, Sedona almost does not package any dependencies in the release jars. Therefore, you need to add the following jars in your `build.sbt` or `pom.xml` if you use Sedona Scala and Java API.
+To avoid conflicts in downstream projects and solve the copyright issue, Sedona almost does not package any dependencies in the release jars. Therefore, you need to add the following jars in your `build.sbt` or `pom.xml` if you use Sedona Scala and Java API. You may need to compile Sedona source code to get GeoTools jars if you use Sedona Python.
### LocationTech JTS-core 1.18.0+
+For Scala / Java API: `required` under Eclipse Public License 2.0 ("EPL") or the Eclipse Distribution License 1.0 (a BSD Style License)
+
+For Python API: `not required, already included`
+
```xml
<!-- https://mvnrepository.com/artifact/org.locationtech.jts/jts-core -->
<dependency>
@@ -112,7 +116,9 @@ To avoid conflicts in downstream projects and solve the copyright issue, Sedona
### jts2geojson 0.14.3+
-This is only needed if you read GeoJSON files. Under MIT License
+For Scala / Java API: `required` if you read GeoJSON files. Under MIT License
+
+For Python API: `not required, already included`
```xml
<!-- https://mvnrepository.com/artifact/org.wololo/jts2geojson -->
@@ -125,7 +131,11 @@ This is only needed if you read GeoJSON files. Under MIT License
### GeoTools 24.0+
-This is only needed if you want to do CRS transformation. Under GNU Lesser General Public License (LGPL) license.
+For Scala / Java API: `required` if you want to use CRS transformation and ShapefileReader.
+
+For Python API: `required` if you want to use CRS transformation and ShapefileReader. You have to compile the Sedona source code by yourself. See [Install Sedona Python](/download/overview/#install-sedona-python)
+
+Under GNU Lesser General Public License (LGPL) license
```xml
<!-- https://mvnrepository.com/artifact/org.geotools/gt-main -->
@@ -183,7 +193,9 @@ resolvers +=
### SernetCDF 0.1.0
-This is only needed if you want to read HDF files. Under Apache License 2.0.
+For Scala / Java API: `required` if you want to read HDF files.
+
+Under Apache License 2.0.
```xml
<!-- https://mvnrepository.com/artifact/org.datasyslab/sernetcdf -->
@@ -196,7 +208,7 @@ This is only needed if you want to read HDF files. Under Apache License 2.0.
```
## SNAPSHOT versions
-Sometimes Sedona has a SNAPSHOT version for the upcoming release. It follows the same naming conversion but has "SNAPSHOT" as suffix in the version. For example, `1.0.0-incubator-SNAPSHOT`
+Sometimes Sedona has a SNAPSHOT version for the upcoming release. It follows the same naming conversion but has "SNAPSHOT" as suffix in the version. For example, `1.0.0-incubating-SNAPSHOT`
In order to download SNAPSHOTs, you need to add the following repositories in
your POM.XML or build.sbt
### build.sbt
diff --git a/docs/download/compile.md b/docs/download/compile.md
index 7907a94..be59af2 100644
--- a/docs/download/compile.md
+++ b/docs/download/compile.md
@@ -15,9 +15,13 @@ mvn clean install -DskipTests
```
This command will first delete the old binary files and compile all modules.
This compilation will skip the unit tests. To compile a single module, please
make sure you are in the folder of that module. Then enter the same command.
-!!!note
+!!!warning
By default, this command will compile Sedona with Spark 3.0 and Scala
2.12
+!!!tip
+	To get the Sedona Python-adapter jar with all GeoTools jars included, simply append `-Dgeotools` option. The command is like this: `mvn clean install -DskipTests -Dgeotools`
+
+
To run unit tests, just simply remove `-DskipTests` option. The command is
like this:
```
mvn clean install
@@ -41,6 +45,9 @@ mvn clean install -DskipTests -Dscala=2.11 -Dspark=2.4
mvn clean install -DskipTests -Dscala=2.12 -Dspark=2.4
```
+!!!tip
+	To get the Sedona Python-adapter jar with all GeoTools jars included, simply append `-Dgeotools` option. The command is like this: `mvn clean install -DskipTests -Dscala=2.12 -Dspark=3.0 -Dgeotools`
+
### Download staged jars
Sedona uses GitHub action to automatically generate jars per commit. You can
go
[here](https://github.com/apache/incubator-sedona/actions?query=workflow%3A%22Scala+and+Java+build%22)
and download the jars by clicking the commit's ==Artifacts== tag.
@@ -54,7 +61,7 @@ For example,
export SPARK_HOME=$PWD/spark-3.0.1-bin-hadoop2.7
export PYTHONPATH=$SPARK_HOME/python
```
-2. Compile the Sedona Scala and Java code and then copy the ==sedona-python-adapter-xxx.jar== to ==SPARK_HOME/jars/== folder
+2. Compile the Sedona Scala and Java code with `-Dgeotools` and then copy the ==sedona-python-adapter-xxx.jar== to ==SPARK_HOME/jars/== folder.
```
cp python-adapter/target/sedona-python-adapter-xxx.jar SPARK_HOME/jars/
```
diff --git a/docs/download/overview.md b/docs/download/overview.md
index d93917d..d10e9b3 100644
--- a/docs/download/overview.md
+++ b/docs/download/overview.md
@@ -68,24 +68,27 @@ python3 setup.py install
### Prepare python-adapter jar
-Sedona Python needs one additional jar file call `sedona-python-adapter-3.0_2.12-1.0.0-incubator.jar` to work properly. Please make sure you use the correct version for Spark and Scala.
+Sedona Python needs one additional jar file called `sedona-python-adapter` to work properly. Please make sure you use the correct version for Spark and Scala. For Spark 3.0 + Scala 2.12, it is called `sedona-python-adapter-3.0_2.12-1.0.0-incubating.jar`
You can get it using one of the following methods:
-* Compile from the source within main project directory and copy it (in `target` folder) to SPARK_HOME/jars/ folder ([more details](/download/compile/#compile-scala-and-java-source-code))
+1. Compile from the source within main project directory and copy it (in `python-adapter/target` folder) to SPARK_HOME/jars/ folder ([more details](/download/compile/#compile-scala-and-java-source-code))
-* Download from [GitHub release](https://github.com/apache/incubator-sedona/releases) and copy it to SPARK_HOME/jars/ folder
-* Call the [Maven Central coordinate](../GeoSpark-All-Modules-Maven-Central-Coordinates) in your python program. For example, in PySparkSQL
+2. Download from [GitHub release](https://github.com/apache/incubator-sedona/releases) and copy it to SPARK_HOME/jars/ folder
+3. Call the [Maven Central coordinate](../GeoSpark-All-Modules-Maven-Central-Coordinates) in your python program. For example, in PySparkSQL
```python
spark = SparkSession.\
builder.\
appName('appName').\
config("spark.serializer", KryoSerializer.getName).\
config("spark.kryo.registrator", SedonaKryoRegistrator.getName) .\
- config('spark.jars.packages', 'org.apache.sedona:sedona-python-adapter-3.0_2.12:1.0.0-incubator').\
+ config('spark.jars.packages', 'org.apache.sedona:sedona-python-adapter-3.0_2.12:1.0.0-incubating').\
getOrCreate()
```
+!!!warning
+	If you are going to use Sedona CRS transformation and ShapefileReader functions, you have to use Method 1. Because these functions internally use GeoTools libraries which are under LGPL license, Apache Sedona binary release cannot include them.
+
### Setup environment variables
If you manually copy the python-adapter jar to `SPARK_HOME/jars/` folder, you
need to setup two environment variables
@@ -100,4 +103,6 @@ export SPARK_HOME=~/Downloads/spark-3.0.1-bin-hadoop2.7
```bash
export PYTHONPATH=$SPARK_HOME/python
-```
\ No newline at end of file
+```
+
+You can then play with [Sedona Python Jupyter notebook](/tutorial/jupyter-notebook/)
\ No newline at end of file
diff --git a/docs/download/scalashell.md b/docs/download/scalashell.md
index 2a0ee85..b16c62f 100644
--- a/docs/download/scalashell.md
+++ b/docs/download/scalashell.md
@@ -12,12 +12,12 @@ Spark distribution provides an interactive Scala shell that allows a user to exe
* Local mode: test Sedona without setting up a cluster
```
-./bin/spark-shell --packages org.apache.sedona:sedona-core-3.0_2.12:1.0.0-incubator,org.apache.sedona:sedona-sql-3.0_2.12:1.0.0-incubator,org.apache.sedona:sedona-viz-3.0_2.12:1.0.0-incubator
+./bin/spark-shell --packages org.apache.sedona:sedona-core-3.0_2.12:1.0.0-incubating,org.apache.sedona:sedona-sql-3.0_2.12:1.0.0-incubating,org.apache.sedona:sedona-viz-3.0_2.12:1.0.0-incubating
```
* Cluster mode: you need to specify Spark Master IP
```
-./bin/spark-shell --master spark://localhost:7077 --packages org.apache.sedona:sedona-core-3.0_2.12:1.0.0-incubator,org.apache.sedona:sedona-sql-3.0_2.12:1.0.0-incubator,org.apache.sedona:sedona-viz-3.0_2.12:1.0.0-incubator
+./bin/spark-shell --master spark://localhost:7077 --packages org.apache.sedona:sedona-core-3.0_2.12:1.0.0-incubating,org.apache.sedona:sedona-sql-3.0_2.12:1.0.0-incubating,org.apache.sedona:sedona-viz-3.0_2.12:1.0.0-incubating
```
## Download Sedona jar manually
@@ -33,10 +33,10 @@ Spark distribution provides an interactive Scala shell that allows a user to exe
* Local mode: test Sedona without setting up a cluster
```
-./bin/spark-shell --jars sedona-core-3.0_2.12-1.0.0-incubator.jar,sedona-sql-3.0_2.12-1.0.0-incubator.jar,sedona-viz-3.0_2.12-1.0.0-incubator.jar
+./bin/spark-shell --jars sedona-core-3.0_2.12-1.0.0-incubating.jar,sedona-sql-3.0_2.12-1.0.0-incubating.jar,sedona-viz-3.0_2.12-1.0.0-incubating.jar
```
* Cluster mode: you need to specify Spark Master IP
```
-./bin/spark-shell --master spark://localhost:7077 --jars sedona-core-3.0_2.12-1.0.0-incubator.jar,sedona-sql-3.0_2.12-1.0.0-incubator.jar,sedona-viz-3.0_2.12-1.0.0-incubator.jar
+./bin/spark-shell --master spark://localhost:7077 --jars sedona-core-3.0_2.12-1.0.0-incubating.jar,sedona-sql-3.0_2.12-1.0.0-incubating.jar,sedona-viz-3.0_2.12-1.0.0-incubating.jar
```
diff --git a/examples/rdd-colocation-mining/build.sbt b/examples/rdd-colocation-mining/build.sbt
index af27ab4..f211553 100644
--- a/examples/rdd-colocation-mining/build.sbt
+++ b/examples/rdd-colocation-mining/build.sbt
@@ -20,7 +20,7 @@ val SparkCompatibleVersion = "3.0"
val HadoopVersion = "2.7.2"
-val SedonaVersion = "1.0.0-incubator-SNAPSHOT"
+val SedonaVersion = "1.0.0-incubating-SNAPSHOT"
val ScalaCompatibleVersion = "2.12"
diff --git a/examples/sql/build.sbt b/examples/sql/build.sbt
index ccff320..99d24ff 100644
--- a/examples/sql/build.sbt
+++ b/examples/sql/build.sbt
@@ -20,7 +20,7 @@ val SparkCompatibleVersion = "3.0"
val HadoopVersion = "2.7.2"
-val SedonaVersion = "1.0.0-incubator-SNAPSHOT"
+val SedonaVersion = "1.0.0-incubating-SNAPSHOT"
val ScalaCompatibleVersion = "2.12"
diff --git a/examples/viz/build.sbt b/examples/viz/build.sbt
index 3047d19..9ff347e 100644
--- a/examples/viz/build.sbt
+++ b/examples/viz/build.sbt
@@ -21,7 +21,7 @@ val SparkCompatibleVersion = "3.0"
val HadoopVersion = "2.7.2"
-val SedonaVersion = "1.0.0-incubator-SNAPSHOT"
+val SedonaVersion = "1.0.0-incubating-SNAPSHOT"
val ScalaCompatibleVersion = "2.12"
diff --git a/pom.xml b/pom.xml
index b283935..f4e758b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -26,7 +26,7 @@
</parent>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-parent</artifactId>
- <version>1.0.1-incubator-SNAPSHOT</version>
+ <version>1.0.0-incubating-SNAPSHOT</version>
<packaging>pom</packaging>
<name>sedona-parent</name>
<url>http://sedona.apache.org/</url>
@@ -69,7 +69,6 @@
<dependency.scope>provided</dependency.scope>
<jts.version>1.18.0</jts.version>
<jts2geojson.version>0.14.3</jts2geojson.version>
- <maven.deploy.skip>true</maven.deploy.skip>
</properties>
<dependencies>
@@ -305,6 +304,23 @@
</pluginManagement>
<plugins>
<plugin>
+ <groupId>net.nicoulaj.maven.plugins</groupId>
+ <artifactId>checksum-maven-plugin</artifactId>
+ <version>1.9</version>
+ <executions>
+ <execution>
+ <goals>
+ <goal>artifacts</goal>
+ </goals>
+ </execution>
+ </executions>
+ <configuration>
+ <algorithms>
+ <algorithm>SHA-512</algorithm>
+ </algorithms>
+ </configuration>
+ </plugin>
+ <plugin>
<!-- see http://davidb.github.com/scala-maven-plugin -->
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
@@ -482,6 +498,7 @@
<spark.compat.version>3.0</spark.compat.version>
<spark.converter.version>spark3</spark.converter.version>
<jackson.version>2.10.0</jackson.version>
+ <maven.deploy.skip>false</maven.deploy.skip>
</properties>
</profile>
<profile>
@@ -498,6 +515,7 @@
<spark.compat.version>2.4</spark.compat.version>
<spark.converter.version>spark2</spark.converter.version>
<jackson.version>2.6.7</jackson.version>
+ <maven.deploy.skip>true</maven.deploy.skip>
</properties>
</profile>
<profile>
diff --git a/python-adapter/pom.xml b/python-adapter/pom.xml
index c15ae84..7ce0834 100644
--- a/python-adapter/pom.xml
+++ b/python-adapter/pom.xml
@@ -22,7 +22,7 @@
<parent>
<artifactId>sedona-parent</artifactId>
<groupId>org.apache.sedona</groupId>
- <version>1.0.1-incubator-SNAPSHOT</version>
+ <version>1.0.0-incubating-SNAPSHOT</version>
</parent>
<artifactId>sedona-python-adapter-${spark.compat.version}_${scala.compat.version}</artifactId>
@@ -33,6 +33,7 @@
<properties>
<maven.deploy.skip>false</maven.deploy.skip>
+ <geotools.scope>provided</geotools.scope>
</properties>
<dependencies>
@@ -61,6 +62,7 @@
<groupId>org.geotools</groupId>
<artifactId>gt-main</artifactId>
<version>${geotools.version}</version>
+ <scope>${geotools.scope}</scope>
<exclusions>
<exclusion>
<groupId>org.locationtech.jts</groupId>
@@ -77,6 +79,7 @@
<groupId>org.geotools</groupId>
<artifactId>gt-referencing</artifactId>
<version>${geotools.version}</version>
+ <scope>${geotools.scope}</scope>
<exclusions>
<exclusion>
<groupId>com.fasterxml.jackson.core</groupId>
@@ -89,6 +92,7 @@
<groupId>org.geotools</groupId>
<artifactId>gt-epsg-hsql</artifactId>
<version>${geotools.version}</version>
+ <scope>${geotools.scope}</scope>
<exclusions>
<exclusion>
<groupId>com.fasterxml.jackson.core</groupId>
@@ -101,6 +105,7 @@
<groupId>org.geotools</groupId>
<artifactId>gt-epsg-extension</artifactId>
<version>${geotools.version}</version>
+ <scope>${geotools.scope}</scope>
<exclusions>
<exclusion>
<groupId>com.fasterxml.jackson.core</groupId>
@@ -137,4 +142,18 @@
</plugin>
</plugins>
</build>
+ <profiles>
+ <profile>
+ <id>geotools</id>
+ <activation>
+ <property>
+ <name>geotools</name>
+ </property>
+ <activeByDefault>false</activeByDefault>
+ </activation>
+ <properties>
+ <geotools.scope>compile</geotools.scope>
+ </properties>
+ </profile>
+ </profiles>
</project>
diff --git a/python/.gitignore b/python/.gitignore
new file mode 100644
index 0000000..cfbb80a
--- /dev/null
+++ b/python/.gitignore
@@ -0,0 +1,2 @@
+/.idea/
+/venv/
diff --git a/python/sedona/core/jvm/config.py b/python/sedona/core/jvm/config.py
index a265a97..4139907 100644
--- a/python/sedona/core/jvm/config.py
+++ b/python/sedona/core/jvm/config.py
@@ -109,7 +109,7 @@ class SedonaMeta:
@classmethod
def get_version(cls, spark_jars: str) -> Optional[str]:
# Find Spark version, Scala version and Sedona version.
- versions = findall(r"sedona-python-adapter-([^,\n]+)_([^,\n]+)-([^,\n]+)-incubator", spark_jars)
+ versions = findall(r"sedona-python-adapter-([^,\n]+)_([^,\n]+)-([^,\n]+)-incubating", spark_jars)
try:
sedona_version = versions[0][2]
except IndexError:
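The regex updated in `config.py` above extracts the Spark compat version, Scala compat version, and Sedona version from the jar list. A standalone sketch of the same pattern, run against an illustrative jar path (the path itself is made up for demonstration):

```python
# Minimal sketch of the version-detection regex changed above.
from re import findall

spark_jars = "file:/spark/jars/sedona-python-adapter-3.0_2.12-1.0.0-incubating.jar"
versions = findall(
    r"sedona-python-adapter-([^,\n]+)_([^,\n]+)-([^,\n]+)-incubating",
    spark_jars,
)
# One tuple per match: (spark compat, scala compat, sedona version)
print(versions)  # [('3.0', '2.12', '1.0.0')]
```

Because the pattern now anchors on the `-incubating` suffix, jar names with the old `-incubator` suffix would no longer match, which is consistent with the renaming done throughout this commit.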
diff --git a/sql/pom.xml b/sql/pom.xml
index 68271d5..41be229 100644
--- a/sql/pom.xml
+++ b/sql/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-parent</artifactId>
- <version>1.0.1-incubator-SNAPSHOT</version>
+ <version>1.0.0-incubating-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<artifactId>sedona-sql-${spark.compat.version}_${scala.compat.version}</artifactId>
diff --git a/viz/pom.xml b/viz/pom.xml
index 519362f..4457589 100644
--- a/viz/pom.xml
+++ b/viz/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.sedona</groupId>
<artifactId>sedona-parent</artifactId>
- <version>1.0.1-incubator-SNAPSHOT</version>
+ <version>1.0.0-incubating-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<artifactId>sedona-viz-${spark.compat.version}_${scala.compat.version}</artifactId>
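The checksum-maven-plugin added to the parent POM above generates SHA-512 files for release artifacts (per the "Generate MD5, sha1, sha512 checksum by default" commit bullet). Verifying such a checksum locally is a two-command exercise; a minimal sketch using a stand-in file (the file name and contents are illustrative):

```shell
# Create a stand-in "artifact", record its SHA-512, then verify it the way
# a release checker would verify a downloaded .sha512 file.
printf 'example artifact bytes' > artifact.jar
sha512sum artifact.jar > artifact.jar.sha512
sha512sum -c artifact.jar.sha512
```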