This is an automated email from the ASF dual-hosted git repository.

eyal pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/datafu.git


The following commit(s) were added to refs/heads/master by this push:
     new 705fda3  DATAFU-166 Drop support for Spark 2.1.x and advance our version to 1.7.0-SNAPSHOT
705fda3 is described below

commit 705fda39156b1a3c48754d7115be207339544790
Author: Eyal Allweil <e...@apache.org>
AuthorDate: Wed Aug 3 20:02:16 2022 +0300

    DATAFU-166 Drop support for Spark 2.1.x and advance our version to 1.7.0-SNAPSHOT
---
 datafu-spark/README.md               | 6 +++---
 datafu-spark/build_and_test_spark.sh | 4 ++--
 gradle.properties                    | 3 +--
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/datafu-spark/README.md b/datafu-spark/README.md
index b72221e..e8125e5 100644
--- a/datafu-spark/README.md
+++ b/datafu-spark/README.md
@@ -12,7 +12,7 @@ Here are some examples of things you can do with it:
 
 * [Count distinct up to](https://github.com/apache/datafu/blob/spark-tmp/datafu-spark/src/main/scala/datafu/spark/SparkUDAFs.scala#L224) - an efficient implementation when you just want to verify that a certain minimum of distinct rows appear in a table
 
-It has been tested on Spark releases from 2.1.0 to 2.4.3, using Scala 2.10, 2.11 and 2.12. You can check if your Spark/Scala version combination has been tested by looking [here.](https://github.com/apache/datafu/blob/spark-tmp/datafu-spark/build_and_test_spark.sh#L20)
+It has been tested on Spark releases from 2.2.0 to 2.4.3, using Scala 2.11 and 2.12. You can check if your Spark/Scala version combination has been tested by looking [here.](https://github.com/apache/datafu/blob/master/datafu-spark/build_and_test_spark.sh#L20)
 
 -----------
 
@@ -21,9 +21,9 @@ In order to call the datafu-spark API's from Pyspark, you can do the following (
 First, call pyspark with the following parameters
 
 ```bash
-export PYTHONPATH=datafu-spark_2.11_2.3.0-1.5.0-SNAPSHOT.jar
+export PYTHONPATH=datafu-spark_2.11_2.3.0-1.6.1.jar
 
-pyspark --jars datafu-spark_2.11_2.3.0-1.5.0-SNAPSHOT.jar --conf spark.executorEnv.PYTHONPATH=datafu-spark_2.11_2.3.0-1.5.0-SNAPSHOT.jar
+pyspark --jars datafu-spark_2.11_2.3.0-1.6.1.jar --conf spark.executorEnv.PYTHONPATH=datafu-spark_2.11_2.3.0-1.6.1.jar
 ```
 
 The following is an example of calling the Spark version of the datafu _dedup_ method
diff --git a/datafu-spark/build_and_test_spark.sh b/datafu-spark/build_and_test_spark.sh
index 703818f..744ecd3 100755
--- a/datafu-spark/build_and_test_spark.sh
+++ b/datafu-spark/build_and_test_spark.sh
@@ -17,10 +17,10 @@
 
 #!/bin/bash
 
-export SPARK_VERSIONS_FOR_SCALA_211="2.1.0 2.1.1 2.1.2 2.1.3 2.2.0 2.2.1 2.2.2 2.3.0 2.3.1 2.3.2 2.4.0 2.4.1 2.4.2 2.4.3"
+export SPARK_VERSIONS_FOR_SCALA_211="2.2.0 2.2.1 2.2.2 2.3.0 2.3.1 2.3.2 2.4.0 2.4.1 2.4.2 2.4.3"
 export SPARK_VERSIONS_FOR_SCALA_212="2.4.0 2.4.1 2.4.2 2.4.3"
 
-export LATEST_SPARK_VERSIONS_FOR_SCALA_211="2.1.3 2.2.2 2.3.2 2.4.3"
+export LATEST_SPARK_VERSIONS_FOR_SCALA_211="2.2.2 2.3.2 2.4.3"
 export LATEST_SPARK_VERSIONS_FOR_SCALA_212="2.4.3"
 
 STARTTIME=$(date +%s)
diff --git a/gradle.properties b/gradle.properties
index 89e1a3f..b520f86 100644
--- a/gradle.properties
+++ b/gradle.properties
@@ -16,9 +16,8 @@
 # under the License.
 
 group=org.apache.datafu
-version=1.6.1
+version=1.7.0-SNAPSHOT
 gradleVersion=5.6.4
-org.gradle.jvmargs="-XX:MaxPermSize=512m"
 scalaCompatVersion=2.11
 sparkCompatVersion=2.4
 sparkVersion=2.4.3
