svn commit: r31939 - in /dev/spark/2.3.4-SNAPSHOT-2019_01_12_23_40-1979712-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2019-01-12 Thread pwendell
Author: pwendell
Date: Sun Jan 13 07:54:08 2019
New Revision: 31939

Log:
Apache Spark 2.3.4-SNAPSHOT-2019_01_12_23_40-1979712 docs


[This commit notification would consist of 1443 parts,
which exceeds the limit of 50, so it was shortened to this summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-2.3 updated: [SPARK-26120][TESTS][SS][SPARKR] Fix a streaming query leak in Structured Streaming R tests

2019-01-12 Thread felixcheung
This is an automated email from the ASF dual-hosted git repository.

felixcheung pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new 1979712  [SPARK-26120][TESTS][SS][SPARKR] Fix a streaming query leak 
in Structured Streaming R tests
1979712 is described below

commit 19797124f1e169138258c8c113874ec6ffedbe3d
Author: Shixiong Zhu 
AuthorDate: Wed Nov 21 09:31:12 2018 +0800

[SPARK-26120][TESTS][SS][SPARKR] Fix a streaming query leak in Structured 
Streaming R tests

## What changes were proposed in this pull request?

Stop the streaming query started in the test `Specify a schema by using a
DDL-formatted string when reading` so it does not keep running and emitting noisy logs after the test finishes.

## How was this patch tested?

Jenkins

Closes #23089 from zsxwing/SPARK-26120.

Authored-by: Shixiong Zhu 
Signed-off-by: hyukjinkwon 
(cherry picked from commit 4b7f7ef5007c2c8a5090f22c6e08927e9f9a407b)
Signed-off-by: Felix Cheung 
---
 R/pkg/tests/fulltests/test_streaming.R | 1 +
 1 file changed, 1 insertion(+)

diff --git a/R/pkg/tests/fulltests/test_streaming.R 
b/R/pkg/tests/fulltests/test_streaming.R
index bfb1a04..6f0d2ae 100644
--- a/R/pkg/tests/fulltests/test_streaming.R
+++ b/R/pkg/tests/fulltests/test_streaming.R
@@ -127,6 +127,7 @@ test_that("Specify a schema by using a DDL-formatted string 
when reading", {
   expect_false(awaitTermination(q, 5 * 1000))
   callJMethod(q@ssq, "processAllAvailable")
   expect_equal(head(sql("SELECT count(*) FROM people3"))[[1]], 3)
+  stopQuery(q)
 
   expect_error(read.stream(path = parquetPath, schema = "name stri"),
"DataType stri is not supported.")


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



svn commit: r31938 - in /dev/spark/3.0.0-SNAPSHOT-2019_01_12_21_38-4ff2b94-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2019-01-12 Thread pwendell
Author: pwendell
Date: Sun Jan 13 05:50:51 2019
New Revision: 31938

Log:
Apache Spark 3.0.0-SNAPSHOT-2019_01_12_21_38-4ff2b94 docs


[This commit notification would consist of 1775 parts,
which exceeds the limit of 50, so it was shortened to this summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



svn commit: r31937 - in /dev/spark/2.3.4-SNAPSHOT-2019_01_12_19_37-3137dca-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2019-01-12 Thread pwendell
Author: pwendell
Date: Sun Jan 13 03:51:05 2019
New Revision: 31937

Log:
Apache Spark 2.3.4-SNAPSHOT-2019_01_12_19_37-3137dca docs


[This commit notification would consist of 1443 parts,
which exceeds the limit of 50, so it was shortened to this summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-26503][CORE][DOC][FOLLOWUP] Get rid of spark.sql.legacy.timeParser.enabled

2019-01-12 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 4ff2b94  [SPARK-26503][CORE][DOC][FOLLOWUP] Get rid of 
spark.sql.legacy.timeParser.enabled
4ff2b94 is described below

commit 4ff2b94a7c827f9cc3e6c79fe090568d2743c0ca
Author: Maxim Gekk 
AuthorDate: Sun Jan 13 11:20:22 2019 +0800

[SPARK-26503][CORE][DOC][FOLLOWUP] Get rid of 
spark.sql.legacy.timeParser.enabled

## What changes were proposed in this pull request?

The SQL config `spark.sql.legacy.timeParser.enabled` was removed by
https://github.com/apache/spark/pull/23495. This PR cleans up the remaining
references to it in the SQL migration guide and in the comment for `UnixTimestamp`.
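
As a rough illustration (not part of this patch; only standard SparkR calls are used), the stricter java.time-based parsing described in the migration guide behaves like this:

    # Sketch only: mirrors the example in the migration guide bullet below.
    # The value lacks the literal 'T' required by the pattern, so Spark 3.0
    # returns NA, while Spark 2.4 and earlier could still parse it through the
    # SimpleDateFormat fallback mechanisms.
    df <- createDataFrame(data.frame(ts = "2018-12-08 10:39:21.123",
                                     stringsAsFactors = FALSE))
    head(select(df, to_timestamp(df$ts, "yyyy-MM-dd'T'HH:mm:ss.SSS")))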

Closes #23529 from MaxGekk/get-rid-off-legacy-parser-followup.

Authored-by: Maxim Gekk 
Signed-off-by: Hyukjin Kwon 
---
 docs/sql-migration-guide-upgrade.md   | 4 ++--
 .../apache/spark/sql/catalyst/expressions/datetimeExpressions.scala   | 4 +---
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/docs/sql-migration-guide-upgrade.md 
b/docs/sql-migration-guide-upgrade.md
index a2d782e..fce0b9a 100644
--- a/docs/sql-migration-guide-upgrade.md
+++ b/docs/sql-migration-guide-upgrade.md
@@ -33,13 +33,13 @@ displayTitle: Spark SQL Upgrading Guide
 
   - In Spark version 2.4 and earlier, the `SET` command works without any 
warnings even if the specified key is for `SparkConf` entries and it has no 
effect because the command does not update `SparkConf`, but the behavior might 
confuse users. Since 3.0, the command fails if a `SparkConf` key is used. You 
can disable such a check by setting 
`spark.sql.legacy.setCommandRejectsSparkCoreConfs` to `false`.
 
-  - Since Spark 3.0, CSV/JSON datasources use java.time API for parsing and 
generating CSV/JSON content. In Spark version 2.4 and earlier, 
java.text.SimpleDateFormat is used for the same purpose with fallbacks to the 
parsing mechanisms of Spark 2.0 and 1.x. For example, `2018-12-08 10:39:21.123` 
with the pattern `yyyy-MM-dd'T'HH:mm:ss.SSS` cannot be parsed since Spark 3.0 
because the timestamp does not match to the pattern but it can be parsed by 
earlier Spark versions due to a fallback  [...]
+  - Since Spark 3.0, CSV/JSON datasources use java.time API for parsing and 
generating CSV/JSON content. In Spark version 2.4 and earlier, 
java.text.SimpleDateFormat is used for the same purpose with fallbacks to the 
parsing mechanisms of Spark 2.0 and 1.x. For example, `2018-12-08 10:39:21.123` 
with the pattern `yyyy-MM-dd'T'HH:mm:ss.SSS` cannot be parsed since Spark 3.0 
because the timestamp does not match to the pattern but it can be parsed by 
earlier Spark versions due to a fallback  [...]
 
   - In Spark version 2.4 and earlier, CSV datasource converts a malformed CSV 
string to a row with all `null`s in the PERMISSIVE mode. Since Spark 3.0, the 
returned row can contain non-`null` fields if some of CSV column values were 
parsed and converted to desired types successfully.
 
   - In Spark version 2.4 and earlier, JSON datasource and JSON functions like 
`from_json` convert a bad JSON record to a row with all `null`s in the 
PERMISSIVE mode when specified schema is `StructType`. Since Spark 3.0, the 
returned row can contain non-`null` fields if some of JSON column values were 
parsed and converted to desired types successfully.
 
-  - Since Spark 3.0, the `unix_timestamp`, `date_format`, `to_unix_timestamp`, 
`from_unixtime`, `to_date`, `to_timestamp` functions use java.time API for 
parsing and formatting dates/timestamps from/to strings by using ISO chronology 
(https://docs.oracle.com/javase/8/docs/api/java/time/chrono/IsoChronology.html) 
based on Proleptic Gregorian calendar. In Spark version 2.4 and earlier, 
java.text.SimpleDateFormat and java.util.GregorianCalendar (hybrid calendar 
that supports both the Julian [...]
+  - Since Spark 3.0, the `unix_timestamp`, `date_format`, `to_unix_timestamp`, 
`from_unixtime`, `to_date`, `to_timestamp` functions use java.time API for 
parsing and formatting dates/timestamps from/to strings by using ISO chronology 
(https://docs.oracle.com/javase/8/docs/api/java/time/chrono/IsoChronology.html) 
based on Proleptic Gregorian calendar. In Spark version 2.4 and earlier, 
java.text.SimpleDateFormat and java.util.GregorianCalendar (hybrid calendar 
that supports both the Julian [...]
 
   - Since Spark 3.0, JSON datasource and JSON function `schema_of_json` infer 
TimestampType from string values if they match to the pattern defined by the 
JSON option `timestampFormat`. Set JSON option `inferTimestamp` to `false` to 
disable such type inferring.
 
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
 

[spark] 01/01: Preparing development version 2.3.4-SNAPSHOT

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git

commit 3137dca44b69a0b224842dd727c96c6b5bb0430d
Author: Takeshi Yamamuro 
AuthorDate: Sun Jan 13 01:57:05 2019 +

Preparing development version 2.3.4-SNAPSHOT
---
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 6ec4966..a82446e 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.3
+Version: 2.3.4
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 6a8cd4f..612a1b8 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>

diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 6010b6e..5547e97 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 8b5d3c8..119dde2 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index dd27a24..dba5224 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index aded5e7d..56902a3 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/sketch/pom.xml b/common/sketch/pom.xml
index a50f612..5302d95 100644
--- a/common/sketch/pom.xml
+++ b/common/sketch/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/tags/pom.xml b/common/tags/pom.xml
index 8112ca4..232ebfa 100644
--- a/common/tags/pom.xml
+++ b/common/tags/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/unsafe/pom.xml b/common/unsafe/pom.xml
index 0d5f61f..f0baa2a 100644
--- a/common/unsafe/pom.xml
+++ b/common/unsafe/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+

[spark] branch branch-2.3 updated (01511e4 -> 3137dca)

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 01511e4  [SPARK-25572][SPARKR] test only if not cran
 add 2e01a70  Preparing Spark release v2.3.3-rc1
 new 3137dca  Preparing development version 2.3.4-SNAPSHOT

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] tag v2.3.3-rc1 created (now 2e01a70)

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to tag v2.3.3-rc1
in repository https://gitbox.apache.org/repos/asf/spark.git.


  at 2e01a70  (commit)
This tag includes the following new commits:

 new 2e01a70  Preparing Spark release v2.3.3-rc1

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] 01/01: Preparing Spark release v2.3.3-rc1

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to tag v2.3.3-rc1
in repository https://gitbox.apache.org/repos/asf/spark.git

commit 2e01a70bfac7aedfd5992d49e13a9f8f6a92d8a2
Author: Takeshi Yamamuro 
AuthorDate: Sun Jan 13 01:56:48 2019 +

Preparing Spark release v2.3.3-rc1
---
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 2 +-
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 40 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/assembly/pom.xml b/assembly/pom.xml
index f8b15cc..6a8cd4f 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../pom.xml</relativePath>
   </parent>

diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index e412a47..6010b6e 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index d8f9a3d..8b5d3c8 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index a1a4f87..dd27a24 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index e650978..aded5e7d 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/sketch/pom.xml b/common/sketch/pom.xml
index 350e3cb..a50f612 100644
--- a/common/sketch/pom.xml
+++ b/common/sketch/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/tags/pom.xml b/common/tags/pom.xml
index e7fea41..8112ca4 100644
--- a/common/tags/pom.xml
+++ b/common/tags/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/unsafe/pom.xml b/common/unsafe/pom.xml
index 601cc5d..0d5f61f 100644
--- a/common/unsafe/pom.xml
+++ b/common/unsafe/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/core/pom.xml b/core/pom.xml
index 2a7e644..930128d 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
diff --git a/docs/_config.yml b/docs/_config.yml
index 7629f5f..8e9c3b5 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -14,7 +14,7 @@ include:
 
 # These allow 

[spark] branch branch-2.3 updated (d397348 -> 01511e4)

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git.


 discard d397348  [SPARK-25572][SPARKR] test only if not cran
 discard a9a1bc7  [SPARK-26010][R] fix vignette eval with Java 11
 discard e46b0ed  Preparing development version 2.3.4-SNAPSHOT
 discard 0e3d5fd  Preparing Spark release v2.3.3-rc1
 new 20b7490  [SPARK-26010][R] fix vignette eval with Java 11
 new 01511e4  [SPARK-25572][SPARKR] test only if not cran

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (d397348)
\
 N -- N -- N   refs/heads/branch-2.3 (01511e4)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] 02/02: [SPARK-25572][SPARKR] test only if not cran

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git

commit 01511e479013c56d70fe8ffa805ecbd66591b57e
Author: Felix Cheung 
AuthorDate: Sat Sep 29 14:48:32 2018 -0700

[SPARK-25572][SPARKR] test only if not cran

## What changes were proposed in this pull request?

CRAN doesn't seem to respect the declared system requirements when running tests - we 
have seen cases where SparkR tests are run on Java 10, on which Spark unfortunately 
does not start. For 2.4, let's attempt to skip all tests in that case.

## How was this patch tested?

manual, jenkins, appveyor

Author: Felix Cheung 

Closes #22589 from felixcheung/ralltests.

(cherry picked from commit f4b138082ff91be74b0f5bbe19cdb90dd9e5f131)
Signed-off-by: Takeshi Yamamuro 
---
 R/pkg/tests/run-all.R | 83 +++
 1 file changed, 44 insertions(+), 39 deletions(-)

diff --git a/R/pkg/tests/run-all.R b/R/pkg/tests/run-all.R
index 94d7518..1e96418 100644
--- a/R/pkg/tests/run-all.R
+++ b/R/pkg/tests/run-all.R
@@ -18,50 +18,55 @@
 library(testthat)
 library(SparkR)
 
-# Turn all warnings into errors
-options("warn" = 2)
+# SPARK-25572
+if (identical(Sys.getenv("NOT_CRAN"), "true")) {
 
-if (.Platform$OS.type == "windows") {
-  Sys.setenv(TZ = "GMT")
-}
+  # Turn all warnings into errors
+  options("warn" = 2)
 
-# Setup global test environment
-# Install Spark first to set SPARK_HOME
+  if (.Platform$OS.type == "windows") {
+Sys.setenv(TZ = "GMT")
+  }
 
-# NOTE(shivaram): We set overwrite to handle any old tar.gz files or 
directories left behind on
-# CRAN machines. For Jenkins we should already have SPARK_HOME set.
-install.spark(overwrite = TRUE)
+  # Setup global test environment
+  # Install Spark first to set SPARK_HOME
 
-sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
-sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
-invisible(lapply(sparkRWhitelistSQLDirs,
- function(x) { unlink(file.path(sparkRDir, x), recursive = 
TRUE, force = TRUE)}))
-sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
+  # NOTE(shivaram): We set overwrite to handle any old tar.gz files or 
directories left behind on
+  # CRAN machines. For Jenkins we should already have SPARK_HOME set.
+  install.spark(overwrite = TRUE)
 
-sparkRTestMaster <- "local[1]"
-sparkRTestConfig <- list()
-if (identical(Sys.getenv("NOT_CRAN"), "true")) {
-  sparkRTestMaster <- ""
-} else {
-  # Disable hsperfdata on CRAN
-  old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
-  Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
-  tmpDir <- tempdir()
-  tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
-  sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
-   spark.executor.extraJavaOptions = tmpArg)
-}
+  sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
+  sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
+  invisible(lapply(sparkRWhitelistSQLDirs,
+   function(x) { unlink(file.path(sparkRDir, x), recursive = 
TRUE, force = TRUE)}))
+  sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
 
-test_package("SparkR")
+  sparkRTestMaster <- "local[1]"
+  sparkRTestConfig <- list()
+  if (identical(Sys.getenv("NOT_CRAN"), "true")) {
+sparkRTestMaster <- ""
+  } else {
+# Disable hsperfdata on CRAN
+old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
+Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
+tmpDir <- tempdir()
+tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
+sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
+ spark.executor.extraJavaOptions = tmpArg)
+  }
 
-if (identical(Sys.getenv("NOT_CRAN"), "true")) {
-  # set random seed for predictable results. mostly for base's sample() in 
tree and classification
-  set.seed(42)
-  # for testthat 1.0.2 later, change reporter from "summary" to 
default_reporter()
-  testthat:::run_tests("SparkR",
-   file.path(sparkRDir, "pkg", "tests", "fulltests"),
-   NULL,
-   "summary")
-}
+  test_package("SparkR")
+
+  if (identical(Sys.getenv("NOT_CRAN"), "true")) {
+# set random seed for predictable results. mostly for base's sample() in 
tree and classification
+set.seed(42)
+# for testthat 1.0.2 later, change reporter from "summary" to 
default_reporter()
+testthat:::run_tests("SparkR",
+ file.path(sparkRDir, "pkg", "tests", "fulltests"),
+ NULL,
+ "summary")
+  }
 
-SparkR:::uninstallDownloadedSpark()
+  SparkR:::uninstallDownloadedSpark()
+
+}
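
A condensed sketch of the resulting structure of R/pkg/tests/run-all.R (most of the setup above is omitted; NOT_CRAN is assumed to be set by CI such as Jenkins or AppVeyor and unset on CRAN check machines):

    library(testthat)
    library(SparkR)

    # SPARK-25572: run the SparkR test suite only outside of CRAN checks,
    # since CRAN machines may use a Java version that Spark cannot start on.
    if (identical(Sys.getenv("NOT_CRAN"), "true")) {
      install.spark(overwrite = TRUE)
      test_package("SparkR")
      SparkR:::uninstallDownloadedSpark()
    }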


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional 

[spark] 01/02: [SPARK-26010][R] fix vignette eval with Java 11

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git

commit 20b749021bacaa2906775944e43597ccf37af62b
Author: Felix Cheung 
AuthorDate: Mon Nov 12 19:03:30 2018 -0800

[SPARK-26010][R] fix vignette eval with Java 11

## What changes were proposed in this pull request?

Changes in the vignette only, to disable eval when the Java version is not supported.

## How was this patch tested?

Jenkins

Author: Felix Cheung 

Closes #23007 from felixcheung/rjavavervig.

(cherry picked from commit 88c82627267a9731b2438f0cc28dd656eb3dc834)
Signed-off-by: Felix Cheung 
---
 R/pkg/vignettes/sparkr-vignettes.Rmd | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/R/pkg/vignettes/sparkr-vignettes.Rmd 
b/R/pkg/vignettes/sparkr-vignettes.Rmd
index d4713de..70970bd 100644
--- a/R/pkg/vignettes/sparkr-vignettes.Rmd
+++ b/R/pkg/vignettes/sparkr-vignettes.Rmd
@@ -57,6 +57,20 @@ First, let's load and attach the package.
 library(SparkR)
 ```
 
+```{r, include=FALSE}
+# disable eval if java version not supported
+override_eval <- tryCatch(!is.numeric(SparkR:::checkJavaVersion()),
+  error = function(e) { TRUE },
+  warning = function(e) { TRUE })
+
+if (override_eval) {
+  opts_hooks$set(eval = function(options) {
+options$eval = FALSE
+options
+  })
+}
+```
+
 `SparkSession` is the entry point into SparkR which connects your R program to 
a Spark cluster. You can create a `SparkSession` using `sparkR.session` and 
pass in options such as the application name, any Spark packages depended on, 
etc.
 
 We use default settings in which it runs in local mode. It auto downloads 
Spark package in the background if no previous installation is found. For more 
details about setup, see [Spark Session](#SetupSparkSession).
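
For context, a hedged sketch of what the added chunk does (behavior inferred from the code above: SparkR:::checkJavaVersion() is expected to return a numeric Java version on a supported JVM and to error or warn otherwise):

    # If the Java version cannot be confirmed as supported, turn off evaluation
    # for every subsequent chunk; the vignette still renders, but runs nothing.
    override_eval <- tryCatch(!is.numeric(SparkR:::checkJavaVersion()),
                              error = function(e) TRUE,
                              warning = function(w) TRUE)
    if (override_eval) {
      knitr::opts_hooks$set(eval = function(options) {
        options$eval <- FALSE
        options
      })
    }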


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] tag v2.3.3-rc1 deleted (was 0e3d5fd)

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to tag v2.3.3-rc1
in repository https://gitbox.apache.org/repos/asf/spark.git.


*** WARNING: tag v2.3.3-rc1 was deleted! ***

 was 0e3d5fd  Preparing Spark release v2.3.3-rc1

The revisions that were on this tag are still contained in
other references; therefore, this change does not discard any commits
from the repository.


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] felixcheung closed pull request #168: add checker note in release process

2019-01-12 Thread GitBox
felixcheung closed pull request #168: add checker note in release process
URL: https://github.com/apache/spark-website/pull/168
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/release-process.md b/release-process.md
index 14c9c1667..31d395c9c 100644
--- a/release-process.md
+++ b/release-process.md
@@ -130,7 +130,7 @@ that looks something like `[RESULT] [VOTE]...`.
 **THIS STEP IS IRREVERSIBLE so make sure you selected the correct staging 
repository. Once you
 move the artifacts into the release folder, they cannot be removed.**
 
-After the vote passes, to upload the binaries to Apache mirrors, you move the 
binaries from dev directory (this should be where they are voted) to release 
directory. This "moving" is the only way you can add stuff to the actual 
release directory.
+After the vote passes, to upload the binaries to Apache mirrors, you move the 
binaries from dev directory (this should be where they are voted) to release 
directory. This "moving" is the only way you can add stuff to the actual 
release directory. (Note: only PMC can move to release directory)
 
 ```
 # Move the sub-directory in "dev" to the
@@ -146,7 +146,7 @@ curl "https://dist.apache.org/repos/dist/dev/spark/KEYS" > svn-spark/KEYS
 
 Verify that the resources are present in <a href="https://www.apache.org/dist/spark/">https://www.apache.org/dist/spark/</a>.
 It may take a while for them to be visible. This will be mirrored throughout 
 the Apache network.
-There are a few remaining steps.
+Check the release checker result of the release at <a href="https://checker.apache.org/projs/spark.html">https://checker.apache.org/projs/spark.html</a>.
 
 
 For Maven Central Repository, you can Release from the <a href="https://repository.apache.org/">Apache Nexus Repository Manager</a>. This 
is already populated by the `release-build.sh publish-release` step. Log in, 
open Staging Repositories, find the one voted on (eg. orgapachespark-1257 for 
https://repository.apache.org/content/repositories/orgapachespark-1257/), 
select and click Release and confirm. If successful, it should show up under 
https://repository.apache.org/content/repositories/releases/org/apache/spark/spark-core_2.11/2.2.1/
diff --git a/site/release-process.html b/site/release-process.html
index 2a4b82b54..65f8fb498 100644
--- a/site/release-process.html
+++ b/site/release-process.html
@@ -334,7 +334,7 @@ Finalize the Release
 THIS STEP IS IRREVERSIBLE so make sure you selected the correct 
staging repository. Once you
 move the artifacts into the release folder, they cannot be 
removed.
 
-After the vote passes, to upload the binaries to Apache mirrors, you move 
the binaries from dev directory (this should be where they are voted) to 
release directory. This moving is the only way you can add stuff 
to the actual release directory.
+After the vote passes, to upload the binaries to Apache mirrors, you move 
the binaries from dev directory (this should be where they are voted) to 
release directory. This moving is the only way you can add stuff 
to the actual release directory. (Note: only PMC can move to release 
directory)
 
 # Move the sub-directory in "dev" to the
 # corresponding directory in "release"
@@ -349,7 +349,7 @@ Finalize the Release
 
 Verify that the resources are present in <a href="https://www.apache.org/dist/spark/">https://www.apache.org/dist/spark/</a>.
 It may take a while for them to be visible. This will be mirrored throughout 
 the Apache network.
-There are a few remaining steps.
+Check the release checker result of the release at <a href="https://checker.apache.org/projs/spark.html">https://checker.apache.org/projs/spark.html</a>.
 
 For Maven Central Repository, you can Release from the <a href="https://repository.apache.org/">Apache Nexus Repository Manager</a>. This 
is already populated by the release-build.sh publish-release step. 
Log in, open Staging Repositories, find the one voted on (eg. 
orgapachespark-1257 for 
https://repository.apache.org/content/repositories/orgapachespark-1257/), 
select and click Release and confirm. If successful, it should show up under 
https://repository.apache.org/content/repositories/releases/org/apache/spark/spark-core_2.11/2.2.1/
 and the same under 
https://repository.apache.org/content/groups/maven-staging-group/org/apache/spark/spark-core_2.11/2.2.1/
 (look for the correct release version). After some time this will be 
syncd to <a href="https://search.maven.org/">Maven Central</a> 
automatically.


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[spark-website] branch asf-site updated: add checker note (#168)

2019-01-12 Thread felixcheung
This is an automated email from the ASF dual-hosted git repository.

felixcheung pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 70a6071  add checker note (#168)
70a6071 is described below

commit 70a60716fed85bfcfce4188b26faeb63ecbe2b79
Author: Felix Cheung 
AuthorDate: Sat Jan 12 20:07:14 2019 -0500

add checker note (#168)

Add a note to check the release checker after a release.
---
 release-process.md| 4 ++--
 site/release-process.html | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/release-process.md b/release-process.md
index 14c9c16..31d395c 100644
--- a/release-process.md
+++ b/release-process.md
@@ -130,7 +130,7 @@ that looks something like `[RESULT] [VOTE]...`.
 **THIS STEP IS IRREVERSIBLE so make sure you selected the correct staging 
repository. Once you
 move the artifacts into the release folder, they cannot be removed.**
 
-After the vote passes, to upload the binaries to Apache mirrors, you move the 
binaries from dev directory (this should be where they are voted) to release 
directory. This "moving" is the only way you can add stuff to the actual 
release directory.
+After the vote passes, to upload the binaries to Apache mirrors, you move the 
binaries from dev directory (this should be where they are voted) to release 
directory. This "moving" is the only way you can add stuff to the actual 
release directory. (Note: only PMC can move to release directory)
 
 ```
 # Move the sub-directory in "dev" to the
@@ -146,7 +146,7 @@ curl "https://dist.apache.org/repos/dist/dev/spark/KEYS" > svn-spark/KEYS
 
 Verify that the resources are present in <a href="https://www.apache.org/dist/spark/">https://www.apache.org/dist/spark/</a>.
 It may take a while for them to be visible. This will be mirrored throughout 
 the Apache network.
-There are a few remaining steps.
+Check the release checker result of the release at <a href="https://checker.apache.org/projs/spark.html">https://checker.apache.org/projs/spark.html</a>.
 
 
 For Maven Central Repository, you can Release from the <a href="https://repository.apache.org/">Apache Nexus Repository Manager</a>. This 
is already populated by the `release-build.sh publish-release` step. Log in, 
open Staging Repositories, find the one voted on (eg. orgapachespark-1257 for 
https://repository.apache.org/content/repositories/orgapachespark-1257/), 
select and click Release and confirm. If successful, it should show up under 
https://repository.apache.org/content/repositori [...]
diff --git a/site/release-process.html b/site/release-process.html
index 2a4b82b..65f8fb4 100644
--- a/site/release-process.html
+++ b/site/release-process.html
@@ -334,7 +334,7 @@ that looks something like [RESULT] 
[VOTE]
 THIS STEP IS IRREVERSIBLE so make sure you selected the correct 
staging repository. Once you
 move the artifacts into the release folder, they cannot be 
removed.
 
-After the vote passes, to upload the binaries to Apache mirrors, you move 
the binaries from dev directory (this should be where they are voted) to 
release directory. This moving is the only way you can add stuff 
to the actual release directory.
+After the vote passes, to upload the binaries to Apache mirrors, you move 
the binaries from dev directory (this should be where they are voted) to 
release directory. This moving is the only way you can add stuff 
to the actual release directory. (Note: only PMC can move to release 
directory)
 
 # Move the sub-directory in "dev" to the
 # corresponding directory in "release"
@@ -349,7 +349,7 @@ curl "https://dist.apache.org/repos/dist/dev/spark/KEYS" > svn-spark/KEYS
 
 Verify that the resources are present in <a href="https://www.apache.org/dist/spark/">https://www.apache.org/dist/spark/</a>.
 It may take a while for them to be visible. This will be mirrored throughout 
 the Apache network.
-There are a few remaining steps.
+Check the release checker result of the release at <a href="https://checker.apache.org/projs/spark.html">https://checker.apache.org/projs/spark.html</a>.
 
 For Maven Central Repository, you can Release from the <a href="https://repository.apache.org/">Apache Nexus Repository Manager</a>. This 
is already populated by the release-build.sh publish-release step. 
Log in, open Staging Repositories, find the one voted on (eg. 
orgapachespark-1257 for 
https://repository.apache.org/content/repositories/orgapachespark-1257/), 
select and click Release and confirm. If successful, it should show up under 
https://repository.apache.org/cont [...]
 and the same under 
https://repository.apache.org/content/groups/maven-staging-group/org/apache/spark/spark-core_2.11/2.2.1/
 (look for the correct release version). After some time this will be 
syncd to <a href="https://search.maven.org/">Maven Central</a> 
automatically.


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional 

[spark] branch branch-2.3 updated: [SPARK-25572][SPARKR] test only if not cran

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new d397348  [SPARK-25572][SPARKR] test only if not cran
d397348 is described below

commit d397348b7bec20743f738694a135e4b67947fd99
Author: Felix Cheung 
AuthorDate: Sat Sep 29 14:48:32 2018 -0700

[SPARK-25572][SPARKR] test only if not cran

## What changes were proposed in this pull request?

CRAN doesn't seem to respect the declared system requirements when running tests - we 
have seen cases where SparkR tests are run on Java 10, on which Spark unfortunately 
does not start. For 2.4, let's attempt to skip all tests in that case.

## How was this patch tested?

manual, jenkins, appveyor

Author: Felix Cheung 

Closes #22589 from felixcheung/ralltests.

(cherry picked from commit f4b138082ff91be74b0f5bbe19cdb90dd9e5f131)
Signed-off-by: Takeshi Yamamuro 
---
 R/pkg/tests/run-all.R | 83 +++
 1 file changed, 44 insertions(+), 39 deletions(-)

diff --git a/R/pkg/tests/run-all.R b/R/pkg/tests/run-all.R
index 94d7518..1e96418 100644
--- a/R/pkg/tests/run-all.R
+++ b/R/pkg/tests/run-all.R
@@ -18,50 +18,55 @@
 library(testthat)
 library(SparkR)
 
-# Turn all warnings into errors
-options("warn" = 2)
+# SPARK-25572
+if (identical(Sys.getenv("NOT_CRAN"), "true")) {
 
-if (.Platform$OS.type == "windows") {
-  Sys.setenv(TZ = "GMT")
-}
+  # Turn all warnings into errors
+  options("warn" = 2)
 
-# Setup global test environment
-# Install Spark first to set SPARK_HOME
+  if (.Platform$OS.type == "windows") {
+Sys.setenv(TZ = "GMT")
+  }
 
-# NOTE(shivaram): We set overwrite to handle any old tar.gz files or 
directories left behind on
-# CRAN machines. For Jenkins we should already have SPARK_HOME set.
-install.spark(overwrite = TRUE)
+  # Setup global test environment
+  # Install Spark first to set SPARK_HOME
 
-sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
-sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
-invisible(lapply(sparkRWhitelistSQLDirs,
- function(x) { unlink(file.path(sparkRDir, x), recursive = 
TRUE, force = TRUE)}))
-sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
+  # NOTE(shivaram): We set overwrite to handle any old tar.gz files or 
directories left behind on
+  # CRAN machines. For Jenkins we should already have SPARK_HOME set.
+  install.spark(overwrite = TRUE)
 
-sparkRTestMaster <- "local[1]"
-sparkRTestConfig <- list()
-if (identical(Sys.getenv("NOT_CRAN"), "true")) {
-  sparkRTestMaster <- ""
-} else {
-  # Disable hsperfdata on CRAN
-  old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
-  Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
-  tmpDir <- tempdir()
-  tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
-  sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
-   spark.executor.extraJavaOptions = tmpArg)
-}
+  sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
+  sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
+  invisible(lapply(sparkRWhitelistSQLDirs,
+   function(x) { unlink(file.path(sparkRDir, x), recursive = 
TRUE, force = TRUE)}))
+  sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
 
-test_package("SparkR")
+  sparkRTestMaster <- "local[1]"
+  sparkRTestConfig <- list()
+  if (identical(Sys.getenv("NOT_CRAN"), "true")) {
+sparkRTestMaster <- ""
+  } else {
+# Disable hsperfdata on CRAN
+old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
+Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
+tmpDir <- tempdir()
+tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
+sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
+ spark.executor.extraJavaOptions = tmpArg)
+  }
 
-if (identical(Sys.getenv("NOT_CRAN"), "true")) {
-  # set random seed for predictable results. mostly for base's sample() in 
tree and classification
-  set.seed(42)
-  # for testthat 1.0.2 later, change reporter from "summary" to 
default_reporter()
-  testthat:::run_tests("SparkR",
-   file.path(sparkRDir, "pkg", "tests", "fulltests"),
-   NULL,
-   "summary")
-}
+  test_package("SparkR")
+
+  if (identical(Sys.getenv("NOT_CRAN"), "true")) {
+# set random seed for predictable results. mostly for base's sample() in 
tree and classification
+set.seed(42)
+# for testthat 1.0.2 later, change reporter from "summary" to 
default_reporter()
+testthat:::run_tests("SparkR",
+ file.path(sparkRDir, "pkg", "tests", "fulltests"),
+ NULL,
+ "summary")
+  }
 
-SparkR:::uninstallDownloadedSpark()
+  

[spark] branch branch-2.3 updated (6d063ee -> e46b0ed)

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 6d063ee  [SPARK-26538][SQL] Set default precision and scale for 
elements of postgres numeric array
 add 0e3d5fd  Preparing Spark release v2.3.3-rc1
 new e46b0ed  Preparing development version 2.3.4-SNAPSHOT

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] 01/01: Preparing development version 2.3.4-SNAPSHOT

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git

commit e46b0edd1046329fa3e3a730d59a6a263f72cbd0
Author: Takeshi Yamamuro 
AuthorDate: Sun Jan 13 00:26:02 2019 +

Preparing development version 2.3.4-SNAPSHOT
---
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 6ec4966..a82446e 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.3
+Version: 2.3.4
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 6a8cd4f..612a1b8 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>

diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 6010b6e..5547e97 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 8b5d3c8..119dde2 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index dd27a24..dba5224 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index aded5e7d..56902a3 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/sketch/pom.xml b/common/sketch/pom.xml
index a50f612..5302d95 100644
--- a/common/sketch/pom.xml
+++ b/common/sketch/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/tags/pom.xml b/common/tags/pom.xml
index 8112ca4..232ebfa 100644
--- a/common/tags/pom.xml
+++ b/common/tags/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+    <version>2.3.4-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/unsafe/pom.xml b/common/unsafe/pom.xml
index 0d5f61f..f0baa2a 100644
--- a/common/unsafe/pom.xml
+++ b/common/unsafe/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3</version>
+

[spark] 01/01: Preparing Spark release v2.3.3-rc1

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to tag v2.3.3-rc1
in repository https://gitbox.apache.org/repos/asf/spark.git

commit 0e3d5fd960927dd8ff1a909aba98b85fb9350c58
Author: Takeshi Yamamuro 
AuthorDate: Sun Jan 13 00:25:46 2019 +

Preparing Spark release v2.3.3-rc1
---
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 2 +-
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 40 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/assembly/pom.xml b/assembly/pom.xml
index f8b15cc..6a8cd4f 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../pom.xml</relativePath>
   </parent>

diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index e412a47..6010b6e 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index d8f9a3d..8b5d3c8 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index a1a4f87..dd27a24 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index e650978..aded5e7d 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/sketch/pom.xml b/common/sketch/pom.xml
index 350e3cb..a50f612 100644
--- a/common/sketch/pom.xml
+++ b/common/sketch/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/tags/pom.xml b/common/tags/pom.xml
index e7fea41..8112ca4 100644
--- a/common/tags/pom.xml
+++ b/common/tags/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/common/unsafe/pom.xml b/common/unsafe/pom.xml
index 601cc5d..0d5f61f 100644
--- a/common/unsafe/pom.xml
+++ b/common/unsafe/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>

diff --git a/core/pom.xml b/core/pom.xml
index 2a7e644..930128d 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.3-SNAPSHOT</version>
+    <version>2.3.3</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 
diff --git a/docs/_config.yml b/docs/_config.yml
index 7629f5f..8e9c3b5 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -14,7 +14,7 @@ include:
 
 # These allow 

[spark] tag v2.3.3-rc1 created (now 0e3d5fd)

2019-01-12 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to tag v2.3.3-rc1
in repository https://gitbox.apache.org/repos/asf/spark.git.


  at 0e3d5fd  (commit)
This tag includes the following new commits:

 new 0e3d5fd  Preparing Spark release v2.3.3-rc1

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] dongjoon-hyun commented on issue #167: Add Spark 2.2.3 docs

2019-01-12 Thread GitBox
dongjoon-hyun commented on issue #167: Add Spark 2.2.3 docs
URL: https://github.com/apache/spark-website/pull/167#issuecomment-453791540
 
 
   Thank you, @felixcheung and @srowen.
   Yes. Right, @srowen. I'm working on them~





[GitHub] srowen commented on issue #167: Add Spark 2.2.3 docs

2019-01-12 Thread GitBox
srowen commented on issue #167: Add Spark 2.2.3 docs
URL: https://github.com/apache/spark-website/pull/167#issuecomment-453791488
 
 
   @dongjoon-hyun there's a separate change to add a news item and the download 
link to come, right?





[GitHub] felixcheung commented on issue #167: Add Spark 2.2.3 docs

2019-01-12 Thread GitBox
felixcheung commented on issue #167: Add Spark 2.2.3 docs
URL: https://github.com/apache/spark-website/pull/167#issuecomment-453791295
 
 
   I did a quick check after merging; it looks good





[GitHub] felixcheung opened a new pull request #168: add checker note in release process

2019-01-12 Thread GitBox
felixcheung opened a new pull request #168: add checker note in release process
URL: https://github.com/apache/spark-website/pull/168
 
 
   add a note to check the checker after a release





[GitHub] felixcheung commented on issue #167: Add Spark 2.2.3 docs

2019-01-12 Thread GitBox
felixcheung commented on issue #167: Add Spark 2.2.3 docs
URL: https://github.com/apache/spark-website/pull/167#issuecomment-453791131
 
 
   yeah, it's hard to review. If you could double-check that only site/ is 
changing, then it should be OK





svn commit: r31929 - in /dev/spark/3.0.0-SNAPSHOT-2019_01_12_13_31-3bd77aa-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2019-01-12 Thread pwendell
Author: pwendell
Date: Sat Jan 12 21:43:36 2019
New Revision: 31929

Log:
Apache Spark 3.0.0-SNAPSHOT-2019_01_12_13_31-3bd77aa docs


[This commit notification would consist of 1775 parts, 
which exceeds the limit of 50 parts, so it was shortened to this summary.]




[GitHub] dongjoon-hyun commented on issue #167: Add Spark 2.2.3 docs

2019-01-12 Thread GitBox
dongjoon-hyun commented on issue #167: Add Spark 2.2.3 docs
URL: https://github.com/apache/spark-website/pull/167#issuecomment-453782582
 
 
   Could you review this, @srowen, @felixcheung, @dbtsai?





[GitHub] dongjoon-hyun opened a new pull request #167: Add Spark 2.2.3 docs

2019-01-12 Thread GitBox
dongjoon-hyun opened a new pull request #167: Add Spark 2.2.3 docs
URL: https://github.com/apache/spark-website/pull/167
 
 
   This PR aims to add only the voted Apache Spark 2.2.3 `docs` directory. In 
a follow-up PR, I'll update the rest (the links, the announcement, and the 
committer-guide update items like https://checker.apache.org/projs/spark.html).
   
   ```
   site/docs/2.2.3
   ```





[spark] branch master updated (5b37092 -> 3bd77aa)

2019-01-12 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 5b37092  [SPARK-26538][SQL] Set default precision and scale for 
elements of postgres numeric array
 add 3bd77aa  [SPARK-26564] Fix wrong assertions and error messages for 
parameter checking

No new revisions were added by this update.

Summary of changes:
 core/src/main/scala/org/apache/spark/SparkConf.scala| 2 +-
 .../src/main/scala/org/apache/spark/ml/optim/WeightedLeastSquares.scala | 2 +-
 .../org/apache/spark/sql/execution/exchange/BroadcastExchangeExec.scala | 2 +-
 .../scala/org/apache/spark/sql/execution/joins/HashedRelation.scala | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)
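
For context, the commit title points at parameter checks whose assertions and
error messages had drifted out of agreement. A minimal, hedged sketch of that
kind of fix follows; it is illustrative only, and the parameter name is made
up rather than taken from the patch:

```
// Illustrative sketch only -- not code from the Spark patch; `numIterations` is a made-up parameter.
object ParamCheckSketch {
  def setNumIterations(numIterations: Int): Unit = {
    // Before-style bug: the check allows zero, but the message claims the value must be positive:
    //   require(numIterations >= 0, s"numIterations must be positive but got $numIterations")
    // After-style fix: the message states exactly what the assertion enforces.
    require(numIterations >= 0, s"numIterations must be non-negative but got $numIterations")
  }

  def main(args: Array[String]): Unit = {
    setNumIterations(10)                      // passes
    try setNumIterations(-1) catch {          // fails with the corrected message
      case e: IllegalArgumentException => println(e.getMessage)
    }
  }
}
```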





svn commit: r31927 - in /dev/spark/2.4.1-SNAPSHOT-2019_01_12_11_29-dde4d1d-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2019-01-12 Thread pwendell
Author: pwendell
Date: Sat Jan 12 19:45:58 2019
New Revision: 31927

Log:
Apache Spark 2.4.1-SNAPSHOT-2019_01_12_11_29-dde4d1d docs


[This commit notification would consist of 1476 parts, 
which exceeds the limit of 50 parts, so it was shortened to this summary.]




svn commit: r31926 - in /dev/spark/2.3.3-SNAPSHOT-2019_01_12_11_29-6d063ee-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2019-01-12 Thread pwendell
Author: pwendell
Date: Sat Jan 12 19:44:26 2019
New Revision: 31926

Log:
Apache Spark 2.3.3-SNAPSHOT-2019_01_12_11_29-6d063ee docs


[This commit notification would consist of 1443 parts, 
which exceeds the limit of 50 parts, so it was shortened to this summary.]




[spark] branch branch-2.4 updated: [SPARK-26538][SQL] Set default precision and scale for elements of postgres numeric array

2019-01-12 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new dde4d1d  [SPARK-26538][SQL] Set default precision and scale for 
elements of postgres numeric array
dde4d1d is described below

commit dde4d1d8c409a9ee5dbaae8c12b6f9de540b4198
Author: Oleksii Shkarupin 
AuthorDate: Sat Jan 12 11:06:39 2019 -0800

[SPARK-26538][SQL] Set default precision and scale for elements of postgres 
numeric array

## What changes were proposed in this pull request?

When determining the CatalystType for postgres columns with type `numeric[]`, 
set the array element type to `DecimalType(38, 18)` instead of 
`DecimalType(0, 0)`.

## How was this patch tested?

Tested with modified `org.apache.spark.sql.jdbc.JDBCSuite`.
Ran the `PostgresIntegrationSuite` manually.

Closes #23456 from a-shkarupin/postgres_numeric_array.

Lead-authored-by: Oleksii Shkarupin 
Co-authored-by: Dongjoon Hyun 
Signed-off-by: Dongjoon Hyun 
(cherry picked from commit 5b37092311bfc1255f1d4d81127ae4242ba1d1aa)
Signed-off-by: Dongjoon Hyun 
---
 .../org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala | 12 
 .../scala/org/apache/spark/sql/jdbc/PostgresDialect.scala|  5 -
 .../src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala |  3 +++
 3 files changed, 15 insertions(+), 5 deletions(-)

diff --git 
a/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
 
b/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
index be32cb8..e8d5b46 100644
--- 
a/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
+++ 
b/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
@@ -46,14 +46,15 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 conn.prepareStatement("CREATE TABLE bar (c0 text, c1 integer, c2 double 
precision, c3 bigint, "
   + "c4 bit(1), c5 bit(10), c6 bytea, c7 boolean, c8 inet, c9 cidr, "
   + "c10 integer[], c11 text[], c12 real[], c13 numeric(2,2)[], c14 
enum_type, "
-  + "c15 float4, c16 smallint)").executeUpdate()
+  + "c15 float4, c16 smallint, c17 numeric[])").executeUpdate()
 conn.prepareStatement("INSERT INTO bar VALUES ('hello', 42, 1.25, 
123456789012345, B'0', "
   + "B'1000100101', E'xDEADBEEF', true, '172.16.0.42', 
'192.168.0.0/16', "
-  + """'{1, 2}', '{"a", null, "b"}', '{0.11, 0.22}', '{0.11, 0.22}', 'd1', 
1.01, 1)"""
+  + """'{1, 2}', '{"a", null, "b"}', '{0.11, 0.22}', '{0.11, 0.22}', 'd1', 
1.01, 1, """
+  + "'{111., 333.}')"
 ).executeUpdate()
 conn.prepareStatement("INSERT INTO bar VALUES (null, null, null, null, 
null, "
   + "null, null, null, null, null, "
-  + "null, null, null, null, null, null, null)"
+  + "null, null, null, null, null, null, null, null)"
 ).executeUpdate()
 
 conn.prepareStatement("CREATE TABLE ts_with_timezone " +
@@ -85,7 +86,7 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 assert(rows.length == 2)
 // Test the types, and values using the first row.
 val types = rows(0).toSeq.map(x => x.getClass)
-assert(types.length == 17)
+assert(types.length == 18)
 assert(classOf[String].isAssignableFrom(types(0)))
 assert(classOf[java.lang.Integer].isAssignableFrom(types(1)))
 assert(classOf[java.lang.Double].isAssignableFrom(types(2)))
@@ -103,6 +104,7 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 assert(classOf[String].isAssignableFrom(types(14)))
 assert(classOf[java.lang.Float].isAssignableFrom(types(15)))
 assert(classOf[java.lang.Short].isAssignableFrom(types(16)))
+assert(classOf[Seq[BigDecimal]].isAssignableFrom(types(17)))
 assert(rows(0).getString(0).equals("hello"))
 assert(rows(0).getInt(1) == 42)
 assert(rows(0).getDouble(2) == 1.25)
@@ -123,6 +125,8 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 assert(rows(0).getString(14) == "d1")
 assert(rows(0).getFloat(15) == 1.01f)
 assert(rows(0).getShort(16) == 1)
+assert(rows(0).getSeq(17) ==
+  Seq("111.00", 
"333.00").map(BigDecimal(_).bigDecimal))
 
 // Test reading null values using the second row.
 assert(0.until(16).forall(rows(1).isNullAt(_)))
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala 
b/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
index f8d2bc8..5be45c9 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
+++ 

[spark] branch branch-2.3 updated: [SPARK-26538][SQL] Set default precision and scale for elements of postgres numeric array

2019-01-12 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new 6d063ee  [SPARK-26538][SQL] Set default precision and scale for 
elements of postgres numeric array
6d063ee is described below

commit 6d063ee07c3ee591131d2ad1debdb9540428b5ff
Author: Oleksii Shkarupin 
AuthorDate: Sat Jan 12 11:06:39 2019 -0800

[SPARK-26538][SQL] Set default precision and scale for elements of postgres 
numeric array

## What changes were proposed in this pull request?

When determining the CatalystType for postgres columns with type `numeric[]`, 
set the array element type to `DecimalType(38, 18)` instead of 
`DecimalType(0, 0)`.

## How was this patch tested?

Tested with modified `org.apache.spark.sql.jdbc.JDBCSuite`.
Ran the `PostgresIntegrationSuite` manually.

Closes #23456 from a-shkarupin/postgres_numeric_array.

Lead-authored-by: Oleksii Shkarupin 
Co-authored-by: Dongjoon Hyun 
Signed-off-by: Dongjoon Hyun 
(cherry picked from commit 5b37092311bfc1255f1d4d81127ae4242ba1d1aa)
Signed-off-by: Dongjoon Hyun 
---
 .../org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala | 12 
 .../scala/org/apache/spark/sql/jdbc/PostgresDialect.scala|  5 -
 .../src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala |  3 +++
 3 files changed, 15 insertions(+), 5 deletions(-)

diff --git 
a/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
 
b/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
index be32cb8..e8d5b46 100644
--- 
a/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
+++ 
b/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
@@ -46,14 +46,15 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 conn.prepareStatement("CREATE TABLE bar (c0 text, c1 integer, c2 double 
precision, c3 bigint, "
   + "c4 bit(1), c5 bit(10), c6 bytea, c7 boolean, c8 inet, c9 cidr, "
   + "c10 integer[], c11 text[], c12 real[], c13 numeric(2,2)[], c14 
enum_type, "
-  + "c15 float4, c16 smallint)").executeUpdate()
+  + "c15 float4, c16 smallint, c17 numeric[])").executeUpdate()
 conn.prepareStatement("INSERT INTO bar VALUES ('hello', 42, 1.25, 
123456789012345, B'0', "
   + "B'1000100101', E'xDEADBEEF', true, '172.16.0.42', 
'192.168.0.0/16', "
-  + """'{1, 2}', '{"a", null, "b"}', '{0.11, 0.22}', '{0.11, 0.22}', 'd1', 
1.01, 1)"""
+  + """'{1, 2}', '{"a", null, "b"}', '{0.11, 0.22}', '{0.11, 0.22}', 'd1', 
1.01, 1, """
+  + "'{111., 333.}')"
 ).executeUpdate()
 conn.prepareStatement("INSERT INTO bar VALUES (null, null, null, null, 
null, "
   + "null, null, null, null, null, "
-  + "null, null, null, null, null, null, null)"
+  + "null, null, null, null, null, null, null, null)"
 ).executeUpdate()
 
 conn.prepareStatement("CREATE TABLE ts_with_timezone " +
@@ -85,7 +86,7 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 assert(rows.length == 2)
 // Test the types, and values using the first row.
 val types = rows(0).toSeq.map(x => x.getClass)
-assert(types.length == 17)
+assert(types.length == 18)
 assert(classOf[String].isAssignableFrom(types(0)))
 assert(classOf[java.lang.Integer].isAssignableFrom(types(1)))
 assert(classOf[java.lang.Double].isAssignableFrom(types(2)))
@@ -103,6 +104,7 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 assert(classOf[String].isAssignableFrom(types(14)))
 assert(classOf[java.lang.Float].isAssignableFrom(types(15)))
 assert(classOf[java.lang.Short].isAssignableFrom(types(16)))
+assert(classOf[Seq[BigDecimal]].isAssignableFrom(types(17)))
 assert(rows(0).getString(0).equals("hello"))
 assert(rows(0).getInt(1) == 42)
 assert(rows(0).getDouble(2) == 1.25)
@@ -123,6 +125,8 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 assert(rows(0).getString(14) == "d1")
 assert(rows(0).getFloat(15) == 1.01f)
 assert(rows(0).getShort(16) == 1)
+assert(rows(0).getSeq(17) ==
+  Seq("111.00", 
"333.00").map(BigDecimal(_).bigDecimal))
 
 // Test reading null values using the second row.
 assert(0.until(16).forall(rows(1).isNullAt(_)))
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala 
b/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
index 13a2035..faaf20f 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
+++ 

[spark] branch master updated: [SPARK-26538][SQL] Set default precision and scale for elements of postgres numeric array

2019-01-12 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 5b37092  [SPARK-26538][SQL] Set default precision and scale for 
elements of postgres numeric array
5b37092 is described below

commit 5b37092311bfc1255f1d4d81127ae4242ba1d1aa
Author: Oleksii Shkarupin 
AuthorDate: Sat Jan 12 11:06:39 2019 -0800

[SPARK-26538][SQL] Set default precision and scale for elements of postgres 
numeric array

## What changes were proposed in this pull request?

When determining the CatalystType for postgres columns with type `numeric[]`, 
set the array element type to `DecimalType(38, 18)` instead of 
`DecimalType(0, 0)`.

## How was this patch tested?

Tested with modified `org.apache.spark.sql.jdbc.JDBCSuite`.
Ran the `PostgresIntegrationSuite` manually.

Closes #23456 from a-shkarupin/postgres_numeric_array.

Lead-authored-by: Oleksii Shkarupin 
Co-authored-by: Dongjoon Hyun 
Signed-off-by: Dongjoon Hyun 
---
 .../org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala | 12 
 .../scala/org/apache/spark/sql/jdbc/PostgresDialect.scala|  5 -
 .../src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala |  3 +++
 3 files changed, 15 insertions(+), 5 deletions(-)

diff --git 
a/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
 
b/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
index be32cb8..e8d5b46 100644
--- 
a/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
+++ 
b/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
@@ -46,14 +46,15 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 conn.prepareStatement("CREATE TABLE bar (c0 text, c1 integer, c2 double 
precision, c3 bigint, "
   + "c4 bit(1), c5 bit(10), c6 bytea, c7 boolean, c8 inet, c9 cidr, "
   + "c10 integer[], c11 text[], c12 real[], c13 numeric(2,2)[], c14 
enum_type, "
-  + "c15 float4, c16 smallint)").executeUpdate()
+  + "c15 float4, c16 smallint, c17 numeric[])").executeUpdate()
 conn.prepareStatement("INSERT INTO bar VALUES ('hello', 42, 1.25, 
123456789012345, B'0', "
   + "B'1000100101', E'xDEADBEEF', true, '172.16.0.42', 
'192.168.0.0/16', "
-  + """'{1, 2}', '{"a", null, "b"}', '{0.11, 0.22}', '{0.11, 0.22}', 'd1', 
1.01, 1)"""
+  + """'{1, 2}', '{"a", null, "b"}', '{0.11, 0.22}', '{0.11, 0.22}', 'd1', 
1.01, 1, """
+  + "'{111., 333.}')"
 ).executeUpdate()
 conn.prepareStatement("INSERT INTO bar VALUES (null, null, null, null, 
null, "
   + "null, null, null, null, null, "
-  + "null, null, null, null, null, null, null)"
+  + "null, null, null, null, null, null, null, null)"
 ).executeUpdate()
 
 conn.prepareStatement("CREATE TABLE ts_with_timezone " +
@@ -85,7 +86,7 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 assert(rows.length == 2)
 // Test the types, and values using the first row.
 val types = rows(0).toSeq.map(x => x.getClass)
-assert(types.length == 17)
+assert(types.length == 18)
 assert(classOf[String].isAssignableFrom(types(0)))
 assert(classOf[java.lang.Integer].isAssignableFrom(types(1)))
 assert(classOf[java.lang.Double].isAssignableFrom(types(2)))
@@ -103,6 +104,7 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 assert(classOf[String].isAssignableFrom(types(14)))
 assert(classOf[java.lang.Float].isAssignableFrom(types(15)))
 assert(classOf[java.lang.Short].isAssignableFrom(types(16)))
+assert(classOf[Seq[BigDecimal]].isAssignableFrom(types(17)))
 assert(rows(0).getString(0).equals("hello"))
 assert(rows(0).getInt(1) == 42)
 assert(rows(0).getDouble(2) == 1.25)
@@ -123,6 +125,8 @@ class PostgresIntegrationSuite extends 
DockerJDBCIntegrationSuite {
 assert(rows(0).getString(14) == "d1")
 assert(rows(0).getFloat(15) == 1.01f)
 assert(rows(0).getShort(16) == 1)
+assert(rows(0).getSeq(17) ==
+  Seq("111.00", 
"333.00").map(BigDecimal(_).bigDecimal))
 
 // Test reading null values using the second row.
 assert(0.until(16).forall(rows(1).isNullAt(_)))
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala 
b/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
index f8d2bc8..5be45c9 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
@@ -60,7 +60,10 @@ private object PostgresDialect extends JdbcDialect {
 case "bytea" 

svn commit: r31923 - in /release/spark/spark-2.2.3: spark-2.2.3-bin-hadoop2.6.tgz.sha spark-2.2.3-bin-hadoop2.6.tgz.sha512

2019-01-12 Thread dbtsai
Author: dbtsai
Date: Sat Jan 12 18:28:45 2019
New Revision: 31923

Log:
Rename spark-2.2.3-bin-hadoop2.6.tgz.sha to spark-2.2.3-bin-hadoop2.6.tgz.sha512

Added:
release/spark/spark-2.2.3/spark-2.2.3-bin-hadoop2.6.tgz.sha512
  - copied unchanged from r31922, 
release/spark/spark-2.2.3/spark-2.2.3-bin-hadoop2.6.tgz.sha
Removed:
release/spark/spark-2.2.3/spark-2.2.3-bin-hadoop2.6.tgz.sha
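
For readers unfamiliar with the sidecar file being renamed here: it carries a
SHA-512 digest of the release tarball. A hedged sketch of checking it locally
follows, assuming the sidecar text contains the artifact's hex digest (sidecar
formats have varied between Spark releases):

```
// Hedged sketch: local verification of a .sha512 sidecar. Assumes the sidecar
// text contains the artifact's hex SHA-512 digest somewhere in its content.
import java.nio.file.{Files, Paths}
import java.security.MessageDigest

object VerifySha512 {
  def main(args: Array[String]): Unit = {
    val artifact = "spark-2.2.3-bin-hadoop2.6.tgz"
    // Normalize GPG-style grouping (spaces, newlines, colons, uppercase), if any.
    val sidecar = new String(Files.readAllBytes(Paths.get(artifact + ".sha512")))
      .replaceAll("[\\s:]", "").toLowerCase
    // Fine for a sketch; stream the file instead of readAllBytes for very large artifacts.
    val digest = MessageDigest.getInstance("SHA-512")
      .digest(Files.readAllBytes(Paths.get(artifact)))
    val actual = digest.map(b => f"${b & 0xff}%02x").mkString
    println(if (sidecar.contains(actual)) "OK" else "MISMATCH")
  }
}
```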





svn commit: r31919 - in /dev/spark/3.0.0-SNAPSHOT-2019_01_12_01_19-3587a9a-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2019-01-12 Thread pwendell
Author: pwendell
Date: Sat Jan 12 09:31:46 2019
New Revision: 31919

Log:
Apache Spark 3.0.0-SNAPSHOT-2019_01_12_01_19-3587a9a docs


[This commit notification would consist of 1775 parts, 
which exceeds the limit of 50 parts, so it was shortened to this summary.]
