spark git commit: [SPARK-25955][TEST] Porting JSON tests for CSV functions

2018-11-07 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master 17449a2e6 -> ee03f760b [SPARK-25955][TEST] Porting JSON tests for CSV functions ## What changes were proposed in this pull request? In the PR, I propose to port existing JSON tests from `JsonFunctionsSuite` that are applicable for CSV, an…
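As a hedged sketch of the kind of CSV function the ported tests cover, parsing a CSV string column with `from_csv` from the Spark SQL API might look like the following (the local SparkSession setup and sample data are illustrative assumptions, not taken from the commit):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_csv
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

// Sketch only: a throwaway local session for illustration.
val spark = SparkSession.builder().master("local[1]").getOrCreate()
import spark.implicits._

val schema = new StructType().add("a", IntegerType).add("b", StringType)
val df = Seq("1,hello").toDF("value")

// Parse the CSV string column into a struct column, mirroring from_json.
df.select(from_csv($"value", schema, Map.empty[String, String])).show()
```

The parallel signatures of `from_csv` and `from_json` are what make the JSON test suite a natural source for the ported CSV tests.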

spark git commit: [SPARK-25952][SQL] Passing actual schema to JacksonParser

2018-11-07 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master d68f3a726 -> 17449a2e6 [SPARK-25952][SQL] Passing actual schema to JacksonParser ## What changes were proposed in this pull request? The PR fixes an issue when the corrupt record column specified via `spark.sql.columnNameOfCorruptRecord` …
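For context on the feature being fixed, a corrupt record column is declared by including it in the read schema; malformed rows are then captured there rather than dropped. A minimal sketch, assuming a SparkSession `spark` is in scope and using `_corrupt_record` (the default value of `spark.sql.columnNameOfCorruptRecord`) and a hypothetical input path:

```scala
import org.apache.spark.sql.types.{LongType, StringType, StructType}

// "_corrupt_record" matches the default of spark.sql.columnNameOfCorruptRecord.
val schema = new StructType()
  .add("id", LongType)
  .add("_corrupt_record", StringType)

val df = spark.read
  .schema(schema)
  .option("mode", "PERMISSIVE")     // keep malformed rows instead of failing
  .json("/path/to/input.json")      // hypothetical path
// Rows that fail to parse have their raw text in _corrupt_record.
```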

spark git commit: [SPARK-25676][FOLLOWUP][BUILD] Fix Scala 2.12 build error

2018-11-07 Thread dbtsai
Repository: spark Updated Branches: refs/heads/master 0025a8397 -> d68f3a726 [SPARK-25676][FOLLOWUP][BUILD] Fix Scala 2.12 build error ## What changes were proposed in this pull request? This PR fixes the Scala-2.12 build. ## How was this patch tested? Manual build with Scala-2.12 profile.

spark git commit: [SPARK-25908][CORE][SQL] Remove old deprecated items in Spark 3

2018-11-07 Thread srowen
Repository: spark Updated Branches: refs/heads/master a8e1c9815 -> 0025a8397 [SPARK-25908][CORE][SQL] Remove old deprecated items in Spark 3 ## What changes were proposed in this pull request? - Remove some AccumulableInfo.apply() methods - Remove non-label-specific multiclass precision/recall …

spark git commit: [SPARK-25962][BUILD][PYTHON] Specify minimum versions for both pydocstyle and flake8 in 'lint-python' script

2018-11-07 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master e4561e1c5 -> a8e1c9815 [SPARK-25962][BUILD][PYTHON] Specify minimum versions for both pydocstyle and flake8 in 'lint-python' script ## What changes were proposed in this pull request? This PR explicitly specifies `flake8` and `pydocstyle` …

svn commit: r30756 - in /dev/spark/3.0.0-SNAPSHOT-2018_11_07_16_02-e4561e1-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-07 Thread pwendell
Author: pwendell Date: Thu Nov 8 00:17:13 2018 New Revision: 30756 Log: Apache Spark 3.0.0-SNAPSHOT-2018_11_07_16_02-e4561e1 docs [This commit notification would consist of 1471 parts, which exceeds the limit of 50, so it was shortened to this summary.]

spark git commit: [SPARK-25897][K8S] Hook up k8s integration tests to sbt build.

2018-11-07 Thread vanzin
Repository: spark Updated Branches: refs/heads/master 0a32238d0 -> e4561e1c5 [SPARK-25897][K8S] Hook up k8s integration tests to sbt build. The integration tests can now be run in sbt if the right profile is enabled, using the "test" task under the respective project. This avoids having to …

svn commit: r30749 - in /dev/spark/3.0.0-SNAPSHOT-2018_11_07_08_04-0a32238-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-07 Thread pwendell
Author: pwendell Date: Wed Nov 7 16:18:25 2018 New Revision: 30749 Log: Apache Spark 3.0.0-SNAPSHOT-2018_11_07_08_04-0a32238 docs [This commit notification would consist of 1471 parts, which exceeds the limit of 50, so it was shortened to this summary.]

spark git commit: [SPARK-25885][CORE][MINOR] HighlyCompressedMapStatus deserialization/construction optimization

2018-11-07 Thread srowen
Repository: spark Updated Branches: refs/heads/master 8fbc1830f -> 0a32238d0 [SPARK-25885][CORE][MINOR] HighlyCompressedMapStatus deserialization/construction optimization ## What changes were proposed in this pull request? Removal of intermediate structures in HighlyCompressedMapStatus will …

spark git commit: [SPARK-25904][CORE] Allocate arrays smaller than Int.MaxValue

2018-11-07 Thread irashid
Repository: spark Updated Branches: refs/heads/master 9e9fa2f69 -> 8fbc1830f [SPARK-25904][CORE] Allocate arrays smaller than Int.MaxValue JVMs can't allocate arrays of length exactly Int.MaxValue, so ensure we never try to allocate an array that big. This commit changes some defaults & conf…
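The general technique the commit describes can be sketched as clamping requested lengths below a safe ceiling. The headroom of 8 elements below mirrors common JDK collection code and is an assumption here, not necessarily the exact constant Spark adopted; the real VM limit varies by JVM implementation:

```scala
// JVMs reject arrays of length exactly Int.MaxValue with
// "Requested array size exceeds VM limit", so leave headroom.
val MaxSafeArrayLength: Int = Int.MaxValue - 8 // assumed margin, not Spark's exact constant

def safeArrayLength(requested: Long): Int = {
  require(requested >= 0, s"negative length: $requested")
  if (requested > MaxSafeArrayLength) MaxSafeArrayLength else requested.toInt
}
```

Callers that would otherwise size buffers at `Int.MaxValue` instead pass through a clamp like this, trading a slightly smaller maximum for never triggering the VM's hard limit.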

svn commit: r30747 - in /dev/spark/3.0.0-SNAPSHOT-2018_11_07_00_02-9e9fa2f-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-11-07 Thread pwendell
Author: pwendell Date: Wed Nov 7 08:16:47 2018 New Revision: 30747 Log: Apache Spark 3.0.0-SNAPSHOT-2018_11_07_00_02-9e9fa2f docs [This commit notification would consist of 1471 parts, which exceeds the limit of 50, so it was shortened to this summary.]