Repository: spark
Updated Branches:
refs/heads/master ee6a8d7ea -> f8aca5b4a
[SPARK-15220][UI] add hyperlink to running application and completed application
## What changes were proposed in this pull request?
Add hyperlink to "running application" and "completed application", so users can
Repository: spark
Updated Branches:
refs/heads/master f8aca5b4a -> dfdcab00c
[SPARK-15210][SQL] Add missing @DeveloperApi annotation in sql.types
add DeveloperApi annotation for `AbstractDataType` `MapType` `UserDefinedType`
local build
Author: Zheng RuiFeng
Closes
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c6d23b660 -> f81d25139
[SPARK-15210][SQL] Add missing @DeveloperApi annotation in sql.types
add DeveloperApi annotation for `AbstractDataType` `MapType` `UserDefinedType`
local build
Author: Zheng RuiFeng
sts.
Author: Andrew Or <and...@databricks.com>
Closes #12941 from andrewor14/move-code.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7bf9b120
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7bf9b120
Diff: http:
Repository: spark
Updated Branches:
refs/heads/branch-2.0 e3f000a36 -> 40d24686a
[SPARK-10653][CORE] Remove unnecessary things from SparkEnv
## What changes were proposed in this pull request?
Removed blockTransferService and sparkFilesDir from SparkEnv since they're
rarely used and don't
Repository: spark
Updated Branches:
refs/heads/master 5c6b08557 -> cddb9da07
[HOTFIX] SQL test compilation error from merge conflict
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/cddb9da0
Tree:
get the `SparkContext` from it. This ends up creating 2
`SparkSession`s from one call, which is definitely not what we want.
## How was this patch tested?
Jenkins.
Author: Andrew Or <and...@databricks.com>
Closes #13031 from andrewor14/sql-test.
Project: http://git-wip-us.apache.org/
Repository: spark
Updated Branches:
refs/heads/master ed0b4070f -> 5c6b08557
[SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog
Since we cannot really trust the underlying external catalog to throw
exceptions when there is an invalid metadata operation, let's do it
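The idea above is to do the validation in the session catalog itself rather than trusting the external catalog. A rough sketch of that pattern, with hypothetical names (`SessionCatalog`, `NoSuchTableError`, and the dict-backed external catalog are all illustrative, not Spark's actual classes):

```python
class NoSuchTableError(Exception):
    """Raised when a metadata operation targets a table that does not exist."""


class SessionCatalog:
    """Sketch: validate metadata before delegating to the external catalog,
    instead of trusting the external catalog to raise on invalid operations."""

    def __init__(self, external):
        # `external` stands in for the untrusted external catalog.
        self.external = external

    def drop_table(self, name):
        # Verify existence up front, so the error is raised consistently
        # regardless of how the external catalog behaves.
        if name not in self.external:
            raise NoSuchTableError(name)
        del self.external[name]
```

The key design point is that every invalid operation fails the same way, independent of the external catalog implementation.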
Repository: spark
Updated Branches:
refs/heads/master cddb9da07 -> db3b4a201
[SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.
This replaces `sparkSession` with `spark` in CatalogSuite.scala.
Pass the Jenkins tests.
Author: Dongjoon Hyun
Repository: spark
Updated Branches:
refs/heads/branch-2.0 42db140c5 -> bd7fd14c9
[SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.
This replaces `sparkSession` with `spark` in CatalogSuite.scala.
Pass the Jenkins tests.
Author: Dongjoon Hyun
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5bf74b44d -> 42db140c5
[SPARK-14603][SQL] Verification of Metadata Operations by Session Catalog
Since we cannot really trust the underlying external catalog to throw
exceptions when there is an invalid metadata operation, let's
Repository: spark
Updated Branches:
refs/heads/master 9533f5390 -> da02d006b
[SPARK-15249][SQL] Use FunctionResource instead of (String, String) in
CreateFunction and CatalogFunction for resource
Use FunctionResource instead of (String, String) in CreateFunction and
CatalogFunction for
[SPARK-13522][CORE] Executor should kill itself when it's unable to heartbeat
to driver more than N times
## What changes were proposed in this pull request?
Sometimes a network disconnection event won't be triggered, due to other
potential race conditions that we may not have thought of; then the
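The fix described here is a consecutive-failure counter on the executor side. A minimal sketch of that loop, assuming a hypothetical `MAX_FAILURES` threshold (the real configuration key and value are not shown in this excerpt):

```python
# Hypothetical sketch: an executor-side heartbeat loop that gives up after
# MAX_FAILURES consecutive failures, rather than relying solely on a
# network-disconnect event that may never fire.
MAX_FAILURES = 3  # stand-in for the real retry-limit configuration


def run_heartbeats(send_heartbeat, attempts):
    """Return 'exit' if MAX_FAILURES consecutive heartbeats fail,
    otherwise 'alive' after `attempts` rounds."""
    failures = 0
    for _ in range(attempts):
        try:
            send_heartbeat()
            failures = 0  # any success resets the counter
        except ConnectionError:
            failures += 1
            if failures >= MAX_FAILURES:
                return "exit"  # the executor kills itself
    return "alive"
```

Resetting the counter on success matters: only an unbroken run of failures should make the executor exit.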
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d1654864a -> ced71d353
[SPARK-13519][CORE] Driver should tell Executor to stop itself when cleaning
executor's state
## What changes were proposed in this pull request?
When the driver removes an executor's state, the connection
[SPARK-13522][CORE] Fix the exit log place for heartbeat
## What changes were proposed in this pull request?
Just fixed the log place introduced by #11401
## How was this patch tested?
unit tests.
Author: Shixiong Zhu
Closes #11432 from
Repository: spark
Updated Branches:
refs/heads/branch-2.0 9098b1a17 -> b3f145442
[HOTFIX] SQL test compilation error from merge conflict
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b3f14544
Tree:
makes the `EXTERNAL` field optional. This is related to
#13032.
## How was this patch tested?
New test in `DDLCommandSuite`.
Author: Andrew Or <and...@databricks.com>
Closes #13060 from andrewor14/location-implies-external.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http:
makes the `EXTERNAL` field optional. This is related to
#13032.
## How was this patch tested?
New test in `DDLCommandSuite`.
Author: Andrew Or <and...@databricks.com>
Closes #13060 from andrewor14/location-implies-external.
(cherry picked from commit f14c4ba001fbdbcc9faa46896f1f9d08a7d06609)
S
Repository: spark
Updated Branches:
refs/heads/master f14c4ba00 -> 603f4453a
[SPARK-15264][SPARK-15274][SQL] CSV Reader Error on Blank Column Names
## What changes were proposed in this pull request?
When a CSV begins with:
- `,,`
OR
- `"","",`
meaning that the first column names are either
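The fix for blank leading column names is to substitute positional defaults. A sketch of the idea (the `_cN` naming mirrors Spark's convention for unnamed columns, but the helper itself is illustrative):

```python
def fill_blank_headers(header):
    """Replace empty CSV column names with positional defaults (_c0, _c1, ...).
    Sketch of the fix: blank names previously caused the reader to error."""
    return [name if name else f"_c{i}" for i, name in enumerate(header)]
```

For example, a header row of `,,age` yields the names `_c0`, `_c1`, `age` instead of failing.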
Repository: spark
Updated Branches:
refs/heads/branch-2.0 f8804bb10 -> 114be703d
[SPARK-15072][SQL][PYSPARK] FollowUp: Remove SparkSession.withHiveSupport in
PySpark
## What changes were proposed in this pull request?
This is a followup of https://github.com/apache/spark/pull/12851
Remove
Repository: spark
Updated Branches:
refs/heads/master 470de743e -> be617f3d0
[SPARK-14684][SPARK-15277][SQL] Partition Spec Validation in SessionCatalog and
Checking Partition Spec Existence Before Dropping
## What changes were proposed in this pull request?
~~Currently, multiple
Repository: spark
Updated Branches:
refs/heads/branch-2.0 68617e1ad -> 9c5c9013d
[SPARK-14684][SPARK-15277][SQL] Partition Spec Validation in SessionCatalog and
Checking Partition Spec Existence Before Dropping
## What changes were proposed in this pull request?
~~Currently, multiple
Repository: spark
Updated Branches:
refs/heads/branch-2.0 7d187539e -> 86acb5efd
[SPARK-15031][SPARK-15134][EXAMPLE][DOC] Use SparkSession and update indent in
examples
## What changes were proposed in this pull request?
1. Use `SparkSession` according to
Repository: spark
Updated Branches:
refs/heads/master ba5487c06 -> 9e266d07a
[SPARK-15031][SPARK-15134][EXAMPLE][DOC] Use SparkSession and update indent in
examples
## What changes were proposed in this pull request?
1. Use `SparkSession` according to
Repository: spark
Updated Branches:
refs/heads/branch-2.0 0d16b7f3a -> 5625b037a
[SPARK-14422][SQL] Improve handling of optional configs in SQLConf
## What changes were proposed in this pull request?
Create a new API for handling Optional Configs in SQLConf.
Right now `getConf` for
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5625b037a -> 4c7f5a74d
[SPARK-14645][MESOS] Fix python running on cluster mode mesos to have non local
uris
## What changes were proposed in this pull request?
Fix SparkSubmit to allow non-local python uris
## How was this patch
Repository: spark
Updated Branches:
refs/heads/master a8d56f538 -> c1839c991
[SPARK-14645][MESOS] Fix python running on cluster mode mesos to have non local
uris
## What changes were proposed in this pull request?
Fix SparkSubmit to allow non-local python uris
## How was this patch tested?
Repository: spark
Updated Branches:
refs/heads/master 0903a185c -> 9e4928b7e
[SPARK-15097][SQL] make Dataset.sqlContext a stable identifier for imports
## What changes were proposed in this pull request?
Make Dataset.sqlContext a lazy val so that it's a stable identifier and can be
used for
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5e15615d1 -> 95d359abd
[SPARK-15097][SQL] make Dataset.sqlContext a stable identifier for imports
## What changes were proposed in this pull request?
Make Dataset.sqlContext a lazy val so that it's a stable identifier and can be
used
Repository: spark
Updated Branches:
refs/heads/branch-2.0 4c7f5a74d -> 5e15615d1
[SPARK-15084][PYTHON][SQL] Use builder pattern to create SparkSession in
PySpark.
## What changes were proposed in this pull request?
This is a python port of corresponding Scala builder pattern code. `sql.py`
Repository: spark
Updated Branches:
refs/heads/master c1839c991 -> 0903a185c
[SPARK-15084][PYTHON][SQL] Use builder pattern to create SparkSession in
PySpark.
## What changes were proposed in this pull request?
This is a python port of corresponding Scala builder pattern code. `sql.py` is
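The builder pattern being ported here chains configuration calls and ends with a get-or-create step. A toy illustration of the pattern in plain Python (the method names `appName`, `config`, and `getOrCreate` echo the Spark API, but this `Builder`/`Session` pair is a standalone sketch, not PySpark's implementation):

```python
class Session:
    """Stand-in for the session object the builder produces."""

    def __init__(self, options):
        self.options = options


class Builder:
    """Toy builder: each setter returns self so calls chain, and
    getOrCreate() reuses an existing session if one was already built."""

    _existing = None  # shared session, mirroring get-or-create semantics

    def __init__(self):
        self._options = {}

    def appName(self, name):
        self._options["app.name"] = name
        return self  # enable chaining

    def config(self, key, value):
        self._options[key] = value
        return self

    def getOrCreate(self):
        if Builder._existing is None:
            Builder._existing = Session(dict(self._options))
        return Builder._existing
```

A second `getOrCreate()` call returns the same session, which is why options set afterwards do not take effect on it.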
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c212307b9 -> 0d16b7f3a
[MINOR][DOC] Fixed some python snippets in mllib data types documentation.
## What changes were proposed in this pull request?
Some Python snippets are using Scala imports and comments.
## How was this patch
Repository: spark
Updated Branches:
refs/heads/master dbacd9998 -> c4e0fde87
[MINOR][DOC] Fixed some python snippets in mllib data types documentation.
## What changes were proposed in this pull request?
Some Python snippets are using Scala imports and comments.
## How was this patch tested?
et al.
Author: Andrew Or <and...@databricks.com>
Closes #12853 from andrewor14/make-exceptions-consistent.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6ba17cd1
Tree: http://git-wip-us.apache.org/repos/asf/s
ite` et al.
Author: Andrew Or <and...@databricks.com>
Closes #12853 from andrewor14/make-exceptions-consistent.
(cherry picked from commit 6ba17cd147277a20a7fbb244c040e694de486c36)
Signed-off-by: Andrew Or <and...@databricks.com>
Project: http://git-wip-us.apache.org/repos/asf/sp
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c2b100e50 -> b063d9b71
[MINOR][BUILD] Adds spark-warehouse/ to .gitignore
## What changes were proposed in this pull request?
Adds spark-warehouse/ to `.gitignore`.
## How was this patch tested?
N/A
Author: Cheng Lian
Repository: spark
Updated Branches:
refs/heads/master 6fcc9 -> 63db2bd28
[MINOR][BUILD] Adds spark-warehouse/ to .gitignore
## What changes were proposed in this pull request?
Adds spark-warehouse/ to `.gitignore`.
## How was this patch tested?
N/A
Author: Cheng Lian
Repository: spark
Updated Branches:
refs/heads/branch-2.0 59fa480b6 -> e78b31b72
[SPARK-15135][SQL] Make sure SparkSession thread safe
## What changes were proposed in this pull request?
Went through SparkSession and its members and fixed non-thread-safe classes
used by SparkSession
## How
Repository: spark
Updated Branches:
refs/heads/master ed6f3f8a5 -> bb9991dec
[SPARK-15135][SQL] Make sure SparkSession thread safe
## What changes were proposed in this pull request?
Went through SparkSession and its members and fixed non-thread-safe classes
used by SparkSession
## How was
Repository: spark
Updated Branches:
refs/heads/master bb9991dec -> 2c170dd3d
http://git-wip-us.apache.org/repos/asf/spark/blob/2c170dd3/examples/src/main/python/ml/vector_indexer_example.py
--
diff --git
Repository: spark
Updated Branches:
refs/heads/branch-2.0 e78b31b72 -> 8b4ab590c
http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/vector_indexer_example.py
--
diff --git
[SPARK-15134][EXAMPLE] Indent SparkSession builder patterns and update
binary_classification_metrics_example.py
## What changes were proposed in this pull request?
This issue addresses the comments in SPARK-15031 and also fixes java-linter
errors.
- Use multiline format in SparkSession builder
[SPARK-15134][EXAMPLE] Indent SparkSession builder patterns and update
binary_classification_metrics_example.py
## What changes were proposed in this pull request?
This issue addresses the comments in SPARK-15031 and also fixes java-linter
errors.
- Use multiline format in SparkSession builder
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1064a3303 -> a1887f213
[SPARK-15152][DOC][MINOR] Scaladoc and Code style Improvements
## What changes were proposed in this pull request?
Minor doc and code style fixes
## How was this patch tested?
local build
Author: Jacek
Repository: spark
Updated Branches:
refs/heads/branch-2.0 b063d9b71 -> fe268ee1e
[SPARK-14124][SQL][FOLLOWUP] Implement Database-related DDL Commands
## What changes were proposed in this pull request?
First, a few test cases failed in Mac OS X because the property value of
Repository: spark
Updated Branches:
refs/heads/master 8cba57a75 -> ed6f3f8a5
[SPARK-15072][SQL][REPL][EXAMPLES] Remove SparkSession.withHiveSupport
## What changes were proposed in this pull request?
Removing the `withHiveSupport` method of `SparkSession`, instead use
`enableHiveSupport`
##
Repository: spark
Updated Branches:
refs/heads/master 02c07e899 -> bbb777343
[SPARK-15152][DOC][MINOR] Scaladoc and Code style Improvements
## What changes were proposed in this pull request?
Minor doc and code style fixes
## How was this patch tested?
local build
Author: Jacek Laskowski
Repository: spark
Updated Branches:
refs/heads/branch-1.6 a3aa22a59 -> ab006523b
[SPARK-13566][CORE] Avoid deadlock between BlockManager and Executor Thread
Temp patch for branch 1.6, avoid deadlock between BlockManager and Executor
Thread.
Author: cenyuhai
;
Closes #12917 from andrewor14/deprecate-hive-context-python.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/fa79d346
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/fa79d346
Diff: http://git-wip-us.apache.org/repos/
;
Closes #12917 from andrewor14/deprecate-hive-context-python.
(cherry picked from commit fa79d346e1a79ceda6ccd20e74eb850e769556ea)
Signed-off-by: Andrew Or <and...@databricks.com>
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/a
Repository: spark
Updated Branches:
refs/heads/master a432a2b86 -> b28137764
[MINOR][SQL] Fix typo in DataFrameReader csv documentation
## What changes were proposed in this pull request?
Typo fix
## How was this patch tested?
No tests
My apologies for the tiny PR, but I stumbled across
Repository: spark
Updated Branches:
refs/heads/branch-2.0 701c66729 -> aca46ecf8
[MINOR][SQL] Fix typo in DataFrameReader csv documentation
## What changes were proposed in this pull request?
Typo fix
## How was this patch tested?
No tests
My apologies for the tiny PR, but I stumbled across
Repository: spark
Updated Branches:
refs/heads/master 08db49126 -> 02c07e899
[SPARK-14893][SQL] Re-enable HiveSparkSubmitSuite SPARK-8489 test after
HiveContext is removed
## What changes were proposed in this pull request?
Enable the test that was disabled when HiveContext was removed.
##
Repository: spark
Updated Branches:
refs/heads/branch-2.0 80a4bfa4d -> 1064a3303
[SPARK-14893][SQL] Re-enable HiveSparkSubmitSuite SPARK-8489 test after
HiveContext is removed
## What changes were proposed in this pull request?
Enable the test that was disabled when HiveContext was removed.
Repository: spark
Updated Branches:
refs/heads/master 63db2bd28 -> 8cba57a75
[SPARK-14124][SQL][FOLLOWUP] Implement Database-related DDL Commands
## What changes were proposed in this pull request?
First, a few test cases failed in Mac OS X because the property value of
`java.io.tmpdir`
Repository: spark
Updated Branches:
refs/heads/branch-2.0 19a14e841 -> 80a4bfa4d
[SPARK-9926] Parallelize partition logic in UnionRDD.
This patch has the new logic from #8512 that uses a parallel collection to
compute partitions in UnionRDD. The rest of #8512 added an alternative code
path
Repository: spark
Updated Branches:
refs/heads/master 2c170dd3d -> 5c47db065
[SPARK-15158][CORE] downgrade shouldRollover message to debug level
## What changes were proposed in this pull request?
Set the log level to debug when checking shouldRollover.
## How was this patch tested?
It's tested
Repository: spark
Updated Branches:
refs/heads/branch-2.0 8b4ab590c -> 19a14e841
[SPARK-15158][CORE] downgrade shouldRollover message to debug level
## What changes were proposed in this pull request?
Set the log level to debug when checking shouldRollover.
## How was this patch tested?
It's tested
Repository: spark
Updated Branches:
refs/heads/master 5c47db065 -> 08db49126
[SPARK-9926] Parallelize partition logic in UnionRDD.
This patch has the new logic from #8512 that uses a parallel collection to
compute partitions in UnionRDD. The rest of #8512 added an alternative code
path for
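Computing partitions for many parent RDDs can be parallelized with a pool once the number of parents is large enough. A sketch of that shape using Python's thread pool (the `threshold` and `workers` values are illustrative, not the values the patch uses):

```python
from concurrent.futures import ThreadPoolExecutor


def union_partitions(rdd_partition_lists, threshold=10, workers=8):
    """Sketch: compute each parent's partitions in parallel once the number
    of parents crosses a threshold; stay serial for small unions to avoid
    pool overhead."""

    def partitions_of(parts):
        return list(parts)  # stand-in for the per-RDD getPartitions call

    if len(rdd_partition_lists) < threshold:
        results = [partitions_of(p) for p in rdd_partition_lists]
    else:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # pool.map preserves input order, so the union's partition
            # ordering matches the serial path.
            results = list(pool.map(partitions_of, rdd_partition_lists))
    return [p for parts in results for p in parts]
```

Keeping a serial path below the threshold matters because spinning up a pool for two or three parents would cost more than it saves.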
Repository: spark
Updated Branches:
refs/heads/branch-2.0 fe268ee1e -> 59fa480b6
[SPARK-15072][SQL][REPL][EXAMPLES] Remove SparkSession.withHiveSupport
## What changes were proposed in this pull request?
Removing the `withHiveSupport` method of `SparkSession`, instead use
`enableHiveSupport`
Repository: spark
Updated Branches:
refs/heads/branch-2.0 a1887f213 -> 7dc3fb6ae
[HOTFIX] Fix MLUtils compile
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7dc3fb6a
Tree:
Repository: spark
Updated Branches:
refs/heads/master bbb777343 -> 7f5922aa4
[HOTFIX] Fix MLUtils compile
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7f5922aa
Tree:
Repository: spark
Updated Branches:
refs/heads/master 0b9cae424 -> bcfee153b
[SPARK-12837][CORE] reduce network IO for accumulators
Sending un-updated accumulators back to the driver makes no sense, as merging a
zero-value accumulator is a no-op. We should only send back updated
accumulators,
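Filtering out zero-valued accumulators before shipping them is the core of the change. A minimal sketch, with a hypothetical dict-of-dicts representation (real Spark accumulators are objects, not dicts):

```python
def updates_to_send(accumulators):
    """Sketch: only ship accumulators whose value differs from their zero,
    since merging a zero value on the driver is a no-op anyway."""
    return {
        name: acc["value"]
        for name, acc in accumulators.items()
        if acc["value"] != acc["zero"]
    }
```

With many registered accumulators and only a few touched per task, this cuts most of the per-heartbeat payload.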
Repository: spark
Updated Branches:
refs/heads/branch-2.0 af12b0a50 -> 19a9c23c2
[SPARK-12837][CORE] reduce network IO for accumulators
Sending un-updated accumulators back to the driver makes no sense, as merging a
zero-value accumulator is a no-op. We should only send back updated
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonParsingOptionsSuite.scala
--
diff --git
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaDatasetAggregatorSuiteBase.java
--
diff --git
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
--
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala
--
diff --git
[SPARK-15037][SQL][MLLIB] Use SparkSession instead of SQLContext in Scala/Java
TestSuites
## What changes were proposed in this pull request?
Use SparkSession instead of SQLContext in Scala/Java TestSuites
as this PR is already very big; Python TestSuites are handled in a separate PR.
## How was this patch
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
--
diff --git
a/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRegressionSuite.java
--
diff --git
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadBenchmark.scala
--
diff --git
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/java/test/org/apache/spark/sql/sources/JavaDatasetAggregatorSuiteBase.java
--
diff --git
Repository: spark
Updated Branches:
refs/heads/master bcfee153b -> ed0b4070f
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala
--
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonParsingOptionsSuite.scala
--
diff --git
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
--
diff --git
a/mllib/src/test/scala/org/apache/spark/ml/feature/StopWordsRemoverSuite.scala
Repository: spark
Updated Branches:
refs/heads/branch-2.0 19a9c23c2 -> 5bf74b44d
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/java/org/apache/spark/ml/regression/JavaLinearRegressionSuite.java
--
diff --git
http://git-wip-us.apache.org/repos/asf/spark/blob/5bf74b44/mllib/src/test/java/org/apache/spark/mllib/regression/JavaIsotonicRegressionSuite.java
--
diff --git
http://git-wip-us.apache.org/repos/asf/spark/blob/ed0b4070/sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala
--
diff --git
Repository: spark
Updated Branches:
refs/heads/branch-2.0 e868a15a7 -> 45862f6c9
[SPARK-15126][SQL] RuntimeConfig.set should return Unit
## What changes were proposed in this pull request?
Currently we return RuntimeConfig itself to facilitate chaining. However, it
makes the output in
Repository: spark
Updated Branches:
refs/heads/master 0fd3a4748 -> 6ae9fc00e
[SPARK-15126][SQL] RuntimeConfig.set should return Unit
## What changes were proposed in this pull request?
Currently we return RuntimeConfig itself to facilitate chaining. However, it
makes the output in
Repository: spark
Updated Branches:
refs/heads/master 6ae9fc00e -> 0c00391f7
[SPARK-15121] Improve logging of external shuffle handler
## What changes were proposed in this pull request?
Add more informative logging in the external shuffle service to aid in
debugging who is connecting to
Repository: spark
Updated Branches:
refs/heads/branch-2.0 45862f6c9 -> eeb18f6d7
[SPARK-15121] Improve logging of external shuffle handler
## What changes were proposed in this pull request?
Add more informative logging in the external shuffle service to aid in
debugging who is connecting
Repository: spark
Updated Branches:
refs/heads/branch-2.0 eeb18f6d7 -> c0715f33b
[SPARK-12299][CORE] Remove history serving functionality from Master
Remove history server functionality from standalone Master. Previously, the
Master process rebuilt a SparkUI once the application was
Repository: spark
Updated Branches:
refs/heads/master 0c00391f7 -> cf2e9da61
[SPARK-12299][CORE] Remove history serving functionality from Master
Remove history server functionality from standalone Master. Previously, the
Master process rebuilt a SparkUI once the application was completed
Repository: spark
Updated Branches:
refs/heads/branch-2.0 23789e358 -> 1e7d9bfb5
[SPARK-13001][CORE][MESOS] Prevent getting offers when reached max cores
Similar to https://github.com/apache/spark/pull/8639
This change rejects offers for 120s when `spark.cores.max` is reached in
coarse-grained
Repository: spark
Updated Branches:
refs/heads/master cdce4e62a -> eb019af9a
[SPARK-13001][CORE][MESOS] Prevent getting offers when reached max cores
Similar to https://github.com/apache/spark/pull/8639
This change rejects offers for 120s when `spark.cores.max` is reached in
coarse-grained
Repository: spark
Updated Branches:
refs/heads/master eb019af9a -> a432a2b86
[SPARK-15116] In REPL we should create SparkSession first and get SparkContext
from it
## What changes were proposed in this pull request?
see https://github.com/apache/spark/pull/12873#discussion_r61993910. The
Repository: spark
Updated Branches:
refs/heads/master cf2e9da61 -> cdce4e62a
http://git-wip-us.apache.org/repos/asf/spark/blob/cdce4e62/examples/src/main/scala/org/apache/spark/examples/ml/NGramExample.scala
--
diff --git
http://git-wip-us.apache.org/repos/asf/spark/blob/cdce4e62/examples/src/main/python/ml/naive_bayes_example.py
--
diff --git a/examples/src/main/python/ml/naive_bayes_example.py
b/examples/src/main/python/ml/naive_bayes_example.py
http://git-wip-us.apache.org/repos/asf/spark/blob/cdce4e62/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialExpansionExample.java
--
diff --git
http://git-wip-us.apache.org/repos/asf/spark/blob/23789e35/examples/src/main/python/ml/naive_bayes_example.py
--
diff --git a/examples/src/main/python/ml/naive_bayes_example.py
b/examples/src/main/python/ml/naive_bayes_example.py
[SPARK-15031][EXAMPLE] Use SparkSession in Scala/Python/Java example.
## What changes were proposed in this pull request?
This PR aims to update Scala/Python/Java examples by replacing `SQLContext`
with newly added `SparkSession`.
- Use **SparkSession Builder Pattern** in 154(Scala 55, Java
Repository: spark
Updated Branches:
refs/heads/branch-2.0 c0715f33b -> 23789e358
http://git-wip-us.apache.org/repos/asf/spark/blob/23789e35/examples/src/main/scala/org/apache/spark/examples/ml/NGramExample.scala
--
diff --git
[SPARK-15031][EXAMPLE] Use SparkSession in Scala/Python/Java example.
## What changes were proposed in this pull request?
This PR aims to update Scala/Python/Java examples by replacing `SQLContext`
with newly added `SparkSession`.
- Use **SparkSession Builder Pattern** in 154(Scala 55, Java
http://git-wip-us.apache.org/repos/asf/spark/blob/23789e35/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialExpansionExample.java
--
diff --git
Repository: spark
Updated Branches:
refs/heads/master c971aee40 -> 28efdd3fd
[SPARK-14592][SQL] Native support for CREATE TABLE LIKE DDL command
## What changes were proposed in this pull request?
JIRA: https://issues.apache.org/jira/browse/SPARK-14592
This patch adds native support for DDL
Repository: spark
Updated Branches:
refs/heads/master 699a4dfd8 -> 7de06a646
Revert "[SPARK-14647][SQL] Group SQLContext/HiveContext state into SharedState"
This reverts commit 5cefecc95a5b8418713516802c416cfde5a94a2d.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Repository: spark
Updated Branches:
refs/heads/master 9fa43a33b -> dac40b68d
[SPARK-14619] Track internal accumulators (metrics) by stage attempt
## What changes were proposed in this pull request?
When there are multiple attempts for a stage, we currently only reset internal
accumulator