Repository: spark
Updated Branches:
refs/heads/master df0e31815 -> 4fe99c72c
[SPARK-11191][SQL] Looks up temporary function using execution Hive client
When looking up Hive temporary functions, we should always use the
`SessionState` within the execution Hive client, since temporary
Repository: spark
Updated Branches:
refs/heads/branch-1.5 b478ee374 -> 3676d4c4d
Fixed error in scaladoc of convertToCanonicalEdges
The code in convertToCanonicalEdges ensures that srcIds are smaller than dstIds,
but the scaladoc suggested otherwise. This fixes the scaladoc accordingly.
Author: Gaurav Kumar
Repository: spark
Updated Branches:
refs/heads/master 08660a0bc -> df0e31815
Fixed error in scaladoc of convertToCanonicalEdges
The code in convertToCanonicalEdges ensures that srcIds are smaller than dstIds,
but the scaladoc suggested otherwise. This fixes the scaladoc accordingly.
Author: Gaurav Kumar
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c3a0c7728 -> 4aacbe9e6
Fixed error in scaladoc of convertToCanonicalEdges
The code in convertToCanonicalEdges ensures that srcIds are smaller than dstIds,
but the scaladoc suggested otherwise. This fixes the scaladoc accordingly.
Author: Gaurav Kumar
Repository: spark
Updated Branches:
refs/heads/branch-1.3 b90e5cba2 -> 1bfa00d54
Fixed error in scaladoc of convertToCanonicalEdges
The code in convertToCanonicalEdges ensures that srcIds are smaller than dstIds,
but the scaladoc suggested otherwise. This fixes the scaladoc accordingly.
Author: Gaurav Kumar
Repository: spark
Updated Branches:
refs/heads/branch-1.4 72ab06e8a -> 149c4a06d
Fixed error in scaladoc of convertToCanonicalEdges
The code in convertToCanonicalEdges ensures that srcIds are smaller than dstIds,
but the scaladoc suggested otherwise. This fixes the scaladoc accordingly.
Author: Gaurav Kumar
Repository: spark
Updated Branches:
refs/heads/branch-1.6 68fa5c713 -> 6853ba6bf
[SPARK-11420] Updating Stddev support via Imperative Aggregate
Switched stddev support from DeclarativeAggregate to ImperativeAggregate.
Author: JihongMa
Closes #9380 from
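The gist of an imperative aggregate is a small mutable buffer that is updated per input row and merged across partitions, rather than a tree of declarative expressions. A minimal plain-Python sketch of a buffer-based stddev (using Welford's online update; function and buffer names here are illustrative, not Spark's internals):

```python
# Illustrative sketch of an imperative stddev aggregate: a mutable
# buffer (count, mean, M2) is updated per row and merged per partition.
import math

def new_buffer():
    return [0, 0.0, 0.0]  # count, mean, M2 (sum of squared deviations)

def update(buf, value):
    # Welford's online update for one input row.
    buf[0] += 1
    delta = value - buf[1]
    buf[1] += delta / buf[0]
    buf[2] += delta * (value - buf[1])

def merge(a, b):
    # Pairwise merge of two partial buffers (Chan et al. formula).
    n = a[0] + b[0]
    if n == 0:
        return
    delta = b[1] - a[1]
    a[2] += b[2] + delta * delta * a[0] * b[0] / n
    a[1] += delta * b[0] / n
    a[0] = n

def evaluate(buf):
    # Sample standard deviation; None for fewer than 2 rows.
    return math.sqrt(buf[2] / (buf[0] - 1)) if buf[0] > 1 else None
```

Merging two partition buffers yields the same result as a single pass over all rows, which is what makes the update/merge pair a valid aggregate.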
Repository: spark
Updated Branches:
refs/heads/branch-1.6 ecf027edd -> 68fa5c713
[SPARK-10113][SQL] Explicit error message for unsigned Parquet logical types
Parquet supports some unsigned datatypes. However, since Spark does not support
unsigned datatypes, it needs to emit an exception with
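The shape of such a change is to fail fast with an explicit message during type conversion rather than surfacing a generic error later. A hypothetical sketch (type names are illustrative, not Spark's converter API):

```python
# Hypothetical sketch: map signed Parquet logical types, and raise an
# explicit, descriptive error for unsigned ones that Spark cannot model.
SIGNED = {"INT_8": "ByteType", "INT_16": "ShortType",
          "INT_32": "IntegerType", "INT_64": "LongType"}
UNSIGNED = {"UINT_8", "UINT_16", "UINT_32", "UINT_64"}

def to_catalyst_type(parquet_logical_type):
    if parquet_logical_type in UNSIGNED:
        raise ValueError(
            f"Parquet type {parquet_logical_type} is not supported: "
            "Spark has no unsigned integer types")
    return SIGNED[parquet_logical_type]
```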
Repository: spark
Updated Branches:
refs/heads/master f5a9526fe -> d292f7483
[SPARK-11420] Updating Stddev support via Imperative Aggregate
Switched stddev support from DeclarativeAggregate to ImperativeAggregate.
Author: JihongMa
Closes #9380 from
Repository: spark
Updated Branches:
refs/heads/master e2957bc08 -> 14cf75370
[SPARK-11661][SQL] Still pushdown filters returned by unhandledFilters.
https://issues.apache.org/jira/browse/SPARK-11661
Author: Yin Huai
Closes #9634 from yhuai/unhandledFilters.
Project:
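The idea behind still pushing down "unhandled" filters: a source may evaluate pushed filters on a best-effort basis, so Spark can pass all of them down as a hint while re-evaluating the unhandled ones itself to guarantee correctness. A plain-Python sketch of that split (names are hypothetical, not Spark's internal API):

```python
# Illustrative sketch: push every filter to the source, but keep the
# filters the source declared unhandled for re-evaluation after the scan.
def plan_filters(all_filters, unhandled_filters):
    pushed = list(all_filters)                       # best-effort pushdown
    unhandled = set(unhandled_filters)
    post_scan = [f for f in all_filters if f in unhandled]
    return pushed, post_scan
```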
Repository: spark
Updated Branches:
refs/heads/branch-1.6 57f281c1a -> 83906411c
[SPARK-11661][SQL] Still pushdown filters returned by unhandledFilters.
https://issues.apache.org/jira/browse/SPARK-11661
Author: Yin Huai
Closes #9634 from yhuai/unhandledFilters.
Repository: spark
Updated Branches:
refs/heads/master dcb896fd8 -> 41bbd2300
[SPARK-11654][SQL] add reduce to GroupedDataset
This PR adds a new method, `reduce`, to `GroupedDataset`, which allows similar
operations to `reduceByKey` on a traditional `PairRDD`.
```scala
val ds = Seq("abc",
```
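The quoted Scala snippet is truncated in this digest, but the semantics of a per-group `reduce` can be mirrored in plain Python (the real API is Scala's `GroupedDataset.reduce`; this helper is only illustrative):

```python
# Sketch of reduce-per-group semantics: group values by key, then fold
# each group's values with the user's associative function.
from collections import defaultdict
from functools import reduce

def grouped_reduce(pairs, func):
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return {k: reduce(func, vs) for k, vs in groups.items()}
```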
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f061d2539 -> 6c1bf19e8
[SPARK-11654][SQL] add reduce to GroupedDataset
This PR adds a new method, `reduce`, to `GroupedDataset`, which allows similar
operations to `reduceByKey` on a traditional `PairRDD`.
```scala
val ds =
```
Repository: spark
Updated Branches:
refs/heads/master 41bbd2300 -> 0f1d00a90
[SPARK-11663][STREAMING] Add Java API for trackStateByKey
TODO
- [x] Add Java API
- [x] Add API tests
- [x] Add a function test
Author: Shixiong Zhu
Closes #9636 from zsxwing/java-track.
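The track-state-by-key pattern the Java API wraps: a user function sees each (key, new value, previous state) and returns the new state, which is carried forward to the next batch. A minimal plain-Python sketch of that contract (hypothetical helper, not the Spark API):

```python
# Illustrative sketch: apply the user's update function to each keyed
# record, threading per-key state from one batch to the next.
def track_state(state, batch, update_fn):
    new_state = dict(state)            # previous batch's state, untouched
    for key, value in batch:
        new_state[key] = update_fn(key, value, new_state.get(key))
    return new_state
```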
Repository: spark
Updated Branches:
refs/heads/branch-1.6 6c1bf19e8 -> 05666e09b
[SPARK-11663][STREAMING] Add Java API for trackStateByKey
TODO
- [x] Add Java API
- [x] Add API tests
- [x] Add a function test
Author: Shixiong Zhu
Closes #9636 from
Repository: spark
Updated Branches:
refs/heads/branch-1.6 4aacbe9e6 -> ecf027edd
[SPARK-11191][SQL] Looks up temporary function using execution Hive client
When looking up Hive temporary functions, we should always use the
`SessionState` within the execution Hive client, since temporary
Repository: spark
Updated Branches:
refs/heads/master d292f7483 -> 767d288b6
[SPARK-11655][CORE] Fix deadlock in handling of launcher stop().
The stop() callback was trying to close the launcher connection in the
same thread that handles connection data, which ended up causing a
deadlock. So
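The general shape of such a fix: never close a connection from the very thread that dispatches its events, since close() may wait for that thread to finish. A hedged plain-Python sketch (hypothetical code, not Spark's launcher):

```python
# Illustrative sketch: if close() is requested from the handler thread
# itself, hand the work to a fresh thread so the handler can return.
import threading

def safe_close(conn, handler_thread):
    if threading.current_thread() is handler_thread:
        t = threading.Thread(target=conn.close)
        t.start()                      # don't block the handler thread
        return t
    conn.close()                       # safe to close synchronously
    return None
```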
Repository: spark
Updated Branches:
refs/heads/branch-1.6 6853ba6bf -> f5c66d163
[SPARK-11655][CORE] Fix deadlock in handling of launcher stop().
The stop() callback was trying to close the launcher connection in the
same thread that handles connection data, which ended up causing a
deadlock.
Repository: spark
Updated Branches:
refs/heads/branch-1.6 0dd6c2987 -> 3df6238bd
[SPARK-11671] documentation code example typo
Example for sqlContext.createDataFrame from pandas.DataFrame has a typo
Author: Chris Snow
Closes #9639 from snowch/patch-2.
Project:
Repository: spark
Updated Branches:
refs/heads/master 68ef61bb6 -> bc092966f
[SPARK-11709] include creation site info in SparkContext.assertNotStopped error
message
This helps debug issues caused by multiple SparkContext instances. JoshRosen
andrewor14
~~~
scala> sc.stop()
scala>
~~~
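The idea is to record the call site at construction time and surface it in the error, so a stray second instance can be located. A hedged plain-Python sketch of the pattern (not Spark's actual code):

```python
# Illustrative sketch: remember where the context was created and
# include that call site in the "stopped" error message.
import traceback

class Context:
    def __init__(self):
        self._stopped = False
        # Keep a short summary of the creation call site.
        self._creation_site = traceback.format_stack(limit=3)[0].strip()

    def stop(self):
        self._stopped = True

    def assert_not_stopped(self):
        if self._stopped:
            raise RuntimeError(
                "Cannot call methods on a stopped context. "
                f"It was created at:\n{self._creation_site}")
```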
Repository: spark
Updated Branches:
refs/heads/branch-1.6 3df6238bd -> 838f956c9
[SPARK-11709] include creation site info in SparkContext.assertNotStopped error
message
This helps debug issues caused by multiple SparkContext instances. JoshRosen
andrewor14
~~~
scala> sc.stop()
scala>
~~~
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f5c66d163 -> 340ca9e76
[SPARK-11290][STREAMING][TEST-MAVEN] Fix the test for maven build
Should not create SparkContext in the constructor of `TrackStateRDDSuite`. This
is a follow-up PR for #9256 to fix the test for the maven build.
Repository: spark
Updated Branches:
refs/heads/master 767d288b6 -> f0d3b58d9
[SPARK-11290][STREAMING][TEST-MAVEN] Fix the test for maven build
Should not create SparkContext in the constructor of `TrackStateRDDSuite`. This
is a follow-up PR for #9256 to fix the test for the maven build.
Author:
Repository: spark
Updated Branches:
refs/heads/master 74c30049a -> cf38fc755
[SPARK-11670] Fix incorrect kryo buffer default value in docs
https://cloud.githubusercontent.com/assets/2133137/11108261/35d183d4-889a-11e5-9572-85e9d6cebd26.png
Author: Andrew Or
Closes
Repository: spark
Updated Branches:
refs/heads/branch-1.6 069591799 -> a98cac26f
[SPARK-11670] Fix incorrect kryo buffer default value in docs
https://cloud.githubusercontent.com/assets/2133137/11108261/35d183d4-889a-11e5-9572-85e9d6cebd26.png
Author: Andrew Or
Repository: spark
Updated Branches:
refs/heads/master f0d3b58d9 -> 380dfcc0d
[SPARK-11671] documentation code example typo
Example for sqlContext.createDataFrame from pandas.DataFrame has a typo
Author: Chris Snow
Closes #9639 from snowch/patch-2.
Project:
Repository: spark
Updated Branches:
refs/heads/master cf38fc755 -> 12a0784ac
[SPARK-11667] Update dynamic allocation docs to reflect supported cluster
managers
Author: Andrew Or
Closes #9637 from andrewor14/update-da-docs.
Project:
Repository: spark
Updated Branches:
refs/heads/branch-1.6 a98cac26f -> 782885786
[SPARK-11667] Update dynamic allocation docs to reflect supported cluster
managers
Author: Andrew Or
Closes #9637 from andrewor14/update-da-docs.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/master e71c07557 -> ed04846e1
[SPARK-11263][SPARKR] lintr Throws Warnings on Commented Code in Documentation
Clean out hundreds of `style: Commented code should be removed.` from lintr
Like these:
```
/opt/spark-1.6.0-bin-hadoop2.6/R/pkg/R/DataFrame.R:513:3: style: Commented code
should be removed.
# sc <- sparkR.init()
```
Repository: spark
Updated Branches:
refs/heads/branch-1.6 874cd29f2 -> ea9f7c580
Repository: spark
Updated Branches:
refs/heads/master e4e46b20f -> e71c07557
[SPARK-11672][ML] flaky spark.ml read/write tests
We set `sqlContext = null` in `afterAll`. However, this doesn't change
`SQLContext.activeContext` and then `SQLContext.getOrCreate` might use the
`SparkContext`
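The failure mode generalizes: nulling out a local field in teardown is not enough if a module-level "active" registry still points at the old instance. A plain-Python sketch of the bug's shape and the fix (hypothetical singleton, not Spark's `SQLContext` internals):

```python
# Illustrative sketch: get_or_create reuses whatever the active registry
# holds, so teardown must clear the registry, not just a local field.
_active = {"ctx": None}

def get_or_create():
    if _active["ctx"] is None:
        _active["ctx"] = object()
    return _active["ctx"]

def clear_active():
    _active["ctx"] = None   # what afterAll must also do
```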
Repository: spark
Updated Branches:
refs/heads/branch-1.6 46a536e45 -> 874cd29f2
[SPARK-11672][ML] flaky spark.ml read/write tests
We set `sqlContext = null` in `afterAll`. However, this doesn't change
`SQLContext.activeContext` and then `SQLContext.getOrCreate` might use the
Repository: spark
Updated Branches:
refs/heads/master ea5ae2705 -> ad960885b
[SPARK-8029] Robust shuffle writer
Currently, all the shuffle writers write to the target path directly, so the file
could be corrupted by another attempt of the same partition on the same executor.
They should write to
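The "write to a temporary file, then promote" pattern this describes can be sketched in plain Python (paths and names are illustrative): each attempt writes to its own temporary file and atomically renames it into place, so a failed or concurrent attempt never leaves a half-written target.

```python
# Illustrative sketch: write to a private temp file in the target's
# directory, then atomically rename it over the target path.
import os
import tempfile

def write_atomically(target_path, data):
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(target_path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, target_path)   # atomic rename on POSIX
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)             # clean up the failed attempt
        raise
```

The rename is what makes concurrent attempts safe: readers only ever see either the old complete file or the new complete file.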
Repository: spark
Updated Branches:
refs/heads/branch-1.6 55faab532 -> aff44f9a8
[SPARK-8029] Robust shuffle writer
Currently, all the shuffle writers write to the target path directly, so the file
could be corrupted by another attempt of the same partition on the same executor.
They should
Repository: spark
Updated Branches:
refs/heads/branch-1.6 966fe1f09 -> 46a536e45
Preparing Spark release v1.6-snapshot0-test
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/609d6e87
Tree:
Preparing development version 1.6.0-SNAPSHOT
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/46a536e4
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/46a536e4
Diff:
Repository: spark
Updated Tags: refs/tags/v1.6-snapshot0-test [created] 609d6e87a
[SPARK-11263][SPARKR] lintr Throws Warnings on Commented Code in Documentation
Clean out hundreds of `style: Commented code should be removed.` from lintr
Like these:
```
/opt/spark-1.6.0-bin-hadoop2.6/R/pkg/R/DataFrame.R:513:3: style: Commented code
should be removed.
# sc <- sparkR.init()
```
Repository: spark
Updated Branches:
refs/heads/branch-1.6 05666e09b -> 199e4cb21
[SPARK-11419][STREAMING] Parallel recovery for FileBasedWriteAheadLog + minor
recovery tweaks
The support for closing WriteAheadLog files after writes was just merged in.
Closing every file after a write is a
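Parallel recovery of log segments can be sketched in plain Python (the reader function here is hypothetical): instead of opening and replaying segments one at a time, read them concurrently and concatenate the results in segment order.

```python
# Illustrative sketch: read log segments concurrently; pool.map keeps
# results in input order, so the recovered stream stays ordered.
from concurrent.futures import ThreadPoolExecutor

def recover(segments, read_segment, parallelism=8):
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        results = list(pool.map(read_segment, segments))
    return [record for seg in results for record in seg]
```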
Repository: spark
Updated Branches:
refs/heads/master 0f1d00a90 -> 7786f9cc0
[SPARK-11419][STREAMING] Parallel recovery for FileBasedWriteAheadLog + minor
recovery tweaks
The support for closing WriteAheadLog files after writes was just merged in.
Closing every file after a write is a very
Repository: spark
Updated Branches:
refs/heads/master 7786f9cc0 -> e4e46b20f
[SPARK-11681][STREAMING] Correctly update state timestamp even when state is
not updated
Bug: Timestamp is not updated if there is data but the corresponding state is
not updated. This is wrong, and timeout is
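The corrected bookkeeping can be sketched as follows (illustrative record structure, not Spark's internals): the last-seen timestamp must advance whenever data arrives for a key, even if the state value itself is unchanged; otherwise an active key can be wrongly timed out.

```python
# Illustrative sketch: refresh the timestamp on every arrival, and base
# timeout purely on the last time data was seen for the key.
def on_data(record, now):
    record["timestamp"] = now          # always refresh on new data
    return record

def is_timed_out(record, now, timeout):
    return now - record["timestamp"] >= timeout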
Repository: spark
Updated Branches:
refs/heads/branch-1.6 199e4cb21 -> 966fe1f09
[SPARK-11681][STREAMING] Correctly update state timestamp even when state is
not updated
Bug: Timestamp is not updated if there is data but the corresponding state is
not updated. This is wrong, and timeout is
Repository: spark
Updated Branches:
refs/heads/master ed04846e1 -> 2035ed392
[SPARK-11717] Ignore R session and history files from git
see: https://issues.apache.org/jira/browse/SPARK-11717
SparkR generates R session data and history files under current directory.
It might be useful to
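The ignore entries such a change would plausibly add (the exact lines are an assumption, since the diff is not shown in this digest):

```
# R session data and history files generated by SparkR
.RData
.Rhistory
```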
Repository: spark
Updated Branches:
refs/heads/branch-1.6 ea9f7c580 -> 55faab532
[SPARK-11629][ML][PYSPARK][DOC] Python example code for Multilayer Perceptron
Classification
Add Python example code for Multilayer Perceptron Classification, and make
example code in user guide document
Repository: spark
Updated Branches:
refs/heads/master 2035ed392 -> ea5ae2705
[SPARK-11629][ML][PYSPARK][DOC] Python example code for Multilayer Perceptron
Classification
Add Python example code for Multilayer Perceptron Classification, and make the
example code in the user guide document testable.
Repository: spark
Updated Branches:
refs/heads/branch-1.5 6e823b4d7 -> b478ee374
[SPARK-11595][SQL][BRANCH-1.5] Fixes ADD JAR when the input path contains URL
scheme
This PR backports #9569 to branch-1.5.
Author: Cheng Lian
Closes #9570 from
Repository: spark
Updated Branches:
refs/heads/branch-1.6 83906411c -> ed048763b
[SPARK-11673][SQL] Remove the normal Project physical operator (and keep
TungstenProject)
Also make full outer join able to produce UnsafeRows.
Author: Reynold Xin
Closes #9643 from
Repository: spark
Updated Branches:
refs/heads/master 14cf75370 -> 30e743364
[SPARK-11673][SQL] Remove the normal Project physical operator (and keep
TungstenProject)
Also make full outer join able to produce UnsafeRows.
Author: Reynold Xin
Closes #9643 from
Repository: spark
Updated Branches:
refs/heads/master 30e743364 -> 08660a0bc
[BUILD][MINOR] Remove non-existent yarnStable module in Sbt project
Remove some old YARN-related build code, please review, thanks a lot.
Author: jerryshao
Closes #9625 from
Repository: spark
Updated Branches:
refs/heads/branch-1.6 ed048763b -> c3a0c7728
[BUILD][MINOR] Remove non-existent yarnStable module in Sbt project
Remove some old YARN-related build code, please review, thanks a lot.
Author: jerryshao
Closes #9625 from
Repository: spark
Updated Branches:
refs/heads/master 380dfcc0d -> 74c30049a
[SPARK-2533] Add locality levels on stage summary view
Author: Jean-Baptiste Onofré
Closes #9487 from jbonofre/SPARK-2533-2.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Repository: spark
Updated Branches:
refs/heads/branch-1.6 340ca9e76 -> 069591799
[SPARK-2533] Add locality levels on stage summary view
Author: Jean-Baptiste Onofré
Closes #9487 from jbonofre/SPARK-2533-2.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/branch-1.6 782885786 -> 0dd6c2987
[SPARK-11658] simplify documentation for PySpark combineByKey
Author: Chris Snow
Closes #9640 from snowch/patch-3.
(cherry picked from commit 68ef61bb656bd9c08239726913ca8ab271d52786)
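PySpark's `combineByKey` takes three functions (createCombiner, mergeValue, mergeCombiners); a plain-Python sketch of how the first two compose over one partition's pairs (mergeCombiners, which joins partial results across partitions, is omitted in this single-partition sketch):

```python
# Illustrative sketch: createCombiner seeds the first value for a key,
# mergeValue folds each subsequent value into the existing combiner.
def combine_by_key(pairs, create_combiner, merge_value):
    combined = {}
    for k, v in pairs:
        if k in combined:
            combined[k] = merge_value(combined[k], v)
        else:
            combined[k] = create_combiner(v)
    return combined
```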
Repository: spark
Updated Branches:
refs/heads/master 12a0784ac -> 68ef61bb6
[SPARK-11658] simplify documentation for PySpark combineByKey
Author: Chris Snow
Closes #9640 from snowch/patch-3.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Repository: spark
Updated Branches:
refs/heads/branch-1.6 838f956c9 -> f061d2539
[SPARK-11712][ML] Make spark.ml LDAModel be abstract
Per discussion in the initial Pipelines LDA PR
[https://github.com/apache/spark/pull/9513], we should make LDAModel abstract
and create a LocalLDAModel. This
Repository: spark
Updated Branches:
refs/heads/master bc092966f -> dcb896fd8
[SPARK-11712][ML] Make spark.ml LDAModel be abstract
Per discussion in the initial Pipelines LDA PR
[https://github.com/apache/spark/pull/9513], we should make LDAModel abstract
and create a LocalLDAModel. This