Repository: spark
Updated Branches:
refs/heads/master 453dae567 -> c00744e60
[SQL][MINOR] Fix one little mismatched comment according to the codes in
interface.scala
Author: proflin
Closes #10824 from proflin/master.
Repository: spark
Updated Branches:
refs/heads/branch-1.6 68265ac23 -> 30f55e523
[SQL][MINOR] Fix one little mismatched comment according to the codes in
interface.scala
Author: proflin
Closes #10824 from proflin/master.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/master c00744e60 -> d8c4b00a2
[SPARK-7683][PYSPARK] Confusing behavior of fold function of RDD in pyspark
Fix the order of arguments that PySpark's RDD.fold passes to its op - it
should be (acc, obj), like the other implementations.
Obviously, this is a
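The corrected argument order can be sketched in plain Python. This is a minimal stand-in for the semantics, not PySpark's actual implementation; the `fold` helper here is hypothetical:

```python
from functools import reduce

# Minimal stand-in for RDD.fold: the op receives (acc, obj),
# matching the Scala and Java implementations. Hypothetical helper,
# not PySpark's actual code.
def fold(items, zero_value, op):
    return reduce(op, items, zero_value)

# With a non-commutative op, the argument order is observable:
print(fold(["a", "b", "c"], "", lambda acc, obj: acc + obj))  # abc
print(fold([1, 2, 3, 4], 0, lambda acc, obj: acc + obj))      # 10
```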
Repository: spark
Updated Branches:
refs/heads/branch-1.6 30f55e523 -> 962e618ec
[MLLIB] Fix CholeskyDecomposition assertion's message
Change assertion's message so it's consistent with the code. The old message
says that the invoked method was lapack.dports, where in fact it was
lapack.dppsv
Repository: spark
Updated Branches:
refs/heads/master d8c4b00a2 -> ebd9ce0f1
[MLLIB] Fix CholeskyDecomposition assertion's message
Change assertion's message so it's consistent with the code. The old message
says that the invoked method was lapack.dports, where in fact it was
lapack.dppsv
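The kind of consistency being restored can be sketched with a hypothetical wrapper (not MLlib's actual code): the assertion message should name the routine that is really called, `lapack.dppsv`, not `lapack.dports`.

```python
# Hypothetical wrapper illustrating the fix: the failure message names
# the LAPACK routine actually invoked, lapack.dppsv.
def check_dppsv_return(info):
    """info is the LAPACK return code; 0 means success."""
    assert info == 0, f"lapack.dppsv returned {info}"
    return "ok"

print(check_dppsv_return(0))  # ok
```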
Repository: spark
Updated Branches:
refs/heads/master efd7eed32 -> 43f1d59e1
[SPARK-2750][WEB UI] Add https support to the Web UI
Author: scwf
Author: Marcelo Vanzin
Author: WangTaoTheTonic
Author: w00228970
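Enabling SSL for the UI is driven by configuration; a sketch of the relevant `spark-defaults.conf` entries, using key names from Spark's security configuration (paths and passwords are placeholders, and per-component overrides may differ):

```
# spark-defaults.conf (sketch; consult the Spark security docs for details)
spark.ssl.enabled             true
spark.ssl.keyStore            /path/to/keystore.jks
spark.ssl.keyStorePassword    ********
spark.ssl.trustStore          /path/to/truststore.jks
spark.ssl.trustStorePassword  ********
```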
Repository: spark
Updated Branches:
refs/heads/master 43f1d59e1 -> f6f7ca9d2
[SPARK-9716][ML] BinaryClassificationEvaluator should accept Double prediction
column
This PR aims to allow the prediction column of `BinaryClassificationEvaluator`
to be of double type.
Author: BenFradet
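The accepted-type relaxation can be sketched as follows; the helper and the vector shape are hypothetical, standing in for an evaluator that takes either a probability vector or a plain double score per row:

```python
# Sketch: accept either a probability vector or a plain double as the
# prediction value (names and shapes hypothetical, not Spark ML's API).
def to_score(prediction):
    if isinstance(prediction, (list, tuple)):
        return float(prediction[1])  # vector: [P(class 0), P(class 1)]
    return float(prediction)         # already a double score

print(to_score([0.3, 0.7]))  # 0.7
print(to_score(0.9))         # 0.9
```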
Repository: spark
Updated Branches:
refs/heads/master 4dbd31612 -> c78e2080e
[SPARK-12816][SQL] De-alias type when generating schemas
Call `dealias` on local types to fix schema generation for abstract type
members, such as
```scala
type KeyValue = (Int, String)
```
Add simple test
Repository: spark
Updated Branches:
refs/heads/master c6f971b4a -> efd7eed32
[BUILD] Runner for spark packages
This is a convenience method added to the SBT build for developers, though if
people think it's useful we could consider adding an official script that runs
using the assembly
Repository: spark
Updated Branches:
refs/heads/master b72e01e82 -> 4dbd31612
[SPARK-12560][SQL] SqlTestUtils.stripSparkFilter needs to copy utf8strings
See https://issues.apache.org/jira/browse/SPARK-12560
This isn't causing any problems currently because the tests for string
predicate
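The underlying hazard can be shown with a reused buffer in plain Python (UTF8String itself is a Spark-internal class; this is only an analogue): a value that points into a buffer the framework reuses must be copied before it is collected.

```python
# Spark's UTF8String can point into a buffer that is reused between rows;
# collecting without copying captures an alias, not a value.
buf = bytearray(b"row1")

copied = bytes(buf)   # like UTF8String.copy(): snapshots the bytes
aliased = buf         # no copy: still points at the reused buffer

buf[:] = b"row2"      # the framework reuses the buffer for the next row

print(copied)          # b'row1' - safe
print(bytes(aliased))  # b'row2' - silently changed
```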
Repository: spark
Updated Branches:
refs/heads/master 2388de519 -> b72e01e82
[SPARK-12867][SQL] Nullability of Intersect can be stricter
JIRA: https://issues.apache.org/jira/browse/SPARK-12867
When intersecting one nullable column with one non-nullable column, the result
will not contain
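The reasoning can be checked with a tiny sketch: if one side of the intersection can never produce NULL, the result cannot contain NULL either, so the output column's nullability can be tightened.

```python
# Model two columns as lists; None stands for SQL NULL. The right column
# is non-nullable, so no None can survive the intersection.
left = [1, None, 2, 3]
right = [2, 3, 4]  # non-nullable side: contains no None

result = [x for x in left if x in right]
print(result)  # [2, 3] - never contains None
```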
Repository: spark
Updated Branches:
refs/heads/master e14817b52 -> b122c861c
[SPARK-12887] Do not expose var's in TaskMetrics
This is a step in implementing SPARK-10620, which migrates TaskMetrics to
accumulators.
TaskMetrics has a bunch of var's, some are fully public, some are
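One conventional way to stop exposing mutable `var`s is a read-only accessor over a private field, sketched here in Python (the actual refactor in Spark is in Scala and considerably more involved; class and method names are hypothetical):

```python
class TaskMetricsSketch:
    """Hypothetical metrics holder: mutation is internal-only."""

    def __init__(self):
        self._bytes_read = 0       # no longer a public var

    @property
    def bytes_read(self):          # read-only public accessor
        return self._bytes_read

    def _inc_bytes_read(self, n):  # internal-only setter
        self._bytes_read += n

m = TaskMetricsSketch()
m._inc_bytes_read(128)
print(m.bytes_read)  # 128
```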
Repository: spark
Updated Branches:
refs/heads/master 0ddba6d88 -> e14817b52
[SPARK-12870][SQL] better format bucket id in file name
For a normal Parquet file without bucketing, the file name ends with a jobUUID,
which may be all numbers and can be mistakenly regarded as a bucket id. This PR
improves the
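A sketch of why the formatting matters: a bare trailing number is ambiguous with a numeric jobUUID, while a fixed-width, delimited pattern is not. The file-name scheme below is hypothetical, not Spark's exact format:

```python
import re

# Hypothetical naming: encode the bucket id as a fixed-width, delimited
# suffix instead of a bare number that a numeric jobUUID could mimic.
def bucket_file_name(part, bucket_id):
    return f"{part}_{bucket_id:05d}.parquet"

BUCKET_RE = re.compile(r"_(\d{5})\.parquet$")

name = bucket_file_name("part-00000", 3)
match = BUCKET_RE.search(name)
print(name)                  # part-00000_00003.parquet
print(int(match.group(1)))   # 3
```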
Repository: spark
Updated Branches:
refs/heads/master b122c861c -> 2388de519
[SPARK-12804][ML] Fix LogisticRegression with FitIntercept on all same label
training data
CC jkbradley mengxr dbtsai
Author: Feynman Liang
Closes #10743 from feynmanliang/SPARK-12804.
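The degenerate case can be reasoned about directly: with an intercept and every label identical, the maximum-likelihood intercept is the logit of the label mean, which diverges, while the coefficients stay at zero. A pure-Python sketch of that closed form (not Spark's actual code path):

```python
import math

# When all labels are the same class, the MLE intercept is logit(p) with
# p in {0, 1}, i.e. +/- infinity.
def constant_label_intercept(labels):
    p = sum(labels) / len(labels)
    if p == 1.0:
        return math.inf
    if p == 0.0:
        return -math.inf
    return math.log(p / (1.0 - p))  # ordinary logit otherwise

print(constant_label_intercept([1, 1, 1]))  # inf
print(constant_label_intercept([0, 0]))     # -inf
```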
Repository: spark
Updated Branches:
refs/heads/master 3ac648289 -> beda90142
Revert "[SPARK-11295] Add packages to JUnit output for Python tests"
This reverts commit c6f971b4aeca7265ab374fa46c5c452461d9b6a7.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Repository: spark
Updated Branches:
refs/heads/master 37fefa66c -> 3ac648289
[SPARK-12337][SPARKR] Implement dropDuplicates() method of DataFrame in SparkR.
Author: Sun Rui
Closes #10309 from sun-rui/SPARK-12337.
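The semantics of `dropDuplicates()` can be sketched in a few lines of Python (SparkR's actual implementation delegates to the JVM, and on a distributed DataFrame which duplicate survives is not ordered; this sequential sketch keeps the first occurrence):

```python
def drop_duplicates(rows):
    """Keep one occurrence of each distinct row, preserving first-seen order."""
    seen = set()
    out = []
    for row in rows:
        if row not in seen:
            seen.add(row)
            out.append(row)
    return out

print(drop_duplicates([(1, "a"), (2, "b"), (1, "a")]))  # [(1, 'a'), (2, 'b')]
```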
Repository: spark
Updated Branches:
refs/heads/master 3e84ef0a5 -> 37fefa66c
[SPARK-12168][SPARKR] Add automated tests for conflicted function in R
Currently this is reported when loading the SparkR package in R (probably would
add is.nan)
```
Loading required package: methods
Attaching
```
Repository: spark
Updated Branches:
refs/heads/master beda90142 -> 488bbb216
[SPARK-12232][SPARKR] New R API for read.table to avoid name conflict
shivaram, sorry it took longer to fix some conflicts; this is the change to add
an alias for `table`
Author: felixcheung
Repository: spark
Updated Branches:
refs/heads/master 488bbb216 -> 6844d36ae
[SPARK-12871][SQL] Support to specify the option for compression codec.
https://issues.apache.org/jira/browse/SPARK-12871
This PR adds support for specifying the compression codec through the option
`codec`
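A sketch of the usual pattern behind such an option: resolve a short codec name to a fully-qualified class name, falling back to the value as given. The mapping entries below are standard Hadoop codec classes; the exact table in the PR may differ:

```python
# Hypothetical resolver: accept a short name or a fully-qualified class.
SHORT_CODEC_NAMES = {
    "gzip":   "org.apache.hadoop.io.compress.GzipCodec",
    "snappy": "org.apache.hadoop.io.compress.SnappyCodec",
    "bzip2":  "org.apache.hadoop.io.compress.BZip2Codec",
}

def resolve_codec(option_value):
    name = option_value.strip()
    return SHORT_CODEC_NAMES.get(name.lower(), name)

print(resolve_codec("GZIP"))       # org.apache.hadoop.io.compress.GzipCodec
print(resolve_codec("my.Codec"))   # my.Codec (passed through unchanged)
```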