Repository: spark
Updated Branches:
refs/heads/master b93830126 -> cba1d6b65
[SPARK-12631][PYSPARK][DOC] PySpark clustering parameter desc to consistent
format
Part of task for
[SPARK-11219](https://issues.apache.org/jira/browse/SPARK-11219) to make
PySpark MLlib parameter description
Repository: spark
Updated Branches:
refs/heads/branch-1.6 9a3d1bd09 -> 53f518a6e
[SPARK-12629][SPARKR] Fixes for DataFrame saveAsTable method
I've tried to solve some of the issues mentioned in:
https://issues.apache.org/jira/browse/SPARK-12629
Please let me know what you think.
Thanks!
Repository: spark
Updated Branches:
refs/heads/master 22ba21348 -> 12a20c144
[SPARK-10820][SQL] Support for the continuous execution of structured queries
This is a follow up to 9aadcffabd226557174f3ff566927f873c71672e that extends
Spark SQL to allow users to _repeatedly_ optimize and
Repository: spark
Updated Branches:
refs/heads/branch-1.6 4c28b4c8f -> 9c0cf22f7
[SPARK-12711][ML] ML StopWordsRemover does not protect itself from column name
duplication
Fixes problem and verifies fix by test suite.
Also - adds optional parameter: nullable (Boolean) to:
Repository: spark
Updated Branches:
refs/heads/master 358300c79 -> b1835d727
[SPARK-12711][ML] ML StopWordsRemover does not protect itself from column name
duplication
Fixes problem and verifies fix by test suite.
Also - adds optional parameter: nullable (Boolean) to:
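The duplicate-column protection described above can be sketched as follows. This is a minimal illustration, not Spark's actual StopWordsRemover code; the function and schema representation are hypothetical:

```python
# Hypothetical sketch of the guard a transformer can apply before adding
# its output column: fail fast if the name already exists in the schema.
# A schema is modeled here as a list of (name, nullable) pairs.
def add_output_column(schema, output_col, nullable=False):
    """Return a new schema with output_col appended, refusing duplicates."""
    if any(name == output_col for name, _ in schema):
        raise ValueError(f"Output column {output_col} already exists.")
    return schema + [(output_col, nullable)]
```

Without such a check, transforming a DataFrame that already contains the output column would silently produce a duplicated column name.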
Repository: spark
Updated Branches:
refs/heads/branch-1.6 bd8efba8f -> 99594b213
[SPARK-13094][SQL] Add encoders for seq/array of primitives
Author: Michael Armbrust
Closes #11014 from marmbrus/seqEncoders.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/master 29d92181d -> b93830126
[SPARK-13114][SQL] Add a test for tokens more than the fields in schema
https://issues.apache.org/jira/browse/SPARK-13114
This PR adds a test for rows with more tokens than fields in the schema.
Author: hyukjinkwon
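The situation being tested can be sketched like this. It is an illustration of the general problem, not Spark's CSV parser; the function name and mode strings are borrowed from the datasource's parse-mode vocabulary but the code is hypothetical:

```python
# Illustrative sketch: a parsed row may carry more tokens than the schema
# has fields. A permissive parser drops the extras; a fail-fast one errors.
def parse_row(tokens, schema_len, mode="PERMISSIVE"):
    if len(tokens) > schema_len:
        if mode == "FAILFAST":
            raise ValueError("Row has more tokens than schema fields")
        return tokens[:schema_len]  # permissive: truncate the extras
    # pad short rows with None so every row matches the schema width
    return tokens + [None] * (schema_len - len(tokens))
```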
Repository: spark
Updated Branches:
refs/heads/branch-1.6 99594b213 -> 9a3d1bd09
[SPARK-12780][ML][PYTHON][BACKPORT] Inconsistency returning value of ML python
models' properties
Backport of [SPARK-12780] for branch-1.6
Original PR for master: https://github.com/apache/spark/pull/10724
Repository: spark
Updated Branches:
refs/heads/master d0df2ca40 -> b377b0353
[DOCS] Update StructType.scala
The example will throw error like
:20: error: not found: value StructType
Need to add this line:
import org.apache.spark.sql.types._
Author: Kevin (Sangwoo) Kim
Repository: spark
Updated Branches:
refs/heads/branch-1.6 3c92333ee -> e81333be0
[DOCS] Update StructType.scala
The example will throw error like
:20: error: not found: value StructType
Need to add this line:
import org.apache.spark.sql.types._
Author: Kevin (Sangwoo) Kim
Repository: spark
Updated Branches:
refs/heads/master be5dd881f -> d0df2ca40
[SPARK-13121][STREAMING] java mapWithState mishandles scala Option
Already merged into 1.6 branch, this PR is to commit to master the same change
Author: Gabriele Nizzoli
Closes #11028 from
Repository: spark
Updated Branches:
refs/heads/master 6de6a9772 -> 672032d0a
[SPARK-13020][SQL][TEST] fix random generator for map type
when we generate a map, we first randomly pick a length, then create a seq of
key-value pairs with the expected length, and finally call `toMap`. However,
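The pitfall is that `toMap` collapses duplicate keys, so the resulting map can be shorter than the requested length. A minimal sketch of the bug and one possible fix (generating distinct keys up front); this is not Spark's test generator, just an illustration:

```python
import random

# Buggy shape: N random pairs may collapse to fewer than N map entries
# once duplicate keys are merged.
def naive_random_map(n, key_space=5, seed=0):
    rng = random.Random(seed)
    pairs = [(rng.randrange(key_space), rng.random()) for _ in range(n)]
    return dict(pairs)  # duplicates collapse here

# Fixed shape: sample distinct keys first, so the map has exactly n entries.
def fixed_random_map(n, key_space=100, seed=0):
    rng = random.Random(seed)
    keys = rng.sample(range(key_space), n)
    return {k: rng.random() for k in keys}
```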
Repository: spark
Updated Branches:
refs/heads/master b377b0353 -> 6de6a9772
[SPARK-13150] [SQL] disable two flaky tests
Author: Davies Liu
Closes #11037 from davies/disable_flaky.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Repository: spark
Updated Branches:
refs/heads/master 672032d0a -> 21112e8a1
[SPARK-12992] [SQL] Update parquet reader to support more types when decoding
to ColumnarBatch.
This patch implements support for more types when doing the vectorized decode.
There are
a few more types remaining
Repository: spark
Updated Branches:
refs/heads/branch-1.6 9c0cf22f7 -> 3c92333ee
[SPARK-13056][SQL] map column would throw NPE if value is null
Jira:
https://issues.apache.org/jira/browse/SPARK-13056
Create a map like
{ "a": "somestring", "b": null}
Query like
SELECT col["b"] FROM t1;
NPE
Repository: spark
Updated Branches:
refs/heads/master 335f10eda -> e86f8f63b
[SPARK-13147] [SQL] improve readability of generated code
1. try to avoid the suffix (unique id)
2. remove the comment if there is no code generated.
3. re-arrange the order of functions
4. drop the new line for
Repository: spark
Updated Branches:
refs/heads/branch-1.6 53f518a6e -> 4c28b4c8f
[SPARK-13121][STREAMING] java mapWithState mishandles scala Option
Java mapWithState with Function3 has a wrong conversion of the Java `Optional`
to the Scala `Option`; the fixed code uses the same conversion used in the
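The shape of the conversion bug can be illustrated in a language-neutral way: an `Optional`-style wrapper must map the absent case to the target language's "no value" representation, and the present case to the wrapped value. The `Optional` class and `to_option` helper below are hypothetical stand-ins, not the actual Java/Scala code:

```python
# Minimal Optional-style wrapper: absent or present-with-value.
class Optional:
    _ABSENT = object()

    def __init__(self, value=_ABSENT):
        self._value = value

    def is_present(self):
        return self._value is not Optional._ABSENT

    def get(self):
        if not self.is_present():
            raise ValueError("absent")
        return self._value

# Correct conversion: absent -> None, present -> the wrapped value.
def to_option(opt):
    return opt.get() if opt.is_present() else None
```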
Repository: spark
Updated Branches:
refs/heads/master cba1d6b65 -> 358300c79
[SPARK-13056][SQL] map column would throw NPE if value is null
Jira:
https://issues.apache.org/jira/browse/SPARK-13056
Create a map like
{ "a": "somestring", "b": null}
Query like
SELECT col["b"] FROM t1;
NPE would
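The failure mode can be sketched as follows, using the commit's own example map. This is a Python analogue (a `TypeError` here plays the role of the JVM's NPE), not the actual Spark SQL code:

```python
# The example map from the report: key "b" exists but its value is null.
row = {"a": "somestring", "b": None}

# Buggy shape: dereferencing the looked-up value without a null check
# crashes when the value is null.
def map_value_unsafe(m, key):
    return len(m[key])

# Null-safe shape: propagate None instead of dereferencing it.
def map_value_safe(m, key):
    v = m.get(key)
    return len(v) if v is not None else None
```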
Repository: spark
Updated Branches:
refs/heads/master 99a6e3c1e -> 055714661
[SPARK-12732][ML] bug fix in linear regression train
Fixed the bug in linear regression train for the case when the target variable
is constant. The two cases for `fitIntercept=true` or `fitIntercept=false`
should
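The constant-target special case can be sketched as below. This is a hedged illustration of the distinction the fix draws, not the actual MLlib code; the return shape and the shortcut condition are assumptions based on the blurb:

```python
# Sketch: with a constant target y and fitIntercept=True there is a
# closed-form answer (zero coefficients, intercept equal to the constant);
# with fitIntercept=False no such shortcut applies and the solver must run.
def train_constant_target(y, fit_intercept):
    y0 = y[0]
    if all(v == y0 for v in y):
        if fit_intercept:
            return {"coefficients": 0.0, "intercept": y0}
        # fitIntercept=False: fall through to the general optimization path
    return None  # placeholder for the iterative solver
```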
Repository: spark
Updated Branches:
refs/heads/master 055714661 -> 335f10eda
[SPARK-7997][CORE] Add rpcEnv.awaitTermination() back to SparkEnv
`rpcEnv.awaitTermination()` was not added in #10854 because some Streaming
Python tests hung forever.
This patch fixed the hang and added
Repository: spark
Updated Branches:
refs/heads/branch-1.6 70fcbf68e -> bd8efba8f
[SPARK-13087][SQL] Fix group by function for sort based aggregation
It is not valid to call `toAttribute` on a `NamedExpression` unless we know for
sure that the child produced that `NamedExpression`. The
Repository: spark
Updated Branches:
refs/heads/master b8666fd0e -> 22ba21348
[SPARK-13087][SQL] Fix group by function for sort based aggregation
It is not valid to call `toAttribute` on a `NamedExpression` unless we know for
sure that the child produced that `NamedExpression`. The current
Repository: spark
Updated Branches:
refs/heads/master e86f8f63b -> 138c300f9
[SPARK-12957][SQL] Initial support for constraint propagation in SparkSQL
Based on the semantics of a query, we can derive a number of data constraints
on output of each (logical or physical) operator. For instance,
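One concrete instance of such derivation: a filter predicate like `a > 5` implies `a IS NOT NULL`, since null never satisfies a comparison. The sketch below invents a tuple representation for predicates and is not Catalyst's constraint machinery:

```python
# Derive implied constraints from filter predicates. Predicates are
# modeled as (op, column, literal) tuples; comparisons imply not-null.
def derive_constraints(predicates):
    constraints = set(predicates)
    for p in predicates:
        op, column = p[0], p[1]
        if op in (">", ">=", "<", "<=", "="):
            constraints.add(("isnotnull", column))
    return constraints
```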
Repository: spark
Updated Branches:
refs/heads/master 21112e8a1 -> ff71261b6
[SPARK-13122] Fix race condition in MemoryStore.unrollSafely()
https://issues.apache.org/jira/browse/SPARK-13122
A race condition can occur in MemoryStore's unrollSafely() method if two
threads that
return the same
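The general shape of such a race is a check-then-act on shared state: two threads read the same value before either writes back, and one update is lost. A generic sketch of the locked read-modify-write that prevents it (this models the pattern, not MemoryStore's actual accounting):

```python
import threading

# Shared memory-accounting counter; the read-modify-write in reserve()
# must happen under the lock or concurrent updates can be lost.
class Accounting:
    def __init__(self):
        self.reserved = 0
        self._lock = threading.Lock()

    def reserve(self, amount):
        with self._lock:
            current = self.reserved       # read
            self.reserved = current + amount  # modify-write, still locked

acct = Accounting()
threads = [threading.Thread(target=acct.reserve, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```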
Repository: spark
Updated Branches:
refs/heads/branch-1.6 e81333be0 -> 2f8abb4af
[SPARK-13122] Fix race condition in MemoryStore.unrollSafely()
https://issues.apache.org/jira/browse/SPARK-13122
A race condition can occur in MemoryStore's unrollSafely() method if two
threads that
return the
Repository: spark
Updated Branches:
refs/heads/master ff71261b6 -> 99a6e3c1e
[SPARK-12951] [SQL] support spilling in generated aggregate
This PR add spilling support for generated TungstenAggregate.
If spilling happened, it's not that bad to do the iterator based
sort-merge-aggregate (not
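The fallback path named above, aggregating over spilled sorted runs, can be sketched as a merge of pre-sorted iterables followed by grouping on equal keys. This is an illustration of sort-merge aggregation in general, not TungstenAggregate's generated code:

```python
import heapq
from itertools import groupby

# Merge several key-sorted runs (as produced by spilling) and sum the
# values of equal keys in key order.
def sort_merge_aggregate(runs):
    """runs: lists of (key, value) pairs, each already sorted by key."""
    merged = heapq.merge(*runs, key=lambda kv: kv[0])
    return [(k, sum(v for _, v in group))
            for k, group in groupby(merged, key=lambda kv: kv[0])]
```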