Repository: spark
Updated Branches:
refs/heads/branch-1.6 113410c12 -> 2ae1fa074
[MINOR] [SQL] Fix randomly generated ArrayData in RowEncoderSuite
The randomly generated ArrayData used for the UDT `ExamplePoint` in
`RowEncoderSuite` sometimes doesn't have enough elements. In this case, this
Repository: spark
Updated Branches:
refs/heads/master e01865af0 -> d79d8b08f
[MINOR] [SQL] Fix randomly generated ArrayData in RowEncoderSuite
The randomly generated ArrayData used for the UDT `ExamplePoint` in
`RowEncoderSuite` sometimes doesn't have enough elements. In this case, this
test
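A hedged sketch in plain Python (not the patch's actual Scala test code) of the kind of fix described: make sure randomly generated test arrays always have at least as many elements as the UDT expects. The names `make_test_array` and `min_len` are illustrative, not from the patch.

```python
import random

# Illustrative only: an ExamplePoint-style UDT needs a minimum number of
# coordinates, so pad short random arrays instead of letting the
# deserializer read past the end of the data.
def make_test_array(rng, min_len=2, max_len=5):
    data = [rng.random() for _ in range(rng.randint(0, max_len))]
    while len(data) < min_len:
        data.append(0.0)  # deterministic padding for too-short arrays
    return data

rng = random.Random(42)
print(len(make_test_array(rng)) >= 2)  # True
```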
Repository: spark
Updated Branches:
refs/heads/branch-1.6 07ac8e950 -> 113410c12
[SPARK-11447][SQL] change NullType to StringType during binaryComparison
between NullType and StringType
While executing the PromoteStrings rule, if one side of a binaryComparison is
StringType and the other side is
Repository: spark
Updated Branches:
refs/heads/master 75d202073 -> e01865af0
[SPARK-11447][SQL] change NullType to StringType during binaryComparison
between NullType and StringType
While executing the PromoteStrings rule, if one side of a binaryComparison is
StringType and the other side is not
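A minimal sketch of the coercion rule described above, in plain Python rather than Catalyst's actual Scala implementation: when a string is compared with a null literal, promote the NullType side to StringType so both sides agree.

```python
# Illustrative type names only; Spark's real rule lives in the Catalyst
# analyzer's PromoteStrings type-coercion pass.
def promote_strings(left_type, right_type):
    if left_type == "StringType" and right_type == "NullType":
        return "StringType", "StringType"
    if left_type == "NullType" and right_type == "StringType":
        return "StringType", "StringType"
    return left_type, right_type  # other pairs are left untouched here

print(promote_strings("NullType", "StringType"))  # ('StringType', 'StringType')
```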
Repository: spark
Updated Branches:
refs/heads/master fbad920db -> 75d202073
[SPARK-11694][FOLLOW-UP] Clean up imports, use a common function for metadata
and add a test for FIXED_LEN_BYTE_ARRAY
As discussed https://github.com/apache/spark/pull/9660
https://github.com/apache/spark/pull/9060,
Repository: spark
Updated Branches:
refs/heads/branch-1.6 5a6f40459 -> 07ac8e950
[SPARK-11768][SPARK-9196][SQL] Support now function in SQL (alias for
current_timestamp).
This patch adds an alias for current_timestamp (now function).
Also fixes SPARK-9196 to re-enable the test case for curre
Repository: spark
Updated Branches:
refs/heads/master 540bf58f1 -> fbad920db
[SPARK-11768][SPARK-9196][SQL] Support now function in SQL (alias for
current_timestamp).
This patch adds an alias for current_timestamp (now function).
Also fixes SPARK-9196 to re-enable the test case for current_t
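A hedged sketch of what adding an alias means in a toy function registry; Spark's real registry is Catalyst's FunctionRegistry, and the dictionary here is purely illustrative.

```python
import datetime

# Toy registry: `now` is registered to point at the exact same function
# object as `current_timestamp`, which is all an alias is.
registry = {}

def register(name, fn):
    registry[name.lower()] = fn

def current_timestamp():
    return datetime.datetime.now()

register("current_timestamp", current_timestamp)
register("now", registry["current_timestamp"])  # alias, same function

print(registry["now"] is registry["current_timestamp"])  # True
```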
Preparing development version 1.6.0-SNAPSHOT
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/5a6f4045
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/5a6f4045
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/5a6
Repository: spark
Updated Tags: refs/tags/v1.6.0-preview [created] 31db36100
Repository: spark
Updated Branches:
refs/heads/branch-1.6 e12ecfa36 -> 5a6f40459
Preparing Spark release v1.6.0-preview
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/31db3610
Tree: http://git-wip-us.apache.org/repos/asf/
Repository: spark
Updated Branches:
refs/heads/branch-1.6 505eceef3 -> e12ecfa36
[SPARK-11617][NETWORK] Fix leak in TransportFrameDecoder.
The code was using the wrong API to add data to the internal composite
buffer, causing buffers to leak in certain situations. Use the right
API and enhance
Repository: spark
Updated Branches:
refs/heads/master 1c5475f14 -> 540bf58f1
[SPARK-11617][NETWORK] Fix leak in TransportFrameDecoder.
The code was using the wrong API to add data to the internal composite
buffer, causing buffers to leak in certain situations. Use the right
API and enhance the
Repository: spark
Updated Branches:
refs/heads/branch-1.6 32a69e4c1 -> 505eceef3
[SPARK-11612][ML] Pipeline and PipelineModel persistence
Pipeline and PipelineModel extend Readable and Writable. Persistence succeeds
only when all stages are Writable.
Note: This PR reinstates tests for other
Repository: spark
Updated Branches:
refs/heads/master bd10eb81c -> 1c5475f14
[SPARK-11612][ML] Pipeline and PipelineModel persistence
Pipeline and PipelineModel extend Readable and Writable. Persistence succeeds
only when all stages are Writable.
Note: This PR reinstates tests for other rea
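The persistence rule stated above ("succeeds only when all stages are Writable") can be sketched in plain Python; the class names are illustrative, not Spark ML's actual API.

```python
# Illustrative only: a pipeline can be saved exactly when every one of
# its stages supports writing.
class Writable:
    def save(self, path):
        pass

class Tokenizer(Writable): ...
class UnsaveableStage: ...

def pipeline_is_writable(stages):
    return all(isinstance(s, Writable) for s in stages)

print(pipeline_is_writable([Tokenizer()]))                     # True
print(pipeline_is_writable([Tokenizer(), UnsaveableStage()]))  # False
```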
Repository: spark
Updated Branches:
refs/heads/master 33a0ec937 -> bd10eb81c
[EXAMPLE][MINOR] Add missing awaitTermination in click stream example
Author: jerryshao
Closes #9730 from jerryshao/clickstream-fix.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wi
Repository: spark
Updated Branches:
refs/heads/master 30f3cfda1 -> 33a0ec937
[SPARK-11710] Document new memory management model
Author: Andrew Or
Closes #9676 from andrewor14/memory-management-docs.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache
Repository: spark
Updated Branches:
refs/heads/branch-1.6 e4abfe932 -> 32a69e4c1
[SPARK-11710] Document new memory management model
Author: Andrew Or
Closes #9676 from andrewor14/memory-management-docs.
(cherry picked from commit 33a0ec93771ef5c3b388165b07cfab9014918d3b)
Signed-off-by: Andr
Repository: spark
Updated Branches:
refs/heads/master ea6f53e48 -> 30f3cfda1
[SPARK-11480][CORE][WEBUI] Wrong callsite is displayed when using
AsyncRDDActions#takeAsync
When we call AsyncRDDActions#takeAsync, another DAGScheduler#runJob is actually
called from another thread, so we cannot get
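The underlying problem generalizes: thread-local state (such as a call site) set on the submitting thread is invisible to a worker thread, so it has to be captured and passed along explicitly. A hedged plain-Python sketch, with illustrative names:

```python
import threading

call_site = threading.local()

def run_job_wrong(results):
    # Reads the worker thread's thread-local, which was never set there.
    results.append(getattr(call_site, "value", "<unknown>"))

def run_job_fixed(captured, results):
    # The submitting thread captured its call site and handed it over.
    results.append(captured)

call_site.value = "takeAsync at Example.scala:10"
results = []
t1 = threading.Thread(target=run_job_wrong, args=(results,))
t2 = threading.Thread(target=run_job_fixed, args=(call_site.value, results))
t1.start(); t1.join()
t2.start(); t2.join()
print(results)  # ['<unknown>', 'takeAsync at Example.scala:10']
```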
Repository: spark
Updated Branches:
refs/heads/branch-1.6 bb044ec22 -> e4abfe932
[SPARK-11480][CORE][WEBUI] Wrong callsite is displayed when using
AsyncRDDActions#takeAsync
When we call AsyncRDDActions#takeAsync, actually another DAGScheduler#runJob is
called from another thread so we cannot
Repository: spark
Updated Branches:
refs/heads/branch-1.6 4f8c7e18f -> bb044ec22
[SPARKR][HOTFIX] Disable flaky SparkR package build test
See https://github.com/apache/spark/pull/9390#issuecomment-157160063 and
https://gist.github.com/shivaram/3a2fecce60768a603dac for more information
Author
Repository: spark
Updated Branches:
refs/heads/master fd14936be -> ea6f53e48
[SPARKR][HOTFIX] Disable flaky SparkR package build test
See https://github.com/apache/spark/pull/9390#issuecomment-157160063 and
https://gist.github.com/shivaram/3a2fecce60768a603dac for more information
Author: Sh
Repository: spark
Updated Branches:
refs/heads/branch-1.6 e042780cd -> 4f8c7e18f
[SPARK-11625][SQL] add java test for typed aggregate
Author: Wenchen Fan
Closes #9591 from cloud-fan/agg-test.
(cherry picked from commit fd14936be7beff543dbbcf270f2f9749f7a803c4)
Signed-off-by: Michael Armbrus
Repository: spark
Updated Branches:
refs/heads/master 75ee12f09 -> fd14936be
[SPARK-11625][SQL] add java test for typed aggregate
Author: Wenchen Fan
Closes #9591 from cloud-fan/agg-test.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/
Repository: spark
Updated Branches:
refs/heads/branch-1.6 6c8e0c0ff -> e042780cd
[SPARK-8658][SQL] AttributeReference's equals method compares all the members
This fix changes the equals method of AttributeReference to compare all of the
specified fields for equality.
Author: gatorsmile
Repository: spark
Updated Branches:
refs/heads/master 31296628a -> 75ee12f09
[SPARK-8658][SQL] AttributeReference's equals method compares all the members
This fix changes the equals method of AttributeReference to compare all of the
specified fields for equality.
Author: gatorsmile
Clo
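The intent of the fix can be sketched in plain Python: derive equality from every identifying field, so two references that differ only in, say, nullability no longer compare equal. The field names below are illustrative, not the Scala class's actual members.

```python
from dataclasses import dataclass

# A frozen dataclass derives an __eq__ that compares all declared fields,
# which is the behavior the fix wants from AttributeReference.equals.
@dataclass(frozen=True)
class AttributeReference:
    name: str
    data_type: str
    nullable: bool
    expr_id: int

a = AttributeReference("x", "int", nullable=True, expr_id=1)
b = AttributeReference("x", "int", nullable=False, expr_id=1)
print(a == b)  # False: every member participates in equality
```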
Repository: spark
Updated Branches:
refs/heads/branch-1.6 3bd72eafc -> 6c8e0c0ff
[SPARK-11553][SQL] Primitive Row accessors should not convert null to default
value
Invoking getters for types extending AnyVal returns the default value (if the
field value is null) instead of throwing an NPE. Please
Repository: spark
Updated Branches:
refs/heads/master bcea0bfda -> 31296628a
[SPARK-11553][SQL] Primitive Row accessors should not convert null to default
value
Invoking getters for types extending AnyVal returns the default value (if the
field value is null) instead of throwing an NPE. Please chec
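A hedged sketch of the fixed behavior in plain Python: a primitive accessor should fail loudly on a null field rather than silently return the primitive default (0, false, ...). The class and method names are illustrative.

```python
class Row:
    def __init__(self, values):
        self._values = values

    def get_int(self, i):
        v = self._values[i]
        if v is None:
            # Before the fix, this code path silently returned 0.
            raise ValueError(f"value at index {i} is null")
        return int(v)

row = Row([7, None])
print(row.get_int(0))  # 7
try:
    row.get_int(1)
except ValueError as e:
    print(e)  # value at index 1 is null
```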
Repository: spark
Updated Branches:
refs/heads/master 3c025087b -> bcea0bfda
[SPARK-11742][STREAMING] Add the failure info to the batch lists
https://cloud.githubusercontent.com/assets/1000778/11162322/9b88e204-8a51-11e5-8c57-a44889cab713.png
Author: Shixiong Zhu
Closes #9711 from zsxwin
Repository: spark
Updated Branches:
refs/heads/branch-1.6 64439f7d6 -> 3bd72eafc
[SPARK-11742][STREAMING] Add the failure info to the batch lists
https://cloud.githubusercontent.com/assets/1000778/11162322/9b88e204-8a51-11e5-8c57-a44889cab713.png
Author: Shixiong Zhu
Closes #9711 from zs
Repository: spark
Updated Branches:
refs/heads/branch-1.6 90d71bff0 -> 64439f7d6
Revert "[SPARK-11271][SPARK-11016][CORE] Use Spark BitSet instead of
RoaringBitmap to reduce memory usage"
This reverts commit e209fa271ae57dc8849f8b1241bf1ea7d6d3d62c.
Project: http://git-wip-us.apache.org/rep
Repository: spark
Updated Branches:
refs/heads/master 985b38dd2 -> 3c025087b
Revert "[SPARK-11271][SPARK-11016][CORE] Use Spark BitSet instead of
RoaringBitmap to reduce memory usage"
This reverts commit e209fa271ae57dc8849f8b1241bf1ea7d6d3d62c.
Project: http://git-wip-us.apache.org/repos/a
Repository: spark
Updated Branches:
refs/heads/branch-1.6 fbe65c592 -> 90d71bff0
[SPARK-11390][SQL] Query plan with/without filterPushdown indistinguishable
Propagate pushed filters to PhysicalRDD in DataSourceStrategy.apply
Author: Zee Chen
Closes #9679 from zeocio/spark-11390.
Repository: spark
Updated Branches:
refs/heads/master b1a966262 -> 985b38dd2
[SPARK-11390][SQL] Query plan with/without filterPushdown indistinguishable
Propagate pushed filters to PhysicalRDD in DataSourceStrategy.apply
Author: Zee Chen
Closes #9679 from zeocio/spark-11390.
Pr
Repository: spark
Updated Branches:
refs/heads/branch-1.6 38fe092ff -> fbe65c592
[SPARK-11754][SQL] consolidate `ExpressionEncoder.tuple` and `Encoders.tuple`
These two are very similar; we can consolidate them into one.
Also add tests for it and fix a bug.
Author: Wenchen Fan
Closes #9729 f
Repository: spark
Updated Branches:
refs/heads/master 24477d270 -> b1a966262
[SPARK-11754][SQL] consolidate `ExpressionEncoder.tuple` and `Encoders.tuple`
These two are very similar; we can consolidate them into one.
Also add tests for it and fix a bug.
Author: Wenchen Fan
Closes #9729 from
Repository: spark
Updated Branches:
refs/heads/master ace0db471 -> 24477d270
[SPARK-11718][YARN][CORE] Fix explicitly killed executor dies silently issue
Currently, if dynamic allocation is enabled, explicitly killing an executor gets
no response, so the executor metadata is wrong in the driver
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c83177d30 -> 38fe092ff
[SPARK-11718][YARN][CORE] Fix explicitly killed executor dies silently issue
Currently, if dynamic allocation is enabled, explicitly killing an executor gets
no response, so the executor metadata is wrong in the dri
Repository: spark
Updated Branches:
refs/heads/branch-1.6 38673d7e6 -> c83177d30
[SPARK-6328][PYTHON] Python API for StreamingListener
Author: Daniel Jalova
Closes #9186 from djalova/SPARK-6328.
(cherry picked from commit ace0db47141ffd457c2091751038fc291f6d5a8b)
Signed-off-by: Tathagata Da
Repository: spark
Updated Branches:
refs/heads/master de5e531d3 -> ace0db471
[SPARK-6328][PYTHON] Python API for StreamingListener
Author: Daniel Jalova
Closes #9186 from djalova/SPARK-6328.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/rep
Repository: spark
Updated Branches:
refs/heads/master b0c3fd34e -> de5e531d3
[SPARK-11731][STREAMING] Enable batching on Driver WriteAheadLog by default
Using batching on the driver for the WriteAheadLog should be an improvement for
all environments and use cases. Users will be able to scale
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f14fb291d -> 38673d7e6
[SPARK-11731][STREAMING] Enable batching on Driver WriteAheadLog by default
Using batching on the driver for the WriteAheadLog should be an improvement for
all environments and use cases. Users will be able to sc
Repository: spark
Updated Branches:
refs/heads/branch-1.6 1887fa228 -> f14fb291d
[SPARK-11044][SQL] Parquet writer version fixed as version1
https://issues.apache.org/jira/browse/SPARK-11044
Spark writes Parquet files only with writer version1, ignoring the writer
version given by the user.
So,
Repository: spark
Updated Branches:
refs/heads/branch-1.5 bf79a171e -> 51fc152b7
[SPARK-10181][SQL] Do kerberos login for credentials during hive client
initialization
On driver process start up, UserGroupInformation.loginUserFromKeytab is called
with the principal and keytab passed in, and
Repository: spark
Updated Branches:
refs/heads/master 06f1fdba6 -> b0c3fd34e
[SPARK-11743] [SQL] Add UserDefinedType support to RowEncoder
JIRA: https://issues.apache.org/jira/browse/SPARK-11743
RowEncoder doesn't support UserDefinedType yet. We should add support for
it.
Author: Liang-
Repository: spark
Updated Branches:
refs/heads/branch-1.6 949c9b7c6 -> 1887fa228
[SPARK-11743] [SQL] Add UserDefinedType support to RowEncoder
JIRA: https://issues.apache.org/jira/browse/SPARK-11743
RowEncoder doesn't support UserDefinedType yet. We should add support for
it.
Author: Li
Repository: spark
Updated Branches:
refs/heads/branch-1.5 b767ceeb2 -> bf79a171e
[SPARK-11752] [SQL] fix timezone problem for DateTimeUtils.getSeconds
code snippet to reproduce it:
```
TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"))
val t = Timestamp.valueOf("1900-06-11 12:14:50.789
Repository: spark
Updated Branches:
refs/heads/master 0e79604ae -> 06f1fdba6
[SPARK-11752] [SQL] fix timezone problem for DateTimeUtils.getSeconds
code snippet to reproduce it:
```
TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"))
val t = Timestamp.valueOf("1900-06-11 12:14:50.789")
v
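The reason this is timezone-sensitive: some historical zone offsets are not a whole number of minutes (the IANA tz database gives Asia/Shanghai a pre-1901 local mean time offset of +8:05:43), so extracting the seconds field without applying the full offset goes wrong. A hedged arithmetic sketch in plain Python, not the patch's actual Scala code:

```python
MICROS_PER_SECOND = 1_000_000

def get_seconds_naive(micros_since_epoch):
    # Buggy assumption: the zone offset is a whole number of minutes, so
    # the seconds field can be read straight off the UTC value.
    return (micros_since_epoch // MICROS_PER_SECOND) % 60

def get_seconds_fixed(micros_since_epoch, offset_seconds):
    # Apply the full offset first; an offset like +8:05:43 shifts the
    # seconds field by 43 on its own.
    local = micros_since_epoch + offset_seconds * MICROS_PER_SECOND
    return (local // MICROS_PER_SECOND) % 60

offset = 8 * 3600 + 5 * 60 + 43        # +8:05:43, expressed in seconds
utc_micros = 123 * MICROS_PER_SECOND   # seconds field is 3 in UTC
print(get_seconds_naive(utc_micros))           # 3
print(get_seconds_fixed(utc_micros, offset))   # 46, shifted by the 43s
```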
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c37ed52ec -> 949c9b7c6
[SPARK-11752] [SQL] fix timezone problem for DateTimeUtils.getSeconds
code snippet to reproduce it:
```
TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"))
val t = Timestamp.valueOf("1900-06-11 12:14:50.789
Repository: spark
Updated Branches:
refs/heads/branch-1.6 a0f9cd77a -> c37ed52ec
[SPARK-11522][SQL] input_file_name() returns "" for external tables
When computing a partition for a non-Parquet relation, `HadoopRDD.compute` is
used, but it does not set the thread-local variable `inputFileName` in
Repository: spark
Updated Branches:
refs/heads/master e388b39d1 -> 0e79604ae
[SPARK-11522][SQL] input_file_name() returns "" for external tables
When computing a partition for a non-Parquet relation, `HadoopRDD.compute` is
used, but it does not set the thread-local variable `inputFileName` in
`N
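The mechanism described above can be sketched in plain Python: `input_file_name()` reads a thread-local, and any compute path that forgets to set it yields the empty string. Function names here are illustrative, not Spark's internals.

```python
import threading

input_file_name = threading.local()

def current_input_file():
    return getattr(input_file_name, "value", "")

def compute_without_setting(split):
    return current_input_file()      # "" -- the bug's symptom

def compute_with_setting(split, path):
    input_file_name.value = path     # the fix: record the file first
    try:
        return current_input_file()
    finally:
        del input_file_name.value

print(repr(compute_without_setting(0)))              # ''
print(compute_with_setting(0, "hdfs://data/part-0")) # hdfs://data/part-0
```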
Repository: spark
Updated Branches:
refs/heads/master 7f8eb3bf6 -> e388b39d1
[SPARK-11692][SQL] Support for Parquet logical types, JSON and BSON (embedded
types)
Parquet supports some JSON and BSON datatypes. They are represented as binary
for BSON and string (UTF-8) for JSON internally.
I
Repository: spark
Updated Branches:
refs/heads/master 42de5253f -> 7f8eb3bf6
[SPARK-11044][SQL] Parquet writer version fixed as version1
https://issues.apache.org/jira/browse/SPARK-11044
Spark writes Parquet files only with writer version1, ignoring the writer
version given by the user.
So, in
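A hedged sketch of the fix's intent: resolve the writer version from user configuration, defaulting to version1, instead of hard-coding it. The option name and value strings below are illustrative, not necessarily the exact Parquet property.

```python
SUPPORTED = {"v1", "v2"}

def resolve_writer_version(conf):
    # Default stays v1 for compatibility; a user-supplied value wins.
    version = conf.get("parquet.writer.version", "v1")
    if version not in SUPPORTED:
        raise ValueError(f"unsupported writer version: {version}")
    return version

print(resolve_writer_version({}))                                # v1
print(resolve_writer_version({"parquet.writer.version": "v2"}))  # v2
```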
Repository: spark
Updated Branches:
refs/heads/branch-1.6 053c63ecf -> a0f9cd77a
[SPARK-11745][SQL] Enable more JSON parsing options
This patch adds the following options to the JSON data source, for dealing with
non-standard JSON files:
* `allowComments` (default `false`): ignores Java/C++ s
Repository: spark
Updated Branches:
refs/heads/master fd50fa4c3 -> 42de5253f
[SPARK-11745][SQL] Enable more JSON parsing options
This patch adds the following options to the JSON data source, for dealing with
non-standard JSON files:
* `allowComments` (default `false`): ignores Java/C++ style
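A hedged illustration of what an `allowComments`-style option means: accept the Java/C++-style comments that standard JSON forbids. A real parser handles this lexically; this regex sketch is illustrative only and would break on string values that themselves contain `//`.

```python
import json
import re

def loads_allow_comments(text):
    # Strip /* block */ comments, then // line comments, then parse as
    # ordinary JSON. Not robust against "//" inside string literals.
    text = re.sub(r"/\*.*?\*/", "", text, flags=re.S)
    text = re.sub(r"//[^\n]*", "", text)
    return json.loads(text)

print(loads_allow_comments('{"a": 1 /* inline */ }  // trailing'))  # {'a': 1}
```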