Repository: spark
Updated Branches:
refs/heads/master f2f4e7afe -> 1ee472eec
[SPARK-25621][SPARK-25622][TEST] Reduce test time of
BucketedReadWithHiveSupportSuite
## What changes were proposed in this pull request?
By replacing exhaustive loops with a single randomly chosen value.
- `read partitioning bucketed
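The speed-up technique described above can be sketched in plain Python (the check name is hypothetical, not the suite's actual code): rather than exercising every candidate value on each run, the test samples one value at random, so repeated CI runs still cover the space over time.

```python
import random

def check_bucketed_read(num_buckets: int) -> bool:
    # Stand-in for the real per-value assertion in the bucketed-read suite
    # (hypothetical helper, for illustration only).
    return num_buckets > 0

# Before: loop over every candidate value (slow).
assert all(check_bucketed_read(n) for n in range(1, 9))

# After: pick one random value per test run (fast).
n = random.choice(range(1, 9))
assert check_bucketed_read(n)
```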
Repository: spark
Updated Branches:
refs/heads/master 17781d753 -> f2f4e7afe
[SPARK-25600][SQL][MINOR] Make use of TypeCoercion.findTightestCommonType while
inferring CSV schema.
## What changes were proposed in this pull request?
Currently, the CSV schema-inference code inlines the logic of
`TypeCoercion.findTightestCommonType`.
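What a tightest-common-type lookup does during schema inference can be sketched with a deliberately simplified type lattice (the real `TypeCoercion.findTightestCommonType` handles many more types and rules):

```python
# Simplified widening order; Spark's actual coercion rules are richer.
_ORDER = ["NullType", "IntegerType", "LongType", "DoubleType", "StringType"]

def find_tightest_common_type(t1: str, t2: str) -> str:
    # The tightest common type is the wider of the two in the order above;
    # anything else falls back to StringType.
    if t1 == t2:
        return t1
    if t1 in _ORDER and t2 in _ORDER:
        return _ORDER[max(_ORDER.index(t1), _ORDER.index(t2))]
    return "StringType"

# Merging types inferred from successive CSV rows:
assert find_tightest_common_type("IntegerType", "LongType") == "LongType"
assert find_tightest_common_type("LongType", "DoubleType") == "DoubleType"
```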
Repository: spark
Updated Branches:
refs/heads/master 44cf800c8 -> 17781d753
[SPARK-25202][SQL] Implements split with limit sql function
## What changes were proposed in this pull request?
Adds support for setting a limit in the SQL `split` function.
## How was this patch tested?
1. Updated
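Spark's `split(str, regex, limit)` follows Java `String.split` semantics, where a positive limit caps the number of resulting elements. A rough Python analogue (ignoring Java's trailing-empty-string handling for non-positive limits; note `re.split`'s `maxsplit` counts splits, not elements):

```python
import re

def sql_split(s: str, pattern: str, limit: int = -1) -> list:
    # limit > 0: at most `limit` elements; the last keeps the remainder.
    # limit <= 0: split as many times as possible.
    if limit == 1:
        return [s]  # a single element: no split at all
    maxsplit = limit - 1 if limit > 1 else 0  # 0 means unlimited in re.split
    return re.split(pattern, s, maxsplit=maxsplit)

assert sql_split("one,two,three", ",") == ["one", "two", "three"]
assert sql_split("one,two,three", ",", 2) == ["one", "two,three"]
```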
Repository: spark
Updated Branches:
refs/heads/master 58287a398 -> 44cf800c8
[SPARK-25655][BUILD] Add -Pspark-ganglia-lgpl to the scala style check.
## What changes were proposed in this pull request?
Our lint failed due to the following errors:
```
[INFO] --- scalastyle-maven-plugin:1.0.0:che
```
Author: pwendell
Date: Sat Oct 6 05:16:47 2018
New Revision: 29904
Log:
Apache Spark 2.4.1-SNAPSHOT-2018_10_05_22_02-a2991d2 docs
[This commit notification would consist of 1472 parts,
which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/branch-2.4 0a70afdc0 -> a2991d233
[SPARK-25646][K8S] Fix docker-image-tool.sh on dev build.
The docker file was referencing a path that only existed in the
distribution tarball; it needs to be parameterized so that the
right path can be used in a dev build.
Repository: spark
Updated Branches:
refs/heads/master 2c6f4d61b -> 58287a398
[SPARK-25646][K8S] Fix docker-image-tool.sh on dev build.
The docker file was referencing a path that only existed in the
distribution tarball; it needs to be parameterized so that the
right path can be used in a dev build.
Author: pwendell
Date: Sat Oct 6 03:17:28 2018
New Revision: 29903
Log:
Apache Spark 3.0.0-SNAPSHOT-2018_10_05_20_02-2c6f4d6 docs
[This commit notification would consist of 1485 parts,
which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/master bbd038d24 -> 2c6f4d61b
[SPARK-25610][SQL][TEST] Improve execution time of DatasetCacheSuite: cache UDF
result correctly
## What changes were proposed in this pull request?
In this test case, we are verifying that the result of a UDF is cached
correctly.
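The kind of check such a suite performs can be sketched without Spark: wrap a function so its result is cached, then assert that its body ran only once (a hypothetical stand-in, not the suite's actual code):

```python
import functools

call_count = 0

@functools.lru_cache(maxsize=None)
def expensive_udf(x: int) -> int:
    # Counting invocations lets the test verify that caching worked.
    global call_count
    call_count += 1
    return x * 2

assert [expensive_udf(3) for _ in range(5)] == [6] * 5
assert call_count == 1  # the UDF body ran only once
```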
Repository: spark
Updated Branches:
refs/heads/master 1c9486c1a -> bbd038d24
[SPARK-25653][TEST] Add tag ExtendedHiveTest for HiveSparkSubmitSuite
## What changes were proposed in this pull request?
The total run time of `HiveSparkSubmitSuite` is about 10 minutes.
While the related code is st
Repository: spark
Updated Branches:
refs/heads/master a433fbcee -> 1c9486c1a
[SPARK-25635][SQL][BUILD] Support selective direct encoding in native ORC write
## What changes were proposed in this pull request?
Before ORC 1.5.3, `orc.dictionary.key.threshold` and
`hive.exec.orc.dictionary.key.
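The idea behind a dictionary-key threshold can be sketched as: dictionary-encode a column only when the ratio of distinct keys to total values stays at or below the threshold (illustrative logic only; ORC's actual writer works per stripe and differs in detail):

```python
def use_dictionary_encoding(values: list, threshold: float = 0.8) -> bool:
    # A dictionary pays off only when the column is repetitive enough,
    # i.e. distinct/total does not exceed the threshold.
    if not values:
        return False
    return len(set(values)) / len(values) <= threshold

assert use_dictionary_encoding(["a", "a", "b", "a"])      # 2/4 = 0.5
assert not use_dictionary_encoding(["a", "b", "c", "d"])  # 4/4 = 1.0
```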
Author: pwendell
Date: Fri Oct 5 23:17:20 2018
New Revision: 29902
Log:
Apache Spark 3.0.0-SNAPSHOT-2018_10_05_16_03-a433fbc docs
[This commit notification would consist of 1485 parts,
which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/master 7dcc90fbb -> a433fbcee
[SPARK-25626][SQL][TEST] Improve the test execution time of HiveClientSuites
## What changes were proposed in this pull request?
Improve the runtime by reducing the number of partitions created in the test.
The numbe
Author: pwendell
Date: Fri Oct 5 21:17:19 2018
New Revision: 29901
Log:
Apache Spark 2.4.1-SNAPSHOT-2018_10_05_14_03-0a70afd docs
[This commit notification would consist of 1472 parts,
which exceeds the limit of 50, so it was shortened to the summary.]
---
Author: pwendell
Date: Fri Oct 5 19:17:42 2018
New Revision: 29900
Log:
Apache Spark 3.0.0-SNAPSHOT-2018_10_05_12_03-7dcc90f docs
[This commit notification would consist of 1485 parts,
which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/branch-2.4 2c700ee30 -> 0a70afdc0
[SPARK-25644][SS] Fix java foreachBatch in DataStreamWriter
## What changes were proposed in this pull request?
The java `foreachBatch` API in `DataStreamWriter` should accept
`java.lang.Long` rather than `scala.Long`.
Repository: spark
Updated Branches:
refs/heads/master 434ada12a -> 7dcc90fbb
[SPARK-25644][SS] Fix java foreachBatch in DataStreamWriter
## What changes were proposed in this pull request?
The java `foreachBatch` API in `DataStreamWriter` should accept
`java.lang.Long` rather than `scala.Long`.
Author: pwendell
Date: Fri Oct 5 17:17:28 2018
New Revision: 29898
Log:
Apache Spark 2.4.1-SNAPSHOT-2018_10_05_10_02-2c700ee docs
[This commit notification would consist of 1472 parts,
which exceeds the limit of 50, so it was shortened to the summary.]
---
Author: pwendell
Date: Fri Oct 5 11:17:07 2018
New Revision: 29892
Log:
Apache Spark 3.0.0-SNAPSHOT-2018_10_05_04_02-434ada1 docs
[This commit notification would consist of 1485 parts,
which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/branch-2.4 c9bb83a7d -> 2c700ee30
[SPARK-25521][SQL] Job id shows null in the logs when the insert-into command
job is finished.
## What changes were proposed in this pull request?
As part of the insert command in `FileFormatWriter`, a job context is
Repository: spark
Updated Branches:
refs/heads/master ab1650d29 -> 434ada12a
[SPARK-17952][SQL] Nested Java beans support in createDataFrame
## What changes were proposed in this pull request?
When constructing a DataFrame from a Java bean, using nested beans throws an
error despite
[docume
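The nested-bean case can be sketched in Python with dataclasses standing in for Java beans: a nested object should map to a nested struct column rather than raise an error (illustrative names; not the Spark implementation):

```python
from dataclasses import dataclass, fields, is_dataclass

@dataclass
class Address:
    city: str
    zip_code: str

@dataclass
class Person:
    name: str
    address: Address

def to_row(obj):
    # Recursively turn a (possibly nested) dataclass into a dict,
    # mirroring how nested beans map to nested struct columns.
    if is_dataclass(obj):
        return {f.name: to_row(getattr(obj, f.name)) for f in fields(obj)}
    return obj

row = to_row(Person("Ada", Address("London", "N1")))
assert row == {"name": "Ada", "address": {"city": "London", "zip_code": "N1"}}
```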
Repository: spark
Updated Branches:
refs/heads/master 459700727 -> ab1650d29
[SPARK-24601] Update Jackson to 2.9.6
Hi all,
Jackson is incompatible with upstream versions, so this bumps the Jackson
version to a more recent one. I ran into some issues with Azure CosmosDB,
which is using a m
Author: pwendell
Date: Fri Oct 5 07:17:16 2018
New Revision: 29888
Log:
Apache Spark 3.0.0-SNAPSHOT-2018_10_05_00_02-4597007 docs
[This commit notification would consist of 1485 parts,
which exceeds the limit of 50, so it was shortened to the summary.]
---