Repository: spark
Updated Branches:
refs/heads/master 8b727994e -> f309b28bd
[SPARK-25485][SQL][TEST] Refactor UnsafeProjectionBenchmark to use main method
## What changes were proposed in this pull request?
Refactor `UnsafeProjectionBenchmark` to use main method.
Generate benchmark result:
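The pattern named in the title, refactoring a benchmark so it runs from a plain `main` method instead of a test harness, can be sketched generically (illustrative Python, not Spark's actual benchmark framework):

```python
import time

# Sketch of "refactor a benchmark to use a main method": the benchmark
# body is a plain function dispatched from `main`, so it can be launched
# directly from the command line rather than through a test runner.

def run_benchmark(name: str, fn, iters: int = 3) -> float:
    """Run fn `iters` times and report the best wall-clock time."""
    best = float("inf")
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    print(f"{name}: best of {iters} runs = {best:.6f}s")
    return best

def main():
    run_benchmark("sum-100k", lambda: sum(range(100_000)))

if __name__ == "__main__":
    main()
```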
Author: pwendell
Date: Thu Sep 27 05:18:42 2018
New Revision: 29732
Log:
Apache Spark 2.4.1-SNAPSHOT-2018_09_26_22_03-01c000b5 docs
[This commit notification would consist of 1472 parts, which exceeds the limit of 50, so it was shortened to the summary.]
--
Author: pwendell
Date: Thu Sep 27 05:17:10 2018
New Revision: 29731
Log:
Apache Spark 2.3.3-SNAPSHOT-2018_09_26_22_02-f40e4c7 docs
[This commit notification would consist of 1443 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/branch-2.4 01c000b52 -> 0cf4c5bbe
[SPARK-25468][WEBUI] Highlight current page index in the spark UI
## What changes were proposed in this pull request?
This PR highlights the current page index in the Spark UI and history server UI,
https://issues.
Repository: spark
Updated Branches:
refs/heads/master ee214ef3a -> 8b727994e
[SPARK-25468][WEBUI] Highlight current page index in the spark UI
## What changes were proposed in this pull request?
This PR highlights the current page index in the Spark UI and history server UI,
https://issues.apac
Repository: spark
Updated Branches:
refs/heads/branch-2.4 f12769e73 -> 01c000b52
Revert "[SPARK-25540][SQL][PYSPARK] Make HiveContext in PySpark behave as the
same as Scala."
This reverts commit 7656358adc39eb8eb881368ab5a066fbf86149c8.
Project: http://git-wip-us.apache.org/repos/asf/spark/
Repository: spark
Updated Branches:
refs/heads/master 5def10e61 -> ee214ef3a
[SPARK-25525][SQL][PYSPARK] Do not update conf for existing SparkContext in
SparkSession.getOrCreate.
## What changes were proposed in this pull request?
In [SPARK-20946](https://issues.apache.org/jira/browse/SPARK-
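The contract being fixed here is that `getOrCreate` must return an existing context untouched rather than applying the new options to it. A minimal sketch of that contract (hypothetical `Context` class, not Spark's implementation):

```python
# If a context already exists, its configuration is NOT mutated by later
# builders; the existing instance is returned as-is. Names here are
# illustrative stand-ins, not Spark's actual code.

class Context:
    _active = None  # process-wide singleton, like the active SparkContext

    def __init__(self, conf):
        self.conf = dict(conf)

    @classmethod
    def get_or_create(cls, conf):
        if cls._active is None:
            cls._active = cls(conf)
        # New options are deliberately ignored for an existing context,
        # rather than applied after the fact.
        return cls._active

first = Context.get_or_create({"spark.app.name": "a"})
second = Context.get_or_create({"spark.app.name": "b"})
print(second.conf["spark.app.name"])  # a
```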
Repository: spark
Updated Branches:
refs/heads/branch-2.3 26d893a4f -> f40e4c71c
[SPARK-25536][CORE] metric value for METRIC_OUTPUT_RECORDS_WRITTEN is incorrect
## What changes were proposed in this pull request?
changed metric value of METRIC_OUTPUT_RECORDS_WRITTEN from
'task.metrics.inputMe
Repository: spark
Updated Branches:
refs/heads/branch-2.4 7656358ad -> f12769e73
[SPARK-25536][CORE] metric value for METRIC_OUTPUT_RECORDS_WRITTEN is incorrect
## What changes were proposed in this pull request?
changed metric value of METRIC_OUTPUT_RECORDS_WRITTEN from
'task.metrics.inputMe
Repository: spark
Updated Branches:
refs/heads/master 9063b17f3 -> 5def10e61
[SPARK-25536][CORE] metric value for METRIC_OUTPUT_RECORDS_WRITTEN is incorrect
## What changes were proposed in this pull request?
changed metric value of METRIC_OUTPUT_RECORDS_WRITTEN from
'task.metrics.inputMetric
Repository: spark
Updated Branches:
refs/heads/master c3c45cbd7 -> 9063b17f3
[SPARK-25481][SQL][TEST] Refactor ColumnarBatchBenchmark to use main method
## What changes were proposed in this pull request?
Refactor `ColumnarBatchBenchmark` to use main method.
Generate benchmark result:
```
SPA
Author: pwendell
Date: Thu Sep 27 03:17:46 2018
New Revision: 29728
Log:
Apache Spark 2.5.0-SNAPSHOT-2018_09_26_20_03-c3c45cb docs
[This commit notification would consist of 1485 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/branch-2.4 2ff91f213 -> 7656358ad
[SPARK-25540][SQL][PYSPARK] Make HiveContext in PySpark behave as the same as
Scala.
## What changes were proposed in this pull request?
In Scala, `HiveContext` sets a config `spark.sql.catalogImplementation` of
Repository: spark
Updated Branches:
refs/heads/master d0990e3df -> c3c45cbd7
[SPARK-25540][SQL][PYSPARK] Make HiveContext in PySpark behave as the same as
Scala.
## What changes were proposed in this pull request?
In Scala, `HiveContext` sets a config `spark.sql.catalogImplementation` of the
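In other words, constructing a Hive-aware context should switch `spark.sql.catalogImplementation` (a real Spark SQL config) to `hive` in PySpark just as it does in Scala. A toy illustration of that parity (stand-in classes, not Spark's API):

```python
# Stand-in classes illustrating the behavior the title describes:
# Hive-context construction switches the catalog implementation config.

class SQLContext:
    def __init__(self):
        self.conf = {"spark.sql.catalogImplementation": "in-memory"}

class HiveContext(SQLContext):
    def __init__(self):
        super().__init__()
        # Parity with Scala: constructing a HiveContext flips the catalog
        self.conf["spark.sql.catalogImplementation"] = "hive"

print(HiveContext().conf["spark.sql.catalogImplementation"])  # hive
```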
Author: pwendell
Date: Thu Sep 27 01:19:10 2018
New Revision: 29727
Log:
Apache Spark 2.4.1-SNAPSHOT-2018_09_26_18_02-2ff91f2 docs
[This commit notification would consist of 1472 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---
Author: pwendell
Date: Thu Sep 27 01:17:30 2018
New Revision: 29726
Log:
Apache Spark 2.3.3-SNAPSHOT-2018_09_26_18_02-26d893a docs
[This commit notification would consist of 1443 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/branch-2.3 2381d60a2 -> 26d893a4f
[SPARK-25454][SQL] add a new config for picking minimum precision for integral
literals
## What changes were proposed in this pull request?
https://github.com/apache/spark/pull/20023 proposed to allow precision
Repository: spark
Updated Branches:
refs/heads/master 51540c2fa -> d0990e3df
[SPARK-25454][SQL] add a new config for picking minimum precision for integral
literals
## What changes were proposed in this pull request?
https://github.com/apache/spark/pull/20023 proposed to allow precision loss
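"Minimum precision for integral literals" amounts to choosing the narrowest integer type whose range holds the literal. A hedged sketch using Spark SQL's type names (the function itself is illustrative, not Spark's code):

```python
# Pick the narrowest SQL integer type that can hold an integral literal.
# Type names and ranges follow Spark SQL's ByteType/ShortType/
# IntegerType/LongType; the function is illustrative only.

def minimal_integral_type(v: int) -> str:
    if -128 <= v <= 127:
        return "ByteType"
    if -32768 <= v <= 32767:
        return "ShortType"
    if -2147483648 <= v <= 2147483647:
        return "IntegerType"
    return "LongType"

print(minimal_integral_type(42))      # ByteType
print(minimal_integral_type(100000))  # IntegerType
```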
Repository: spark
Updated Branches:
refs/heads/branch-2.4 8d1720079 -> 2ff91f213
[SPARK-25454][SQL] add a new config for picking minimum precision for integral
literals
## What changes were proposed in this pull request?
https://github.com/apache/spark/pull/20023 proposed to allow precision
Repository: spark
Updated Branches:
refs/heads/master 5ee216618 -> 51540c2fa
[SPARK-25372][YARN][K8S] Deprecate and generalize keytab / principal config
## What changes were proposed in this pull request?
SparkSubmit already logs in the user if a keytab is provided; the only issue is
that it
Author: pwendell
Date: Wed Sep 26 21:16:43 2018
New Revision: 29720
Log:
Apache Spark 2.4.1-SNAPSHOT-2018_09_26_14_02-8d17200 docs
[This commit notification would consist of 1472 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---
Author: pwendell
Date: Wed Sep 26 19:17:17 2018
New Revision: 29712
Log:
Apache Spark 2.5.0-SNAPSHOT-2018_09_26_12_02-5ee2166 docs
[This commit notification would consist of 1485 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/master e702fb1d5 -> 5ee216618
[SPARK-25533][CORE][WEBUI] AppSummary should hold the information about
succeeded Jobs and completed stages only
## What changes were proposed in this pull request?
Currently, in the Spark UI, when there are failed j
Repository: spark
Updated Branches:
refs/heads/branch-2.4 dc6047613 -> 8d1720079
[SPARK-24519][CORE] Compute SHUFFLE_MIN_NUM_PARTS_TO_HIGHLY_COMPRESS only once
## What changes were proposed in this pull request?
Previously SPARK-24519 created a modifiable config
SHUFFLE_MIN_NUM_PARTS_TO_HIGHL
Author: pwendell
Date: Wed Sep 26 17:19:32 2018
New Revision: 29711
Log:
Apache Spark 2.4.1-SNAPSHOT-2018_09_26_10_03-dc60476 docs
[This commit notification would consist of 1472 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---
Author: pwendell
Date: Wed Sep 26 17:17:31 2018
New Revision: 29710
Log:
Apache Spark 2.3.3-SNAPSHOT-2018_09_26_10_02-2381d60 docs
[This commit notification would consist of 1443 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/master bd2ae857d -> e702fb1d5
[SPARK-24519][CORE] Compute SHUFFLE_MIN_NUM_PARTS_TO_HIGHLY_COMPRESS only once
## What changes were proposed in this pull request?
Previously SPARK-24519 created a modifiable config
SHUFFLE_MIN_NUM_PARTS_TO_HIGHLY_CO
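The gist of the fix is to resolve the config value once and reuse it, instead of re-reading it on every call. A generic caching sketch with `functools.cached_property` (hypothetical names, not Spark's code):

```python
from functools import cached_property

# Resolve an expensive config value exactly once and cache it on the
# instance; later reads hit the cache. Names are illustrative only.

class ShuffleConfig:
    def __init__(self, conf):
        self._conf = conf
        self.lookups = 0  # counts how often the raw conf is consulted

    @cached_property
    def min_parts_to_highly_compress(self) -> int:
        self.lookups += 1  # expensive resolution happens only once
        return int(self._conf.get("minPartsToHighlyCompress", 2000))

cfg = ShuffleConfig({})
for _ in range(1000):
    _ = cfg.min_parts_to_highly_compress
print(cfg.lookups)  # 1
```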
Repository: spark
Updated Branches:
refs/heads/branch-2.4 99698279d -> dc6047613
[SPARK-25318] Add exception handling when wrapping the input stream during
the fetch or stage retry in response to a corrupted block
SPARK-4105 provided a solution to the block corruption issue by retrying the fe
Repository: spark
Updated Branches:
refs/heads/master a2ac5a72c -> bd2ae857d
[SPARK-25318] Add exception handling when wrapping the input stream during
the fetch or stage retry in response to a corrupted block
SPARK-4105 provided a solution to the block corruption issue by retrying the fetch
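The hardening described can be pictured as guarding the point where a fetched block's bytes are wrapped (e.g. for decompression), turning a corruption error into a retryable failure. A Python sketch with hypothetical names (Spark's actual code paths differ):

```python
import zlib

# Wrapping a fetched block's stream can itself throw on corrupt bytes,
# so the wrap is guarded and surfaced as a retryable fetch failure.
# Exception type and helper names are hypothetical.

class FetchFailed(Exception):
    pass

def wrap_block(raw: bytes) -> bytes:
    try:
        return zlib.decompress(raw)  # wrapping/decoding may raise here
    except zlib.error as e:
        # Surface corruption as a retryable failure instead of crashing
        raise FetchFailed(f"corrupt block: {e}") from e

good = zlib.compress(b"payload")
print(wrap_block(good))  # b'payload'
try:
    wrap_block(b"not compressed")
except FetchFailed as e:
    print("retryable:", e)
```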
Repository: spark
Updated Branches:
refs/heads/branch-2.3 cbb228e48 -> 2381d60a2
[SPARK-25509][CORE] Windows doesn't support POSIX permissions
SHS V2 cannot be enabled on Windows, because Windows doesn't support POSIX
permissions.
A test case fails on Windows without this fix.
org.apache.spark.dep
Repository: spark
Updated Branches:
refs/heads/branch-2.4 d44b863a2 -> 99698279d
[SPARK-25509][CORE] Windows doesn't support POSIX permissions
## What changes were proposed in this pull request?
SHS V2 cannot be enabled on Windows, because Windows doesn't support POSIX
permissions.
## How was t
Repository: spark
Updated Branches:
refs/heads/master cf5c9c4b5 -> a2ac5a72c
[SPARK-25509][CORE] Windows doesn't support POSIX permissions
## What changes were proposed in this pull request?
SHS V2 cannot be enabled on Windows, because Windows doesn't support POSIX
permissions.
## How was this
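A portable version of such a fix typically feature-detects POSIX permission support and skips the chmod on Windows instead of failing. An illustrative sketch (not Spark's SHS code):

```python
import os
import stat
import tempfile

# Only apply POSIX permissions when the platform supports them, instead
# of failing on Windows. The helper is illustrative only.

def set_owner_only(path: str) -> bool:
    """Try to restrict a file to owner read/write; report success."""
    if os.name != "posix":
        return False  # Windows: skip instead of raising
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
    return True

with tempfile.NamedTemporaryFile(delete=False) as f:
    applied = set_owner_only(f.name)
print("POSIX permissions applied:", applied)
```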
Author: pwendell
Date: Wed Sep 26 15:17:01 2018
New Revision: 29708
Log:
Apache Spark 2.5.0-SNAPSHOT-2018_09_26_08_02-cf5c9c4 docs
[This commit notification would consist of 1485 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark
Updated Branches:
refs/heads/branch-2.4 3f203050a -> d44b863a2
[SPARK-20937][DOCS] Describe spark.sql.parquet.writeLegacyFormat property in
Spark SQL, DataFrames and Datasets Guide
## What changes were proposed in this pull request?
Describe spark.sql.parquet.writeLegacyForm
Repository: spark
Updated Branches:
refs/heads/master 44a71741d -> cf5c9c4b5
[SPARK-20937][DOCS] Describe spark.sql.parquet.writeLegacyFormat property in
Spark SQL, DataFrames and Datasets Guide
## What changes were proposed in this pull request?
Describe spark.sql.parquet.writeLegacyFormat p
Repository: spark
Updated Branches:
refs/heads/master b39e228ce -> 44a71741d
[SPARK-25379][SQL] Improve AttributeSet and ColumnPruning performance
## What changes were proposed in this pull request?
This PR contains 3 optimizations:
1) it significantly improves the operation `--` on `Attrib
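One of the classic wins behind such an optimization is replacing per-element list scans with hashed membership when computing a set difference. A generic sketch (not Spark's `AttributeSet` implementation):

```python
# Two ways to compute a difference a -- b over attribute-like values.

def diff_list(a, b):
    # O(len(a) * len(b)): scans the whole list b for every element of a
    return [x for x in a if x not in b]

def diff_set(a, b):
    # O(len(a) + len(b)): one hashed-set build, then O(1) lookups
    bs = set(b)
    return [x for x in a if x not in bs]

a = list(range(2000))
b = list(range(1000, 3000))
print(diff_list(a, b) == diff_set(a, b))  # True, but diff_set is far faster
```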
Repository: spark
Updated Branches:
refs/heads/master 81cbcca60 -> b39e228ce
[SPARK-25541][SQL] CaseInsensitiveMap should be serializable after '-' or
'filterKeys'
## What changes were proposed in this pull request?
`CaseInsensitiveMap` is declared as Serializable. However, it is not seriali
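In Scala, `filterKeys` returns a lazy view that is not serializable even when the backing map is; the fix is to have filtering return a concrete map of the same class. A Python analogue of that idea (illustrative `CaseInsensitiveDict`, not Spark's class):

```python
import pickle

# A map with case-insensitive lookup whose filtering operation returns a
# concrete copy, so the result serializes just like the original.

class CaseInsensitiveDict(dict):
    def __init__(self, items):
        super().__init__({k.lower(): v for k, v in items.items()})

    def __getitem__(self, key):
        return super().__getitem__(key.lower())

    def filter_keys(self, pred):
        # Materialize a concrete copy instead of returning a lazy view
        return CaseInsensitiveDict({k: v for k, v in self.items() if pred(k)})

m = CaseInsensitiveDict({"Path": "/tmp", "Header": "true"})
f = m.filter_keys(lambda k: k != "header")
roundtrip = pickle.loads(pickle.dumps(f))  # survives serialization
print(roundtrip["PATH"])  # /tmp
```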
Update some missing changes for 2.3.2 release
`downloads.md` and `_post` should be updated to use new 2.3.2 release. Sorry
about missing it.
Author: jerryshao
Closes #150 from jerryshao/update-2.3.2.
Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us
Repository: spark-website
Updated Branches:
refs/heads/asf-site 546f35143 -> 74d902cdc
http://git-wip-us.apache.org/repos/asf/spark-website/blob/74d902cd/site/releases/spark-release-1-3-0.html
--
diff --git a/site/releases/spar
http://git-wip-us.apache.org/repos/asf/spark-website/blob/74d902cd/site/news/spark-2-3-0-released.html
--
diff --git a/site/news/spark-2-3-0-released.html
b/site/news/spark-2-3-0-released.html
index 022adba..e36b8b3 100644
--- a/s
Author: pwendell
Date: Wed Sep 26 09:16:34 2018
New Revision: 29701
Log:
Apache Spark 2.3.3-SNAPSHOT-2018_09_26_02_02-cbb228e docs
[This commit notification would consist of 1443 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---
Repository: spark-website
Updated Branches:
refs/heads/asf-site 04a27dbf1 -> 546f35143
Empty commit to trigger asf to github sync
Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/546f3514
Tree: http://git-wi
Author: pwendell
Date: Wed Sep 26 07:18:22 2018
New Revision: 29699
Log:
Apache Spark 2.5.0-SNAPSHOT-2018_09_26_00_02-81cbcca docs
[This commit notification would consist of 1485 parts, which exceeds the limit of 50, so it was shortened to the summary.]
---