Repository: spark
Updated Branches:
refs/heads/branch-1.1 e7672f196 -> 6f82a4b13
HOTFIX: Minor typo in conf template
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6f82a4b1
Tree: http://git-wip-us.apache.org/repos/asf/spa
Repository: spark
Updated Branches:
refs/heads/master 7557c4cfe -> 9d65f2712
HOTFIX: Minor typo in conf template
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9d65f271
Tree: http://git-wip-us.apache.org/repos/asf/spark/t
Repository: spark
Updated Branches:
refs/heads/branch-1.1 2381e90dc -> e7672f196
[SPARK-3167] Handle special driver configs in Windows (Branch 1.1)
This is an effort to bring the Windows scripts up to speed after recent
splashing changes in #1845.
Author: Andrew Or
Closes #2156 from andrew
Repository: spark
Updated Branches:
refs/heads/master bf719056b -> 7557c4cfe
[SPARK-3167] Handle special driver configs in Windows
This is an effort to bring the Windows scripts up to speed after recent
splashing changes in #1845.
Author: Andrew Or
Closes #2129 from andrewor14/windows-conf
Repository: spark
Updated Branches:
refs/heads/master e70aff6c2 -> bf719056b
[SPARK-3224] FetchFailed reduce stages should only show up once in failed
stages (in UI)
This is a HOTFIX for 1.1.
Author: Reynold Xin
Author: Kay Ousterhout
Closes #2127 from rxin/SPARK-3224 and squashes the fol
Repository: spark
Updated Branches:
refs/heads/branch-1.1 7726e566c -> 2381e90dc
[SPARK-3224] FetchFailed reduce stages should only show up once in failed
stages (in UI)
This is a HOTFIX for 1.1.
Author: Reynold Xin
Author: Kay Ousterhout
Closes #2127 from rxin/SPARK-3224 and squashes the
Repository: spark
Updated Branches:
refs/heads/master ee91eb8c5 -> e70aff6c2
Manually close old pull requests
Closes #671, Closes #515
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e70aff6c
Tree: http://git-wip-us.apach
Repository: spark
Updated Branches:
refs/heads/master d8345471c -> ee91eb8c5
Manually close some old pull requests
Closes #530, Closes #223, Closes #738, Closes #546
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ee91eb8
Repository: spark
Updated Branches:
refs/heads/branch-1.1 8b5af6f74 -> 7726e566c
Fix unclosed HTML tag in Yarn docs.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7726e566
Tree: http://git-wip-us.apache.org/repos/asf/spa
Repository: spark
Updated Branches:
refs/heads/master be043e3f2 -> d8345471c
Fix unclosed HTML tag in Yarn docs.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d8345471
Tree: http://git-wip-us.apache.org/repos/asf/spark/tr
Repository: spark
Updated Branches:
refs/heads/master 727cb25bc -> be043e3f2
[SPARK-3240] Adding known issue for MESOS-1688
When using Mesos with the fine-grained mode, a Spark job can run into a
deadlock on low allocatable memory on Mesos slaves. As a work-around, 32 MB (=
Mesos MIN_MEM) ar
Repository: spark
Updated Branches:
refs/heads/master 73b3089b8 -> 727cb25bc
[SPARK-3036][SPARK-3037][SQL] Add MapType/ArrayType containing null value
support to Parquet.
JIRA:
- https://issues.apache.org/jira/browse/SPARK-3036
- https://issues.apache.org/jira/browse/SPARK-3037
Currently thi
Repository: spark
Updated Branches:
refs/heads/branch-1.1 0d9723309 -> 8b5af6f74
[SPARK-3036][SPARK-3037][SQL] Add MapType/ArrayType containing null value
support to Parquet.
JIRA:
- https://issues.apache.org/jira/browse/SPARK-3036
- https://issues.apache.org/jira/browse/SPARK-3037
Currently
Repository: spark
Updated Branches:
refs/heads/branch-1.1 c0e1f99f5 -> 0d9723309
[Docs] Run tests like in contributing guide
The Contributing to Spark guide
[recommends](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-AutomatedTesting)
running test
Repository: spark
Updated Branches:
refs/heads/master faeb9c0e1 -> 73b3089b8
[Docs] Run tests like in contributing guide
The Contributing to Spark guide
[recommends](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-AutomatedTesting)
running tests by
Repository: spark
Updated Branches:
refs/heads/branch-1.1 a308a1624 -> c0e1f99f5
[SPARK-2964] [SQL] Remove duplicated code from spark-sql and
start-thriftserver.sh
Author: Cheng Lian
Author: Kousuke Saruta
Closes #1886 from sarutak/SPARK-2964 and squashes the following commits:
8ef8751 [K
Repository: spark
Updated Branches:
refs/heads/master 2ffd3290f -> faeb9c0e1
[SPARK-2964] [SQL] Remove duplicated code from spark-sql and
start-thriftserver.sh
Author: Cheng Lian
Author: Kousuke Saruta
Closes #1886 from sarutak/SPARK-2964 and squashes the following commits:
8ef8751 [Kousu
Repository: spark
Updated Branches:
refs/heads/master f1e71d4c3 -> 2ffd3290f
[SPARK-3225]Typo in script
use_conf_dir => user_conf_dir in load-spark-env.sh.
Author: WangTao
Closes #1926 from WangTaoTheTonic/TypoInScript and squashes the following
commits:
0c104ad [WangTao] Typo in script
Repository: spark
Updated Branches:
refs/heads/master c4787a369 -> f1e71d4c3
[SPARK-3073] [PySpark] use external sort in sortBy() and sortByKey()
Using external sort to support sorting large datasets in the reduce stage.
Author: Davies Liu
Closes #1978 from davies/sort and squashes the following c
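The external-sort approach described in the commit above can be sketched in plain Python. This is a simplified analogy, not PySpark's actual sorter: sort bounded chunks in memory, spill each sorted run to a temporary file, then k-way merge the runs with `heapq.merge`.

```python
import heapq
import os
import pickle
import tempfile


def external_sort(iterable, chunk_size=1000):
    """Sort more data than fits in memory: sort fixed-size chunks,
    spill each sorted run to disk, then k-way merge the runs."""
    runs, chunk = [], []
    for item in iterable:
        chunk.append(item)
        if len(chunk) >= chunk_size:
            runs.append(_spill(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_spill(sorted(chunk)))
    # heapq.merge performs the k-way merge of the sorted runs
    return heapq.merge(*(_read_run(path) for path in runs))


def _spill(items):
    # Write one sorted run to a temporary file and return its path
    fd, path = tempfile.mkstemp(suffix=".run")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(items, f)
    return path


def _read_run(path):
    with open(path, "rb") as f:
        items = pickle.load(f)
    os.remove(path)
    yield from items


print(list(external_sort([5, 3, 1, 4, 2], chunk_size=2)))  # [1, 2, 3, 4, 5]
```

The chunk size bounds peak memory; only one sorted chunk plus the merge frontier is held in memory at a time.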
Repository: spark
Updated Branches:
refs/heads/branch-1.1 2715eb77b -> a308a1624
[SPARK-3194][SQL] Add AttributeSet to fix bugs with invalid comparisons of
AttributeReferences
It is common to want to describe sets of attributes that are in various parts
of a query plan. However, the semanti
Repository: spark
Updated Branches:
refs/heads/master 1208f72ac -> c4787a369
[SPARK-3194][SQL] Add AttributeSet to fix bugs with invalid comparisons of
AttributeReferences
It is common to want to describe sets of attributes that are in various parts
of a query plan. However, the semantics o
Repository: spark
Updated Branches:
refs/heads/branch-1.1 5ff900086 -> 2715eb77b
[SPARK-2839][MLlib] Stats Toolkit documentation updated
Documentation updated for the Statistics Toolkit of MLlib. mengxr atalwalkar
https://issues.apache.org/jira/browse/SPARK-2839
P.S. Accidentally closed #212
Repository: spark
Updated Branches:
refs/heads/master adbd5c163 -> 1208f72ac
[SPARK-2839][MLlib] Stats Toolkit documentation updated
Documentation updated for the Statistics Toolkit of MLlib. mengxr atalwalkar
https://issues.apache.org/jira/browse/SPARK-2839
P.S. Accidentally closed #2123. N
Repository: spark
Updated Branches:
refs/heads/branch-1.1 5d981a49c -> 5ff900086
[SPARK-3226][MLLIB] doc update for native libraries
to mention the `-Pnetlib-lgpl` option. atalwalkar
Author: Xiangrui Meng
Closes #2128 from mengxr/mllib-native and squashes the following commits:
4cbba57 [Xiangr
Repository: spark
Updated Branches:
refs/heads/master 6b5584ef1 -> adbd5c163
[SPARK-3226][MLLIB] doc update for native libraries
to mention the `-Pnetlib-lgpl` option. atalwalkar
Author: Xiangrui Meng
Closes #2128 from mengxr/mllib-native and squashes the following commits:
4cbba57 [Xiangrui M
Repository: spark
Updated Branches:
refs/heads/master 98c2bb0bb -> 6b5584ef1
[SPARK-3063][SQL] ExistingRdd should convert Map to catalyst Map.
Currently `ExistingRdd.convertToCatalyst` doesn't convert `Map` values.
Author: Takuya UESHIN
Closes #1963 from ueshin/issues/SPARK-3063 and squashes
Repository: spark
Updated Branches:
refs/heads/branch-1.1 35a585355 -> 5d981a49c
[SPARK-3063][SQL] ExistingRdd should convert Map to catalyst Map.
Currently `ExistingRdd.convertToCatalyst` doesn't convert `Map` values.
Author: Takuya UESHIN
Closes #1963 from ueshin/issues/SPARK-3063 and squa
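The class of bug fixed above, a recursive converter that skips one container type, can be illustrated with a plain-Python analogy. This is not Spark's Scala code; dicts stand in for `MapType` values, and the point is that the recursion must descend into every container:

```python
def convert_to_catalyst(value):
    # Recurse into every container. Before the fix, Map (here: dict) values
    # were returned unconverted, so nested values inside a Map slipped through.
    if isinstance(value, dict):
        return {k: convert_to_catalyst(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [convert_to_catalyst(v) for v in value]
    return value


print(convert_to_catalyst({"a": (1, [2, 3])}))  # {'a': [1, [2, 3]]}
```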
Repository: spark
Updated Branches:
refs/heads/branch-1.1 83d273023 -> 35a585355
[SPARK-2969][SQL] Make ScalaReflection be able to handle ArrayType.containsNull
and MapType.valueContainsNull.
Make `ScalaReflection` able to handle types like:
- `Seq[Int]` as `ArrayType(IntegerType, containsNull
Repository: spark
Updated Branches:
refs/heads/master 3cedc4f4d -> 98c2bb0bb
[SPARK-2969][SQL] Make ScalaReflection be able to handle ArrayType.containsNull
and MapType.valueContainsNull.
Make `ScalaReflection` able to handle types like:
- `Seq[Int]` as `ArrayType(IntegerType, containsNull = fa
Repository: spark
Updated Branches:
refs/heads/branch-1.1 3a9d874d7 -> 83d273023
[SPARK-2871] [PySpark] add histogram() API
RDD.histogram(buckets)
Compute a histogram using the provided buckets. The buckets
are all open to the right except for the last which is closed.
Repository: spark
Updated Branches:
refs/heads/master 8856c3d86 -> 3cedc4f4d
[SPARK-2871] [PySpark] add histogram() API
RDD.histogram(buckets)
Compute a histogram using the provided buckets. The buckets
are all open to the right except for the last which is closed.
e.g.
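The documented bucket semantics (half-open to the right, except the last bucket, which is closed) can be sketched in pure Python. This is an illustration of the behavior, not PySpark's implementation:

```python
def histogram(values, buckets):
    # Each bucket [b_i, b_{i+1}) is open to the right, except the last,
    # which is closed: [b_{n-2}, b_{n-1}]. Values outside all buckets drop.
    counts = [0] * (len(buckets) - 1)
    last = len(buckets) - 2
    for v in values:
        for i in range(len(buckets) - 1):
            in_bucket = buckets[i] <= v < buckets[i + 1]
            if in_bucket or (i == last and v == buckets[i + 1]):
                counts[i] += 1
                break
    return counts


# 10 falls in the second bucket; 20 is included by the closed last bucket
print(histogram([1, 9, 10, 20], [0, 10, 20]))  # [2, 2]
```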
Repository: spark
Updated Branches:
refs/heads/master b21ae5bbb -> 8856c3d86
[SPARK-3131][SQL] Allow user to set parquet compression codec for writing
ParquetFile in SQLContext
There are 4 different compression codecs available for ```ParquetOutputFormat```
in Spark SQL, it was set as a hard-
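The codec choice described above is exposed as a SQL configuration property. A minimal PySpark sketch, assuming a running SparkContext and that `SQLContext.setConf` is available (the app name is hypothetical):

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="parquet-codec-demo")  # hypothetical app name
sqlContext = SQLContext(sc)

# Pick one of the four supported codecs: uncompressed, snappy, gzip, lzo
sqlContext.setConf("spark.sql.parquet.compression.codec", "gzip")
```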
Repository: spark
Updated Branches:
refs/heads/branch-1.1 0f947f123 -> 3a9d874d7
[SPARK-3131][SQL] Allow user to set parquet compression codec for writing
ParquetFile in SQLContext
There are 4 different compression codecs available for ```ParquetOutputFormat```
in Spark SQL, it was set as a h