Repository: spark
Updated Branches:
refs/heads/master 441cdcca6 -> e4899a253
[SPARK-2254] [SQL] ScalaReflection should mark primitive types as non-nullable.
Author: Takuya UESHIN
Closes #1193 from ueshin/issues/SPARK-2254 and squashes the following commits:
cfd6088 [Takuya UESHIN] Modify Sca
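The idea behind this fix can be sketched in a few lines of plain Scala. The `Schema` case class and `schemaFor` helper below are hypothetical stand-ins, not Spark's actual ScalaReflection API: the point is only that a JVM primitive can never hold null, so a reflected schema may safely mark it non-nullable.

```scala
// Hypothetical sketch, not Spark's ScalaReflection: derive a (type, nullable)
// pair from a runtime Class, marking JVM primitives as non-nullable since a
// primitive field can never hold null.
case class Schema(dataType: String, nullable: Boolean)

object ReflectionSketch {
  def schemaFor(tpe: Class[_]): Schema = tpe match {
    case java.lang.Integer.TYPE => Schema("IntegerType", nullable = false)
    case java.lang.Long.TYPE    => Schema("LongType", nullable = false)
    case java.lang.Double.TYPE  => Schema("DoubleType", nullable = false)
    case java.lang.Boolean.TYPE => Schema("BooleanType", nullable = false)
    case _                      => Schema("ObjectType", nullable = true)
  }

  def main(args: Array[String]): Unit = {
    // classOf[Int] is the primitive int class on the JVM.
    assert(!schemaFor(classOf[Int]).nullable)
    assert(schemaFor(classOf[String]).nullable)
    println("ok")
  }
}
```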
Repository: spark
Updated Branches:
refs/heads/branch-1.0 c445b3af3 -> 47f8829e0
[SPARK-2254] [SQL] ScalaReflection should mark primitive types as non-nullable.
Author: Takuya UESHIN
Closes #1193 from ueshin/issues/SPARK-2254 and squashes the following commits:
cfd6088 [Takuya UESHIN] Modify
Repository: spark
Updated Branches:
refs/heads/master 4a346e242 -> 441cdcca6
[SPARK-2172] PySpark cannot import mllib modules in YARN-client mode
Include pyspark/mllib Python sources as resources in the mllib.jar.
This way they will be included in the final assembly.
Author: Szul, Piotr
Clos
Repository: spark
Updated Branches:
refs/heads/branch-1.0 fa167194c -> c445b3af3
[SPARK-2284][UI] Mark all failed tasks as failures.
Previously, only tasks that failed with an ExceptionFailure reason were marked as failures.
Author: Reynold Xin
Closes #1224 from rxin/SPARK-2284 and squashes the follow
Repository: spark
Updated Branches:
refs/heads/master b88a59a66 -> 4a346e242
[SPARK-2284][UI] Mark all failed tasks as failures.
Previously, only tasks that failed with an ExceptionFailure reason were marked as failures.
Author: Reynold Xin
Closes #1224 from rxin/SPARK-2284 and squashes the following
Repository: spark
Updated Branches:
refs/heads/branch-1.0 92b012502 -> fa167194c
[SPARK-2172] PySpark cannot import mllib modules in YARN-client mode
Include pyspark/mllib Python sources as resources in the mllib.jar.
This way they will be included in the final assembly.
Author: Szul, Piotr
Repository: spark
Updated Branches:
refs/heads/branch-1.0 5869f8bf1 -> 92b012502
[SPARK-1749] Job cancellation when SchedulerBackend does not implement killTask
This is a fixed up version of #686 (cc @markhamstra @pwendell). The last
commit (the only one I authored) reflects the changes I ma
Repository: spark
Updated Branches:
refs/heads/master 7f196b009 -> b88a59a66
[SPARK-1749] Job cancellation when SchedulerBackend does not implement killTask
This is a fixed up version of #686 (cc @markhamstra @pwendell). The last
commit (the only one I authored) reflects the changes I made f
Repository: spark
Updated Branches:
refs/heads/branch-1.0 abb62f0b9 -> 5869f8bf1
[SPARK-2283][SQL] Reset test environment before running PruningSuite
JIRA issue: [SPARK-2283](https://issues.apache.org/jira/browse/SPARK-2283)
If `PruningSuite` is run right after `HiveCompatibilitySuite`, the f
Repository: spark
Updated Branches:
refs/heads/master 9d824fed8 -> 7f196b009
[SPARK-2283][SQL] Reset test environment before running PruningSuite
JIRA issue: [SPARK-2283](https://issues.apache.org/jira/browse/SPARK-2283)
If `PruningSuite` is run right after `HiveCompatibilitySuite`, the first
Repository: spark
Updated Branches:
refs/heads/master 1132e472e -> 9d824fed8
[SQL] SPARK-1800 Add broadcast hash join operator & associated hints.
This PR is based off Michael's [PR
734](https://github.com/apache/spark/pull/734) and includes a bunch of cleanups.
Moreover, this PR also
- make
Repository: spark
Updated Branches:
refs/heads/branch-0.9 ef8501d33 -> 950981971
[SPARK-1912] fix compress memory issue during reduce
When we need to read a compressed block, we first create a compression stream
instance (LZF or Snappy) and use it to wrap that block.
Let's say a reducer task
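The per-block wrapping described above can be sketched as follows. GZIP from the JDK stands in for LZF/Snappy (which are not in the standard library), and the object and method names are illustrative, not Spark's actual shuffle code: the relevant detail is that every block gets its own decompression stream instance, each with its own internal buffer, which is where the memory adds up across many blocks.

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, InputStream}
import java.util.zip.{GZIPInputStream, GZIPOutputStream}

// Illustrative sketch of "wrap each compressed block in its own stream".
// GZIP stands in for LZF/Snappy; this is not Spark's actual block-fetch code.
object BlockStreamSketch {
  def compress(data: Array[Byte]): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val gz  = new GZIPOutputStream(bos)
    gz.write(data)
    gz.close()
    bos.toByteArray
  }

  // One decompression stream instance per block: each instance allocates its
  // own internal buffer, which is the per-block memory cost the commit targets.
  def wrapBlock(block: Array[Byte]): InputStream =
    new GZIPInputStream(new ByteArrayInputStream(block))

  def roundTrip(payload: String): String = {
    val in  = wrapBlock(compress(payload.getBytes("UTF-8")))
    val out = new String(in.readAllBytes(), "UTF-8")
    in.close()
    out
  }

  def main(args: Array[String]): Unit = {
    assert(roundTrip("shuffle block payload") == "shuffle block payload")
    println("ok")
  }
}
```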
Repository: spark
Updated Branches:
refs/heads/branch-1.0 b4b0a54cf -> abb62f0b9
[SPARK-1912] fix compress memory issue during reduce
When we need to read a compressed block, we first create a compression stream
instance (LZF or Snappy) and use it to wrap that block.
Let's say a reducer task
Repository: spark
Updated Branches:
refs/heads/master 7ff2c754f -> 1132e472e
[SPARK-2204] Launch tasks on the proper executors in Mesos fine-grained mode
The scheduler for Mesos in fine-grained mode launches tasks on the wrong
executors. `MesosSchedulerBackend.resourceOffers(SchedulerDriver,
Repository: spark
Updated Branches:
refs/heads/branch-1.0 15fd9f2bb -> b4b0a54cf
[SPARK-2204] Launch tasks on the proper executors in Mesos fine-grained mode
The scheduler for Mesos in fine-grained mode launches tasks on the wrong
executors. `MesosSchedulerBackend.resourceOffers(SchedulerDriv
Repository: spark
Updated Branches:
refs/heads/master 9aa603296 -> 7ff2c754f
[SPARK-2270] Kryo cannot serialize results returned by asJavaIterable
(and thus groupBy/cogroup are broken in Java APIs when Kryo is used).
@pwendell this should be merged into 1.0.1.
Thanks @sorenmacbeth for reporti
Repository: spark
Updated Branches:
refs/heads/branch-1.0 bb0b1645d -> 15fd9f2bb
[SPARK-2270] Kryo cannot serialize results returned by asJavaIterable
(and thus groupBy/cogroup are broken in Java APIs when Kryo is used).
@pwendell this should be merged into 1.0.1.
Thanks @sorenmacbeth for rep
Repository: spark
Updated Branches:
refs/heads/branch-1.0 731a788eb -> bb0b1645d
[SPARK-2258 / 2266] Fix a few worker UI bugs
**SPARK-2258.** Worker UI displays zombie processes if the executor throws an
exception before a process is launched. This is because we only inform the
Worker of the
Repository: spark
Updated Branches:
refs/heads/master 5603e4c47 -> 9aa603296
[SPARK-2258 / 2266] Fix a few worker UI bugs
**SPARK-2258.** Worker UI displays zombie processes if the executor throws an
exception before a process is launched. This is because we only inform the
Worker of the cha
Repository: spark
Updated Branches:
refs/heads/master ac06a85da -> 5603e4c47
[SPARK-2242] HOTFIX: pyspark shell hangs on simple job
This reverts a change introduced in 3870248740d83b0292ccca88a494ce19783847f0,
which redirected all stderr to the OS pipe instead of directly to the
`bin/pyspark
Repository: spark
Updated Branches:
refs/heads/branch-1.0 c68be53d0 -> 731a788eb
Replace doc reference to Shark with Spark SQL.
(cherry picked from commit ac06a85da59db8f2654cdf6601d186348da09c01)
Signed-off-by: Reynold Xin
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Repository: spark
Updated Branches:
refs/heads/master acc01ab32 -> ac06a85da
Replace doc reference to Shark with Spark SQL.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ac06a85d
Tree: http://git-wip-us.apache.org/repos/
Repository: spark
Updated Branches:
refs/heads/master 22036aeb1 -> acc01ab32
SPARK-2038: rename "conf" parameters in the saveAsHadoop functions while
keeping source compatibility
https://issues.apache.org/jira/browse/SPARK-2038
to differentiate from the SparkConf object and at the same time keep the sour
Repository: spark
Updated Branches:
refs/heads/branch-1.0 65a559cfc -> c68be53d0
[SPARK-2267] Log exception when TaskResultGetter fails to fetch/deserialize task
result
Note that this is only for branch-1.0 because master's been fixed.
Author: Reynold Xin
Closes #1202 from rxin/SPARK-2267 a
Repository: spark
Updated Branches:
refs/heads/master 8fade8973 -> 22036aeb1
[BUGFIX][SQL] Should match java.math.BigDecimal when unwrapping Hive output
The `BigDecimal` branch in `unwrap` matches `scala.math.BigDecimal` rather
than `java.math.BigDecimal`.
Author: Cheng Lian
Closes #1199
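This bug class is easy to reproduce in plain Scala: in Scala source, an unqualified `BigDecimal` in a type pattern means `scala.math.BigDecimal`, so a value arriving from Java code (such as Hive) as `java.math.BigDecimal` falls through that branch. The `unwrap` below is a hypothetical stand-in, not Spark's actual Hive-interop code.

```scala
// Hypothetical unwrap, not Spark's actual code: the fix is to match the fully
// qualified java.math.BigDecimal, since an unqualified BigDecimal pattern
// refers to scala.math.BigDecimal in Scala source.
object UnwrapSketch {
  def unwrap(value: Any): Any = value match {
    case d: java.math.BigDecimal => BigDecimal(d) // Java value, e.g. from Hive
    case other                   => other
  }

  def main(args: Array[String]): Unit = {
    val hiveValue = new java.math.BigDecimal("1.5")
    assert(unwrap(hiveValue) == BigDecimal("1.5"))
    println("ok")
  }
}
```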
Repository: spark
Updated Branches:
refs/heads/branch-1.0 a31def10a -> 65a559cfc
[BUGFIX][SQL] Should match java.math.BigDecimal when unwrapping Hive output
The `BigDecimal` branch in `unwrap` matches `scala.math.BigDecimal` rather
than `java.math.BigDecimal`.
Author: Cheng Lian
Closes #
Repository: spark
Updated Branches:
refs/heads/branch-1.0 d3dbaf5a7 -> a31def10a
[SPARK-2263][SQL] Support inserting MAP to Hive tables
JIRA issue: [SPARK-2263](https://issues.apache.org/jira/browse/SPARK-2263)
Map objects were not converted to Hive types before being inserted into Hive tables.
A
Repository: spark
Updated Branches:
refs/heads/master b6b44853c -> 8fade8973
[SPARK-2263][SQL] Support inserting MAP to Hive tables
JIRA issue: [SPARK-2263](https://issues.apache.org/jira/browse/SPARK-2263)
Map objects were not converted to Hive types before being inserted into Hive tables.
Autho