Repository: spark
Updated Branches:
refs/heads/branch-1.2 18fbed5b5 -> 5cea859fd
[SPARK-4808] Removing minimum number of elements read before spill check
In the general case, Spillable's heuristic of checking for memory stress
on every 32nd item after 1000 items are read is good enough. In general
Repository: spark
Updated Branches:
refs/heads/master 0cfd2cebd -> 3be92cdac
[SPARK-4808] Removing minimum number of elements read before spill check
In the general case, Spillable's heuristic of checking for memory stress
on every 32nd item after 1000 items are read is good enough. In general
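The heuristic can be sketched as follows (illustrative Java, not Spark's actual Scala `Spillable` trait; the counter names and the byte accounting are simplified assumptions):

```java
// Sketch of the spill-check heuristic: memory is checked on every 32nd
// element read. The rule this patch removes would additionally have
// required more than 1000 elements read before the first check.
public class SpillCheck {
    private long elementsRead = 0;
    private long currentMemory = 0;
    private final long memoryThreshold;
    private int spills = 0;

    public SpillCheck(long memoryThreshold) {
        this.memoryThreshold = memoryThreshold;
    }

    // Called once per element inserted; bytesForElement is its estimated size.
    public boolean maybeSpill(long bytesForElement) {
        elementsRead++;
        currentMemory += bytesForElement;
        if (elementsRead % 32 == 0 && currentMemory >= memoryThreshold) {
            spills++;
            currentMemory = 0; // model the in-memory buffer being spilled to disk
            return true;
        }
        return false;
    }

    public int spillCount() { return spills; }
}
```

With the minimum removed, a spill can already happen at the 32nd element instead of only after the 1001st.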
Repository: spark
Updated Branches:
refs/heads/branch-1.3 ba941ceb1 -> 0382dcc0a
[SPARK-4808] Removing minimum number of elements read before spill check
In the general case, Spillable's heuristic of checking for memory stress
on every 32nd item after 1000 items are read is good enough. In general
Repository: spark
Updated Branches:
refs/heads/branch-1.3 c5f3b9e02 -> ba941ceb1
[SPARK-5900][MLLIB] make PIC and FPGrowth Java-friendly
In the previous version, PIC stores clustering assignments as an `RDD[(Long,
Int)]`. This is mapped to `RDD<Tuple2<Object, Object>>` in Java and hence Java
users have to cast types manually
Repository: spark
Updated Branches:
refs/heads/master 6bddc4035 -> 0cfd2cebd
[SPARK-5900][MLLIB] make PIC and FPGrowth Java-friendly
In the previous version, PIC stores clustering assignments as an `RDD[(Long,
Int)]`. This is mapped to `RDD<Tuple2<Object, Object>>` in Java and hence Java
users have to cast types manually
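A minimal Java illustration of the casting problem the patch addresses (hypothetical names, not Spark code): a Scala pair type surfaces in Java with erased element types, while a small dedicated class keeps the types intact.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// Illustration of why an erased pair is unfriendly to Java callers,
// and what a typed assignment class buys instead.
public class PicCastDemo {
    // What Java effectively sees from an erased Scala pair: Object keys/values.
    public static long idOfErased(Map.Entry<Object, Object> assignment) {
        return (Long) assignment.getKey(); // manual cast forced on the caller
    }

    // Java-friendly alternative: a small, properly typed assignment class.
    public static final class Assignment {
        public final long id;
        public final int cluster;
        public Assignment(long id, int cluster) { this.id = id; this.cluster = cluster; }
    }

    public static long idOfTyped(Assignment a) {
        return a.id; // no cast needed
    }

    public static Map.Entry<Object, Object> erasedPair(long id, int cluster) {
        return new SimpleEntry<Object, Object>(id, cluster);
    }
}
```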
Repository: spark
Updated Branches:
refs/heads/master 34b7c3538 -> 6bddc4035
SPARK-5570: No docs stating that `new SparkConf().set("spark.driver.memory",
...)` will not work
I've updated the documentation to reflect the true behavior of this setting
in client vs. cluster mode.
Author: Ilya Ganelin
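The underlying reason can be shown with plain Java (not Spark code): the driver JVM's maximum heap is fixed at launch, so a property set from inside an already-running driver, which is what the `SparkConf` call amounts to in client mode, cannot change it.

```java
// Plain-JVM illustration: -Xmx is fixed when the JVM starts, so setting
// a memory property afterwards cannot enlarge the heap.
public class DriverMemoryDemo {
    public static long maxHeapBytes() {
        return Runtime.getRuntime().maxMemory();
    }

    // Setting the property in-process has no effect on the fixed max heap.
    public static long maxHeapAfterRequesting(String heap) {
        System.setProperty("spark.driver.memory", heap);
        return Runtime.getRuntime().maxMemory();
    }
}
```

In client mode the remedy is to size the JVM before it starts, e.g. via the `--driver-memory` flag of `spark-submit` or `spark-defaults.conf`; in cluster mode the driver JVM is launched after the conf is read, so the setting can take effect.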
Repository: spark
Updated Branches:
refs/heads/branch-1.3 bd49e8b96 -> c5f3b9e02
SPARK-5570: No docs stating that `new SparkConf().set("spark.driver.memory",
...)` will not work
I've updated the documentation to reflect the true behavior of this setting
in client vs. cluster mode.
Author: Ilya Ganelin
Repository: spark
Updated Branches:
refs/heads/branch-1.3 ff8976ec7 -> bd49e8b96
http://git-wip-us.apache.org/repos/asf/spark/blob/bd49e8b9/streaming/src/test/scala/org/apache/spark/streaming/ReceivedBlockHandlerSuite.scala
Repository: spark
Updated Branches:
refs/heads/master ad6b169de -> 34b7c3538
http://git-wip-us.apache.org/repos/asf/spark/blob/34b7c353/streaming/src/test/scala/org/apache/spark/streaming/ReceivedBlockHandlerSuite.scala
SPARK-4682 [CORE] Consolidate various 'Clock' classes
Another one from JoshRosen's wish list. The first commit is much smaller and
removes 2 of the 4 Clock classes. The second is much larger, necessary for
consolidating the streaming one. I put together implementations in the way that
seemed s
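The consolidated pattern can be sketched like this (simplified Java; the names mirror Spark's `Clock`, `SystemClock`, and `ManualClock`, but this is not the real implementation): one interface, a wall-clock implementation for production, and a manually advanced one for deterministic tests.

```java
// One Clock abstraction instead of several near-duplicates: production
// code reads the system clock, tests advance a manual clock explicitly.
public class Clocks {
    public interface Clock {
        long getTimeMillis();
    }

    public static final class SystemClock implements Clock {
        public long getTimeMillis() { return System.currentTimeMillis(); }
    }

    public static final class ManualClock implements Clock {
        private long now;
        public ManualClock(long start) { this.now = start; }
        public synchronized long getTimeMillis() { return now; }
        public synchronized void advance(long millis) { now += millis; }
    }
}
```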
Repository: spark
Updated Branches:
refs/heads/branch-1.2 856fdcb65 -> 18fbed5b5
[Spark-5889] Remove pid file after stopping service.
Currently the pid file is not deleted, which may cause problems after the
service is stopped. The fix removes the pid file after the service stops.
Aut
Repository: spark
Updated Branches:
refs/heads/master a5fed3435 -> ad6b169de
[Spark-5889] Remove pid file after stopping service.
Currently the pid file is not deleted, which may cause problems after the
service is stopped. The fix removes the pid file after the service stops.
Author:
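The idea of the fix, sketched in Java (the real change is in the `spark-daemon.sh` shell script, and these names are hypothetical): write a pid file on start and, crucially, delete it on stop so a stale pid cannot later be matched against an unrelated process.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of a daemon lifecycle that cleans up its pid file on stop.
public class PidFileDemo {
    public static Path start(long pid) {
        try {
            Path pidFile = Files.createTempFile("spark-daemon", ".pid");
            Files.write(pidFile, Long.toString(pid).getBytes());
            return pidFile;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void stop(Path pidFile) {
        // ... signal the daemon process here ...
        try {
            Files.deleteIfExists(pidFile); // the fix: remove the pid file on stop
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```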
Repository: spark
Updated Branches:
refs/heads/branch-1.3 0c494cf9a -> ff8976ec7
[Spark-5889] Remove pid file after stopping service.
Currently the pid file is not deleted, which may cause problems after the
service is stopped. The fix removes the pid file after the service stops.
Aut
Repository: spark
Updated Branches:
refs/heads/master 8ca3418e1 -> a5fed3435
[SPARK-5902] [ml] Made PipelineStage.transformSchema public instead of private
to ml
For users to implement their own PipelineStages, we need to make
PipelineStage.transformSchema be public instead of private to ml.
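A toy model of why the hook must be public (illustrative Java; Spark's real `transformSchema` operates on `StructType` schemas, and the class names here are invented): a stage defined outside the `ml` package can only override the schema hook if it is public.

```java
import java.util.ArrayList;
import java.util.List;

// A "schema" here is just a list of column names.
public class PipelineSketch {
    public abstract static class PipelineStage {
        // Public so user-defined stages outside the package can override it.
        public abstract List<String> transformSchema(List<String> schema);
    }

    // A user-defined stage, possible only because transformSchema is public.
    public static final class AddPredictionColumn extends PipelineStage {
        @Override
        public List<String> transformSchema(List<String> schema) {
            List<String> out = new ArrayList<>(schema);
            out.add("prediction");
            return out;
        }
    }
}
```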
Repository: spark
Updated Branches:
refs/heads/branch-1.3 55d91d92b -> 0c494cf9a
[SPARK-5902] [ml] Made PipelineStage.transformSchema public instead of private
to ml
For users to implement their own PipelineStages, we need to make
PipelineStage.transformSchema be public instead of private to ml
Repository: spark
Updated Branches:
refs/heads/branch-1.3 fe00eb66e -> 55d91d92b
[SPARK-5904][SQL] DataFrame API fixes.
1. Column is no longer a DataFrame to simplify class hierarchy.
2. Don't use varargs on abstract methods (see Scala compiler bug SI-9013).
Author: Reynold Xin
Closes #4686
Repository: spark
Updated Branches:
refs/heads/master 94cdb05ff -> 8ca3418e1
[SPARK-5904][SQL] DataFrame API fixes.
1. Column is no longer a DataFrame to simplify class hierarchy.
2. Don't use varargs on abstract methods (see Scala compiler bug SI-9013).
Author: Reynold Xin
Closes #4686 fro
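Point 1 can be illustrated with a toy model (not Spark's API; the classes here are invented): after the change, a column is a named expression that a frame resolves, rather than itself being a kind of frame, i.e. composition instead of inheritance.

```java
// Toy model of the flattened hierarchy: Column no longer *is a* DataFrame.
public class ApiSketch {
    public static final class Column {
        public final String name;
        public Column(String name) { this.name = name; }
    }

    public static final class DataFrame {
        private final java.util.List<String> columns;
        public DataFrame(java.util.List<String> columns) { this.columns = columns; }
        // The frame resolves a column by name instead of the column being a frame.
        public boolean resolve(Column c) { return columns.contains(c.name); }
    }
}
```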
Repository: spark
Updated Branches:
refs/heads/master 90095bf3c -> 94cdb05ff
[SPARK-5825] [Spark Submit] Remove the double checking instance name when
stopping the service
`spark-daemon.sh` will confirm the process id by fuzzy matching the class name
while stopping the service; however, it w
Repository: spark
Updated Branches:
refs/heads/branch-1.2 61bde0049 -> 856fdcb65
[SPARK-5825] [Spark Submit] Remove the double checking instance name when
stopping the service
`spark-daemon.sh` will confirm the process id by fuzzy matching the class name
while stopping the service; however,
Repository: spark
Updated Branches:
refs/heads/branch-1.3 25fae8e7e -> fe00eb66e
[SPARK-5825] [Spark Submit] Remove the double checking instance name when
stopping the service
`spark-daemon.sh` will confirm the process id by fuzzy matching the class name
while stopping the service; however,
Repository: spark
Updated Branches:
refs/heads/branch-1.2 f6ee80b18 -> 61bde0049
[SPARK-5423][Core] Cleanup resources in DiskMapIterator.finalize to ensure
deleting the temp file
This PR adds a `finalize` method in DiskMapIterator to clean up the resources
even if an exception occurs duri
Repository: spark
Updated Branches:
refs/heads/branch-1.1 651ceaeb3 -> 36f3c499f
[SPARK-5423][Core] Cleanup resources in DiskMapIterator.finalize to ensure
deleting the temp file
This PR adds a `finalize` method in DiskMapIterator to clean up the resources
even if an exception occurs duri
Repository: spark
Updated Branches:
refs/heads/branch-1.3 f93d4d992 -> 25fae8e7e
[SPARK-5423][Core] Cleanup resources in DiskMapIterator.finalize to ensure
deleting the temp file
This PR adds a `finalize` method in DiskMapIterator to clean up the resources
even if an exception occurs duri
Repository: spark
Updated Branches:
refs/heads/master 38e624a73 -> 90095bf3c
[SPARK-5423][Core] Cleanup resources in DiskMapIterator.finalize to ensure
deleting the temp file
This PR adds a `finalize` method in DiskMapIterator to clean up the resources
even if an exception occurs during p
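The pattern, sketched in Java (not the real `DiskMapIterator`, whose cleanup involves stream handles as well): normal completion calls `cleanup()`, and `finalize` acts as a best-effort safety net so the temp file still gets deleted if the consumer abandons the iterator after an exception. Note that `finalize` is deprecated on modern JVMs; it fits the Java 7/8-era code this commit targets.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Cleanup on the normal path, with finalize() as a last-resort backstop.
public class TempFileIterator {
    private final Path tempFile;
    private boolean cleaned = false;

    public TempFileIterator(Path tempFile) { this.tempFile = tempFile; }

    public static TempFileIterator create() {
        try {
            return new TempFileIterator(Files.createTempFile("spillmap", ".tmp"));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public Path file() { return tempFile; }

    // Idempotent: safe to call from both the normal path and finalize().
    public synchronized void cleanup() {
        if (cleaned) return;
        cleaned = true;
        try {
            Files.deleteIfExists(tempFile);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            cleanup(); // safety net if cleanup() was never reached
        } finally {
            super.finalize();
        }
    }
}
```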
Repository: spark
Updated Branches:
refs/heads/master fb87f4492 -> 38e624a73
[SPARK-5816] Add huge compatibility warning in DriverWrapper
The stability of the new submission gateway assumes that the arguments in
`DriverWrapper` are consistent across multiple Spark versions. However, this is
Repository: spark
Updated Branches:
refs/heads/branch-1.3 fbcb949c5 -> f93d4d992
[SPARK-5816] Add huge compatibility warning in DriverWrapper
The stability of the new submission gateway assumes that the arguments in
`DriverWrapper` are consistent across multiple Spark versions. However, this
Repository: spark
Updated Branches:
refs/heads/branch-1.3 092b45f69 -> fbcb949c5
SPARK-5548: Fix for AkkaUtilsSuite failure - attempt 2
Author: Jacek Lewandowski
Closes #4653 from jacek-lewandowski/SPARK-5548-2-master and squashes the
following commits:
0e199b6 [Jacek Lewandowski] SPARK-55
Repository: spark
Updated Branches:
refs/heads/master e945aa613 -> fb87f4492
SPARK-5548: Fix for AkkaUtilsSuite failure - attempt 2
Author: Jacek Lewandowski
Closes #4653 from jacek-lewandowski/SPARK-5548-2-master and squashes the
following commits:
0e199b6 [Jacek Lewandowski] SPARK-5548: