Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/5609#issuecomment-96144023
Jenkins, test this please.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled…
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r29098432
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,17 +17,172 @@
package org.apache.spark.ui.jobs
-import
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r29098430
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,17 +17,170 @@
package org.apache.spark.ui.jobs
-import
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r29098423
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/timeline-view.js ---
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r29098420
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/timeline-view.js ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/5609#issuecomment-96141048
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5697#issuecomment-96135474
[Test build #30951 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30951/consoleFull)
for PR 5697 at commit
[`1ebad60`](https://githu
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-96135291
Jenkins. Make it so! Oh right...
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-96135280
@andrewor14 all good suggestions. I've captured them all in this round of
commits. I'm still not sold on the naming of the Util object. So I've left it
for now.
Github user AiHe commented on the pull request:
https://github.com/apache/spark/pull/5687#issuecomment-96133857
Got your point; changed all toString methods in MLlib.
There are a large number of uses of the 'old' way in statements like
logInfo, which makes it hard to change all t
GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/5697
[SPARK-7140][MLLIB] only scan the first 16 nonzeros in Vector.hashCode
The Python SerDe calls `Object.hashCode`, which is very expensive for
Vectors. It is not necessary to scan the whole vector, esp
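A sketch of the idea in that PR summary: hash only a bounded prefix of the nonzero entries, so `hashCode` cost stays constant regardless of vector size while equal vectors still hash equally. The object name, cutoff constant, and mixing scheme below are illustrative assumptions, not Spark's actual code.

```scala
// Illustrative sketch (not Spark's implementation) of hashing only the
// first few nonzero (index, value) pairs of a vector.
object VectorHash {
  private val MaxNonzeros = 16 // assumed cutoff, taken from the PR title

  /** Hash at most the first `MaxNonzeros` nonzero entries. */
  def hash(values: Array[Double]): Int = {
    var result = 31 + values.length
    var nnz = 0
    var i = 0
    while (i < values.length && nnz < MaxNonzeros) {
      val v = values(i)
      if (v != 0.0) {
        val bits = java.lang.Double.doubleToLongBits(v)
        result = 31 * result + i                        // mix in the index
        result = 31 * result + (bits ^ (bits >>> 32)).toInt // mix in the value
        nnz += 1
      }
      i += 1
    }
    result
  }
}
```

Vectors that differ only after the 16th nonzero may collide, which is legal for `hashCode` as long as `equals` still compares all entries.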
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29098031
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -42,34 +50,122 @@ private[regression] trait LinearRegressionParams
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29098013
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/param/shared/sharedParams.scala ---
@@ -256,4 +256,38 @@ trait HasFitIntercept extends Params {
/** @
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/5636#issuecomment-96131461
Roger - I'll add tests to the suite.
Sent with Good (www.good.com)
-Original Message-
From: Imran Rashid
[notificati...@githu
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/5636#issuecomment-96129881
No Imran - they don't. However I see the same on the master branch. I don't
think they have anything to do with my changes.
Sent with Good (www.good.com
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097845
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097799
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r2909
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097743
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097732
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-96127284
Looks like a great start! Left some comments mostly about Python style and
organization. Will take a closer look next week at the actual logic and flow.
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097719
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097716
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097702
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097705
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097697
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097695
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR=
Github user shenh062326 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5608#discussion_r29097662
--- Diff: core/src/main/scala/org/apache/spark/util/SizeEstimator.scala ---
@@ -204,25 +204,36 @@ private[spark] object SizeEstimator extends Logging {
Github user His-name-is-Joof commented on the pull request:
https://github.com/apache/spark/pull/5667#issuecomment-96124482
Thanks for your patience shivaram.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5693
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5684
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/5683#issuecomment-96124413
Jenkins, test this please.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/5693#issuecomment-96124398
Thanks. I've merged this in master.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/5684#issuecomment-96124133
Thanks. I've merged this in master.
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/5430#issuecomment-96123516
Jenkins, retest this please.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5685#discussion_r29097065
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -77,6 +80,9 @@ private[spark] object ClosureCleaner extends Logging {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5696#issuecomment-96121428
[Test build #30948 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30948/consoleFull)
for PR 5696 at commit
[`3b554b5`](https://gith
Github user squito commented on the pull request:
https://github.com/apache/spark/pull/5636#issuecomment-96120931
Hi @ilganeli thanks for updating this. Not sure if you are still working
on this or not, but we definitely need tests for the new behavior as well.
There are tests aroun
Github user squito commented on the pull request:
https://github.com/apache/spark/pull/5636#issuecomment-96120951
btw I have no idea what is going on in those test failures ... do the tests
pass when you run them locally?
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/4062#issuecomment-96119987
Jenkins, retest this please
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/5677#issuecomment-96119961
Jenkins, retest this please
Github user hellertime commented on a diff in the pull request:
https://github.com/apache/spark/pull/3074#discussion_r29096430
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to t
Github user WangTaoTheTonic commented on a diff in the pull request:
https://github.com/apache/spark/pull/5609#discussion_r29096305
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkSubmitCommandBuilder.java
---
@@ -190,6 +190,10 @@
firstNonEmptyValue(Spar
Github user WangTaoTheTonic commented on a diff in the pull request:
https://github.com/apache/spark/pull/5609#discussion_r29096296
--- Diff:
launcher/src/test/java/org/apache/spark/launcher/SparkSubmitCommandBuilderSuite.java
---
@@ -59,6 +59,8 @@ public void testClusterCmdBuilde
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-96114471
@doctapp actually in that example Dockerfile, the implication was that the
container had been run with a flag such as `-v
/usr/local/lib:/host/usr/local/lib:ro`, so th
Github user ArcherShao closed the pull request at:
https://github.com/apache/spark/pull/5676
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/5687#discussion_r29095837
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/model/Predict.scala ---
@@ -29,9 +29,7 @@ class Predict(
val predict: Double,
v
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5695#issuecomment-96108798
[Test build #30949 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30949/consoleFull)
for PR 5695 at commit
[`a7a4cb9`](https://gith
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-96108696
[Test build #30950 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30950/consoleFull)
for PR 5694 at commit
[`3c53a1a`](https://gith
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5685#issuecomment-96108534
The main issue with grabbing the values into the closure is that we'll need
to do this everywhere. My intention is to wrap many existing methods (in
SparkContext and o
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5685#issuecomment-96107937
Yes, this also affects user programs. For example I modified SparkPi to
follow the pattern I described in the PR description
```
...
val slices = if
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5609#discussion_r29095264
--- Diff:
launcher/src/test/java/org/apache/spark/launcher/SparkSubmitCommandBuilderSuite.java
---
@@ -59,6 +59,8 @@ public void testClusterCmdBuilder() throw
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5609#discussion_r29095255
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkSubmitCommandBuilder.java
---
@@ -190,6 +190,10 @@
firstNonEmptyValue(SparkLauncher
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5696#discussion_r29095193
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -95,14 +95,8 @@ private[spark] class ApplicationMaster(
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/5672#issuecomment-96105072
@vanzin Sounds reasonable.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5696#discussion_r29095078
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -95,14 +95,8 @@ private[spark] class ApplicationMaster(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5696#issuecomment-96104378
[Test build #30948 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30948/consoleFull)
for PR 5696 at commit
[`3b554b5`](https://githu
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5695#issuecomment-96104382
[Test build #30949 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30949/consoleFull)
for PR 5695 at commit
[`a7a4cb9`](https://githu
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-96104392
[Test build #30950 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30950/consoleFull)
for PR 5694 at commit
[`3c53a1a`](https://githu
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/5685#issuecomment-96103803
BTW the general workaround for this kind of stuff is to grab the values
from outside the closure into a local val before you call something. I'd look
into how to do that b
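The workaround described here (copy a field into a local val before building the closure) can be sketched without Spark. The classes below are hypothetical stand-ins that only demonstrate why referencing a field captures the enclosing instance; the serialization check mirrors what Spark requires of task closures.

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// `Driver` is an illustrative stand-in, deliberately NOT Serializable.
class Driver {
  val slices = 4

  // Problematic: `slices` is really `this.slices`, so the lambda captures `this`.
  def badClosure: Int => Int = (x: Int) => x * slices

  // Workaround: the local val is captured by value; `this` stays out of the closure.
  def goodClosure: Int => Int = {
    val localSlices = slices
    (x: Int) => x * localSlices
  }
}

object ClosureCheck {
  // True if `f` survives Java serialization, as Spark requires of task closures.
  def serializable(f: AnyRef): Boolean =
    try {
      new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(f)
      true
    } catch {
      case _: NotSerializableException => false
    }
}
```

With the Scala lambda encoding, `badClosure` fails to serialize because its captured `Driver` is not serializable, while `goodClosure` (capturing only an `Int`) serializes fine.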
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/5685#issuecomment-96103510
Does this happen in user programs or just in SparkContext? This is exactly
what ClosureCleaner was designed to deal with, so I'm surprised that it's a
problem.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5687#issuecomment-96103225
... this still is not using string interpolation everywhere. It's a small
thing, but if we're bothering to update these `toString` methods, go all the
way.
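The style being asked for can be illustrated with a simplified stand-in for the `Predict` class (not the actual MLlib code): replace `+`-concatenation with Scala string interpolation throughout `toString`.

```scala
// Simplified, illustrative version of an MLlib-style model class.
class Predict(val predict: Double, val prob: Double) {
  // Old style: "Predict(predict = " + predict + ", prob = " + prob + ")"
  // New style: one interpolated string, easier to read and less error-prone.
  override def toString: String = s"Predict(predict = $predict, prob = $prob)"
}
```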
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5696#issuecomment-96102941
Note that as is this will cause an exception if the hook actually runs;
#5672 has a change that fixes that.
GitHub user vanzin opened a pull request:
https://github.com/apache/spark/pull/5696
[SPARK-3090] [core] Stop SparkContext if user forgets to.
Set up a shutdown hook to try to stop the Spark context in
case the user forgets to do it. The main effect is that any
open log files
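The mechanism can be sketched without Spark; `Context` below is a hypothetical stand-in for SparkContext showing only the hook registration and an idempotent `stop()`, not the actual patch.

```scala
// Illustrative sketch of SPARK-3090's idea: a JVM shutdown hook stops the
// context if the user forgot, so buffered files still get flushed and closed.
class Context {
  @volatile private var stopped = false

  def stop(): Unit =
    if (!stopped) {
      stopped = true
      // real code would flush and close event/log files here
    }

  def isStopped: Boolean = stopped

  // Best-effort cleanup when the JVM exits; harmless if stop() already ran.
  Runtime.getRuntime.addShutdownHook(new Thread(() => stop()))
}
```

Making `stop()` idempotent is what keeps the hook safe: it is a no-op when the user already stopped the context.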
Github user AiHe commented on the pull request:
https://github.com/apache/spark/pull/5687#issuecomment-96102493
Okay, sounds better to use a modern style.
Github user rakeshchalasani commented on the pull request:
https://github.com/apache/spark/pull/5692#issuecomment-96101855
FWIW, I think LR with SGD should be made public and LR with LBFGS should be
left as it is.
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/5626#issuecomment-96101280
test this please
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5672#issuecomment-96094433
Hooks have a priority so that they can choose the order they're executed
in. e.g. if some hook really needs to execute before another one, that's
possible. Similar to how
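A minimal sketch of such a priority-ordered hook manager (illustrative, not Spark's actual SparkShutdownHookManager). It also wraps each hook in `Try` so one hook's failure does not stop the rest, the behavior discussed elsewhere in this thread.

```scala
import scala.collection.mutable
import scala.util.Try

class HookManager {
  private case class Hook(priority: Int, body: () => Unit)
  // Max-heap: highest-priority hooks run first.
  private val hooks =
    mutable.PriorityQueue.empty[Hook](Ordering.by((h: Hook) => h.priority))

  def add(priority: Int)(body: => Unit): Unit = synchronized {
    hooks.enqueue(Hook(priority, () => body))
  }

  /** Run all hooks in priority order, logging and swallowing individual failures. */
  def runAll(): Unit = synchronized {
    while (hooks.nonEmpty) {
      val hook = hooks.dequeue()
      Try(hook.body()).failed.foreach(e => println(s"hook failed: $e"))
    }
  }
}
```

A `PriorityQueue` (rather than a plain `ConcurrentLinkedQueue`) is what makes the ordering guarantee possible, which is why some synchronization remains.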
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/5672#issuecomment-96093469
While we are at it: any particular reason for using PriorityQueue or can we
replace it by ConcurrentLinkedQueue or something and get rid of a few
synchronized's ?
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/5677#issuecomment-96093322
Jenkins, retest this please
Github user rakeshchalasani closed the pull request at:
https://github.com/apache/spark/pull/5692
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5672#issuecomment-96090439
@nishkamravi2 just adding `Try()` around the `logUncaughtExceptions` call
should be sufficient.
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/5672#issuecomment-96089855
@vanzin Are we suggesting adding a try-catch block inside the while loop
in runAll, or deleting the throws in logUncaughtExceptions?
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/5672#issuecomment-96089151
@srowen I think I'd approach this one as why-not (especially with logging
enabled). This particular call site has warranted two separate PRs attempting
to add a try-
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/5526#discussion_r29092105
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/FSBasedRelationSuite.scala
---
@@ -0,0 +1,425 @@
+/*
+ * Licensed to the Apache Soft
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/5574#issuecomment-96087350
retest this please
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/5636#issuecomment-96087392
retest this please
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/5692#issuecomment-96086100
Yes, we are deprecating the static train methods. See
https://issues.apache.org/jira/browse/SPARK-6682
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091503
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -97,3 +193,153 @@ class LinearRegressionModel private[ml] (
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091451
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -42,34 +50,122 @@ private[regression] trait LinearRegressionParams
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091469
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -97,3 +193,153 @@ class LinearRegressionModel private[ml] (
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091500
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -97,3 +193,153 @@ class LinearRegressionModel private[ml] (
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091459
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -42,34 +50,122 @@ private[regression] trait LinearRegressionParams
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091453
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -42,34 +50,122 @@ private[regression] trait LinearRegressionParams
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091441
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/param/shared/sharedParams.scala ---
@@ -256,4 +256,38 @@ trait HasFitIntercept extends Params {
/** @
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091473
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -97,3 +193,153 @@ class LinearRegressionModel private[ml] (
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091447
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -42,34 +50,122 @@ private[regression] trait LinearRegressionParams
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091448
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -42,34 +50,122 @@ private[regression] trait LinearRegressionParams
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091445
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -42,34 +50,122 @@ private[regression] trait LinearRegressionParams
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091461
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -42,34 +50,122 @@ private[regression] trait LinearRegressionParams
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29091444
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -42,34 +50,122 @@ private[regression] trait LinearRegressionParams
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5692#issuecomment-96082557
I think we've discussed this before, and we don't want to add more `object`
methods as it's getting out of hand. I think it's an accident that the L-BFGS
object is public.
GitHub user tdas opened a pull request:
https://github.com/apache/spark/pull/5695
[SPARK-7138][Streaming] Add method to BlockGenerator to add multiple
records to BlockGenerator with single callback
This is to ensure that receivers that receive data in small batches (like
Kinesis) a
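The API shape being described might look like the following sketch; the class and method names are assumptions based on the summary above, not the actual streaming code. The point is that a receiver delivering small batches pays the lock/callback cost once per batch rather than once per record.

```scala
import scala.collection.mutable

// Hypothetical stand-in for a BlockGenerator-style buffer.
class BlockBuffer {
  private val buffer = mutable.ArrayBuffer.empty[Any]

  /** Add one record; one lock acquisition per record. */
  def addData(record: Any): Unit = synchronized { buffer += record }

  /** Add a whole batch atomically under a single lock acquisition,
    * so a single callback can cover the entire batch. */
  def addMultipleData(records: Iterator[Any]): Unit = synchronized {
    buffer ++= records
  }

  def size: Int = synchronized(buffer.size)
}
```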
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4937#issuecomment-96081013
This is a cool feature, but I'm not sure Spark needs its own in-house
expression language. It seems like it belongs in, for example, the Typesafe
Config project that Spark
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-96081014
Alright, I think I finished my review. Looking good, just a bunch of nits
and some dependency stuff to sort out. You could probably also remove the "WIP"
from the title at
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4435#discussion_r29090285
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -14,22 +14,170 @@
* See the License for the specific langua
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4435#discussion_r29090297
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -14,22 +14,170 @@
* See the License for the specific langua
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4435#discussion_r29090268
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -14,22 +14,170 @@
* See the License for the specific langua
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5672#issuecomment-96080643
Since we're talking exceptions, `SparkShutdownHookManager.runAll` could be
modified to catch exceptions for individual hooks and ignore them. It already
logs them, but `lo
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5672#issuecomment-96080347
In general I'd argue for not just catching exceptions broadly as a defense.
This is a `stop()` method though, and often the right behavior is to just keep
going even if st
1 - 100 of 360 matches