Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/771#issuecomment-54020889
Hey @aarondav, Do you think it's worth having in its current condition? I
can rebase it of course. I was actually unsure of changing it further.
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2221#issuecomment-54022209
Hey @srowen Thanks for fixing this. I feel your argument is plausible, so I
am not verifying it. The change looks reasonable too.
Looks good to me.
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/#issuecomment-54022844
In conclusion, this is a good change!
---
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2189#issuecomment-54023734
test this please
---
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2215#discussion_r16942016
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -90,10 +90,9 @@ case class SetCommand(
throw new
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2215#issuecomment-54024310
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19550/consoleFull)
for PR 2215 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2218#issuecomment-54025736
I just checked; `--queue` is a valid option in spark-submit. And thanks for
updating the docs.
Looks good.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2198#issuecomment-54025962
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19552/consoleFull)
for PR 2198 at commit
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2216#discussion_r16942761
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/DStream.scala ---
@@ -603,14 +603,14 @@ abstract class DStream[T: ClassTag] (
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2216#issuecomment-54028823
This change is okay to have, but to print N elements from a DStream you can
do something like `dstream.foreachRDD(rdd => println(rdd.take(N).mkString))`. I
will let
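The suggested pattern is just take-then-format on each micro-batch. A minimal plain-Scala sketch of that step (no Spark needed; the batch is simulated with a `Seq`, and the separator is an illustrative choice):

```scala
// Mimics rdd.take(n).mkString inside foreachRDD: grab at most n
// elements of the batch and join them into one printable string.
def formatFirstN[T](batch: Seq[T], n: Int): String =
  batch.take(n).mkString(", ")
```

In a real streaming job the same call runs once per batch interval, inside the `foreachRDD` closure.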
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2225#issuecomment-54030562
FYI: It looks like branch-1.1 has an out-of-date version of
`run-tests-jenkins`. Might want to update that.
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2199#issuecomment-54031780
I was under the impression that the mesos native libs are set up on Jenkins.
But anyway, this seems to be a good change.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2198#issuecomment-54031723
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19552/consoleFull)
for PR 2198 at commit
GitHub user baishuo opened a pull request:
https://github.com/apache/spark/pull/2226
[SPARK-3007][SQL] Add Dynamic Partition support to Spark SQL Hive
A new PR based on the new master; the changes are the same as
https://github.com/apache/spark/pull/1919
You can merge this pull request
Github user baishuo commented on the pull request:
https://github.com/apache/spark/pull/1919#issuecomment-54032088
Hi @
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2226#issuecomment-54032115
Can one of the admins verify this patch?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2215#issuecomment-54033083
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19550/consoleFull)
for PR 2215 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2189#issuecomment-54033709
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19551/consoleFull)
for PR 2189 at commit
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2138#discussion_r16945208
--- Diff: core/src/main/scala/org/apache/spark/SparkEnv.scala ---
@@ -285,7 +286,8 @@ object SparkEnv extends Logging {
sparkFilesDir,
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/2179#issuecomment-54034960
It's much neater and simpler :+1:
---
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2204#discussion_r16945967
--- Diff:
yarn/common/src/test/scala/org/apache/spark/deploy/yarn/ClientBaseSuite.scala
---
@@ -232,6 +233,15 @@ class ClientBaseSuite extends FunSuite
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2196#issuecomment-54037509
This is okay, but I did not understand the need for a separate PR and JIRA
for the same issue. The issue simply applies to both pyspark and spark-shell.
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2216#discussion_r16946191
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/DStream.scala ---
@@ -603,14 +603,14 @@ abstract class DStream[T: ClassTag] (
*
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2194#discussion_r16946230
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
---
@@ -186,6 +186,56 @@ trait JavaRDDLike[T, This : JavaRDDLike[T, This]]
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2221#issuecomment-54037958
Yes, PS, I did verify that this was the cause, by changing the code to
print the stderr from the command that fails in SparkSubmitSuite. It was due to
multiple
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2216#discussion_r16946373
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/DStream.scala ---
@@ -603,14 +603,14 @@ abstract class DStream[T: ClassTag] (
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2158#issuecomment-54038679
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19553/consoleFull)
for PR 2158 at commit
Github user BigCrunsh commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54040903
Jenkins, retest this please
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54041405
QA tests have started for PR 2137. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19554/consoleFull
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2194#issuecomment-54043344
It might be good to add a test suite for this in `JavaAPISuite.java`.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2158#issuecomment-54044103
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19553/consoleFull)
for PR 2158 at commit
Github user darabos commented on a diff in the pull request:
https://github.com/apache/spark/pull/2081#discussion_r16948878
--- Diff: ec2/spark_ec2.py ---
@@ -342,6 +343,15 @@ def launch_cluster(conn, opts, cluster_name):
device.delete_on_termination = True
Github user darabos commented on the pull request:
https://github.com/apache/spark/pull/2081#issuecomment-54044479
I've tested this now with `ec2/spark-ec2 -s 1 --instance-type m3.2xlarge
--region=us-east-1 launch` and the machines have mounted the SSDs. Thanks!
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2081#issuecomment-54044709
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19555/consoleFull)
for PR 2081 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/1230#issuecomment-54045207
@adamosloizou In case we agree that `:quit` is the correct way, can you close
this PR?
---
Github user adamosloizou commented on the pull request:
https://github.com/apache/spark/pull/1230#issuecomment-54045637
Fair enough. Closing.
---
Github user adamosloizou closed the pull request at:
https://github.com/apache/spark/pull/1230
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54048245
QA results for PR 2137:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2081#issuecomment-54049490
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19555/consoleFull)
for PR 2081 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2134#issuecomment-54050427
There is something similar in #791
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2138#issuecomment-54050485
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19556/consoleFull)
for PR 2138 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2138#issuecomment-54050554
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19556/consoleFull)
for PR 2138 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2134#issuecomment-54050560
And some of the comments there apply to this patch as well.
---
GitHub user bbejeck opened a pull request:
https://github.com/apache/spark/pull/2227
[CORE] SPARK-3178 setting SPARK_WORKER_MEMORY to a value without a label (m
or g) sets the worker memory limit to zero
Now the worker will fail fast if the memory is set to zero by leaving off
the
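The bug class here is a memory setting whose unit label is dropped, silently parsing to a zero limit. A hedged sketch of a fail-fast parser in plain Scala (illustrative helper names and units, not the PR's actual code):

```scala
// Parse values like "2g", "512m", or a bare number (treated as MB),
// and fail fast instead of letting a zero limit through.
def parseWorkerMemoryMb(setting: String): Int = {
  val s = setting.trim.toLowerCase
  val mb =
    if (s.endsWith("g")) s.dropRight(1).toInt * 1024
    else if (s.endsWith("m")) s.dropRight(1).toInt
    else s.toInt // no label: assume megabytes
  require(mb > 0, s"SPARK_WORKER_MEMORY must be positive, got '$setting'")
  mb
}
```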
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2138#issuecomment-54051264
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19557/consoleFull)
for PR 2138 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2227#issuecomment-54051575
Can one of the admins verify this patch?
---
GitHub user prudhvije opened a pull request:
https://github.com/apache/spark/pull/2228
SPARK-3328 fixed make-distribution script --with-tachyon option.
The directory path for dependency jars and resources in Tachyon 0.5.0 has
changed.
You can merge this pull request into a Git
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2228#issuecomment-54051959
Can one of the admins verify this patch?
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2228#issuecomment-54052034
Jenkins, test this please.
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2228#issuecomment-54052144
Actually, running the jenkins tests on this is wasteful.
@pwendell This looks like a good fix to me; maybe this should go into 1.1.0
too?
---
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/2229
SPARK-3337 Paranoid quoting in shell to allow install dirs with spaces
within.
...
Still testing it out, with spark install dir name having spaces in it.
You can merge this pull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2229#issuecomment-54053703
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19558/consoleFull)
for PR 2229 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2229#issuecomment-54053690
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19558/consoleFull)
for PR 2229 at commit
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/2230
SPARK-2096 Correctly parse dot notations
First let me write down the current `projections` grammar of Spark SQL:
expression: orExpression
orExpression
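The ambiguity being fixed is that `a.b.c` can mean one nested field access rather than three separate tokens. A toy illustration in plain Scala of resolving a dotted path against nested data (hypothetical helper; the actual PR changes the SQL parser's grammar, not runtime resolution):

```scala
// Walk a dot-separated path like "a.b.c" down nested Maps,
// returning None if any step is missing or not itself nested.
def resolvePath(path: String, row: Map[String, Any]): Option[Any] =
  path.split('.').foldLeft(Option[Any](row)) {
    case (Some(m: Map[_, _]), field) =>
      m.asInstanceOf[Map[String, Any]].get(field)
    case _ => None
  }
```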
Github user BigCrunsh commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54055010
Jenkins, retest this please
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54055383
QA tests have started for PR 2137. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19559/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54055470
QA results for PR 2137:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
Github user loachli commented on the pull request:
https://github.com/apache/spark/pull/2102#issuecomment-54056189
I have created SPARK-3191(https://issues.apache.org/jira/browse/SPARK-3191)
for it. Do you think it is enough for this PR?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2138#issuecomment-54056581
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19557/consoleFull)
for PR 2138 at commit
Github user BigCrunsh commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54057841
Jenkins, retest this please
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54058061
QA tests have started for PR 2137. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19560/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54058346
QA results for PR 2137:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-54059387
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19561/consoleFull)
for PR 2078 at commit
Github user mattf commented on the pull request:
https://github.com/apache/spark/pull/2183#issuecomment-54061965
What's the problem without this patch? I remember that the JVM will
shut itself down after the shell exits.
davies, I went back and tried to reproduce the shell
Github user mattf commented on the pull request:
https://github.com/apache/spark/pull/2183#issuecomment-54062440
Is it better to put `atexit.register()` in `context.py`? So all the pyspark
jobs can have this.
I think it's a question of who owns the context. The owner is whomever
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-54064930
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19562/consoleFull)
for PR 1983 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-54065121
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19562/consoleFull)
for PR 1983 at commit
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-54065708
test this please
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-54066312
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19563/consoleFull)
for PR 1983 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-54066507
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19561/consoleFull)
for PR 2078 at commit
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/1541#issuecomment-54067191
ping @JoshRosen, could you help take a look at this one?
---
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2196#issuecomment-54067786
Yeah, that's what I suggested on the linked PR. It makes the set of related
changes atomic, too. But I guess it's not a big deal.
---
Github user BigCrunsh commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54067988
Jenkins, retest this please
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2083#issuecomment-54068545
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19565/consoleFull)
for PR 2083 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54068540
QA tests have started for PR 2137. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19564/consoleFull
---
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2199#issuecomment-54071586
yep, this error is produced when the mesos native lib is on
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-54072638
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19563/consoleFull)
for PR 1983 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2083#issuecomment-54074910
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19565/consoleFull)
for PR 2083 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/991#issuecomment-54077338
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19566/consoleFull)
for PR 991 at commit
Github user BigCrunsh commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54079294
Jenkins, retest this please
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54079494
QA tests have started for PR 2137. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19567/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/991#issuecomment-54081074
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19566/consoleFull)
for PR 991 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54084205
QA results for PR 2137:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
GitHub user BigCrunsh opened a pull request:
https://github.com/apache/spark/pull/2231
Use SquaredL2Updater in LogisticRegressionWithSGD
SimpleUpdater ignores the regularizer, which leads to an unregularized
LogReg. To enable the common L2 regularizer (and the corresponding
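The difference the PR targets is a single term in the gradient step. A plain-Scala sketch of the idea (illustrative, not MLlib's actual updater code): with SimpleUpdater the step is `w -= step * grad`, while the L2-regularized step also shrinks the weights:

```scala
// One SGD step with an L2 penalty 0.5 * lambda * ||w||^2, whose
// gradient contributes lambda * w alongside the loss gradient.
def l2Step(w: Array[Double], grad: Array[Double],
           step: Double, lambda: Double): Array[Double] =
  w.zip(grad).map { case (wi, gi) => wi - step * (gi + lambda * wi) }
```

With `lambda = 0` this reduces to the unregularized update, which is why swapping in the L2 updater is the whole fix.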
Github user BigCrunsh commented on the pull request:
https://github.com/apache/spark/pull/2137#issuecomment-54092745
@mengxr, do you agree with this modification?
---
Github user chesterxgchen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2204#discussion_r16964789
--- Diff:
yarn/common/src/test/scala/org/apache/spark/deploy/yarn/ClientBaseSuite.scala
---
@@ -232,6 +233,15 @@ class ClientBaseSuite extends
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/2232
[SPARK-3319 / 3338] Resolve Spark submit config paths
**SPARK-3319.** There is currently a divergence in behavior when the user
passes in additional jars through `--jars` and through setting
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2232#issuecomment-54096247
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19569/consoleFull)
for PR 2232 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2194#issuecomment-54097265
@ChengXiangLi could you describe a bit more what the context is being used
for? This is an unstable API, so I'm a bit hesitant to expose this in its
current form. It
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2211#discussion_r16966248
--- Diff: repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -965,11 +966,9 @@ class SparkILoop(in0: Option[BufferedReader],
protected val
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2211#issuecomment-54097710
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19570/consoleFull)
for PR 2211 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2211#issuecomment-54097997
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19571/consoleFull)
for PR 2211 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2232#issuecomment-54098222
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19569/consoleFull)
for PR 2232 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1541#issuecomment-54098544
Thanks for the reminder.
@kayousterhout I looked over @zsxwing's example and I agree that there's a
thread-safety issue here. We can definitely have
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1541#issuecomment-54098674
Actually, it looks like the `fetchedStatuses` vs `statuses` synchronization
is correct, since it's guarding against modification to that statuses array
while reading
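The pattern under discussion — readers copying the statuses array under the same lock that writers hold — can be sketched like this in plain Scala (hypothetical names; not the actual MapOutputTracker code):

```scala
// Writers mutate only inside the lock; readers snapshot inside the
// same lock, so no reader observes a half-updated array.
class StatusHolder(n: Int) {
  private val statuses = new Array[Long](n)
  def update(i: Int, v: Long): Unit =
    statuses.synchronized { statuses(i) = v }
  def snapshot(): Array[Long] =
    statuses.synchronized { statuses.clone() }
}
```

The returned snapshot is a private copy, so later writes cannot race with code still reading it.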
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/845#issuecomment-54099382
@nikhils05 Can you close this PR ?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2211#issuecomment-54099820
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19570/consoleFull)
for PR 2211 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2211#issuecomment-54100241
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19571/consoleFull)
for PR 2211 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/901#issuecomment-54100620
@ankurdave @rxin could you guys come to a decision on this one way or the
other? Also @npanj mind adding `[GRAPHX]` to the title here? Right now this is
getting sorted
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/81#issuecomment-54100664
I believe this was fixed by the larger change in #1777, so we can close
this issue for now.
---