Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3939#discussion_r22686489
--- Diff: ec2/spark_ec2.py ---
@@ -39,10 +39,24 @@
from optparse import OptionParser
from sys import stderr
+VALID_SPARK_VERSIONS = set
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3939#issuecomment-69258425
@andrewor14 Yeah, I've tested launching:
* valid Spark release
* invalid Spark release
* valid Spark hash
* invalid Spark hash
They all get
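The four launch scenarios above (valid/invalid release, valid/invalid hash) map naturally onto a version check like the following. This is a hedged sketch only: the set contents, function name, and hash heuristic are illustrative, not necessarily what the PR adds.

```python
# Illustrative set of known releases; the real list lives in spark_ec2.py.
VALID_SPARK_VERSIONS = {"1.0.0", "1.1.0", "1.1.1", "1.2.0"}

def classify_spark_version(version):
    """Classify a requested version as a known release or a git-hash candidate."""
    if version in VALID_SPARK_VERSIONS:
        return "release"
    # A hex string of plausible length is treated as a git commit hash;
    # anything else is rejected outright.
    if 7 <= len(version) <= 40 and all(c in "0123456789abcdef" for c in version.lower()):
        return "hash"
    raise ValueError("Unknown Spark version: %s" % version)
```

With this shape, a valid release and a valid-looking hash proceed, while an invalid string fails fast before any instances are launched.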
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3957#issuecomment-69260948
cc @marmbrus
Btw @alexbaretta, when/if this PR gets merged into the codebase, your full
name (not just your GitHub username) will be used as the author, so
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3939#issuecomment-69261120
Btw @shivaram or @andrewor14, can you confirm that the wiki page I created
(linked to in the PR body) is in the right place?
---
If your project is set up for it, you
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3855#issuecomment-69264886
cc @marmbrus
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3939#discussion_r22689661
--- Diff: ec2/spark_ec2.py ---
@@ -236,6 +252,26 @@ def get_or_make_group(conn, name, vpc_id):
return conn.create_security_group(name, "Spark EC2
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3939#issuecomment-69266853
The failed test is a Kafka streaming test and is unrelated to this PR.
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3939#discussion_r22670523
--- Diff: ec2/spark_ec2.py ---
@@ -983,6 +969,12 @@ def real_main():
(opts, action, cluster_name) = parse_args()
# Input parameter
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3939#discussion_r22623181
--- Diff: ec2/spark_ec2.py ---
@@ -983,6 +969,12 @@ def real_main():
(opts, action, cluster_name) = parse_args()
# Input parameter
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/1572#issuecomment-69109231
@lianhuiwang This PR has gone stale. Do you mind updating it so that tests
pass and it merges cleanly, or are you waiting for feedback from a committer?
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3939#discussion_r22623112
--- Diff: ec2/spark_ec2.py ---
@@ -815,13 +804,11 @@ def deploy_files(conn, root_dir, opts, master_nodes,
slave_nodes, modules):
cluster_url = %s
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/731#issuecomment-69109022
@CodingCat Do we need to ping anyone specific to look at this PR? It's been
many months since the last update.
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3939#discussion_r22624980
--- Diff: ec2/spark_ec2.py ---
@@ -815,13 +804,11 @@ def deploy_files(conn, root_dir, opts, master_nodes,
slave_nodes, modules):
cluster_url = %s
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3939#issuecomment-69103523
cc @shivaram
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3939#discussion_r22621704
--- Diff: ec2/spark_ec2.py ---
@@ -706,9 +697,7 @@ def wait_for_cluster_state(conn, opts,
cluster_instances, cluster_state):
sys.stdout.flush
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/3939
[SPARK-5122] Remove Shark
I moved the Spark-Shark version map [to the
wiki](https://cwiki.apache.org/confluence/display/SPARK/Spark-Shark+version+mapping).
You can merge this pull request
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-68976789
Sounds good. Thanks for following up; I haven't had a chance to look over
this PR in a while.
(Btw, I rebased for kicks. You can disregard the coming test run
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3914#issuecomment-68980974
Also @zhzhan, could you squash your commits into one? Right now there
appear to be a large number of merge commits that are not relevant to this PR.
These will show up
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/339#issuecomment-68647110
@CodingCat @kayousterhout What's the status of this PR? It hasn't been
updated in several months.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68189975
@nchammas I spoke too soon earlier about it correctly handling
relative paths. I fixed it, and it is now pwd-preserving.
:+1:
On average, it would
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2634#issuecomment-68190043
@mengxr Now that 1.2.0 is out, can we schedule a rough timeframe for
reviewing this patch?
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68158971
@brennonyork Does this handle relative paths passed to Maven correctly (if
that's a valid potential use case)? We had this problem with the `spark-ec2`
script which
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2348#issuecomment-68159339
Clickable link for the lazy: [Spark Packages](http://spark-packages.org/)
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3804#issuecomment-68111992
cc @shivaram
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/3804
[EC2] Update mesos/spark-ec2 branch to branch-1.3
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/nchammas/spark patch-2
Alternatively you
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/3793
[EC2] Update default Spark version to 1.2.0
Now that 1.2.0 is out, let's update the default Spark version.
You can merge this pull request into a Git repository by running:
$ git pull https
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3793#issuecomment-68072359
cc @JoshRosen
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3793#issuecomment-68073619
Hmm, master is [already on
1.3.0](https://github.com/apache/spark/blob/199e59aacd540e17b31f38e0e32a3618870e9055/docs/_config.yml#L16)
in that config file in dfb8c65
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3793#issuecomment-68076809
Oh, hold up, we need to update [this
map](https://github.com/nchammas/spark/blob/ec0e904608eaa65bbbf35b2558a0116387abaecf/ec2/spark_ec2.py#L257),
too...
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3377#issuecomment-68077832
It generally looks good to me, with a couple of exceptions: In general [we
should use `$(...)` in place of
backticks](http://mywiki.wooledge.org/BashFAQ/082), and we
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3377#discussion_r22264729
--- Diff: bin/run-example ---
@@ -35,17 +35,29 @@ else
fi
if [ -f $FWDIR/RELEASE ]; then
- export SPARK_EXAMPLES_JAR=`ls $FWDIR/lib
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3377#discussion_r22264776
--- Diff: bin/run-example ---
@@ -35,17 +35,29 @@ else
fi
if [ -f $FWDIR/RELEASE ]; then
- export SPARK_EXAMPLES_JAR=`ls $FWDIR/lib
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3377#discussion_r22264804
--- Diff: bin/run-example ---
@@ -35,17 +35,29 @@ else
fi
if [ -f $FWDIR/RELEASE ]; then
- export SPARK_EXAMPLES_JAR=`ls $FWDIR/lib
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3739#issuecomment-68080259
I'm not in a position to review this PR this week, but I just wanted to say
nice work! I'm looking forward to trying out these refactored tests as part of
#3564
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3707#discussion_r22268444
--- Diff: build/mvn ---
@@ -0,0 +1,130 @@
+#!/usr/bin/env bash
+
+# Determine the current working directory
+_DIR=$( cd $( dirname
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/1297#issuecomment-68019141
@ankurdave Does this mean IndexedRDD will not become part of Spark Core, or
is that still potentially happening in the near future?
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3195#issuecomment-68024562
FYI: The commit message at 317e114 now incorrectly states that the linear
backoff was removed. Is there any way to fix that?
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3707#discussion_r22247187
--- Diff: build/mvn ---
@@ -0,0 +1,130 @@
+#!/usr/bin/env bash
+
+# Determine the current working directory
+_DIR=$( cd $( dirname
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3737#issuecomment-68031420
@JoshRosen There are a couple things that we may want to change about this
new behavior in the future. The first is that `--help` now requires a download
to work, which
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3707#discussion_r22194990
--- Diff: build/mvn ---
@@ -0,0 +1,119 @@
+#!/usr/bin/env bash
+
+# Determine the current working directory
+_DIR=$( cd $( dirname
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/3770
[SPARK-4890] Ignore downloaded EC2 libs
PR #3737 changed `spark-ec2` to automatically download boto from PyPI. This
PR tells git to ignore those downloaded library files.
You can merge this pull
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3770#issuecomment-67920359
cc @JoshRosen
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/3772
[Docs] Minor typo fixes
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/nchammas/spark patch-1
Alternatively you can review and apply
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3772#issuecomment-67922385
I don't think it's worth creating a JIRA for this, so it's OK with me if
this contribution is not credited in the release notes.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3751#issuecomment-67783704
@srowen Are you able to trigger Jenkins builds for PRs from non-whitelisted
authors?
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-67786877
Yeah, I looked briefly at ways of parallelizing the Python tests, because
those take around 10-12 minutes in total, which will become a significant
fraction
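The idea of parallelizing the Python tests can be sketched roughly as follows: run independent suites concurrently instead of back to back. The suite names and the thread-pool approach are illustrative assumptions, not how Spark's `python/run-tests` actually works.

```python
from concurrent.futures import ThreadPoolExecutor

def run_suites(suites, workers=4):
    """Run independent test suites concurrently.

    `suites` maps a suite name to a zero-argument callable that returns
    True on pass; returns a dict of name -> result.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda item: (item[0], item[1]()), suites.items()))
```

The trade-off, as noted elsewhere in this thread, is that concurrent suites interleave their output, so neatly ordered logs require extra buffering work.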
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-67788245
Hmm, taking a second look at [how the Python tests are
invoked](https://github.com/apache/spark/blob/c6a3c0d5052e5bf6f981e5f91e05cba38b707237/python/run-tests#L38),
I
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-67804058
By the way, I opened [a question on Stack
Overflow](http://stackoverflow.com/q/27588350/877069) about some kind of show
execution plan feature in sbt. It would make
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3737#issuecomment-67729750
I suspect the `instance ID 'i-471a82b9' does not exist` errors stem from
tagging the instances in a separate call from the one that launches them. The
time between
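One common workaround for that eventual-consistency window is to retry the follow-up call until the new instances become visible. The sketch below is a hedged illustration: the exception type, names, and delays are stand-ins (the real failure would surface as a boto `EC2ResponseError`, not a `KeyError`).

```python
import time

def retry_until_visible(operation, attempts=5, delay=1.0):
    """Retry an operation that can fail while a just-launched resource
    is not yet visible to the API, e.g. tagging a fresh EC2 instance."""
    for i in range(attempts):
        try:
            return operation()
        except KeyError:  # stand-in for "instance ID does not exist"
            if i == attempts - 1:
                raise
            time.sleep(delay)
```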
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3564#discussion_r22146321
--- Diff: project/SparkBuild.scala ---
@@ -397,15 +427,44 @@ object TestSettings {
javaOptions in Test +=
-Dspark.executor.extraClassPath
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3737#issuecomment-67613960
Nice work, Josh.
I took a quick look at the coverage report. It looks like most of it is
covered. If we want to be extra thorough, I think there are a few more
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3737#issuecomment-67665194
Yes, I've been getting this type of error with 1.1.1. I haven't had time to
look into it, but I suspect there is some subtle thing about the EC2 API that
we
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3737#issuecomment-67665414
I'm curious: After calling `--resume`, did the instances in the EC2 web
console get tagged with friendly names? That's one thing I noticed the last
time I got
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3737#issuecomment-67566737
Patch looks good to me, but I'll try to test it out later this week.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-67566925
@JoshRosen Are there any other PRs we should wait on merging before picking
this work up again?
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3737#issuecomment-67568623
If you run Python with the `-Wdefault` flag it should enable the display of
deprecation warnings. They're suppressed by default. I remember catching one
such warning
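The same filter that `python -Wdefault` enables from the command line can also be turned on programmatically, which is handy in test harnesses. A small self-contained demonstration, with a made-up deprecated function:

```python
import warnings

# Deprecation warnings are suppressed by default in Python; "default" shows
# each distinct warning once per location, matching `python -Wdefault`.
warnings.simplefilter("default", DeprecationWarning)

def old_helper():
    """A hypothetical deprecated function, for demonstration only."""
    warnings.warn("old_helper() is deprecated", DeprecationWarning, stacklevel=2)
    return 42
```

Calling `old_helper()` emits the deprecation warning to stderr while still returning its normal result.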
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3737#issuecomment-67569885
That sounds like a good idea. That way if we change something in the future
to rely on a deprecated feature, we'll immediately notice during testing.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3717#issuecomment-67284928
The build was broken, but it's been fixed now.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3717#issuecomment-67284904
Jenkins, retest this please.
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3564#discussion_r21859149
--- Diff: project/SparkBuild.scala ---
@@ -397,15 +427,44 @@ object TestSettings {
javaOptions in Test +=
-Dspark.executor.extraClassPath
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3195#issuecomment-66895881
@JoshRosen This work was originally supposed to be in for 1.2.0, but since
it's not a critical piece of work I don't think there is a need to backport it
for 1.2.1
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66743763
Jenkins, retest this please.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66747900
Jenkins, retest this please.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66750344
Jenkins, retest this please.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66754395
OK, some progress. Tests are running in parallel; the forked JVMs are
getting the correct options; and I can successfully run tests for individual
projects (e.g
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66835945
Jenkins, retest this please.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66838083
Thanks for sending the update to `SparkSubmitSuite` @JoshRosen.
I also made a change in 79d3c38fc7730d4c7f9753e7631d7d100df18c8f to how the
forked JVMs buffer
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66838119
Okie doke @shaneknapp.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66841320
Hmm, [lots of
failures](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24419/testReport/)
in this latest test, mostly related to `streaming
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66853045
FYI, [I've posted a question on Stack
Overflow](http://stackoverflow.com/questions/27453882/how-can-run-tests-in-parallel-but-get-neatly-ordered-test-output)
about
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66865164
:+1: Thank you for getting to the bottom of these flaky tests.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3157#issuecomment-66866588
+1 on this kind of cleanup work.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66866615
Jenkins, retest this please.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66739082
Yeah, I'm ignoring the problem of interleaved output for now since I'm
hitting two more critical problems first: 1) the tests aren't running
successfully, or 2
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-66416442
Jenkins, retest this please.
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3651#discussion_r21573086
--- Diff: pom.xml ---
@@ -941,19 +950,38 @@
<fork>true</fork>
/configuration
/plugin
+!-- Surefire runs
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/1240#issuecomment-65974642
@pwendell @mateiz What is the status of this PR? It's been 5 months since
the last update.
Should @CodingCat continue to work on this or close it and move
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/1240#issuecomment-65974769
Okie doke!
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2661#issuecomment-65974844
@sarutak It looks like this code has now been
[changed](https://github.com/apache/spark/blob/8817fc7fe8785d7b11138ca744f22f7e70f1f0a0/dev/audit-release/blank_sbt_build
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-65881199
Jenkins, retest this please.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-65882473
Hmm, this error from the latest test is interesting:
```
[info] - Read with RegexSerDe *** FAILED *** (2 seconds, 339 milliseconds)
[info] Failed
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/3564
[SPARK-3431] [WIP] Parallelize test execution
This is currently a work in progress to experiment with various options for
parallelizing tests.
You can merge this pull request into a Git
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3195#issuecomment-64938212
OK @JoshRosen, I've added the backoff back in.
I didn't notice any regressions without the linear backoff, so I'll make a
note to remove it in a future version
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3404#issuecomment-64826343
cc @tdas
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3195#issuecomment-64711993
Yeah, I removed it specifically to be more aggressive and shave off some
seconds from the launch time. Do you think that's OK? Or would you prefer
the back off
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3195#issuecomment-63516554
Pinging @JoshRosen and @shivaram again.
This PR re-introduces a commit that somehow disappeared from #2339, and I
think it should be good for inclusion in 1.2.0
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3195#discussion_r20527728
--- Diff: ec2/spark_ec2.py ---
@@ -655,33 +663,44 @@ def wait_for_cluster_state(cluster_instances,
cluster_state, opts):
(would be nice
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3195#issuecomment-63524432
`spark-ec2` currently blocks on SSH availability correctly, but it tests
SSH more often than is necessary. As [you suggested
here](https://github.com/apache/spark/pull
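The "test SSH more often than necessary" point comes down to polling with a backoff between checks. A minimal sketch of that shape, assuming illustrative names and delays rather than `spark-ec2`'s actual implementation:

```python
import time

def wait_for_ready(is_ready, initial_delay=0.0, backoff_step=1.0, max_attempts=20):
    """Poll a readiness check (e.g. SSH availability) with linear backoff,
    so the cluster is probed less frequently as the wait drags on."""
    delay = initial_delay
    for attempt in range(1, max_attempts + 1):
        if is_ready():
            return attempt  # number of checks it took
        time.sleep(delay)
        delay += backoff_step  # linear backoff between checks
    raise RuntimeError("cluster did not become ready in time")
```

Dropping the backoff (a constant `backoff_step=0`) shaves seconds off the launch time at the cost of more frequent probes, which is the trade-off debated later in this thread.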
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3195#discussion_r20528480
--- Diff: ec2/spark_ec2.py ---
@@ -655,33 +663,44 @@ def wait_for_cluster_state(cluster_instances,
cluster_state, opts):
(would be nice
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3195#discussion_r20529472
--- Diff: ec2/spark_ec2.py ---
@@ -655,33 +663,44 @@ def wait_for_cluster_state(cluster_instances,
cluster_state, opts):
(would be nice
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3195#issuecomment-63541152
@shivaram I think this PR is good to go. Did my response to your questions
make sense?
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2947#issuecomment-63428920
Do you mind closing this PR @adampingel now that we have a replacement for
it?
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3195#issuecomment-62612650
cc @JoshRosen @shivaram
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/3195
[SPARK-3398] [SPARK-4325] [EC2] Use EC2 status checks.
This PR re-introduces
[0e648bc](https://github.com/apache/spark/commit/0e648bc2bedcbeb55fce5efac04f6dbad9f063b4)
from PR #2339, which
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3165#discussion_r20050584
--- Diff: dev/fetch-pr ---
@@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3165#issuecomment-62249680
Whoops, I missed Josh's comments.
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2988#discussion_r19922788
--- Diff: ec2/spark_ec2.py ---
@@ -718,12 +726,16 @@ def get_num_disks(instance_type):
return 1
-# Deploy the configuration file
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2988#discussion_r19923159
--- Diff: ec2/spark_ec2.py ---
@@ -718,12 +726,16 @@ def get_num_disks(instance_type):
return 1
-# Deploy the configuration file
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2988#discussion_r19923233
--- Diff: ec2/spark_ec2.py ---
@@ -718,12 +726,16 @@ def get_num_disks(instance_type):
return 1
-# Deploy the configuration file
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2988#discussion_r19746328
--- Diff: ec2/spark_ec2.py ---
@@ -718,12 +726,16 @@ def get_num_disks(instance_type):
return 1
-# Deploy the configuration file