Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/6065#issuecomment-101819201
@mengxr I'm going to start before/after performance tests for this PR now.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-101726519
> could I get a few eyes on this one?
Doesn't look like you've made any big changes since my last review, so I
can say this patch LGTM overall.
You
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-101722759
Python 2.6 is the oldest version of Python that Spark officially supports.
We also added Python 3 support recently, so ideally this script should be
able
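The 2.6-through-3 constraint above can be illustrated with a minimal shim. This is purely an editorial sketch, not code from the PR:

```python
# Illustrative only -- not from the PR. A script meant to run on both
# Python 2.6 and Python 3 typically starts with a __future__ import so
# print() works everywhere.
from __future__ import print_function

import sys


def python_version_ok(version_info=None):
    """True for Python 2.6+ or any Python 3."""
    v = version_info if version_info is not None else sys.version_info
    return (v[0] == 2 and v[1] >= 6) or v[0] >= 3


# dict/set comprehensions and set literals are 2.7+, so a 2.6-safe
# script sticks to the set()/dict() constructors:
SUPPORTED = set(["2.6", "2.7", "3.4"])

if __name__ == "__main__":
    print("interpreter supported:", python_version_ok())
```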
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2849#issuecomment-100686174
> when will Spark finish to verify it?
@maidh91 - Please follow the discussion on the JIRA issue to get this kind
of information: [SPARK-3561](https
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5971#issuecomment-100519429
@rxin - Shouldn't this also be merged into `branch-1.3`?
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5818#issuecomment-100366865
Does this PR also cover the Python DataFrame API?
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/4929#issuecomment-100312545
Does this patch include support for [recursive
CTEs](https://technet.microsoft.com/en-us/library/ms186243) (very useful for
hierarchical data)?
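For readers unfamiliar with the feature being asked about, here is a Spark-independent illustration of a recursive CTE walking hierarchical data, using sqlite3 (which supports `WITH RECURSIVE`) purely as a stand-in; the table and names are made up:

```python
# Hypothetical example -- not Spark SQL. Demonstrates what a recursive
# CTE does: it walks a hierarchy by repeatedly joining new rows back
# onto the rows already produced.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, manager_id INTEGER);
    INSERT INTO employees VALUES
        (1, 'alice', NULL),  -- root of the hierarchy
        (2, 'bob',   1),
        (3, 'carol', 2);
""")

# Start from the root (no manager), then keep joining subordinates.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, depth) AS (
        SELECT id, name, 0 FROM employees WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, c.depth + 1
        FROM employees e JOIN chain c ON e.manager_id = c.id
    )
    SELECT name, depth FROM chain ORDER BY depth
""").fetchall()
print(rows)  # [('alice', 0), ('bob', 1), ('carol', 2)]
```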
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/6014#issuecomment-100377512
I wish there were a way to simplify this code.
As it is, we are still prone to silent duplicate entries in these
dictionaries (something which has happened
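The failure mode described above, a duplicated key in a dict literal silently winning, can be sketched as follows (illustrative only; the key names are made up):

```python
# A duplicate key in a dict literal is accepted without any warning at
# runtime -- the last value simply wins and the first entry vanishes.
d = {"pyspark": "python/run-tests", "pyspark": "other"}
print(len(d))  # 1

# A defensive constructor that raises instead of silently overwriting:
def strict_dict(pairs):
    """Build a dict from (key, value) pairs, rejecting duplicate keys."""
    result = {}
    for key, value in pairs:
        if key in result:
            raise ValueError("duplicate key: %r" % (key,))
        result[key] = value
    return result
```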
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5818#issuecomment-100371588
Can do, but probably not soon. :)
I wasn't sure if we had some magic that automatically exposed this in
Python. Guess not. (Well, thinking about it for a bit
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3407#issuecomment-99924989
@tkyaw - People are reporting issues with this patch on Spark 1.3.1 in [the
JIRA issue](https://issues.apache.org/jira/browse/SPARK-3928). You might want
to take a look
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5955#issuecomment-99643326
cc @brennonyork who is converting this whole thing into Python as part of
#5694.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-98234698
Just an FYI: I can't speak on their behalf, but it's likely that the
committers will want to hold off on merging this until after the 1.4 feature
freeze (which I think
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-97314078
@brennonyork In the commit that says "removed license file for
SparkContext", shouldn't we have seen a specific error about RAT checks
failing, as opposed to the generic
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-97586883
> This patch fails RAT tests.

There we go. :)
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-97120034
I think as an initial refactoring, this is great. With this in place, we'll
be able to refactor other lengthy Bash scripts involved in the build into
Python, and add
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-96724927
Yup, that sounds fine. We definitely don't have to tackle everything in one
go, but eventually yeah we should phase out all the Bash-isms.
By the way, I don't
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29160206
--- Diff: dev/run-tests.py ---
@@ -0,0 +1,417 @@
+#!/usr/bin/env python
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29161079
--- Diff: dev/run-tests.py ---
@@ -0,0 +1,417 @@
+#!/usr/bin/env python
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-96726958
https://amplab.cs.berkeley.edu/jenkins/view/Spark/
Looks like planned maintenance?
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29162008
--- Diff: dev/run-tests.py ---
@@ -0,0 +1,413 @@
+#!/usr/bin/env python
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-96753595
Oi Jenkins! Retest this please.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-96749616
Jenkins, test this please.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/1217#issuecomment-96775409
@jegonzal @ankurdave - This PR has gone stale. Do we want to update it or
close it for later?
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-96776081
Jenkinmensch, retest this please.
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29180270
--- Diff: dev/run-tests ---
@@ -17,239 +17,7 @@
# limitations under the License.
#
-# Go to the Spark project root directory
FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29107445
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29111863
--- Diff: dev/run-tests.py ---
@@ -0,0 +1,417 @@
+#!/usr/bin/env python
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29111872
--- Diff: dev/run-tests.py ---
@@ -0,0 +1,417 @@
+#!/usr/bin/env python
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29111967
--- Diff: dev/run-tests.py ---
@@ -0,0 +1,417 @@
+#!/usr/bin/env python
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29111609
--- Diff: dev/run-tests.py ---
@@ -0,0 +1,417 @@
+#!/usr/bin/env python
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29111716
--- Diff: dev/run-tests.py ---
@@ -0,0 +1,417 @@
+#!/usr/bin/env python
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29111886
--- Diff: dev/run-tests.py ---
@@ -0,0 +1,417 @@
+#!/usr/bin/env python
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-96410469
In the near future, I guess we want to move towards also converting the
[`run-tests-jenkins`](https://github.com/apache/spark/blob/master/dev/run-tests-jenkins)
script
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097697
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097702
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097705
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097695
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097719
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5694#issuecomment-96127284
Looks like a great start! Left some comments mostly about Python style and
organization. Will take a closer look next week at the actual logic and flow.
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097732
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097743
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097799
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097716
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r2909
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5694#discussion_r29097845
--- Diff: dev/run-tests ---
@@ -17,239 +17,394 @@
# limitations under the License.
#
-# Go to the Spark project root directory
-FWDIR
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/5561
[SPARK-6219] Reuse pep8.py
Per the discussion in the comments on [this
commit](https://github.com/apache/spark/commit/f17d43b033d928dbc46aef8e367aa08902e698ad#commitcomment-10780649),
this PR
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5027#issuecomment-93452826
Yeah, I asked about that some time ago, and I believe the concern was about
surprising users (by changing defaults) + the fact that the Hadoop 2 distro
used by spark
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-93489570
@ilganeli - I think this is a useful feature, but given that there isn't a
strong committer sponsor (I am just a random contributor) to take this PR
through to the end
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5027#issuecomment-93117302
Confirmed. Simply building Spark with the Hadoop version explicitly set to
1.0.4 resolves this issue.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5027#issuecomment-93069921
This PR seems to have broken spark-perf. Not sure why, but the executor
stderr logs have the following:
```
15/04/14 19:14:46 INFO
```
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5027#issuecomment-93076897
Suspicion is it's just a Hadoop 1 vs. 2 issue since spark-ec2 (which we use
for spark-perf testing) launches clusters with Hadoop 1 by default.
Will confirm
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5172#discussion_r28174579
--- Diff: ec2/spark_ec2.py ---
@@ -502,8 +502,10 @@ def launch_cluster(conn, opts, cluster_name
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5244#issuecomment-90977078
Looks like @voukka closed #4038 in favor of this PR?
In any case, this patch LGTM.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5244#issuecomment-90173226
> Does this also address #4038?

Yes, I believe it does.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5244#issuecomment-90117829
Oh, if you've tested out the relevant commands then that's great. I just
wanted to know that we checked nothing was broken with this change. :)
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5173#issuecomment-89697205
> TODO: ec2/spark-ec2.py is not fully tested with python3.
I can help with this. Do we want to hold off other spark-ec2 PRs until this
one goes in? Do we have
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5173#discussion_r27773756
--- Diff: python/pyspark/cloudpickle.py ---
@@ -40,164 +40,126 @@
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5173#discussion_r27773735
--- Diff: python/pyspark/sql/functions.py ---
@@ -116,7 +114,7 @@ def __init__(self, func, returnType):
def _create_judf(self):
f
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5244#discussion_r27760408
--- Diff: ec2/spark_ec2.py ---
@@ -282,6 +282,10 @@ def parse_args():
parser.add_option(
    "--vpc-id", default=None,
    help="VPC
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5244#issuecomment-89439386
This is a well thought-out change. I prefer the explicit `--private-ips`
option to the implicit address searching of #4038, but I must ask:
@mdagost Did you try
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5172#discussion_r27512141
--- Diff: ec2/spark_ec2.py ---
@@ -502,8 +502,10 @@ def launch_cluster(conn, opts, cluster_name
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5244#issuecomment-87281232
I'll look into this next week, but @mdagost do you mind sharing what you
are using spark-ec2 for? It's always good to hear about real-life use cases
from users
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-86750128
I have advocated in the past for removing the redundant "merges cleanly"
message, but IIRC some people found it useful and wanted to keep it.
One way or the other
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5109#discussion_r27132818
--- Diff: ec2/spark_ec2.py ---
@@ -528,32 +532,53 @@ def launch_cluster(conn, opts, cluster_name):
name = '/dev/sd' + string.letters[i + 1
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5109#discussion_r27143511
--- Diff: ec2/spark_ec2.py ---
@@ -567,16 +592,28 @@ def launch_cluster(conn, opts, cluster_name):
for i in my_req_ids
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5109#issuecomment-86131454
Taking a closer look at this patch, I have to admit I'm neutral on whether
it's worth adding the logic and complexity to support this feature. Granted
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5109#discussion_r27149320
--- Diff: ec2/spark_ec2.py ---
@@ -528,32 +532,53 @@ def launch_cluster(conn, opts, cluster_name):
name = '/dev/sd' + string.letters[i + 1
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5109#discussion_r27152639
--- Diff: ec2/spark_ec2.py ---
@@ -567,16 +592,28 @@ def launch_cluster(conn, opts, cluster_name):
for i in my_req_ids
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5109#issuecomment-86167855
Thanks for explaining the use case @acvogel. I'm not familiar with MLlib,
honestly, but that makes sense.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5109#issuecomment-85521950
OK, sounds good. FWIW, I think it's fine if we don't support the
master-spot/slaves-non-spot combo, as long as we error out early and cleanly.
But if you want to make
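The "error out early and cleanly" idea can be sketched as a fail-fast check; the function and argument names here are hypothetical, not from spark_ec2.py:

```python
# Hedged sketch -- names are made up for illustration. Validate the
# requested spot-instance combination before launching anything, so an
# unsupported combo fails with a clear message instead of a half-built
# cluster.
def validate_spot_options(master_is_spot, slaves_are_spot):
    """Fail fast if the requested spot combination is unsupported."""
    if master_is_spot and not slaves_are_spot:
        raise SystemExit(
            "A spot master with on-demand slaves is not supported. "
            "Either make everything spot or everything on-demand.")

# A supported combination passes silently:
validate_spot_options(master_is_spot=False, slaves_are_spot=True)
```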
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5172#issuecomment-85709569
OK, tried those commands and everything checked out OK. The error when the
AMI was bad was a bit verbose, but that's what we had before I guess.
cc @JoshRosen
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5156#issuecomment-85590781
Bash changes LGTM. :star:
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5172#issuecomment-85682085
Hohoho I guess things have changed around here. :sunglasses:
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5172#issuecomment-85677434
LGTM.
@mrdrozdov Can you post a few invocations that you used to test things out?
Change looks totally good and harmless, but it doesn't hurt to double check
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5172#issuecomment-85677806
Jenkins, test this please. (Though I doubt Jenkins will listen to me.)
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-85069918
Hey @ilganeli, I'm not sure since this is outside my comfort zone. Pinging
@pwendell who may be able to direct you better.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-85119879
@srowen [SPARK-3533](https://issues.apache.org/jira/browse/SPARK-3533) has
a lot of votes and watchers, and there are a few linked questions on Stack
Overflow from
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5109#issuecomment-85314778
Thanks for submitting this patch!
@acvogel Just to be clear, with this PR do we mean to support any of the
following combinations?
* master is spot
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r26813437
--- Diff: core/src/main/scala/org/apache/spark/api/r/RRDD.scala ---
@@ -0,0 +1,515 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/5093#discussion_r26800415
--- Diff: dev/tests/pr_new_dependencies.sh ---
@@ -0,0 +1,36 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5083#issuecomment-83081792
LGTM
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/4038#issuecomment-82091330
@pishen This patch has a merge conflict and needs updating + more testing
per @voukka's previous comment.
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/4919#issuecomment-77883455
Yeah, if @JoshRosen (who wrote the original `setup_boto()` function) can't
take a look, maybe @shivaram can give this a look.
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079948
--- Diff: bin/spark-sql ---
@@ -43,15 +46,12 @@ function usage {
echo
echo "CLI options:"
$FWDIR/bin/spark-class $CLASS --help 2>&1 | grep
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079926
--- Diff: bin/spark-sql ---
@@ -25,12 +25,15 @@ set -o posix
# NOTE: This exact class name is matched downstream by SparkSubmit.
# Any
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079987
--- Diff: bin/spark-submit ---
@@ -17,58 +17,18 @@
# limitations under the License.
#
-# NOTE: Any changes in this file must be reflected
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079978
--- Diff: bin/spark-submit ---
@@ -17,58 +17,18 @@
# limitations under the License.
#
-# NOTE: Any changes in this file must be reflected
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26080561
--- Diff: sbin/spark-daemon.sh ---
@@ -121,45 +121,63 @@ if [ "$SPARK_NICENESS" = "" ]; then
export SPARK_NICENESS=0
fi
+run_command
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26080617
--- Diff: sbin/spark-daemon.sh ---
@@ -121,45 +121,63 @@ if [ "$SPARK_NICENESS" = "" ]; then
export SPARK_NICENESS=0
fi
+run_command
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26080422
--- Diff: sbin/spark-daemon.sh ---
@@ -121,45 +121,63 @@ if [ "$SPARK_NICENESS" = "" ]; then
export SPARK_NICENESS=0
fi
+run_command
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26081687
--- Diff: bin/pyspark2.cmd ---
@@ -17,59 +17,22 @@ rem See the License for the specific language governing
permissions and
rem limitations under
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3916#issuecomment-77954145
Sorry to spam the PR with comments about Bash word splitting, @vanzin. I
think what you have Bash-wise looks good, but as a matter of consistency I just
went through
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079243
--- Diff: bin/spark-class ---
@@ -110,83 +39,48 @@ else
exit 1
fi
fi
-JAVA_VERSION=$($RUNNER -version 2>&1 | grep 'version' | sed 's
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26078786
--- Diff: bin/pyspark2.cmd ---
@@ -17,59 +17,22 @@ rem See the License for the specific language governing
permissions and
rem limitations under
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26078620
--- Diff: bin/pyspark ---
@@ -18,36 +18,24 @@
#
# Figure out where Spark is installed
-FWDIR=$(cd `dirname $0`/..; pwd)
+export
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079792
--- Diff: bin/spark-class ---
@@ -110,83 +39,48 @@ else
exit 1
fi
fi
-JAVA_VERSION=$($RUNNER -version 2>&1 | grep 'version' | sed 's
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079633
--- Diff: bin/spark-class ---
@@ -110,83 +39,48 @@ else
exit 1
fi
fi
-JAVA_VERSION=$($RUNNER -version 2>&1 | grep 'version' | sed 's
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26079837
--- Diff: bin/spark-shell ---
@@ -28,25 +28,24 @@ esac
# Enter posix mode for bash
set -o posix
-## Global script variables
-FWDIR
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26080472
--- Diff: sbin/spark-daemon.sh ---
@@ -121,45 +121,63 @@ if [ "$SPARK_NICENESS" = "" ]; then
export SPARK_NICENESS=0
fi
+run_command
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3916#discussion_r26081000
--- Diff: bin/spark-class ---
@@ -110,83 +39,48 @@ else
exit 1
fi
fi
-JAVA_VERSION=$($RUNNER -version 2>&1 | grep 'version' | sed 's