Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4763#issuecomment-75949794
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/4765
SPARK-5983 [WEBUI] Don't respond to HTTP TRACE in HTTP-based UIs
Disallow TRACE HTTP method in servlets
You can merge this pull request into a Git repository by running:
$ git pull
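Spark's actual fix is a Jetty/servlet-level change in Scala; as a hedged sketch of the same idea in a self-contained form, an HTTP handler can simply refuse the TRACE method instead of echoing the request back (the handler name, address, and port here are illustrative, not from the PR):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import http.client
import threading

class NoTraceHandler(BaseHTTPRequestHandler):
    """Serves normal methods but refuses TRACE."""

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def do_TRACE(self):
        # Refuse TRACE rather than echoing the request back, which is
        # the cross-site tracing (XST) exposure the PR title refers to.
        self.send_response(405)  # Method Not Allowed
        self.end_headers()

    def log_message(self, *args):
        pass  # silence request logging for the demo

server = HTTPServer(("127.0.0.1", 0), NoTraceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("TRACE", "/")
trace_status = conn.getresponse().status
server.shutdown()
print(trace_status)  # 405
```

`BaseHTTPRequestHandler` dispatches each request to a `do_<METHOD>` method, so defining `do_TRACE` to return 405 is enough; methods without a handler are rejected by default.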
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4767#issuecomment-75958385
[Test build #27950 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27950/consoleFull) for PR 4767 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4765#issuecomment-75964052
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user saucam opened a pull request:
https://github.com/apache/spark/pull/4764
SPARK-6006: Optimize count distinct for high cardinality columns
Currently the plan for count distinct looks like this:
Aggregate false, [snAppProtocol#448],
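The rewrite this PR pursues rests on a standard equivalence: counting distinct values in one aggregate versus de-duplicating first and then counting, the latter being a shape an optimizer can split into partial and final phases. A hedged sqlite3 sketch (table and column names are invented, not taken from the PR's plan) demonstrates the equivalence:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (proto TEXT)")
conn.executemany("INSERT INTO events VALUES (?)",
                 [("http",), ("dns",), ("http",), ("ssh",), ("dns",)])

# Direct count-distinct: one aggregate that must track every seen value.
direct = conn.execute(
    "SELECT COUNT(DISTINCT proto) FROM events").fetchone()[0]

# Rewritten form: de-duplicate first, then count the survivors.
rewritten = conn.execute(
    "SELECT COUNT(*) FROM (SELECT DISTINCT proto FROM events)").fetchone()[0]

print(direct, rewritten)  # 3 3
```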
GitHub user 1123 opened a pull request:
https://github.com/apache/spark/pull/4766
fixing 3 typos in the graphx programming guide
Corrected 3 typos in the GraphX programming guide. I hope this is the
correct way to contribute.
You can merge this pull request into a Git repository
Github user dragos commented on the pull request:
https://github.com/apache/spark/pull/4178#issuecomment-75967598
If my comment is the only thing that was holding this PR from merging, I
withdraw my comment. :)
---
If your project is set up for it, you can reply to this email and
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4178
---
Github user prabeesh commented on the pull request:
https://github.com/apache/spark/pull/4178#issuecomment-75948873
@srowen I think I addressed @dragos's comments. Do those comments need
more updates? If so, could you please explain in detail?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4765#issuecomment-75964039
[Test build #27949 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27949/consoleFull) for PR 4765 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4766#issuecomment-75971531
Since it's strictly a tiny doc typo fix editing 4 words, I don't think we
have to wait on Jenkins and don't need a JIRA.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4763#issuecomment-75949789
[Test build #27948 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27948/consoleFull) for PR 4763 at commit
GitHub user jackylk opened a pull request:
https://github.com/apache/spark/pull/4767
[SPARK-6007][SQL] Add numRows param in DataFrame.show()
It is useful to let users decide the number of rows to show in DataFrame.show.
You can merge this pull request into a Git repository by running:
Github user saucam commented on the pull request:
https://github.com/apache/spark/pull/4764#issuecomment-75952342
@marmbrus can you please guide how to rewrite this in a better way ?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4766#issuecomment-75955147
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4764#issuecomment-75951661
Can one of the admins verify this patch?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4765#issuecomment-75953486
[Test build #27949 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27949/consoleFull) for PR 4765 at commit
Github user prabeesh commented on the pull request:
https://github.com/apache/spark/pull/4178#issuecomment-75968545
@dragos no need to withdraw the comment.
Ultimately my aim is to deliver good code to Spark users.
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4178#issuecomment-75948201
@prabeesh eh OK you mean you addressed that comment? I still am not sure
that the comment from @dragos was addressed as I mentioned before. But since
that `catch` block
Github user petro-rudenko commented on the pull request:
https://github.com/apache/spark/pull/4514#issuecomment-75989874
Having a problem compiling Spark with sbt due to the following error:
```
$ build/sbt -Phadoop-2.4 compile
[error]
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4767#issuecomment-75974679
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4567
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4767#issuecomment-75974667
[Test build #27950 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27950/consoleFull) for PR 4767 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4766
---
Github user brennonyork commented on the pull request:
https://github.com/apache/spark/pull/4734#issuecomment-76009304
@srowen, @pwendell how is this looking? Any followup issues?
---
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/4768
[SPARK-6010] [SQL] Merging compatible Parquet schemas before computing
splits
`ReadContext.init` calls `InitContext.getMergedKeyValueMetadata`, which
doesn't know how to merge conflicting user
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4768#issuecomment-76012212
[Test build #27951 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27951/consoleFull) for PR 4768 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/4729#discussion_r25362135
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -424,7 +424,7 @@ private[hive] class
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4514#issuecomment-75992556
@petro-rudenko try a clean build. This changed code across modules.
---
Github user brennonyork commented on the pull request:
https://github.com/apache/spark/pull/4705#issuecomment-76009819
@maropu thanks!
---
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/4723#issuecomment-75994230
Hi @davies , mind taking a look at this again, I've addressed your
comments, though some duplications are hard to remove, any suggestions?
---
Github user brennonyork commented on the pull request:
https://github.com/apache/spark/pull/4733#issuecomment-76008886
Roger, that makes good sense! Thanks for the update @ankurdave. Since the
patch isn't deprecating anything I'd say we're good to merge in then? Just let
me know if
Github user petro-rudenko commented on the pull request:
https://github.com/apache/spark/pull/4514#issuecomment-75994711
Thanks, works now.
---
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/4723#issuecomment-76003479
LGTM, thanks!
---
Github user brkyvz commented on the pull request:
https://github.com/apache/spark/pull/4754#issuecomment-76019148
@tdas I think that's returning an error now, because the class in fact
doesn't really exist in the jar. Not because it's not in the classpath
---
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/4729#discussion_r25362155
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -458,6 +458,9 @@ private[hive] class
Github user brkyvz commented on the pull request:
https://github.com/apache/spark/pull/4754#issuecomment-76019805
nvm, it should be in spark-streaming-kafka_2.10.jar
---
Github user mccheah commented on the pull request:
https://github.com/apache/spark/pull/4106#issuecomment-76042475
Your views make sense! Thanks a lot =) the discussion was helpful and
clarified the pitfalls here. I have learned a lot.
I'm going to defer to @pwendell for the
Github user foxik commented on the pull request:
https://github.com/apache/spark/pull/4759#issuecomment-76044807
You are right, I just modified the patch to use `createTempDir`.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4769#issuecomment-76047622
I've merged this.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4771#issuecomment-76039615
Can one of the admins verify this patch?
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25373313
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/** Get the
Github user pankajarora12 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25373106
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/**
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4769#issuecomment-76043739
retest this please
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4769#issuecomment-76044064
[Test build #27954 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27954/consoleFull) for PR 4769 at commit
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4770#issuecomment-76045346
There is also a second thing that is broken by this patch:
`YARN_LOCAL_DIRS` can actually be multiple directories, as the name implies.
The BlockManager uses that to
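To make the multiple-directories point concrete, here is a hedged sketch of parsing a comma-separated `YARN_LOCAL_DIRS` value; the helper name and sample paths are illustrative, not Spark's Scala code:

```python
def parse_local_dirs(env):
    """Return all local dirs from a YARN_LOCAL_DIRS-style env mapping.

    YARN_LOCAL_DIRS is a comma-separated list; treating it as a single
    directory silently drops every entry after the first.
    """
    raw = env.get("YARN_LOCAL_DIRS", "")
    return [d.strip() for d in raw.split(",") if d.strip()]

env = {"YARN_LOCAL_DIRS": "/disk1/yarn/local,/disk2/yarn/local"}
print(parse_local_dirs(env))  # ['/disk1/yarn/local', '/disk2/yarn/local']
```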
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4759#issuecomment-76045057
[Test build #27955 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27955/consoleFull) for PR 4759 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4750#issuecomment-76046906
[Test build #27956 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27956/consoleFull) for PR 4750 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4769#issuecomment-76025236
[Test build #27952 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27952/consoleFull) for PR 4769 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4747#discussion_r25365970
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala ---
@@ -345,11 +345,11 @@ private[spark] class Worker(
}
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4747#discussion_r25366051
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -728,6 +746,11 @@ private[spark] object Utils extends Logging {
localDirs
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4770#issuecomment-76028290
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4768#issuecomment-76028360
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user pankajarora12 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25367786
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/**
Github user mccheah commented on the pull request:
https://github.com/apache/spark/pull/4106#issuecomment-76032123
The security model I want to support is: if the client application wants to
execute a job that reads and writes from HDFS that has been secured with
kerberos, they
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4759#issuecomment-76033642
@vanzin Nevermind, both of these end up in the same place with
`createDirectory`. I agree with @vanzin that this could be simplified by just
calling `createTempDir`.
Github user pankajarora12 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25369644
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/**
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25370059
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/** Get the
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4770#issuecomment-76036782
@pankajarora12 Data generated by `DiskBlockManager` cannot be deleted since
it may be used by other executors when using the external shuffle service. You
may be able to
GitHub user piaozhexiu opened a pull request:
https://github.com/apache/spark/pull/4771
[SPARK-6014] [Core] java.io.IOException: Filesystem is thrown when ctrl+c
or ctrl+d spark-sql on YARN
You can merge this pull request into a Git repository by running:
$ git pull
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4769#issuecomment-76046982
Jenkins doesn't run anything for docs so we don't need to trigger/wait for
Jenkins here.
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4769#issuecomment-76047828
Yeah, actually just dumb habit. For non-trivial doc changes I try the docs
build locally to sniff out syntax errors
---
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4106#issuecomment-76033180
if the client application wants to execute a job that reads and writes
from HDFS that has been secured with kerberos, they should be allowed to do so
if they have the
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25369132
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/** Get the
Github user pankajarora12 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25370157
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/**
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/4757#issuecomment-76021386
Merged into master and branch-1.3. Thanks!
---
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4759#issuecomment-76026644
Ok, I think this might work. But I'd just call `createTempDir` instead of
`createDirectory` + `registerForWhatever`.
I was worried that this might cause shuffle
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4106#issuecomment-76028480
Hi @mccheah, just to clarify my comments, I'm thinking about someone
looking at this feature and thinking that hey, Spark Standalone now supports
kerberos!, while that's
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25367198
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/** Get the
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/4750#issuecomment-76033308
OK that should fix those 2 issues
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25368698
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/** Get the
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25370530
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/** Get the
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/4769
SPARK-5930 [DOCS] Documented default of spark.shuffle.io.retryWait is
confusing
Clarify default max wait in spark.shuffle.io.retryWait docs
CC @andrewor14
You can merge this pull request
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4734#issuecomment-76026448
I like it; would like to get a reaction from @pwendell
---
GitHub user pankajarora12 opened a pull request:
https://github.com/apache/spark/pull/4770
[CORE][YARN] SPARK-6011: Use the current working directory for spark local dirs
instead of the application directory so that spark local files get deleted when
an executor exits abruptly.
Spark uses
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4759#issuecomment-76031791
Hm, I was going to say that `createTempDir` will do something more, and
create a subdirectory. But now looking at the code, isn't the point to create a
*sub*-directory of
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4750#issuecomment-76033540
[Test build #27953 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27953/consoleFull) for PR 4750 at commit
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/4763#issuecomment-76036245
Please see my comments on the JIRA; I think parts of this change need to be
discussed. Thanks!
---
Github user mccheah commented on the pull request:
https://github.com/apache/spark/pull/4106#issuecomment-76037757
Come to think of it, with my current approach, since the keytab is
specified in the driver's SparkConf, theoretically different Spark applications
can specify different
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4106#issuecomment-76038575
Come to think of it, with my current approach, since the keytab is
specified in the driver's SparkConf, theoretically different Spark applications
can specify different
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4757
---
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/4754#issuecomment-76023504
No, I verified the class does exist in the jar.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4768#issuecomment-76028345
[Test build #27951 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27951/consoleFull) for PR 4768 at commit
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4759#issuecomment-76032633
@srowen not sure I understand; `createTempDir` takes a `root` just like
`createDirectory`.
---
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/4750#issuecomment-76033436
Note I corrected a bunch of other doc lines in those files too but did not
edit the text content at all.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4769#issuecomment-76035560
[Test build #27952 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27952/consoleFull) for PR 4769 at commit
Github user pankajarora12 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25369938
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/**
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4750#issuecomment-76049944
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4771#issuecomment-76050548
ok to test
---
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2504#issuecomment-76054035
Minor side comment: Can we update the title of this PR to something like:
```
[SPARK-3172] [SPARK-3577] Concise description of changes goes here
```
Github user pankajarora12 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4770#discussion_r25377996
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
/**
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4767#discussion_r25375265
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala ---
@@ -159,9 +159,11 @@ class DataFrame protected[sql](
/**
*
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4750#issuecomment-76049926
[Test build #27953 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27953/consoleFull) for PR 4750 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4771#issuecomment-76050964
[Test build #27957 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27957/consoleFull) for PR 4771 at commit
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4771#issuecomment-76052585
I'm a little worried that this is treating the symptom and not the problem.
`ApplicationMaster.scala` has a shutdown hook to stop the SparkContext if the
user hasn't done
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3916#issuecomment-76054123
[Test build #27958 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27958/consoleFull) for PR 3916 at commit
Github user pankajarora12 commented on the pull request:
https://github.com/apache/spark/pull/4770#issuecomment-76056141
I thought about that case too. Since we will have many executors on
one node, YARN will use a different local dir for launching each executor,
and that
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4769
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4767#discussion_r25375180
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -272,9 +272,9 @@ def isLocal(self):
return self._jdf.isLocal()
-def
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4767#discussion_r25375221
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala ---
@@ -159,9 +159,11 @@ class DataFrame protected[sql](
/**
*