Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/3805#issuecomment-69449925
ok, understood. thanks.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3720#discussion_r22757533
--- Diff: mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala
---
@@ -0,0 +1,964 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/3805
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/3709#issuecomment-69450446
Ok.
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/3709
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3935#issuecomment-69454126
[Test build #25359 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25359/consoleFull)
for PR 3935 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3923#issuecomment-69463661
[Test build #25360 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25360/consoleFull)
for PR 3923 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3923#issuecomment-69463666
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user tgaloppo commented on the pull request:
https://github.com/apache/spark/pull/3923#issuecomment-69461424
I have made the requested changes and resolved the merge conflicts.
Question: MultivariateGaussian now keeps a private Breeze version of the
mean vector rather
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3989#issuecomment-69462789
[Test build #25362 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25362/consoleFull)
for PR 3989 at commit
Github user GenTang commented on the pull request:
https://github.com/apache/spark/pull/3986#issuecomment-69456767
As all EC2-related exceptions throw an EC2ResponseError, we use
error_code to identify the specific error of an instance not
existing.
If EC2
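The error_code check described above can be sketched as follows. This is a minimal illustration of the pattern, not the actual spark_ec2.py code: the `EC2ResponseError` stub and the `wait_for_instance` helper are hypothetical stand-ins, since the real `boto.exception.EC2ResponseError` requires a live AWS connection to exercise.

```python
import time

class EC2ResponseError(Exception):
    """Stand-in for boto.exception.EC2ResponseError (carries an error_code)."""
    def __init__(self, error_code):
        super().__init__(error_code)
        self.error_code = error_code

def wait_for_instance(update, retries=5, delay=0.0):
    """Retry update() while EC2 reports the instance as not yet existing.

    Any other EC2 error code is re-raised immediately, which is the point of
    inspecting error_code rather than catching the exception blindly.
    """
    for _ in range(retries):
        try:
            return update()
        except EC2ResponseError as e:
            if e.error_code != "InvalidInstanceID.NotFound":
                raise  # a genuinely different EC2 failure
            time.sleep(delay)  # metadata not propagated yet; try again
    raise EC2ResponseError("InvalidInstanceID.NotFound")

# Simulate an instance that only becomes visible on the third poll.
calls = {"n": 0}
def fake_update():
    calls["n"] += 1
    if calls["n"] < 3:
        raise EC2ResponseError("InvalidInstanceID.NotFound")
    return "running"

print(wait_for_instance(fake_update))  # running
```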
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/3989
[Minor]Resolve sbt warnings during build (MQTTStreamSuite.scala).
cc @andrewor14
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/3783#issuecomment-69463862
We can do this by maintaining a set of such executors, and marking these
as idle as soon as new executors join, for instance.
We can just use
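The bookkeeping being discussed can be modeled roughly like this. Everything here is an illustrative assumption (the real logic lives in Scala in `ExecutorAllocationManager`); it only shows the shape of "maintain a set of such executors and mark them idle once new executors join."

```python
class IdleTracker:
    """Toy model: executors that cannot yet be treated as idle are parked in
    a pending set; the arrival of any new executor releases all of them."""
    def __init__(self):
        self.known = set()
        self.pending = set()

    def park(self, executor_id):
        # Remember an executor that should become idle later.
        self.pending.add(executor_id)

    def on_executor_added(self, executor_id):
        self.known.add(executor_id)
        # New capacity arrived: previously parked executors become idle.
        released, self.pending = self.pending, set()
        return released

tracker = IdleTracker()
tracker.park("exec-1")
tracker.park("exec-2")
print(sorted(tracker.on_executor_added("exec-3")))  # ['exec-1', 'exec-2']
```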
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3935#issuecomment-69455901
[Test build #25359 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25359/consoleFull)
for PR 3935 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3935#issuecomment-69455903
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3986#issuecomment-69461485
@GenTang Did you test this on a few cluster launches to make sure it works?
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3986#discussion_r22759982
--- Diff: ec2/spark_ec2.py ---
@@ -569,15 +569,34 @@ def launch_cluster(conn, opts, cluster_name):
master_nodes = master_res.instances
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3923#issuecomment-69461192
[Test build #25360 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25360/consoleFull)
for PR 3923 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3222#issuecomment-69462061
[Test build #25361 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25361/consoleFull)
for PR 3222 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3783#issuecomment-69463950
[Test build #25363 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25363/consoleFull)
for PR 3783 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3222#issuecomment-69464730
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3783#discussion_r22760847
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -426,39 +433,49 @@ private[spark] class ExecutorAllocationManager(
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3783#issuecomment-69466165
LGTM pending a few minor comment suggestions.
Github user GenTang commented on the pull request:
https://github.com/apache/spark/pull/3986#issuecomment-69471651
Yes, I reproduced the InvalidInstanceID.NotFound by changing the instance ID
before the add_tag action and then re-changing it to the correct ID. However, it will
print the
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3766
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3766#issuecomment-69472422
Thanks! Merged to master and branch-1.2. @pwendell is going to publish
sometime later this weekend.
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3872#discussion_r22762242
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Catalog.scala
---
@@ -41,6 +41,8 @@ trait Catalog {
def
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3986#issuecomment-69473834
Hmm, sucks that boto still prints that error.
Getting errors on `i.update()` seems to be part of AWS's general flakiness
when propagating metadata info. The tag
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3827#discussion_r22762347
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -195,4 +196,19 @@ class SQLQuerySuite extends QueryTest {
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3935#issuecomment-69474872
/cc @rxin
Here is another public API we should consider standardizing for 1.3. Do we
want to have a unified `DESCRIBE` that works for both SQL and Hive
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3935#discussion_r22762496
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/DDLTestSuit.scala ---
@@ -0,0 +1,74 @@
+/*
+* Licensed to the Apache Software
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3989#issuecomment-69465531
[Test build #25362 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25362/consoleFull)
for PR 3989 at commit
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3783#issuecomment-69465512
Ah great, I didn't realize we already maintain such a set in the listener.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3989#issuecomment-69465537
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3783#discussion_r22760832
--- Diff:
core/src/test/scala/org/apache/spark/ExecutorAllocationManagerSuite.scala ---
@@ -597,6 +607,41 @@ class ExecutorAllocationManagerSuite extends
Github user GenTang commented on a diff in the pull request:
https://github.com/apache/spark/pull/3986#discussion_r22761550
--- Diff: ec2/spark_ec2.py ---
@@ -569,15 +569,34 @@ def launch_cluster(conn, opts, cluster_name):
master_nodes = master_res.instances
Github user sarutak closed the pull request at:
https://github.com/apache/spark/pull/3433
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/3433#issuecomment-69473753
O.K, closing this PR.
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3820#discussion_r22762553
--- Diff: sql/core/pom.xml ---
@@ -69,6 +69,11 @@
<version>2.3.0</version>
</dependency>
<dependency>
+ <groupId>org.jodd</groupId>
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/3431#issuecomment-69475221
Yeah, no problem.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3783#discussion_r22760840
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -478,12 +500,21 @@ private[spark] class ExecutorAllocationManager(
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3783#discussion_r22760845
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -466,7 +483,12 @@ private[spark] class ExecutorAllocationManager(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3783#issuecomment-69466712
[Test build #25363 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25363/consoleFull)
for PR 3783 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3783#issuecomment-69466714
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3941
GitHub user GenTang reopened a pull request:
https://github.com/apache/spark/pull/3986
[SPARK-4983]exception handling about adding tags to EC2 instance
As the boto API doesn't support tagging EC2 instances in the same call that
launches them, we use exception-handling code to wait until
Github user GenTang commented on the pull request:
https://github.com/apache/spark/pull/3986#issuecomment-69473316
However, I met a really strange error a moment ago.
I launched a cluster containing 1 master and 1 slave with the script.
The add_tag to master succeeded after two
Github user GenTang closed the pull request at:
https://github.com/apache/spark/pull/3986
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3433#issuecomment-69473685
Yeah, agreed. Let's block this on SPARK-4867, but once that is fixed I'd
love to have a native `concat` function.
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3433#issuecomment-69473691
For now, though, I propose we close this issue.
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3965#discussion_r22762017
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -323,9 +349,9 @@ class SQLContext(@transient val sparkContext:
SparkContext)
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3926#issuecomment-69473524
BTW, I'm going to try to merge #3431 first, which might conflict with this.
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3926#issuecomment-69473490
This is awesome, thanks for cleaning this up. One question, though: do we
really want to have case-insensitive keywords? Are there any systems that
actually do that?
Github user GenTang commented on the pull request:
https://github.com/apache/spark/pull/3986#issuecomment-69473464
Yes, boto will print the error even if we catch the exception, but the script
will continue and the cluster will be successfully launched.
It is just ugly to have such
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3948
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3948#issuecomment-69474130
Thanks, merging to master.
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3931#issuecomment-69474611
This seems like something that would be better suited to a UDF instead of
adding more syntax. Is this a common thing in other systems, or are we
inventing something
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3921#issuecomment-69474588
[Test build #25364 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25364/consoleFull)
for PR 3921 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3921#issuecomment-69474574
Also can you please fix the merge conflict?
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3986#issuecomment-69472483
Are you saying that boto will print an error to screen even if we catch the
exception?
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3431#issuecomment-69473637
Thanks for working on this guys! Merging to master.
@yhuai can you clarify the difference between SchemaRelationProvider and
RelationProvider in the scala doc
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3431
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3935#discussion_r22762492
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/DDLTestSuit.scala ---
@@ -0,0 +1,74 @@
+/*
+* Licensed to the Apache Software
Github user ankurdave commented on the pull request:
https://github.com/apache/spark/pull/1297#issuecomment-69475120
@octavian-ganea IndexedRDD creates a new lineage entry for each operation.
This enables fault tolerance but, as with other iterative Spark programs,
causes stack
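A toy model of the lineage growth @ankurdave mentions (plain Python, not IndexedRDD's Scala internals; `Node` and its recursive `evaluate` are illustrative assumptions): each operation adds one entry to the chain, recursive evaluation walks the whole chain, and materializing intermediate results, as Spark's checkpointing does, truncates it.

```python
class Node:
    """One lineage entry: a parent dataset plus the function applied to it."""
    def __init__(self, parent, fn):
        self.parent, self.fn = parent, fn

    def evaluate(self):
        # Recursion depth grows with the length of the lineage chain.
        base = self.parent.evaluate() if isinstance(self.parent, Node) else self.parent
        return [self.fn(x) for x in base]

dataset = [1, 2, 3]
node = dataset
for _ in range(500):                  # 500 iterative transformations
    node = Node(node, lambda x: x + 1)

print(node.evaluate())                # [501, 502, 503], but ~500 frames deep

# "Checkpointing": materialize once, then keep building on a short chain.
checkpointed = Node(node.evaluate(), lambda x: x)
print(checkpointed.evaluate())        # same result, chain length 1
```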
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3222#issuecomment-69464724
[Test build #25361 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25361/consoleFull)
for PR 3222 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3827#issuecomment-69474520
Thanks! I've merged this to master.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3827
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3872#issuecomment-69474553
Thanks for working on this!
/cc @rxin
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3872#discussion_r22762425
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -293,6 +293,13 @@ class SQLContext(@transient val sparkContext:
SparkContext)
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3921#issuecomment-69474562
ok to test
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3847#issuecomment-69474629
Are you sure? I believe that `ident` does support backticks.
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3965#issuecomment-69474892
Hey, sorry this conflicts now. I really like the change though!
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3783#discussion_r22760861
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -315,7 +319,7 @@ private[spark] class ExecutorAllocationManager(
Github user mulby commented on the pull request:
https://github.com/apache/spark/pull/3978#issuecomment-69473145
@davies thanks for the prompt review. I've included that fix as well.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3987
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3987#issuecomment-69474694
Merged to master and branch-1.2
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/3935#discussion_r22762487
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/ddl.scala ---
@@ -77,6 +78,16 @@ private[sql] class DDLParser extends
StandardTokenParsers
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3821#issuecomment-69476014
I'm also still confused by this solution. The contract should be as
follows: internally we use the Catalyst mutable decimal type. Thus, the UDF code
should only ever see
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3957#issuecomment-69478104
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3957#issuecomment-69478101
[Test build #25365 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25365/consoleFull)
for PR 3957 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3778#issuecomment-69479638
This looks good to me. @liancheng have you looked this over too?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3900#issuecomment-69480361
[Test build #25369 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25369/consoleFull)
for PR 3900 at commit
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/3783#issuecomment-69481186
Added comments.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3960#issuecomment-69483431
[Test build #25372 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25372/consoleFull)
for PR 3960 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3872#issuecomment-69484420
[Test build #25371 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25371/consoleFull)
for PR 3872 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3558#issuecomment-69476078
I think this should probably go in after #3965. We can have some parent
trait in catalyst that the SQL version can extend. That way we can just pass
the configuration
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/3778#issuecomment-69476490
@marmbrus, any comments here? I think this is OK to go
GitHub user marmbrus opened a pull request:
https://github.com/apache/spark/pull/3990
[SPARK-5049][SQL] Fix ordering of partition columns in ParquetTableScan
Followup to #3870. Props to @rahulaggarwalguavus for identifying the issue.
You can merge this pull request into a Git
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3910#issuecomment-69480293
[Test build #563 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/563/consoleFull)
for PR 3910 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3990#issuecomment-69480344
[Test build #25366 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25366/consoleFull)
for PR 3990 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3990#issuecomment-69480346
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3872#issuecomment-69482839
[Test build #25371 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25371/consoleFull)
for PR 3872 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3978#issuecomment-69475501
ok to test
@davies good to go now?
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3946#issuecomment-69475480
/cc @JoshRosen
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3978#issuecomment-69475750
[Test build #562 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/562/consoleFull)
for PR 3978 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3957#issuecomment-69476143
ok to test
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/3957#issuecomment-69476187
The implementation here looks reasonable. @alexbaretta can you elaborate
on what your use case is? We are doing a clean-up of the API and I was
actually wondering if
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/3718#issuecomment-69476551
ping @marmbrus
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3989#issuecomment-69476840
Merging this. Thanks!
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3985