Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5119#discussion_r26925330
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/QueryTest.scala ---
@@ -1,140 +0,0 @@
-/*
--- End diff ---
yes. These are the two
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-84933397
[Test build #28986 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28986/consoleFull)
for PR 4491 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-84933426
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5129#issuecomment-84939647
[Test build #28991 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28991/consoleFull)
for PR 5129 at commit
Github user kellyzly commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-84855188
@steveloughran: I don't understand why we need to make
CryptoOutputStream.scala#close thread-safe. Is there a situation where multiple
threads call this function at the same time?
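For context, the usual reason to make close() safe under concurrent calls is that a shutdown hook or finalizer can race with normal cleanup; an atomic flag makes the close idempotent. A minimal Java sketch of that pattern (hypothetical class name, not the actual CryptoOutputStream code):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch, not Spark's actual code: an idempotent, thread-safe
// close() guarded by an atomic flag, so that concurrent callers (e.g. a
// shutdown hook racing with normal cleanup) release resources exactly once.
public class SafeCloseStream extends OutputStream {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    int cleanupCount = 0;  // exposed only to demonstrate single cleanup

    @Override
    public void write(int b) throws IOException {
        if (closed.get()) {
            throw new IOException("stream is closed");
        }
        // ... a real implementation would encrypt and forward the byte ...
    }

    @Override
    public void close() {
        // compareAndSet guarantees only the first caller runs the cleanup,
        // no matter how many threads call close() at the same time.
        if (closed.compareAndSet(false, true)) {
            cleanupCount++;  // stands in for freeing native buffers etc.
        }
    }
}
```

Calling close() twice (from one thread or several) runs the cleanup body exactly once.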
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/5132#issuecomment-84855187
ok to test
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/5132#issuecomment-84855200
LGTM pending Jenkins.
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/5130#issuecomment-84860442
Do InvocationTargetExceptions only wrap Exceptions and not all Throwables?
It will wrap Errors, too. I ran the following code on my machine:
```Scala
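The snippet itself is truncated in this archive, but the behavior is easy to reproduce with a short reflection example (Java here; `WrapDemo`/`boom` are made-up names for illustration):

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

// Demonstrates that Method.invoke wraps *any* Throwable thrown by the
// target, including Errors, in an InvocationTargetException. Matching
// only on Exception-typed causes would therefore miss Errors.
public class WrapDemo {
    public static void boom() {
        throw new OutOfMemoryError("simulated");
    }

    public static void main(String[] args) throws Exception {
        Method m = WrapDemo.class.getMethod("boom");
        try {
            m.invoke(null);
        } catch (InvocationTargetException e) {
            // The wrapped cause is the Error thrown by boom().
            System.out.println(e.getCause().getClass().getName());
        }
    }
}
```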
Github user watermen commented on the pull request:
https://github.com/apache/spark/pull/5132#issuecomment-84897442
@liancheng
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5134#issuecomment-84930028
[Test build #28988 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28988/consoleFull)
for PR 5134 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4697#issuecomment-84936129
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4697#issuecomment-84936098
[Test build #28985 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28985/consoleFull)
for PR 4697 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5042#issuecomment-84939649
[Test build #28992 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28992/consoleFull)
for PR 5042 at commit
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/5061#discussion_r26916348
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -212,6 +212,22 @@ class SQLContext(@transient val sparkContext:
SparkContext)
Github user kellyzly commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-84893111
@steveloughran: in Hadoop, if we need to add a native lib path to the Hadoop
execution path, we need to export LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=x
in hadoop,
Github user watermen commented on the pull request:
https://github.com/apache/spark/pull/5080#issuecomment-84899209
@yhuai Any more comment on this?
Github user MechCoder commented on the pull request:
https://github.com/apache/spark/pull/4986#issuecomment-84913707
What would be the reason to add a Save/Load Version 1.0? What are the
expected changes in future versions?
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/5134
[SPARK-6466][SQL] Remove unnecessary attributes when resolving GroupingSets
When resolving `GroupingSets`, we currently list all outputs of
`GroupingSets`'s child plan. However, the columns that are
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5119#discussion_r26925134
--- Diff: pom.xml ---
@@ -1472,6 +1474,46 @@
<groupId>org.scalatest</groupId>
<artifactId>scalatest-maven-plugin</artifactId>
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4930#issuecomment-84843466
[Test build #28981 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28981/consoleFull)
for PR 4930 at commit
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/5130#issuecomment-84860777
If they wrap Errors as well, then the fix would be to replace Exception
with Throwable in the match block of the InvocationTargetException cause.
This has been
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4930#issuecomment-84876719
[Test build #28982 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28982/consoleFull)
for PR 4930 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4930#issuecomment-84876740
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5119#discussion_r26925089
--- Diff: pom.xml ---
@@ -158,6 +158,7 @@
<fasterxml.jackson.version>2.4.4</fasterxml.jackson.version>
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4697#issuecomment-84940364
[Test build #28994 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28994/consoleFull)
for PR 4697 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5111#issuecomment-85198759
OK, never mind my question. I think it's clear you know what to do here and
it's as you think it should be. I'll leave it open a bit for any other opinions
but if it's
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5144#issuecomment-85221484
[Test build #29030 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29030/consoleFull)
for PR 5144 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5144#issuecomment-85220925
[Test build #29030 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29030/consoleFull)
for PR 5144 at commit
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-85229862
@tnachen I'm stumped at the moment. I've gone so far as to exclude the
explicit docker/spark-mesos/Dockerfile path, but it is still not excluded. I
had put this down
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5118
Github user hunglin commented on the pull request:
https://github.com/apache/spark/pull/5124#issuecomment-85186794
@JoshRosen thanks for the suggestions. Let me work on those tonight.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5143#issuecomment-85215859
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5145#issuecomment-85223025
Agree, I like this one. Fail-fast checks should go first.
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5085#discussion_r26985197
--- Diff: launcher/src/main/java/org/apache/spark/launcher/Main.java ---
@@ -47,10 +47,14 @@
* character. On Windows, the output is a command line
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4867#issuecomment-85183514
The master SBT build is currently broken for a few Hadoop profiles due to
dependency issues. Do you think that this patch may have been responsible? I
noticed that
Github user yuecong commented on the pull request:
https://github.com/apache/spark/pull/5111#issuecomment-85188711
Let me state my opinions more clearly.
1. Change '$ PYSPARK_DRIVER_PYTHON=ipython
PYSPARK_DRIVER_PYTHON_OPTS=notebook --pylab inline ./bin/pyspark' to $
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4337#issuecomment-85192767
[Test build #29024 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29024/consoleFull)
for PR 4337 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4337#issuecomment-85192800
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-85197419
[Test build #29026 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29026/consoleFull)
for PR 4435 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-85197443
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user brennonyork commented on a diff in the pull request:
https://github.com/apache/spark/pull/5093#discussion_r26983769
--- Diff: dev/tests/pr_new_dependencies.sh ---
@@ -0,0 +1,85 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation
GitHub user brennonyork opened a pull request:
https://github.com/apache/spark/pull/5145
[SPARK-6477][Build]: Run MIMA tests before the Spark test suite
This moves the MIMA checks to before the full Spark test suite so that,
if new PRs fail the MIMA check, they will return much
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5144#issuecomment-85221495
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/5074#issuecomment-85229756
@srowen I've got some more comments. Going to be fairly nitpicky on this
because I think it'd benefit people to be as clear as possible.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5146#issuecomment-85231185
[Test build #29035 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29035/consoleFull)
for PR 5146 at commit
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/5074#discussion_r26987263
--- Diff: docs/programming-guide.md ---
@@ -1086,6 +1086,62 @@ for details.
</tr>
</table>
+### Shuffle operations
+
+Certain operations
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4986#discussion_r26976590
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/GaussianMixtureModel.scala
---
@@ -83,5 +95,82 @@ class GaussianMixtureModel(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5143#issuecomment-85215748
[Test build #29027 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29027/consoleFull)
for PR 5143 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5085#discussion_r26985093
--- Diff: bin/spark-class ---
@@ -40,36 +40,24 @@ else
fi
fi
-# Look for the launcher. In non-release mode, add the compiled classes
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4027#issuecomment-85228821
[Test build #29034 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29034/consoleFull)
for PR 4027 at commit
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4129#issuecomment-85229133
@CodingCat sorry you're right, I didn't realize CPUS_PER_TASK was
configured to that flag. LGTM
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5142#issuecomment-85228796
[Test build #29033 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29033/consoleFull)
for PR 5142 at commit
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/5074#discussion_r26987355
--- Diff: docs/programming-guide.md ---
@@ -1086,6 +1086,62 @@ for details.
</tr>
</table>
+### Shuffle operations
+
+Certain operations
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/5118#issuecomment-85180705
Merged into master. Thanks!
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-85179771
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-85179766
[Test build #29028 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29028/consoleFull)
for PR 5093 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-85179769
[Test build #29028 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29028/consoleFull)
for PR 5093 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5143#issuecomment-85179800
[Test build #29027 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29027/consoleFull)
for PR 5143 at commit
Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/4632#issuecomment-85180665
Thanks @pwendell. I had stumbled across that
[SPARK-3377](https://issues.apache.org/jira/browse/SPARK-3377) work as well.
I think there are solid arguments
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5142#issuecomment-85182245
[Test build #29029 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29029/consoleFull)
for PR 5142 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5143#issuecomment-85186045
Seems fine to me.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/5144#issuecomment-85219844
@andrewor14 Let me know what you think!
Github user vlyubin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4859#discussion_r26986539
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JDBCRelation.scala ---
@@ -115,18 +116,21 @@ private[sql] class DefaultSource extends
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/5074#discussion_r26986920
--- Diff: docs/programming-guide.md ---
@@ -1086,6 +1086,62 @@ for details.
</tr>
</table>
+### Shuffle operations
+
+Certain operations
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5142#issuecomment-85220210
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5142#issuecomment-85220152
[Test build #29029 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29029/consoleFull)
for PR 5142 at commit
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4984#issuecomment-85228355
@pwendell @andrewor14
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4859#issuecomment-85228515
[Test build #29032 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29032/consoleFull)
for PR 4859 at commit
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-85228145
@hellertime are you able to figure out the RAT problem?
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/5074#discussion_r26987121
--- Diff: docs/programming-guide.md ---
@@ -1086,6 +1086,62 @@ for details.
</tr>
</table>
+### Shuffle operations
+
+Certain operations
GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/5146
[SPARK-6475][SQL] recognize array types when infer data types from JavaBeans
Right now, if there is an array field in a JavaBean, the user would see an
exception in `createDataFrame`. @liancheng
You
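The inference presumably walks the bean's properties via reflection; a minimal sketch of how array-typed properties can be recognized (hypothetical `Point` bean, not Spark's actual implementation):

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

// Hypothetical sketch of the underlying idea: inspect JavaBean properties
// via reflection and recognize array-typed fields so they can be mapped to
// an array data type instead of raising an exception. Not Spark's code.
public class BeanArrayDemo {
    public static class Point {
        private double[] coords;
        public double[] getCoords() { return coords; }
        public void setCoords(double[] c) { coords = c; }
    }

    public static String describe(Class<?> beanClass) throws Exception {
        StringBuilder sb = new StringBuilder();
        // Stop at Object.class to skip the synthetic "class" property.
        for (PropertyDescriptor pd :
                Introspector.getBeanInfo(beanClass, Object.class).getPropertyDescriptors()) {
            Class<?> t = pd.getPropertyType();
            if (t.isArray()) {
                // An array property maps to "array of <component type>".
                sb.append(pd.getName())
                  .append(" -> array of ")
                  .append(t.getComponentType().getName());
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(describe(Point.class));
    }
}
```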
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-85103963
[Test build #29003 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29003/consoleFull)
for PR 4435 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4588#discussion_r26957574
--- Diff: core/src/main/scala/org/apache/spark/rpc/RpcEnv.scala ---
@@ -0,0 +1,412 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4588#discussion_r26958295
--- Diff: core/src/main/scala/org/apache/spark/rpc/RpcEnv.scala ---
@@ -0,0 +1,412 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4588#discussion_r26958330
--- Diff: core/src/main/scala/org/apache/spark/rpc/RpcEnv.scala ---
@@ -0,0 +1,412 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5111#issuecomment-85116122
But does this then work with ipython 2? I wouldn't want to necessarily
'break' support, even if it's just in an example. Or are two examples called
for? Ideally, one
Github user brennonyork commented on a diff in the pull request:
https://github.com/apache/spark/pull/5093#discussion_r26959105
--- Diff: dev/tests/pr_new_dependencies.sh ---
@@ -0,0 +1,77 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5139#issuecomment-85116031
[Test build #29004 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29004/consoleFull)
for PR 5139 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-85118421
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5140#issuecomment-85118446
[Test build #29005 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29005/consoleFull)
for PR 5140 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-85118397
[Test build #29006 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29006/consoleFull)
for PR 5093 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5093#issuecomment-85118409
[Test build #29006 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29006/consoleFull)
for PR 5093 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4588#discussion_r26959685
--- Diff: core/src/main/scala/org/apache/spark/rpc/akka/AkkaRpcEnv.scala ---
@@ -0,0 +1,318 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/4997#discussion_r26953234
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala ---
@@ -111,9 +111,11 @@ private[python] class PythonMLLibAPI extends
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/5014#discussion_r26953213
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -557,7 +557,6 @@
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-85098523
[Test build #29000 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29000/consoleFull)
for PR 4435 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4588#discussion_r26958734
--- Diff: core/src/main/scala/org/apache/spark/rpc/akka/AkkaRpcEnv.scala ---
@@ -0,0 +1,318 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-85115634
My entirely personal opinion is I'm neutral on whether this is worth more
API methods.
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4588#discussion_r26959384
--- Diff: core/src/main/scala/org/apache/spark/rpc/akka/AkkaRpcEnv.scala ---
@@ -0,0 +1,318 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user nkronenfeld commented on the pull request:
https://github.com/apache/spark/pull/5140#issuecomment-85117276
I'm not sure how mesos and yarn clusters are started/stopped (nor do I have
such clusters on which to test), so I'm not sure how this will affect them. I
think the
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5136#discussion_r26959388
--- Diff:
core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala ---
@@ -91,7 +90,12 @@ private[spark] class DiskBlockManager(blockManager:
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/4986#issuecomment-85086722
We want to allow the model data to be extended (with defaults to allow
backwards compatibility). There might be unforeseeable reasons to change the
format, too.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-85096699
@sryza When creating a Mesos Task, one usually defines the resources
required for the execution of the task and the resources required to run the
Mesos executor. Again
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-85096806
Fractional is definitely supported, since it's just cpu shares in the end.
We should make it a double
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5118#issuecomment-85101090
[Test build #29001 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/29001/consoleFull)
for PR 5118 at commit
Github user nkronenfeld closed the pull request at:
https://github.com/apache/spark/pull/3699
Github user nkronenfeld commented on the pull request:
https://github.com/apache/spark/pull/3699#issuecomment-85112967
I'm redoing this in the latest code, remaking the PR from scratch, to
alleviate merge issues. I'll post the new PR here when it's made.
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4588#discussion_r26958715
--- Diff: core/src/main/scala/org/apache/spark/rpc/akka/AkkaRpcEnv.scala ---
@@ -0,0 +1,318 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/5139
[SPARK-6369] [SQL] [WIP] Uses commit coordinator to help committing Hive
and Parquet tables
This PR leverages the output commit coordinator introduced in #4066 to help
committing Hive and
GitHub user nkronenfeld opened a pull request:
https://github.com/apache/spark/pull/5140
[Spark-4848] Stand-alone cluster: Allow differences between workers with
multiple instances
This refixes #3699 with the latest code.
This fixes SPARK-4848
I've changed the