GitHub user chenghao-intel opened a pull request:
https://github.com/apache/spark/pull/4892
[SPARK-6145] [SQL] Fix the bug of nested data type resolving in ORDER BY
You can merge this pull request into a Git repository by running:
$ git pull
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4892#issuecomment-77179530
cc @marmbrus @cloud-fan This is a quick fix; in the long term, we should
resolve the nested attribute sequence in one pass.
What do you think?
Github user gvramana closed the pull request at:
https://github.com/apache/spark/pull/4893
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user matuskik commented on the pull request:
https://github.com/apache/spark/pull/4875#issuecomment-77179421
@jerryshao, I am in the process of developing my application but basically
I have a stream of events that I persist outside of Spark in Cassandra and also
fed into a
Github user gvramana commented on the pull request:
https://github.com/apache/spark/pull/4893#issuecomment-77179751
Sorry, the pull request was raised against the wrong repository. Closed it.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4891#issuecomment-77187119
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4892#issuecomment-77178280
[Test build #28268 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28268/consoleFull)
for PR 4892 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4891#issuecomment-77187099
[Test build #28267 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28267/consoleFull)
for PR 4891 at commit
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3916#issuecomment-77180148
FYI: There is [a question on
SO](http://stackoverflow.com/q/28841940/877069), I believe, about the type of
functionality being added in this PR.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4893#issuecomment-77179213
Can one of the admins verify this patch?
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/4884#issuecomment-77178252
@srowen @liancheng You can reproduce my problem by:
1. clone Spark
2. import it into IntelliJ by choosing import from Maven
3. run SparkSQLCLIDriver in
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4892#issuecomment-77193441
[Test build #28268 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28268/consoleFull)
for PR 4892 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4890#issuecomment-77203459
@liancheng can you put this exclusion in the pom.xml file instead of in
sbt? If you look there we already have other exclusions.
Github user trystanleftwich commented on the pull request:
https://github.com/apache/spark/pull/4881#issuecomment-77208611
So to confirm, I think this function needs to be able to handle 5 states:
Path is a dir which has subdirs
(structure is hdfs://foo/foo1/foo2.jar)
Github user jkleckner commented on the pull request:
https://github.com/apache/spark/pull/4780#issuecomment-77195569
Well, [sbt-assembly
rename](http://stackoverflow.com/questions/24596914/sbt-assembly-rename-class-with-merge-conflicts-shade)
does not perform a shade operation.
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4881#issuecomment-77209251
I tested with this version of `fetchHcfsFile` and my tests pass:
/**
* Fetch a file or directory from a Hadoop-compatible filesystem.
*
*
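The function being discussed fetches a file or directory recursively from a Hadoop-compatible filesystem; the reported bug was that a plain file such as `foo.jar` could end up materialized as a directory. A minimal pure-Python sketch of the intended logic, using the local filesystem as a stand-in for HDFS (`fetch_hcfs_file` here is an illustrative name, not Spark's actual implementation):

```python
import os
import shutil


def fetch_hcfs_file(source: str, dest_dir: str) -> None:
    """Recursively copy `source` into `dest_dir`, preserving the
    file-vs-directory distinction: directories are recreated and
    recursed into, plain files are copied as files (never mkdir'd)."""
    os.makedirs(dest_dir, exist_ok=True)
    target = os.path.join(dest_dir, os.path.basename(source))
    if os.path.isdir(source):
        # Directory: create it if needed, then recurse into children.
        os.makedirs(target, exist_ok=True)
        for child in os.listdir(source):
            fetch_hcfs_file(os.path.join(source, child), target)
    else:
        # Plain file: copy as a file; the bug under discussion created
        # a directory on this path instead.
        shutil.copyfile(source, target)
```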
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4892#issuecomment-77193451
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4894#issuecomment-77221474
[Test build #28269 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28269/consoleFull)
for PR 4894 at commit
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4881#issuecomment-77223185
Hi @trystanleftwich, I just tested my code again with stricter checks,
and files show up as files and directories show up as directories.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4891#issuecomment-77233294
@viirya This is an interesting feature. My personal opinion is that we
shouldn't introduce an `.abnormal` extension because the shutdown hook is not
always guaranteed
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4894#discussion_r25807190
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -639,19 +640,22 @@ private[spark] object Utils extends Logging {
fs:
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/4819#issuecomment-77221980
That's correct: element i should have the error/loss for the ensemble
containing trees {0, 1, ..., i}.
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4881#discussion_r25805285
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -389,16 +389,30 @@ class UtilsSuite extends FunSuite with
ResetSystemProperties {
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4894#discussion_r25807125
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -624,7 +624,8 @@ private[spark] object Utils extends Logging {
case _ =
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4894#issuecomment-77232940
[Test build #28270 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28270/consoleFull)
for PR 4894 at commit
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4894#issuecomment-77220585
@trystanleftwich I'm not trying to steal your thunder :-). It's just that
this is pretty urgent given the release schedule. You'll be given full credit
for the change.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4881#issuecomment-77224961
Leaving a link to an alternate fix in #4894
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4841#issuecomment-77228784
So my proposal is that we revert all of these related patches and submit the
right fix, which is something like the following: before we decrement the
`coresGranted`, we
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4894#discussion_r25806615
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -624,7 +624,8 @@ private[spark] object Utils extends Logging {
case _ =
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4892#issuecomment-77219721
Hmm, this solves some problems, but not all of them:
```scala
sqlContext.jsonRDD(sc.parallelize({a: {a: {a: 1}}, c: 1} ::
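The underlying issue is resolving a nested attribute path such as `a.a.a` against a record like `{a: {a: {a: 1}}, c: 1}`. A minimal sketch, in plain Python rather than Catalyst code, of descending a dotted path one field at a time (all names here are illustrative, not Spark's analyzer API):

```python
def resolve_nested(record: dict, path: str):
    """Resolve a dotted attribute path (e.g. "a.a.a") one field at a
    time, the way a SQL analyzer descends into nested struct types."""
    value = record
    for field in path.split("."):
        if not isinstance(value, dict) or field not in value:
            raise KeyError(f"cannot resolve {field!r} in path {path!r}")
        value = value[field]
    return value
```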
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4881#issuecomment-77221581
> The code will create a directory local_dir/foo/foo.jar and not a file
Hmm. Let me check that.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4841#issuecomment-77227608
Let's summarize the original goals and the current state of SPARK-5771.
From there we can decide how to move forward with this issue:
- Before #4567, a
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4894#discussion_r25806676
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -639,19 +640,22 @@ private[spark] object Utils extends Logging {
fs:
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4894#issuecomment-7723
LGTM provided that tests pass
Github user GenTang commented on the pull request:
https://github.com/apache/spark/pull/3920#issuecomment-77252529
Hi, I am sorry to bother you all, but is this pull request OK to merge?
Github user rnowling commented on the pull request:
https://github.com/apache/spark/pull/4724#issuecomment-77254099
@srowen This affects the Spark 1.2.1 release as well (it wasn't present in
the Spark 1.2.0 release). Please merge into the 1.2 branch.
Github user rnowling commented on the pull request:
https://github.com/apache/spark/pull/4724#issuecomment-77254369
Created JIRA here: https://issues.apache.org/jira/browse/SPARK-6167
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4881#issuecomment-77245736
Let's close this PR in favor of #4894, which I just merged. Thanks for
reporting this blocker.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4894
Github user trystanleftwich closed the pull request at:
https://github.com/apache/spark/pull/4881
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4876#issuecomment-77246969
I commented on the JIRA. This LGTM as an immediate fix - clearly the
property is not correct in the published 2.11 poms.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4894#issuecomment-77249528
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4894#issuecomment-77249474
[Test build #28270 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28270/consoleFull)
for PR 4894 at commit
GitHub user gvramana opened a pull request:
https://github.com/apache/spark/pull/4893
Rebased and merged to update with latest apache-spark/master
Author: Venkata Ramana G ramana.gollam...@huawei.com
You can merge this pull request into a Git repository by running:
$ git pull
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4892#discussion_r25784278
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -385,7 +385,7 @@ class SqlParser extends
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4894#discussion_r25808215
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -639,19 +640,22 @@ private[spark] object Utils extends Logging {
fs:
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4894#discussion_r25808358
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -639,19 +640,22 @@ private[spark] object Utils extends Logging {
fs:
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4894#issuecomment-77237784
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4894#issuecomment-77237745
[Test build #28269 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28269/consoleFull)
for PR 4894 at commit
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/4839#issuecomment-77239829
@shivaram @pwendell Does this look ok? Would be good to get into the next
RC, just because this is pretty broken right now.
Github user calvinjia commented on the pull request:
https://github.com/apache/spark/pull/4867#issuecomment-77242143
@aarondav @pwendell
I've updated the client interface and tested it with basic count/wordcount
and off heap storage.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/4839#issuecomment-77243915
Functionality-wise this looks good to me.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4894#issuecomment-77245616
I'm merging this into master and 1.3. Thanks @vanzin and @trystanleftwich
for your fixes.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-77248119
Hey @suyanNone how is this related to #4886? The JIRA for this PR
(SPARK-6157) is marked as duplicate of the JIRA for that PR (SPARK-6156), so
are these two trying to
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4881#issuecomment-77216017
@trystanleftwich I believe that's correct. To summarize:
- Before this patch, adding `hdfs://single/file.jar` doesn't work (a
regression from Spark 1.2)
-
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4068#issuecomment-77219492
Here is a partial solution: #4892
Github user trystanleftwich commented on the pull request:
https://github.com/apache/spark/pull/4881#issuecomment-77221033
OK, I've pushed my changes and added tests that should cover all the
states. I was getting errors with @vanzin's code snippet if you pass in a dir, i.e.
path =
GitHub user vanzin opened a pull request:
https://github.com/apache/spark/pull/4894
[SPARK-6144] [core] Fix addFile when source files are on hdfs:
The code failed in two modes: it complained when it tried to re-create a
directory that already existed, and it was placing some files
Github user trystanleftwich commented on a diff in the pull request:
https://github.com/apache/spark/pull/4881#discussion_r25806270
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -389,16 +389,30 @@ class UtilsSuite extends FunSuite with
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/4708#issuecomment-77263660
Hi all - I had to revert to my initial implementation since Mark's
suggested refactoring introduced a test failure. Is this good to go?
GitHub user ilganeli opened a pull request:
https://github.com/apache/spark/pull/4895
[SPARK-3533] Add saveAsTextFileByKey() method to RDDs
This patch adds a method to allow saving an RDD as multiple text files
split up by key. I've included a test suite that should verify its
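The idea behind a saveAsTextFileByKey() method can be sketched in plain Python, grouping (key, value) pairs and writing one text file per key. This is an illustrative stand-in for the concept, not the PR's actual RDD implementation, and the file-per-key naming scheme is an assumption:

```python
import os
from collections import defaultdict


def save_as_text_file_by_key(pairs, out_dir):
    """Group (key, value) pairs and write one text file per key,
    one value per line, mimicking what an RDD saved 'by key' would
    produce as separate output files."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(str(value))
    os.makedirs(out_dir, exist_ok=True)
    for key, values in grouped.items():
        with open(os.path.join(out_dir, f"{key}.txt"), "w") as f:
            f.write("\n".join(values) + "\n")
```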
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-77263465
retest this please
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-77264040
[Test build #28271 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28271/consoleFull)
for PR 4895 at commit
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/4878#issuecomment-77264094
Sorry to chime in late, but have you done performance tests with this to
see if it makes a difference? There are two issues I see here:
(1) This isn't the
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/4878#discussion_r25821421
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/FileSystemPersistenceEngine.scala
---
@@ -68,7 +68,7 @@ private[spark] class
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/4878#discussion_r25821370
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockManager.scala ---
@@ -106,7 +106,7 @@ class IndexShuffleBlockManager(conf:
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4780#issuecomment-77265506
I think shading CA's HyperLogLog usage does fix this, but it is a band-aid.
I am still not clear why the user-classpath-first mechanism doesn't resolve
this. This should
Github user markhamstra commented on the pull request:
https://github.com/apache/spark/pull/4708#issuecomment-77267879
Interesting. Looks like the failed test results are no longer available.
Do you recall what the problem was?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4896#issuecomment-77280366
[Test build #28272 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28272/consoleFull)
for PR 4896 at commit
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/4708#issuecomment-77272127
It was a pretty obscure error. I could revert and give you the stack trace
but I played around with it a bit and wasn't able to trace it down.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-77273150
[Test build #28271 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28271/consoleFull)
for PR 4895 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-77273160
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3486#discussion_r25830508
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala ---
@@ -134,6 +135,12 @@ private[spark] class ExecutorRunner(
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/4890#issuecomment-77301314
@pwendell Thanks! Moved to POM.
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-77295859
@andrewor14, I think it is not a duplicate.
Put a MEMORY_AND_DISK level block:
1) Try to put it in the memory store; unrolling fails.
2) Putting it into the disk store succeeds.
3) return
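The put sequence described above (try the memory store, fall back to disk when unrolling fails) can be sketched as follows; all names here are hypothetical, not Spark's BlockManager API:

```python
def put_memory_and_disk(block, memory_store, disk_store, memory_limit):
    """Sketch of a MEMORY_AND_DISK put: try to unroll the block in the
    memory store first; if it does not fit, fall back to the disk store."""
    block_id, data = block
    used = sum(len(b) for b in memory_store.values())
    if used + len(data) <= memory_limit:
        memory_store[block_id] = data  # step 1: unroll succeeds in memory
        return "memory"
    disk_store[block_id] = data        # step 2: fall back, disk put succeeds
    return "disk"
```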
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-77299031
Hi @nchammas - I'd be happy to add tests for the others as soon as I can
figure out this init() error. I'm open to suggestions. Thanks!
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4892#issuecomment-77300520
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/4841#issuecomment-77287549
Hi @andrewor14, I think your proposal is a better way to fix this issue.
Currently the name **Cores Requested** may have two meanings depending on
whether app
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4896#issuecomment-77289567
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4896#issuecomment-77289559
[Test build #28272 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28272/consoleFull)
for PR 4896 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4897#issuecomment-77285477
Can one of the admins verify this patch?
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/4500#issuecomment-77298143
Hi @liancheng, can you take a look at this?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4892#issuecomment-77300516
[Test build #28273 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28273/consoleFull)
for PR 4892 at commit
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-77273061
Thanks for opening this PR! Are you planning to add matching methods/tests
for the Python and Java APIs?
Github user nettok commented on the pull request:
https://github.com/apache/spark/pull/3766#issuecomment-77276529
I can't find the artifact in Maven in
http://mvnrepository.com/artifact/org.apache.spark
Any reason why it hasn't been published yet?
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/4896
[SPARK-6163][SQL] jsonFile should be backed by the data source API
jira: https://issues.apache.org/jira/browse/SPARK-6163
You can merge this pull request into a Git repository by running:
$ git
GitHub user buckheroux opened a pull request:
https://github.com/apache/spark/pull/4897
[SPARK-5929] Pyspark: Register a pip requirements file with spark_context
Ships all packages in the requirements file by installing them locally via
pip and then shipping the packages to the
Github user advancedxy commented on the pull request:
https://github.com/apache/spark/pull/4783#issuecomment-77308814
@shivaram, I added a bunch of comments in the code. I think it's time for you
to review it now. The previous failures are not related to this code
change. Wonder
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4783#issuecomment-77312744
[Test build #28277 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28277/consoleFull)
for PR 4783 at commit
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/4900#discussion_r25842094
--- Diff:
network/common/src/main/java/org/apache/spark/network/TransportContext.java ---
@@ -17,13 +17,10 @@
package
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/3822#issuecomment-77314457
[HIVE-8119](https://issues.apache.org/jira/browse/HIVE-8119) has been
merged into Hive trunk. I'll test my patch based on that.
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/4891#issuecomment-77317379
@andrewor14 Thanks! I got the idea. Although we cannot guarantee that an
app with the `.inprogress` extension is actually in progress, we can be very sure
that an app with
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/4884#issuecomment-77318488
updated, thanks!
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4902#issuecomment-77319193
Can one of the admins verify this patch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4890#issuecomment-77301587
[Test build #28274 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28274/consoleFull)
for PR 4890 at commit
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/4884#issuecomment-77309752
BTW, I am using Intellij community version 14.0.3 on Ubuntu 12.04.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4586#issuecomment-77312791
[Test build #28275 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28275/consoleFull)
for PR 4586 at commit
GitHub user vinodkc opened a pull request:
https://github.com/apache/spark/pull/4900
[SPARK-6178][Shuffle] Removed unused imports
Author: Vinod K C vinod...@huawei.com
You can merge this pull request into a Git repository by running:
$ git pull
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/4884#issuecomment-77317965
Also, I think this PR is not about deploy?
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/4884#issuecomment-77308008
I had a quick chat with @liancheng; Guava for the hive module at
'provided' scope works for him when trying to reproduce my issue, while it doesn't
work for me.