Github user dbtsai commented on the pull request:
https://github.com/apache/spark/pull/3890#issuecomment-68912504
Since hinge loss in SVM is not differentiable around zero, L-BFGS will not
work correctly. You need to use OWLQN to address the non-differentiable issue.
PS, I tried to
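The smoothness issue raised above can be made concrete. A minimal sketch (Python for illustration; the names are hypothetical and this is not Spark's actual `Gradient` API): the hinge loss has a kink wherever the margin equals 1, so the gradient is undefined there and L-BFGS's smoothness assumption breaks; a subgradient stands in for the gradient at the kink.

```python
# Illustrative sketch only -- not Spark's Gradient API.
# Hinge loss for one example: L(w) = max(0, 1 - y * (w . x)), y in {-1, +1}.
# It is non-differentiable where y * (w . x) == 1.

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def hinge_loss(w, x, y):
    """Hinge loss for a single example."""
    return max(0.0, 1.0 - y * dot(w, x))

def hinge_subgradient(w, x, y):
    """A valid subgradient: -y * x when the margin is violated, else zero."""
    if y * dot(w, x) < 1.0:
        return [-y * xi for xi in x]
    return [0.0] * len(x)
```

At the kink itself any convex combination of the two one-sided derivatives is a valid subgradient; the sketch picks the zero side.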
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3638#issuecomment-68913549
[Test build #25107 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25107/consoleFull)
for PR 3638 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3913#issuecomment-68923630
[Test build #25109 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25109/consoleFull)
for PR 3913 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3158#issuecomment-68925902
[Test build #25110 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25110/consoleFull)
for PR 3158 at commit
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/3899#discussion_r22549295
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/optimization/Gradient.scala ---
@@ -64,11 +64,17 @@ class LogisticGradient extends Gradient {
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3913#issuecomment-68933389
That doc is very outdated: you can just look in the UI after
caching some data; you don't need to visit the logs.

---
If your project is set up for it, you
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3795#discussion_r22552583
--- Diff:
core/src/main/scala/org/apache/spark/rdd/OrderedRDDFunctions.scala ---
@@ -72,6 +72,8 @@ class OrderedRDDFunctions[K : Ordering : ClassTag,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3158#issuecomment-68935727
[Test build #25110 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25110/consoleFull)
for PR 3158 at commit
Github user mccheah commented on the pull request:
https://github.com/apache/spark/pull/3638#issuecomment-68944274
Jenkins, test this please
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3861#issuecomment-68948330
[Test build #25119 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25119/consoleFull)
for PR 3861 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3645#issuecomment-68933786
@smola hey - currently `Utils.localHostName` should respect SPARK_LOCAL_IP
if it is set (it will try to find the associated interface). It will do a
reverse lookup and
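The lookup order described here can be sketched roughly as follows (Python for illustration; the real logic lives in Spark's `Utils.localHostName` and additionally reverse-resolves the interface associated with the address, which this sketch omits):

```python
import os
import socket

def local_host_name(env=None):
    """Rough sketch: prefer the address in SPARK_LOCAL_IP, else the
    default hostname. Spark would also reverse-resolve the configured
    address to find the associated interface; omitted here."""
    env = os.environ if env is None else env
    ip = env.get("SPARK_LOCAL_IP")
    if ip:
        return ip  # Spark does a reverse lookup on this address.
    return socket.gethostname()
```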
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3914#discussion_r22552770
--- Diff: pom.xml ---
@@ -149,7 +149,7 @@
<scala.binary.version>2.10</scala.binary.version>
<jline.version>${scala.version}</jline.version>
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3884#issuecomment-68935347
Jenkins, retest this please.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3914#discussion_r22552780
--- Diff: pom.xml ---
@@ -830,7 +830,17 @@
<artifactId>jackson-core-asl</artifactId>
<version>${jackson.version}</version>
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/3913#issuecomment-68936032
Ooh OK I'll update the doc. That's still a little cumbersome though for
someone who just wants to see how much space an object takes up. Most of the
recommendations on
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-68936098
[Test build #25115 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25115/consoleFull)
for PR 3564 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3884#issuecomment-68936082
[Test build #25114 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25114/consoleFull)
for PR 3884 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-68938486
[Test build #25115 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25115/consoleFull)
for PR 3564 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-68938503
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3871
GitHub user vanzin opened a pull request:
https://github.com/apache/spark/pull/3916
[SPARK-4924] Add a library for launching Spark jobs programmatically.
This change encapsulates all the logic involved in launching a Spark job
into a small Java library that can be easily embedded
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2982#discussion_r22559327
--- Diff: examples/pom.xml ---
@@ -98,143 +98,145 @@
<version>${project.version}</version>
</dependency>
<dependency>
-
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3915#issuecomment-68952746
[Test build #25120 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25120/consoleFull)
for PR 3915 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3638#issuecomment-68953745
[Test build #25116 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25116/consoleFull)
for PR 3638 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3638#issuecomment-68953756
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3861#issuecomment-68956289
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3861#issuecomment-68956280
[Test build #25119 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25119/consoleFull)
for PR 3861 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3916#issuecomment-68957572
[Test build #25118 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25118/consoleFull)
for PR 3916 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3916#issuecomment-68957582
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/3917#issuecomment-68960078
It sounds like you're confirming my suspicion that the default is not
`1.0.4` for any good reason.
Meanwhile,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3910#issuecomment-68962335
[Test build #25123 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25123/consoleFull)
for PR 3910 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3910#issuecomment-68962344
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3914#issuecomment-68955600
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3914#issuecomment-68955592
[Test build #25117 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25117/consoleFull)
for PR 3914 at commit
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/3431#issuecomment-68953584
@scwf I am working on the data type parser. Will try to make a pull request
to your branch soon.
Github user oliviertoupin commented on a diff in the pull request:
https://github.com/apache/spark/pull/205#discussion_r22562649
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlSerializer.scala
---
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3917#discussion_r22562840
--- Diff: pom.xml ---
@@ -119,7 +119,7 @@
<mesos.classifier>shaded-protobuf</mesos.classifier>
<slf4j.version>1.7.5</slf4j.version>
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3918#discussion_r22563038
--- Diff: yarn/pom.xml ---
@@ -41,28 +42,34 @@
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-yarn-api</artifactId>
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3917#issuecomment-68958937
OK, I don't think the issue in SPARK-5115 should be the motivation here,
even if, all else being equal, I'd prefer to use Hadoop 2+ as a default. Really, for
most purposes, the
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/3909#discussion_r22563282
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SparkSQLParser.scala
---
@@ -66,7 +66,13 @@ class SqlLexical(val keywords:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3915#issuecomment-68960106
[Test build #25120 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25120/consoleFull)
for PR 3915 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68960150
I can confirm this is a bug when the keyword is too long; however, this
fix seems a little hacky to me. Sorry, @OopsOutOfMemory, I need more time in
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3915#issuecomment-68960113
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3918#issuecomment-68961163
A `version` here would override what's in the `spark-parent` POM. So this
would force it to `hadoop.version`. `yarn.version` itself defaults to take the
value of
Github user dbtsai commented on the pull request:
https://github.com/apache/spark/pull/3915#issuecomment-68961808
For the MLOR case, it's not as simple as the binary case, but I managed to address
it. It will be in another PR.
Actually, this solves an instability issue when the initial
GitHub user ryan-williams opened a pull request:
https://github.com/apache/spark/pull/3918
specify hadoop version for yarn module deps
fixes IntelliJ's dependency-resolution in the yarn module
You can merge this pull request into a Git repository by running:
$ git pull
GitHub user ryan-williams opened a pull request:
https://github.com/apache/spark/pull/3917
Bump default hadoop.version to 2.4.0 in pom.xml.
Fixes IntelliJ's inability to resolve many hadoop/yarn dependencies.
You can merge this pull request into a Git repository by running:
$
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3910#issuecomment-68958456
[Test build #25123 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25123/consoleFull)
for PR 3910 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3917#issuecomment-68958460
[Test build #25121 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25121/consoleFull)
for PR 3917 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3918#issuecomment-68958464
[Test build #25122 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25122/consoleFull)
for PR 3918 at commit
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/3917#discussion_r22562943
--- Diff: pom.xml ---
@@ -119,7 +119,7 @@
<mesos.classifier>shaded-protobuf</mesos.classifier>
<slf4j.version>1.7.5</slf4j.version>
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/3917#discussion_r22562960
--- Diff: pom.xml ---
@@ -119,7 +119,7 @@
<mesos.classifier>shaded-protobuf</mesos.classifier>
<slf4j.version>1.7.5</slf4j.version>
Github user ryan-williams commented on a diff in the pull request:
https://github.com/apache/spark/pull/3918#discussion_r22563329
--- Diff: yarn/pom.xml ---
@@ -41,28 +42,34 @@
<dependency>
<groupId>org.apache.hadoop</groupId>
Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/3918#issuecomment-68962022
Interesting. So the profiles in the parent POM will override the default
`hadoop.version` there (and set `yarn.version` to the `hadoop.version` only
_after_ the
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3918#issuecomment-68962515
Well right now versions are managed in the parent POM, on purpose. A
declaration in the child would override a declaration in the parent, if it
existed. Children don't
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3914#issuecomment-68930886
[Test build #25111 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25111/consoleFull)
for PR 3914 at commit
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/3871#discussion_r22552390
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/stat/impl/MultivariateGaussian.scala
---
@@ -17,23 +17,84 @@
package
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-68935889
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3638#issuecomment-68944538
[Test build #25116 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25116/consoleFull)
for PR 3638 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3884#issuecomment-68946403
[Test build #25114 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25114/consoleFull)
for PR 3884 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3884#issuecomment-68946424
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user zhzhan opened a pull request:
https://github.com/apache/spark/pull/3914
[SPARK-5108][BUILD] Jackson dependency management for Hadoop-2.6.0 support
There is a dependency compatibility issue: hadoop-2.6.0 currently uses Jackson
1.9.13. Upgrade to the same version to
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3915#issuecomment-68933088
[Test build #25112 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25112/consoleFull)
for PR 3915 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3913#issuecomment-68933621
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3913#issuecomment-68933612
[Test build #25109 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25109/consoleFull)
for PR 3913 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3914#issuecomment-68946821
[Test build #25117 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25117/consoleFull)
for PR 3914 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2982#discussion_r22559248
--- Diff: bin/compute-classpath.cmd ---
@@ -109,6 +115,24 @@ if x%YARN_CONF_DIR%==x goto no_yarn_conf_dir
set CLASSPATH=%CLASSPATH%;%YARN_CONF_DIR%
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68934167
Jenkins this is ok to test
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3893#issuecomment-68934185
I put some thoughts on a new JIRA about how to clean this up overall in
Spark - as for this patch. I'm fine to merge it, but it would be good if we did
a proper
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3869#issuecomment-68942083
LGTM. Merged into master. Thanks!
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3869
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3915#issuecomment-68943726
[Test build #25112 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25112/consoleFull)
for PR 3915 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3915#issuecomment-68943749
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/3914#issuecomment-68946545
@srowen Thanks for the comments. I will try more combinations to see
whether it has other potential impact.
---
If your project is set up for it, you can reply to this
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3916#issuecomment-68948304
[Test build #25118 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25118/consoleFull)
for PR 3916 at commit
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/3916#issuecomment-68948473
A few pre-emptive comments:
- I understand this change is a little large. I'd recommend looking at
separate commits if you feel overwhelmed. But I don't think
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/3431#discussion_r22547365
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/ddl.scala ---
@@ -83,10 +99,104 @@ private[sql] class DDLParser extends
StandardTokenParsers
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/3915
[SPARK-5101] Add common ML math functions
When `x` is positive and large, computing `math.log(1 + math.exp(x))` will
lead to arithmetic
overflow. This will happen when `x > 709.78`, which is not a
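The standard rewrite for this overflow, presumably close to what the PR's math functions implement (sketched here in Python for illustration, not Spark's code): for positive `x`, use the identity `log(1 + exp(x)) = x + log(1 + exp(-x))`, where `exp(-x)` cannot overflow.

```python
import math

def log1p_exp(x):
    """Numerically stable log(1 + exp(x)) (illustrative sketch)."""
    if x > 0:
        # exp(x) would overflow for large x; exp(-x) underflows harmlessly.
        return x + math.log1p(math.exp(-x))
    return math.log1p(math.exp(x))
```

The naive `math.log(1 + math.exp(1000.0))` overflows, while `log1p_exp(1000.0)` returns `1000.0`.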
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68934666
[Test build #25113 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25113/consoleFull)
for PR 3074 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68934672
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68934655
[Test build #25113 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25113/consoleFull)
for PR 3074 at commit
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/3871#issuecomment-68934738
@tgaloppo Now that my confusion is over...LGTM Thanks very much!
CC: @mengxr
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3158#issuecomment-68935741
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3871#issuecomment-68941542
Merged into master. Thanks!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3914#issuecomment-68941634
[Test build #25111 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25111/consoleFull)
for PR 3914 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3914#issuecomment-68941644
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3564#issuecomment-68947371
We've been slowly making progress on the streaming test refactoring and it
looks like we're down to only three failing tests on this branch:
```
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3917#issuecomment-68965042
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/3899#discussion_r22567347
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/optimization/Gradient.scala ---
@@ -64,11 +64,17 @@ class LogisticGradient extends Gradient {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68968909
[Test build #25127 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25127/consoleFull)
for PR 3899 at commit
GitHub user GenTang opened a pull request:
https://github.com/apache/spark/pull/3920
Improvement of the Python converter for HBase (examples)
Hi,
Following the discussion in
Github user coderxiang commented on the pull request:
https://github.com/apache/spark/pull/3919#issuecomment-68972784
@mengxr Do you mean update the existing implementations using the
extractor? Sure I can do that.
Github user ryan-williams closed the pull request at:
https://github.com/apache/spark/pull/3918
Github user ryan-williams commented on the pull request:
https://github.com/apache/spark/pull/3918#issuecomment-68963588
makes sense. Looking again at the POMs, it seems like I need to revise my
mental model a little further.
Step 3 above (`yarn.version` assigned to value of
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3920#issuecomment-68969694
Can one of the admins verify this patch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3431#issuecomment-68970836
[Test build #25128 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25128/consoleFull)
for PR 3431 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3878#issuecomment-68971395
[Test build #25125 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25125/consoleFull)
for PR 3878 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3878#issuecomment-68971397
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3431#issuecomment-68973513
[Test build #25128 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25128/consoleFull)
for PR 3431 at commit
GitHub user guowei2 opened a pull request:
https://github.com/apache/spark/pull/3921
[SPARK-5118][SQL] Fix: create table test stored as parquet as select ..
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/guowei2/spark