Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16627#discussion_r96778846
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStore.scala
---
@@ -34,6 +35,132 @@ import
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16634
LGTM
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16593#discussion_r96785107
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -1361,6 +1355,38 @@ class HiveDDLSuite
}
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16593#discussion_r96785016
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -1361,6 +1355,38 @@ class HiveDDLSuite
}
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16635
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16635
**[Test build #71628 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71628/testReport)**
for PR 16635 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16635
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71628/
Test FAILed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16593#discussion_r96784647
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -1361,6 +1355,38 @@ class HiveDDLSuite
}
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16593
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71626/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16593
Merged build finished. Test PASSed.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16593
can you also update the test name?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16635
**[Test build #71628 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71628/testReport)**
for PR 16635 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16593
**[Test build #71626 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71626/testReport)**
for PR 16593 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16635#discussion_r96784377
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2497,9 +2497,7 @@ class SQLQuerySuite extends QueryTest with
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16635
ok to test
---
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96784321
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user windpiger commented on the issue:
https://github.com/apache/spark/pull/16593
Is this the added test?
https://github.com/apache/spark/pull/16593/files#diff-b7094baa12601424a5d19cb930e3402fR1385
---
Github user scwf commented on the issue:
https://github.com/apache/spark/pull/16633
@viirya @rxin I support the idea of @wzhfy in the mailing list thread
http://apache-spark-developers-list.1001551.n3.nabble.com/Limit-Query-Performance-Suggestion-td20570.html;
it solved the single partition
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16631
LGTM
---
Github user ron8hu commented on the issue:
https://github.com/apache/spark/pull/16395
@wzhfy For the predicate condition d_date >= '2000-01-27', we do not support it
because Spark SQL casts the d_date column to String first before the comparison.
For the predicate condition d_date >=
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/16395
OK, after more testing, estimation for timestamp/date comparisons is still
useful. E.g. users can write the cast to date explicitly:
```
where d_date > date('2000-08-23'), or
where d_date >
```
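The string-cast fallback mentioned above still filters correctly because ISO-formatted date strings (`'YYYY-MM-DD'`) sort lexicographically in the same order as the dates they represent — the estimation problem is separate from correctness. A minimal sketch in plain Python (not Spark code) illustrating that property:

```python
from datetime import date

# ISO date strings compare lexicographically in the same order as the
# dates they encode, so a comparison like d_date >= '2000-01-27' on the
# string-cast column yields the same rows as a true date comparison.
pairs = [("2000-01-27", "2000-08-23"), ("1999-12-31", "2000-01-01")]
for a, b in pairs:
    assert (a < b) == (date.fromisoformat(a) < date.fromisoformat(b))
```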
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16593
please address
https://github.com/apache/spark/pull/16593#discussion_r96610195
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16593#discussion_r96783342
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/CreateHiveTableAsSelectCommand.scala
---
@@ -45,6 +46,18 @@ case class
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16593#discussion_r96783111
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/CreateHiveTableAsSelectCommand.scala
---
@@ -87,8 +101,8 @@ case class
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96782809
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96782626
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/16633
@rxin Even if it breaks the RDD job chain, I think it is still useful in some
cases; for example, when the number of partitions is big and you only need to
compute one or a few partitions to satisfy the limit.
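The incremental idea described here — stop computing partitions once the limit is satisfied — can be sketched as follows. This is a hypothetical plain-Python illustration of the concept, not Spark's actual limit execution:

```python
def incremental_limit(partitions, limit):
    """Collect rows partition by partition and stop as soon as `limit`
    rows are gathered, so later partitions are never computed."""
    rows = []
    for part in partitions:
        for row in part:
            rows.append(row)
            if len(rows) == limit:
                return rows
    return rows

# Only the first partition is touched when it already satisfies the limit.
parts = [[1, 2, 3], [4, 5], [6]]
assert incremental_limit(parts, 2) == [1, 2]
```

The trade-off under discussion is that each incremental round launches a new job, which is what "breaking the RDD job chain" refers to.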
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96782248
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16627
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71624/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16627
Merged build finished. Test PASSed.
---
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96782094
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16627
**[Test build #71624 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71624/testReport)**
for PR 16627 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96781528
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/16633
@rxin ok. I see what you mean breaking the RDD job chain.
---
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96781278
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96780969
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96780810
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/16395
@ron8hu @rxin It seems we don't need logic for binary filter conditions on
date/timestamp types, because currently Spark will always cast all related
timestamp/date/string comparisons into string
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96780724
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96780571
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16625
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16625
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71622/
Test PASSed.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16628
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16625
**[Test build #71622 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71622/testReport)**
for PR 16625 at commit
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16613
Never mind. On second thought, the feature flag does not really buy us
anything. We just store the original view definition and the column mapping in
the metastore. So I think it is fine to just do the
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96779801
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16633
**[Test build #71627 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71627/testReport)**
for PR 16633 at commit
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96779648
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16628
I am merging this to master.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/16633
@rxin Can you explain it more? I don't get it. Why does it break the RDD job
chain?
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96778868
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96778275
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16628
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71621/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16628
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16628
**[Test build #71621 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71621/testReport)**
for PR 16628 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16636
Uh, not all serdes need the schema. We need to check
`hive.serdes.using.metastore.for.schema`, which contains the list of serdes that
require a user-specified schema:
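The check described here boils down to a membership test against a configured list of serde class names. A hedged sketch in plain Python — the config value shown is illustrative, not the actual default of `hive.serdes.using.metastore.for.schema`:

```python
def uses_metastore_schema(serde, config_value):
    """Return True if `serde` appears in the comma-separated config list
    of serdes whose schema comes from the metastore / user."""
    metastore_serdes = {s.strip() for s in config_value.split(",") if s.strip()}
    return serde in metastore_serdes

# Illustrative config value; the real default list is defined by Hive.
conf = ("org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe,"
        "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe")
assert uses_metastore_schema(
    "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe", conf)
assert not uses_metastore_schema("com.example.CustomSerDe", conf)
```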
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16637
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15211
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71625/
Test PASSed.
---
GitHub user discipleforteen opened a pull request:
https://github.com/apache/spark/pull/16637
[SPARK-19225][SQL]round decimal return normal value but not null
## What changes were proposed in this pull request?
in https://issues.apache.org/jira/browse/SPARK-19225,
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/14461
I did some testing, and this properly shows up in local and standalone mode and
doesn't show the link on YARN. So this LGTM
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15211
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15211
**[Test build #71625 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71625/testReport)**
for PR 15211 at commit
Github user viper-kun commented on the issue:
https://github.com/apache/spark/pull/16632
@srowen I have not tested on the master version. I will do it later.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16593
**[Test build #71626 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71626/testReport)**
for PR 16593 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16633
This breaks the RDD job chain, doesn't it?
---
Github user lins05 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16593#discussion_r96774234
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/CreateHiveTableAsSelectCommand.scala
---
@@ -87,8 +101,8 @@ case class
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96773557
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96773174
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/16631
cc @rxin @cloud-fan Can you please review this?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15211
**[Test build #71623 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71623/testReport)**
for PR 15211 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15211
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71623/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15211
Merged build finished. Test PASSed.
---
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/16629
---
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/16344
@srowen @yanboliang @felixcheung @jkbradley Could you help kick off the new
test please? Thanks.
---
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/15821
> Shall we update this PR to the latest and solicit involvement from Spark
committers?
Yeah, I think it's about ready for that. After we integrate the latest
changes, I'll go over
Github user michalsenkyr commented on the issue:
https://github.com/apache/spark/pull/16541
I added the benchmarks based on the code you provided, but I am getting almost
the same results before and after the optimization (see the description). So
either the added benefit is really small
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/14204
@nblintao Could you close this? And @vanzin, could you take a look at #14461?
---
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/15211#discussion_r96767873
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/classification/LinearSVCSuite.scala ---
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache Software
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16627
**[Test build #71624 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71624/testReport)**
for PR 16627 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15211
**[Test build #71625 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71625/testReport)**
for PR 15211 at commit
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14204
OK, I agree. Originally, I thought it would be helpful for figuring out which
worker an executor belongs to. But if it does not provide very useful
information, I am fine with dropping it.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15211
**[Test build #71623 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71623/testReport)**
for PR 15211 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16625
**[Test build #71622 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71622/testReport)**
for PR 16625 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16633#discussion_r96761037
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -90,21 +94,74 @@ trait BaseLimitExec extends UnaryExecNode with
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16441
---
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/16441
LGTM
Merging with master
Thanks @imatiach-msft and @sethah for reviewing!
---
Github user hhbyyh commented on the issue:
https://github.com/apache/spark/pull/15211
I see, I will work on combining the tests now. Also, I'm wondering whether we
should consider using `c` (cost) to replace `RegParam` in `LinearSVC` to be more
friendly for SVM users. Yet the change may be
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16628
**[Test build #71621 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71621/testReport)**
for PR 16628 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15211
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71620/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15211
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15211
**[Test build #71620 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71620/testReport)**
for PR 15211 at commit
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16628
done
---
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/15211#discussion_r96756732
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/classification/LinearSVCSuite.scala ---
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/15211#discussion_r96756712
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/classification/LinearSVCSuite.scala ---
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/15211#discussion_r96756358
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/classification/LinearSVCSuite.scala ---
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/15211#discussion_r96756901
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/classification/LinearSVCSuite.scala ---
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15211
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15211
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71619/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15211
**[Test build #71619 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71619/testReport)**
for PR 15211 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15211
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/71618/
Test PASSed.
---