Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18804
**[Test build #80192 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80192/testReport)**
for PR 18804 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18804
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80192/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18804
Merged build finished. Test FAILed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18779
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18779
**[Test build #80191 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80191/testReport)**
for PR 18779 at commit
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/18538
ok to test
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18628
**[Test build #80193 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80193/testReport)**
for PR 18628 at commit
GitHub user actuaryzhang opened a pull request:
https://github.com/apache/spark/pull/18831
[SPARK-21622][ML][SparkR] Support offset in SparkR GLM
## What changes were proposed in this pull request?
Support offset in SparkR GLM #16699
You can merge this pull request into a Git
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18828
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18828
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80187/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18828
**[Test build #80187 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80187/testReport)**
for PR 18828 at commit
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18413
@HyukjinKwon Could you help review this?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18809
Thank you @felixcheung and @actuaryzhang.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18668
**[Test build #80186 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80186/testReport)**
for PR 18668 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18779
Our Dataset APIs support `conf.groupByOrdinal`? If so, this would surprise
me. `conf.groupByOrdinal` was introduced for SQL APIs only.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18779
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80191/
Test FAILed.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18830
Nice catch, looks good to me.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18828
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18828
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80190/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18828
**[Test build #80190 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80190/testReport)**
for PR 18828 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131060237
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
It is documented: http://spark.apache.org/contributing.html
It's been the convention forever and it's also good to use one way rather
than multiple, so I'd prefer us just using that ... until
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18668
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18668
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80186/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18628
**[Test build #80193 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80193/testReport)**
for PR 18628 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18628
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18628
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80193/
Test PASSed.
---
GitHub user ConeyLiu opened a pull request:
https://github.com/apache/spark/pull/18830
[SPARK-21621][Core] Reset numRecordsWritten after
DiskBlockObjectWriter.commitAndGet called
## What changes were proposed in this pull request?
We should reset numRecordsWritten to zero
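The commit/revert contract behind this fix can be sketched as follows. This is a minimal illustration, not the actual `DiskBlockObjectWriter` code; `SketchWriter` and the simplified `FileSegment` are assumptions made for the example. The point is that `commitAndGet` must reset the per-commit record counter, so that a later revert only discards records written since the last commit:

```scala
// Toy model of the bug class in SPARK-21621 (not Spark's real writer):
// the writer tracks records written since the last commit, and
// commitAndGet must reset that counter so revertPartialWrites does not
// also "revert" records that were already committed.
case class FileSegment(offset: Long, length: Long, numRecords: Long)

class SketchWriter {
  private var committedPosition = 0L
  private var position = 0L
  private var numRecordsWritten = 0L // records since the last commit

  def write(recordSize: Long): Unit = {
    position += recordSize
    numRecordsWritten += 1
  }

  def commitAndGet(): FileSegment = {
    val segment =
      FileSegment(committedPosition, position - committedPosition, numRecordsWritten)
    committedPosition = position
    numRecordsWritten = 0L // the reset the PR title describes
    segment
  }

  // Discards only the records written after the last commit.
  def revertPartialWrites(): Long = {
    position = committedPosition
    val reverted = numRecordsWritten
    numRecordsWritten = 0L
    reverted
  }
}
```

Without the reset in `commitAndGet`, a revert after a commit would report (and undo accounting for) records that had in fact been committed.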
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131059429
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18830
Can one of the admins verify this patch?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
I think we should rather make https://spark-prs.appspot.com recognise the
GitHub approval as well .. I considered a GitHub approval as an approval for
this patch.
BTW, for now, is it
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18628#discussion_r131059886
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIService.scala
---
@@ -57,6 +59,19 @@ private[hive]
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/18830
@cloud-fan @vanzin Would you mind taking a look? Thanks a lot.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131059952
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/18809
LGTM
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18668
@vanzin @zhzhan @tejasapatil Could you also help review the documentation
and the fix? Thanks!
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18824#discussion_r131059637
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -413,7 +414,10 @@ private[hive] class HiveClientImpl(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131059673
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/18830
You can see here
[L208](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/storage/DiskBlockObjectWriter.scala#L208):
when we call `revertPartialWritesAndClose`,
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
@gatorsmile
scala> df.groupBy(lit(2)).agg(col("a")).queryExecution.logical
res6: org.apache.spark.sql.catalyst.plans.logical.LogicalPlan =
'Aggregate [2], [2 AS 2#51,
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
Hm, do you mean this line?
> Reviewers can indicate that a change looks suitable for merging with a
comment such as: "I think this patch looks good". Spark uses the LGTM
convention
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18831
**[Test build #80194 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80194/testReport)**
for PR 18831 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18804
**[Test build #80195 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80195/testReport)**
for PR 18804 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18828
Still looking into it, but the failure is related to reuse exchange and
caching.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18804
**[Test build #80195 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80195/testReport)**
for PR 18804 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18831
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80194/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18831
Merged build finished. Test FAILed.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
I was wondering whether other methods, such as SGTM or a GitHub approval, are
not allowed as an approval for a patch by rule. I usually say it based on
documentation or references to other people, and I
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
Ah OK. That's what we are discussing here. In the past it has always been
an explicit "LGTM". That was defined before github had even the approval
feature. Now most committers are actually not ASF
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18749
RTC means a vote happens for each change:
https://www.apache.org/foundation/glossary.html#ReviewThenCommit
That's not what we do. What debate are you referring to?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
@HyukjinKwon you weren't a committer before :)
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
@srowen search for "RTC vs CTR (was: Concerning Sentry...)"
From Todd Lipcon:
```
I don't have incubator stats... nor do I have a good way to measure "most
active" or "most
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18821
**[Test build #3875 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3875/testReport)**
for PR 18821 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18815
This won't necessarily be available to people viewing the UI? It could be
an HDFS location.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18832
@sethah
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18821
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18779
@gatorsmile @viirya I looked into why it applied some analyzer rules into
already-analyzed plans and I noticed that some rules used
`transform/transformUp` instead of `resolveOperators` in `apply`.
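The distinction pointed at here can be sketched with a toy tree; this is an illustration of the traversal semantics, not Spark's actual `TreeNode`/`LogicalPlan` code, and the names `Node`, `transformUp`, and `resolveOperators` below are simplified stand-ins. A `transformUp`-style traversal applies a rule everywhere, while a `resolveOperators`-style traversal skips subtrees already marked as analyzed, which is why rules using the former can fire again on already-analyzed plans:

```scala
// Toy plan node carrying an "already analyzed" flag.
case class Node(name: String, analyzed: Boolean, children: Seq[Node] = Nil)

// Applies the rule bottom-up to every node, regardless of the flag.
def transformUp(n: Node)(rule: Node => Node): Node =
  rule(n.copy(children = n.children.map(transformUp(_)(rule))))

// Applies the rule bottom-up but leaves analyzed subtrees untouched.
def resolveOperators(n: Node)(rule: Node => Node): Node =
  if (n.analyzed) n
  else rule(n.copy(children = n.children.map(resolveOperators(_)(rule))))
```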
Github user zuotingbing commented on the issue:
https://github.com/apache/spark/pull/18811
ok, have done. Thanks @srowen .
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
@maropu I have an earlier PR to solve that. For some reasons it will only be
merged in 2.3. I am away from my laptop; I will refer to it once I can access one.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18811
merged to master
---
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131068575
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131068501
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18821
**[Test build #3875 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3875/testReport)**
for PR 18821 at commit
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18833
[SPARK-21625][SQL] sqrt(negative number) should be null.
## What changes were proposed in this pull request?
This PR makes `sqrt(negative number)` return null, the same as Hive and MySQL.
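The proposed semantics can be sketched in a few lines; this is an illustration, not Spark's actual `Sqrt` expression, and `sqlSqrt` is a hypothetical helper name. SQL NULL is modeled here as `Option.empty`, so a negative input yields NULL rather than NaN, matching the Hive/MySQL behavior the PR cites:

```scala
// Illustrative sketch of SPARK-21625's semantics (not Spark's code):
// negative input produces SQL NULL (None) instead of Double.NaN.
def sqlSqrt(x: Double): Option[Double] =
  if (x < 0) None else Some(math.sqrt(x))
```

For example, `sqlSqrt(3.0)` yields `Some(1.732...)` while `sqlSqrt(-1.0)` yields `None`.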
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18804
**[Test build #80192 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80192/testReport)**
for PR 18804 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
When using Dataset groupBy API, if you use int literals as grouping
expressions, do we filter this case out for substituting `UnresolvedOrdinals`?
Seems there is no related logic to prevent it.
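The substitution being asked about can be illustrated with a toy rule; this is not Spark's actual analyzer, and `Expr`, `IntLiteral`, `Column`, and `substituteOrdinals` are hypothetical names introduced for the sketch. With `spark.sql.groupByOrdinal` enabled, an integer literal in a grouping position is rewritten into the corresponding output column, which is exactly what would bite a Dataset `groupBy(lit(2))` if nothing filters that case out:

```scala
// Toy model of ordinal substitution (not Spark's analyzer rule).
sealed trait Expr
case class IntLiteral(v: Int) extends Expr
case class Column(name: String) extends Expr

def substituteOrdinals(
    groupingExprs: Seq[Expr],
    output: Seq[Column],
    groupByOrdinal: Boolean): Seq[Expr] =
  groupingExprs.map {
    case IntLiteral(i) if groupByOrdinal && i >= 1 && i <= output.size =>
      output(i - 1) // the literal 2 is read as "group by the 2nd column"
    case other => other
  }
```

Under this rule, grouping by the literal `2` against output columns `a, b` silently becomes grouping by `b` unless the Dataset path is exempted.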
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
Yes.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
What's your point? You should be able to merge a PR without anybody reviewing?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18804
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80195/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18831
**[Test build #80194 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80194/testReport)**
for PR 18831 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18804
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18779
**[Test build #80197 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80197/testReport)**
for PR 18779 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
I see. Thanks for the details and explanation.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
Actually Sean I disagree. Spark has always been review then commit from the
days before it entered ASF. In a huge debate last year within the ASF on RTC vs
CTR, Spark was cited as a prominent example.
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131074348
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18808
**[Test build #3874 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3874/testReport)**
for PR 18808 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18668
**[Test build #80198 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80198/testReport)**
for PR 18668 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18832
**[Test build #80199 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80199/testReport)**
for PR 18832 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
I understood Spark has used LGTM as a convention and it is a good way to
show an approval as a sign-off, but I meant: is "LGTM", and not any other
method, required before merging?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18804
**[Test build #80196 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80196/testReport)**
for PR 18804 at commit
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18804
retest this please
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18749
Formally, our model is "Commit Then Review". No approval or vote is
required for any change, but changes can be retroactively vetoed.
https://www.apache.org/foundation/glossary.html#CommitThenReview
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
(BTW, just for clarification, I think anyone can use that approve feature.
I did it before - `https://github.com/apache/spark/pull/17734`)
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18749
Yes, this isn't RTC. It isn't even RTC with 1-vote consensus, because you
rightly say that you could merge with no other votes if it were obviously not
required. But whatever. We can pick the
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18811
Ah, I'm sorry I misled you a bit here @zuotingbing . Yes you found another
unused variable, but, it's in code that is copied directly from Hive. I think
we should leave HiveSessionImplwithUGI
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18808
I hit the same issue! Thanks for fixing it!
---
GitHub user mpjlu opened a pull request:
https://github.com/apache/spark/pull/18832
[SPARK-21623][ML] Fix RF doc
## What changes were proposed in this pull request?
The comments on parentStats in RF are wrong.
parentStats is not only used for the first iteration, it is used
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18833
**[Test build #80200 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80200/testReport)**
for PR 18833 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18607
@radford1 Have you added the test as suggested by @gatorsmile?
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18745
Ping @caneGuy -- adding `[test-maven]` will let us also verify this passes
the Maven build
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18833
Pg does not accept this query:
```
postgres=# select sqrt(3);
       sqrt
------------------
 1.73205080756888
(1 row)

postgres=# select sqrt(-1);
ERROR:
```
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18811
---
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18815
@srowen
Are you sure that the master and worker logs can be stored in hdfs?
Spark's master and worker logs are generated by log4j.
---
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131093892
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,61 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/18628
Thanks for making sure this is consistent with other uses of
Configuration.get(); consistency is critical here
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18832
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18816
**[Test build #3876 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3876/testReport)**
for PR 18816 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18832
**[Test build #80199 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80199/testReport)**
for PR 18832 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18832
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80199/
Test PASSed.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18815
It's not an HTTP URI though, right? It's a path. I'm missing why this is
browsable.
---