Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
Let's remove the rule `SubstituteUnresolvedOrdinals` and move all the
`UnresolvedOrdinal` stuff (order by and group by) into the parser.
Btw, also don't forget the Dataset API, please see
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/17849
Thanks for your work on this, but I am curious what the benefit of doing
this is. In pyspark there is currently no param in Model itself; what
problem or bugs can it resolve after adding
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131058058
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -42,13 +42,5 @@ class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80189/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80189 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80189/testReport)**
for PR 18742 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131057644
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/18759
@BryanCutler I linked this PR to the JIRA manually. Thanks for the reminder.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18749
https://spark-prs.appspot.com does not show the GitHub approval. If we can
put LGTM explicitly, it will help us track the progress.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
I think @srowen did it here using the new GitHub approval: "srowen approved
these changes 20 hours ago".
@srowen, it might be better if we stick with the LGTM one.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18749
Before merging the PR, you have to wait for another committer to say LGTM.
I think this is still required.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18824#discussion_r131056819
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -616,15 +616,24 @@ private[spark] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80188 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80188/testReport)**
for PR 18742 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80188/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Merged build finished. Test PASSed.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18824#discussion_r131056558
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -616,15 +616,24 @@ private[spark] class
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
Will go ahead and merge this one. Thanks for reviewing this @srowen and @rxin.
---
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r131055857
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/AggregateBenchmark.scala
---
@@ -301,6 +301,61 @@ class AggregateBenchmark
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r131055788
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -572,6 +572,13 @@ object SQLConf {
"disable logging or -1
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r131055704
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -356,6 +356,16 @@ class CodegenContext {
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18817
I guess we really need a close investigation and thorough tests for this
case. There are a few PRs open for unicode support, but I believe we have
been finding holes, even including my PR.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
seems fine to me.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18779
Thank you for finding the root cause. @maropu
Moving them to the parser sounds reasonable to me, but we also should avoid
analyzing the analyzed plan again. Thanks for your work!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18779
I think this will fail the test case in
`SubstituteUnresolvedOrdinalsSuite`.
---
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/18823
Thanks @hyukjinkwon!
On Aug 2, 2017 6:30 PM, "Hyukjin Kwon" wrote:
Merged into branch-2.2.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
@rxin, does this look okay to you too?
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131053346
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteUnresolvedOrdinals.scala
---
@@ -42,13 +42,5 @@ class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131053311
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -528,7 +529,15 @@ class AstBuilder(conf: SQLConf)
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18819
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80184/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18819
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18819
**[Test build #80184 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80184/testReport)**
for PR 18819 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18668
**[Test build #80186 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80186/testReport)**
for PR 18668 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18779
**[Test build #80191 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80191/testReport)**
for PR 18779 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80189 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80189/testReport)**
for PR 18742 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18828
**[Test build #80190 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80190/testReport)**
for PR 18828 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80188 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80188/testReport)**
for PR 18742 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18828
**[Test build #80187 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80187/testReport)**
for PR 18828 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18828
cc @shaneknapp It sounds like we are unable to trigger the test. Could you
please check it?
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18664#discussion_r131050585
--- Diff: python/pyspark/sql/tests.py ---
@@ -3036,6 +3052,9 @@ def test_toPandas_arrow_toggle(self):
pdf = df.toPandas()
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
I think it is a perfect solution, thank you very much. @viirya @maropu
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18828
test this please
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18824#discussion_r131048805
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -413,7 +414,10 @@ private[hive] class HiveClientImpl(
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18824#discussion_r131047358
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -616,15 +616,24 @@ private[spark] class
Github user BiyuHuang commented on the issue:
https://github.com/apache/spark/pull/11863
I'm wondering why the setting "enable.auto.commit" exists but is set to
false by default, and I couldn't modify it. Anyway, how do I use it?
---
If your project is set up for it, you
Github user koeninger commented on the issue:
https://github.com/apache/spark/pull/11863
You won't get any reasonable semantics out of auto commit, because it will
commit on the driver without regard to what the executors have done.
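The failure mode described above can be sketched with a toy simulation; the classes and helper names below are illustrative only, not the real Kafka consumer API:

```scala
// Toy model of a partition of records identified by offsets.
case class Record(offset: Long, value: String)

// Commit only after processing: on restart we re-read from the last
// committed offset, so a crash can duplicate work but never skip it.
def processThenCommit(records: Seq[Record], crashAt: Option[Long]): Long = {
  var committed = -1L
  for (r <- records) {
    if (crashAt.contains(r.offset)) return committed // restart resumes after `committed`
    // ... executors process r here ...
    committed = r.offset
  }
  committed
}

// Auto-commit style: the driver advances the offset regardless of
// whether the executors have actually processed the record.
def autoCommit(records: Seq[Record], crashAt: Option[Long]): Long = {
  var committed = -1L
  for (r <- records) {
    committed = r.offset // committed first, on the driver
    if (crashAt.contains(r.offset)) return committed // r was never processed, yet its offset is committed
    // ... executors process r here ...
  }
  committed
}
```

With a crash at offset 2, `processThenCommit` reports 1 (record 2 will be replayed), while `autoCommit` reports 2 (record 2 is silently skipped on restart).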
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18829
Can one of the admins verify this patch?
---
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/18829
[SPARK-21620]Add metrics url in spark web ui.
## What changes were proposed in this pull request?
Add metrics url in spark web ui.
Big data system several other components of
Github user BiyuHuang commented on the issue:
https://github.com/apache/spark/pull/11863
hey, I have a question about the setting "auto.commit.enable": could it be
changed? Because I want to save the offsets information to a zookeeper
cluster.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18828
cc @adrian-ionescu @gatorsmile
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18824
Merged build finished. Test FAILed.
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18828
[SPARK-21619][SQL] Fail the execution of canonicalized plans explicitly
## What changes were proposed in this pull request?
Canonicalized plans are not supposed to be executed. I ran into a case
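The idea of failing explicitly can be sketched as follows; the class and method names are illustrative only, not Spark's actual plan classes:

```scala
// Toy sketch of SPARK-21619's idea: a canonicalized plan exists only
// for comparison (e.g. sameResult checks), so executing one should
// fail loudly rather than silently produce wrong results.
class Plan(val sql: String, val isCanonicalized: Boolean = false) {
  // Canonicalization normalizes away details irrelevant to equality,
  // which can also make the plan unsafe to execute.
  def canonicalized: Plan = new Plan(sql.toLowerCase, isCanonicalized = true)

  def execute(): String = {
    require(!isCanonicalized, "A canonicalized plan is not supposed to be executed.")
    s"executed: $sql"
  }

  def sameResult(other: Plan): Boolean =
    canonicalized.sql == other.canonicalized.sql
}
```

Here `sameResult` still works on canonicalized forms, but calling `execute()` on a canonicalized plan throws instead of running.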
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18824
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80182/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18824
**[Test build #80182 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80182/testReport)**
for PR 18824 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18779
oh, yea. I feel it's ok to do so.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
`AstBuilder` can access the conf. So it can replace int literals with
`UnresolvedOrdinal` only when the config is enabled. Otherwise, leave them
as they are.
In Analyzer, we remove the rule
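The conf-gated substitution described above can be sketched standalone; the case class names mirror Catalyst's, but this is an illustration, not Spark's actual `AstBuilder` code:

```scala
// Minimal stand-ins for Catalyst expressions (illustrative only).
sealed trait Expression
case class Literal(value: Any) extends Expression
case class UnresolvedOrdinal(ordinal: Int) extends Expression
case class UnresolvedAttribute(name: String) extends Expression

// At parse time, wrap positive int literals appearing in GROUP BY /
// ORDER BY positions only when the ordinal conf is enabled; otherwise
// leave the literals untouched, so no extra analyzer rule is needed.
def substituteOrdinals(ordinalConfEnabled: Boolean)(exprs: Seq[Expression]): Seq[Expression] =
  exprs.map {
    case Literal(i: Int) if ordinalConfEnabled && i > 0 => UnresolvedOrdinal(i)
    case other => other
  }
```

With the conf disabled, `GROUP BY 1` keeps its `Literal(1)` and is analyzed as an ordinary constant expression.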
Github user rgbkrk commented on the issue:
https://github.com/apache/spark/pull/18734
Workarounds welcomed on cloudpickle.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18827
@venkorls, it looks mistakenly opened. Please close this.
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18779
yea, it is like: it adds `UnresolvedOrdinal` in `AstBuilder` and, if
`conf.groupByOrdinal`=false, the analyzer drops `UnresolvedOrdinal`. Since
`conf.groupByOrdinal`=true by default, dropping
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18824
Hmm, after I made some changes to the test, the whole test suite is failing
(although running them individually works). I'll work on that, but the fix
itself, other than the test, should be correct.
Github user zuotingbing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18816#discussion_r131037188
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -272,7 +272,7 @@ private[hive]
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18825
Merged into branch-2.1.
---
Github user dmvieira commented on a diff in the pull request:
https://github.com/apache/spark/pull/18765#discussion_r131036498
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2571,6 +2572,23 @@ private[spark] object Utils extends Logging {
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18825
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18825
**[Test build #80181 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80181/testReport)**
for PR 18825 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18825
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80181/
Test PASSed.
---
Github user dmvieira commented on the issue:
https://github.com/apache/spark/pull/18802
I don't know why these tests are breaking. Could someone help me?
Permission denied?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18823
Merged into branch-2.2.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18823
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18823
**[Test build #80185 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80185/testReport)**
for PR 18823 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18823
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80185/
Test PASSed.
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18468
ping @rxin
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18827
Can one of the admins verify this patch?
---
GitHub user venkorls opened a pull request:
https://github.com/apache/spark/pull/18827
Merge pull request #1 from apache/master
update with source
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/18746
Also, you can remove "implemented" from the title.
---
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/18746
@ajaysaini725 Is there a JIRA for this PR? Please tag this PR in the title.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18746
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18746
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80183/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18746
**[Test build #80183 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80183/testReport)**
for PR 18746 at commit
GitHub user radford1 reopened a pull request:
https://github.com/apache/spark/pull/18607
[SPARK-21362][SQL][Adding Apache Drill JDBC Dialect]
## What changes were proposed in this pull request?
Adding Apache Drill to the JDBC Dialect
## How was this patch tested?
Github user radford1 commented on the issue:
https://github.com/apache/spark/pull/18607
@viirya can you review it for me again, please?
---
Github user radford1 closed the pull request at:
https://github.com/apache/spark/pull/18607
---
Github user bravo-zhang commented on the issue:
https://github.com/apache/spark/pull/18826
Hi @holdenk, I'm opening this PR to continue the effort in
https://github.com/apache/spark/pull/12491
When adding the doctest, I noticed that the `uid` part of the string is
always a random
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/18805
@rxin - Sure, let me talk to folks internally to see if it is possible to
relicense. Otherwise, we might have to upgrade to Hadoop 2.9.0, which will
come with its own zstd implementation.
---
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18815
@srowen Could you help review the code? Thanks.
---
Github user bravo-zhang commented on the issue:
https://github.com/apache/spark/pull/12491
Hi @holdenk, to continue this PR I opened
https://github.com/apache/spark/pull/18826
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18826
Can one of the admins verify this patch?
---
GitHub user bravo-zhang opened a pull request:
https://github.com/apache/spark/pull/18826
LogisticRegressionModel.toString should summarize model
## What changes were proposed in this pull request?
[SPARK-14712](https://issues.apache.org/jira/browse/SPARK-14712)
Github user liyichao commented on the issue:
https://github.com/apache/spark/pull/18070
I will update the pr in a day.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18746
**[Test build #80183 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80183/testReport)**
for PR 18746 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18823
**[Test build #80185 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80185/testReport)**
for PR 18823 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18824
**[Test build #80182 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80182/testReport)**
for PR 18824 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18819
**[Test build #80184 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80184/testReport)**
for PR 18819 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18825
**[Test build #80181 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80181/testReport)**
for PR 18825 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
Well, I think we cannot avoid the various cases bringing int literals into
grouping expressions.
To fix it, I think we should not replace any int literals with
`UnresolvedOrdinal` in
Github user ajaysaini725 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18742#discussion_r131028435
--- Diff: python/pyspark/ml/param/__init__.py ---
@@ -375,6 +375,18 @@ def copy(self, extra=None):
that._defaultParamMap = {}
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18805
@sitalkedia is there any way you can talk to the FB team that owns that one
and relicense it, similar to RocksDB?
---
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/18825
Since it seemed like some users in the JIRA might be using 2.1, I made this
PR too. I think this is pretty low risk also, and it went in without any
conflicts. But up to you if you're ok with
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18823
Yea, that's not in the guide and not required IIRC, but just a little
suggestion from me.
---
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/18823
Sure no prob. I can add that to the PR title too, but I don't think I've
done that in past backports.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18823
@BryanCutler mind adding something like `[BRANCH-2.2]` in the PR title?
---