Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15884
Rebased to resolve conflicts.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15884#discussion_r88189699
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -1366,7 +1366,7 @@ class JsonSuite extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r88190170
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -358,7 +358,8 @@ case class DataSource
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r88190276
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -667,7 +667,14 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r88190519
--- Diff: docs/sql-programming-guide.md ---
@@ -1087,6 +1087,13 @@ the following case-sensitive options:
+ maxConnections
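For context on the `maxConnections` option above, here is a minimal sketch under the assumption that a JDBC writer opens one connection per partition; `partitionsToUse` is a hypothetical helper for illustration, not Spark's actual `JdbcUtils` code:

```scala
// Illustrative sketch only (not Spark's actual implementation): each JDBC
// write partition opens one connection, so capping the number of partitions
// caps the number of simultaneous connections.
def partitionsToUse(dataPartitions: Int, maxConnections: Option[Int]): Int =
  maxConnections match {
    case Some(max) if max < dataPartitions => max // coalesce down to the cap
    case _                                 => dataPartitions // keep as-is
  }
```

With this model, a DataFrame with 8 partitions and `maxConnections = 4` would be coalesced to 4 write tasks, while a smaller DataFrame is left untouched.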
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Just FYI, the only failure is irrelevant.
```
[info] StateStoreSuite:
[info] - maintenance *** FAILED *** (10 seconds, 120 milliseconds)
[info] The code passed to eventually
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Hi, @gatorsmile .
Indeed, I really want to add a test case for this.
Could you give me some advice on how to test this kind of feature?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15884
Thank you so much, @cloud-fan ! Also, thank you for this issue, @gatorsmile
.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r88197599
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -358,7 +358,8 @@ case class DataSource
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
I rebased and squashed to resolve conflicts. The PR is updated with the following:
- Revert `DataSource.scala` change.
- Rename `JDBC_MAX_CONNECTION` to `JDBC_MAX_CONNECTIONS`.
- Add
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
Hi, @gatorsmile .
What do you think about this PR? I can close it if you don't feel strongly.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15682#discussion_r88324581
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -202,6 +202,7 @@ case class DropTableCommand
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15682#discussion_r88324895
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -202,6 +202,7 @@ case class DropTableCommand
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
Sorry, but is `uncacheQuery` related to this PR?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
Maybe, do you mean #15896 ?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
Ah, you mean the code you gave the pointer to.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
Yep, `uncacheQuery` is not called here. I got it. It's a bug.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
Yep. I'll investigate the behavior here.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
I see. I'll include that command here, too.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
Sure!
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Could you review this PR again, @gatorsmile?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Thank you, @gatorsmile . I added the negative test case.
But for the positive test case, `DataFrameWriter` keeps it `private` and the `save` function returns `Unit`.
```scala
private
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Sorry for the late response, @gatorsmile and @srowen .
---
> However, when users use the DataFrameWriter, this is not available.
Maybe, in the future, we can add another opt
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15896
Hi, @brkyvz and @gatorsmile .
Does this proceed with Option 1 or 2 for now? Or is this on hold until next month?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15896
I see. Thank you for letting me know, @brkyvz .
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Thank you for review again, @cloud-fan , @liancheng , and @srowen .
I had the same opinion as @srowen . Also, let's think about the
background of this issue.
Currentl
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
For the documentation, may I add more description of the behavior to help users understand it?
**BEFORE**
> The number of JDBC connections, which specifies the maximum number
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Ah, right. Now always!
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Documentation and PR descriptions are updated.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15891
Maybe, could you close this? @hvanhovell ;)
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r88777989
--- Diff: docs/sql-programming-guide.md ---
@@ -1087,6 +1087,13 @@ the following case-sensitive options:
+ maxConnections
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r88778234
--- Diff: docs/sql-programming-guide.md ---
@@ -1087,6 +1087,13 @@ the following case-sensitive options:
+ maxConnections
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
Hi, @gatorsmile .
Is there any decision for the following?
> The fixed behavior needs to be consistent with each other.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
If the issues (and bugs) about *uncacheQuery* and *uncacheTable* are postponed, I'd like to focus on the warning messages in this PR. Is it okay? We can revisit those cache bugs for Apache
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Hi, @srowen .
Could you review this PR again?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Thank you, @srowen !
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15948
Thank you, @hvanhovell . And sorry for the trouble.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/15953
[SPARK-18517][SQL] DROP TABLE IF EXISTS should not warn for non-existing
tables
## What changes were proposed in this pull request?
Currently, `DROP TABLE IF EXISTS` shows warning
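The proposed behavior can be sketched with a hypothetical helper (the real `DropTableCommand` operates on the session catalog; names here are illustrative): with `IF EXISTS`, dropping a missing table is silently a no-op instead of producing a warning.

```scala
// Hypothetical model of DROP TABLE IF EXISTS semantics, not the actual
// DropTableCommand implementation. Returns the new catalog plus an
// optional warning/error message.
def dropTable(tables: Set[String], name: String, ifExists: Boolean): (Set[String], Option[String]) =
  if (tables.contains(name)) (tables - name, None)       // normal drop
  else if (ifExists) (tables, None)                      // IF EXISTS: stay silent
  else (tables, Some(s"Table or view not found: $name")) // complain only without IF EXISTS
```

Under this model, only a plain `DROP TABLE` on a missing table reports anything; `IF EXISTS` never does.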
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15953
Hi, @gatorsmile .
This is a related case with #15682 (Dropping View) and the `uncache` issue.
But, this case is different from the above case because this PR has the
obvious solution
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r88829741
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -122,6 +122,11 @@ class JDBCOptions
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
It seems not to be a test failure.
```
Traceback (most recent call last):
File "./dev/run-tests-jenkins.py", line 232, in
main()
File "./dev/run-tests-je
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Retest this please.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Hi, @cloud-fan .
If there is something to do more, please let me know again.
Thank you so much.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Thank you for review and merging, @srowen , @gatorsmile , @cloud-fan ,
@lichenglin !
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Could you give an example of the existing method you mention?
> the user has already explicitly defined the degree of parallelism in the
JDBC source using other options.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
It's a read-only option. In the first commit, I reused that option for writing. But, as you can see, it was changed according to the review advice.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Hi, @rxin .
Do you mean changing SQL syntax like `BROADCAST HINT` PR we tried before?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Let me find the correct code. (It's weird that I attached the wrong code example.)
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
The code seems to have been lost while resolving conflicts. This is my comment from the first review.
> Thank you for review, @srowen .
>
> First, the property numP
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
The rationale is to give a name with a clear meaning. Also, I agreed with the following advice.
> numPartitions might be not a good name for this purpose. How ab
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15868
Then, @rxin and @gatorsmile .
I'll make a PR to merge `maxConnection` into `numPartitions`.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/15966
[SPARK-18413][SQL][FOLLOW-UP] Use `numPartitions` instead of
`maxConnections`
## What changes were proposed in this pull request?
This is a follow-up PR of #15868 to merge
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15953
Thank you so much, @andrewor14 ! :)
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r88989788
--- Diff: docs/sql-programming-guide.md ---
@@ -1073,6 +1073,15 @@ the following case-sensitive options:
+ numPartitions
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89003367
--- Diff: docs/sql-programming-guide.md ---
@@ -1073,6 +1073,16 @@ the following case-sensitive options:
+ numPartitions
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89003713
--- Diff: docs/sql-programming-guide.md ---
@@ -1073,6 +1073,16 @@ the following case-sensitive options:
+ numPartitions
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89006007
--- Diff: docs/sql-programming-guide.md ---
@@ -1061,7 +1061,7 @@ the following case-sensitive options:
-partitionColumn
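The read-side options discussed here (`partitionColumn`, `lowerBound`, `upperBound`, `numPartitions`) split a JDBC table scan into per-partition predicates. A simplified sketch, ignoring edge cases such as stride remainders and NULL handling that Spark's `JDBCRelation` covers:

```scala
// Simplified sketch of how partitionColumn/lowerBound/upperBound/numPartitions
// become WHERE-clause predicates, one per read partition.
def partitionPredicates(col: String, lower: Long, upper: Long, num: Int): Seq[String] = {
  val stride = (upper - lower) / num
  (0 until num).map { i =>
    val lo = lower + i * stride
    val hi = lo + stride
    if (i == 0) s"$col < $hi"                       // first partition: open lower end
    else if (i == num - 1) s"$col >= $lo"           // last partition: open upper end
    else s"$col >= $lo AND $col < $hi"              // middle partitions: half-open range
  }
}
```

For example, `partitionPredicates("id", 0, 100, 4)` yields four predicates covering strides of 25.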
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89006973
--- Diff: docs/sql-programming-guide.md ---
@@ -1061,7 +1061,7 @@ the following case-sensitive options:
-partitionColumn
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89008907
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -83,10 +89,8 @@ class JDBCOptions
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89009736
--- Diff: docs/sql-programming-guide.md ---
@@ -1073,6 +1073,16 @@ the following case-sensitive options:
+ numPartitions
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89012169
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -83,10 +89,8 @@ class JDBCOptions
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
For further review, I updated the doc. We can proceed with the updated content.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
Thank you for review, @cloud-fan .
With the same parameter name `numPartitions` for read/write, we will use
the same parallelism by default. It's easy to use.
T
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
Great! Thank you for #15975.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15975#discussion_r89181212
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -404,6 +425,7 @@ class JDBCSuite extends SparkFunSuite
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15975
Hi, @gatorsmile .
Can we add a test case for write partitioning here? (According to the PR title, it's beyond the scope.)
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15976
Thank you for fixing this, @liancheng !
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
The only failure seems to be unrelated to this PR.
```
- SPARK-8020: set sql conf in spark conf *** FAILED *** (17 seconds, 602
milliseconds)
```
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
Thank you for coming back, @gatorsmile and @hvanhovell !
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
Ah, in fact, that issue includes this. May I close this PR and the issue SPARK-18169 ?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
Hi, @rxin .
Could you review this again?
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/15682
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15682
I guess SPARK-18169 will cover the better error message too. That issue will need to revisit all occurrences of `uncache` and related exception
handling. Thank you, @gatorsmile and @hvanhovell
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/15987
[SPARK-18515][SQL] AlterTableDropPartitions fails for non-string columns
## What changes were proposed in this pull request?
While [SPARK-17732](https://issues.apache.org/jira/browse
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15987
Hi, @hvanhovell .
Could you review this PR?
This is the first attempt to use `UnresolvedAttribute` and an Analyzer rule.
There are two debatable issues.
- Catalyst Analyzer
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15987
Thank you for review, @rxin . In the expression, the partition attributes are assumed to be `String`, so the Analyzer injects that.
I tried to implement the following recommended way in JIRA
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
Hi, @rxin , @cloud-fan , @gatorsmile .
Please let me know if there is something to do more.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
The only failure is unrelated to this PR.
```
[info] KafkaSourceStressForDontFailOnDataLossSuite:
[info] - stress test for failOnDataLoss=false *** FAILED *** (1 minute, 58
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
Retest this please.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
Retest this please.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
Maybe it's some internal error.
```
Traceback (most recent call last):
File "./dev/run-tests-jenkins.py", line 232, in
main()
File "./dev/run-t
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
Thank you, @cloud-fan !
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89560507
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -35,11 +35,12 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89560594
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -35,11 +35,12 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89560765
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -35,11 +35,12 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15966#discussion_r89561319
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -35,11 +35,12 @@ class
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16012
[SPARK-17251][SQL] Support `OuterReference` in projection list of a
correlated subquery
## What changes were proposed in this pull request?
Currently, correlated subqueries do not
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16012
Thank you, @hvanhovell ! I'll fix like that!
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15966
Thank you for review and merging, @rxin , @gatorsmile , @cloud-fan !
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16012#discussion_r89656679
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/namedExpressions.scala
---
@@ -356,10 +356,17 @@ case class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16012#discussion_r89656824
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -989,7 +989,7 @@ class Analyzer
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16015
[SPARK-17251][SQL] Improve `OuterReference` to be `NamedExpression`
## What changes were proposed in this pull request?
Currently, `OuterReference` is not `NamedExpression`. So, it
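As a rough illustration of the approach (simplified stand-in types, not the actual Catalyst classes, which also carry `exprId`, qualifiers, and metadata), a wrapper can participate in a projection list by delegating its name to the wrapped attribute:

```scala
// Simplified stand-ins for Catalyst's expression hierarchy.
trait NamedExpr { def name: String }
case class AttributeReference(name: String) extends NamedExpr
// The outer-reference wrapper becomes "named" by delegating to its child,
// so it can appear in a projection list like any other named expression.
case class OuterReference(child: NamedExpr) extends NamedExpr {
  def name: String = child.name
}
```

The design point is that the wrapper stays transparent for naming while still marking the expression as coming from the outer query.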
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16012
Thank you for review, @hvanhovell and @nsyca .
I agree with you. We need enough time for this.
So, option one for 2.1 is spun off into #16015.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15975
@gatorsmile NP. Thank you for informing that.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16015
The only test failure is unrelated to this PR.
```
[info] - set spark.sql.warehouse.dir *** FAILED *** (5 minutes, 0 seconds)
[info] Timeout of './bin/spark-submit'
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16015
Retest this please.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16015
Unknown Jenkins failure.
```
Traceback (most recent call last):
File "./dev/run-tests-jenkins.py", line 232, in
main()
File "./dev/run-tests-jenkin
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16015
Retest this please.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16015
Thank you, @hvanhovell !
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16015
Thank you for merging, @hvanhovell !