Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14236
**[Test build #62427 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62427/consoleFull)**
for PR 14236 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14217
**[Test build #62428 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62428/consoleFull)**
for PR 14217 at commit
Github user ahmed-mahran commented on a diff in the pull request:
https://github.com/apache/spark/pull/14234#discussion_r71081040
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -14,29 +14,13 @@ Structured Streaming is a scalable and fault-tolerant
stream
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14086
Let me share my 2 cents here.
`Truncate Table` is very risky. It is fast since the RDBMS does not log the
individual row deletes. That means we are unable to roll it back in most
RDBMSs.
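For readers less familiar with the distinction being drawn, a minimal SQL sketch (the table name is illustrative; exact rollback behavior varies by RDBMS):

```sql
-- DELETE logs every row, so it participates in the transaction:
BEGIN;
DELETE FROM events;   -- slow: each row delete is written to the log
ROLLBACK;             -- the rows come back

-- TRUNCATE deallocates whole pages with minimal logging; in many RDBMSs
-- (e.g. Oracle, MySQL) it commits implicitly and cannot be rolled back:
TRUNCATE TABLE events;
```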
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/14207#discussion_r71083334
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala ---
@@ -351,6 +353,44 @@ class CatalogImpl(sparkSession:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14239
Can one of the admins verify this patch?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/14237
[WIP][SPARK-16283][SQL] Implement `percentile_approx` SQL function
## What changes were proposed in this pull request?
WIP
## How was this patch tested?
WIP
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/14207#discussion_r71083304
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -487,6 +487,10 @@ object DDLUtils {
GitHub user WeichenXu123 opened a pull request:
https://github.com/apache/spark/pull/14238
[MINOR][TYPO] fix fininsh typo
## What changes were proposed in this pull request?
fininsh => finish
## How was this patch tested?
(Please explain how this patch was
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14132
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14132
**[Test build #62430 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62430/consoleFull)**
for PR 14132 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14132
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62430/
Test FAILed.
---
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/14169#discussion_r71085323
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1329,7 +1329,7 @@ class SparkSqlAstBuilder(conf:
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/14240
[SPARK-16594] [SQL] Remove Physical Plan Differences when Table Scan Having
Duplicate Columns
What changes were proposed in this pull request?
Currently, we keep two implementations
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14237
**[Test build #62431 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62431/consoleFull)**
for PR 14237 at commit
Github user WeichenXu123 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14122#discussion_r71083700
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -327,6 +327,11 @@ class LinearRegression @Since("1.3.0")
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14238
**[Test build #62432 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62432/consoleFull)**
for PR 14238 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14217
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14217
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62428/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14132
**[Test build #62430 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62430/consoleFull)**
for PR 14132 at commit
Github user lw-lin closed the pull request at:
https://github.com/apache/spark/pull/14237
---
GitHub user f7753 opened a pull request:
https://github.com/apache/spark/pull/14239
[SPARK-16593] [CORE] Provide a pre-fetch mechanism to accelerate shuffle
stage.
## What changes were proposed in this pull request?
Added a pre-fetch mechanism for shuffle stage.
The
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14240
**[Test build #62433 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62433/consoleFull)**
for PR 14240 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14207#discussion_r71085723
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -487,6 +487,10 @@ object DDLUtils {
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14236
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62427/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14236
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14238
**[Test build #62432 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62432/consoleFull)**
for PR 14238 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14238
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62432/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14238
Merged build finished. Test PASSed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14054
`READ_UNCOMMITTED` is impossible for transactions that contain inserts
or updates, right?
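As a sketch of the usage under discussion (the option name `isolationLevel` is an assumption based on this PR; the JDBC URL and table name are placeholders):

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession

// Hypothetical sketch: a JDBC write whose transaction isolation level is
// chosen by the caller. As noted in the thread, the RDBMS may promote
// READ_UNCOMMITTED to a more restrictive level for writing transactions.
object JdbcIsolationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sketch").getOrCreate()
    val df = spark.range(10).toDF("id")
    df.write
      .option("isolationLevel", "READ_UNCOMMITTED") // assumed option name
      .jdbc("jdbc:postgresql://host/db", "target_table", new Properties())
    spark.stop()
  }
}
```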
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14086
@gatorsmile that all sounds reasonable, but right now a DROP/CREATE table
happens. That's also not possible within a transaction and is a more drastic
operation. Does this argument not apply more to
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14054
I am not aware of any restriction on the type of modification that is
possible at any transaction isolation level. It should be orthogonal. Insert,
update, and delete is everything, right -- what
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14086
Thank you for your attention, @gatorsmile .
BTW, this option is for advanced users who know their DB and the
limitations and powers of `TRUNCATE`.
In the following comment, I
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14234
Yeah OK modulo the last round of comments here I think it's a good set of
small fixes
---
Github user wilson-lauw commented on the issue:
https://github.com/apache/spark/pull/14219
The file org.apache.spark.ml.linalg.BLAS.scala in module mllib-local also
needs to be fixed. Once the changes are okay, I will apply them there as well.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14116
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62434/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14116
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14116
**[Test build #62434 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62434/consoleFull)**
for PR 14116 at commit
Github user wilson-lauw commented on the issue:
https://github.com/apache/spark/pull/14219
@hhbyyh I just checked; the implementation of `def dot(x: SparseVector, y:
SparseVector)` also needs to be modified.
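As background for the discussion, here is a minimal, self-contained sketch of a sparse-sparse dot product in the style of `BLAS.dot(SparseVector, SparseVector)` (vectors are modeled as sorted index/value arrays; names are illustrative, not the mllib-local API):

```scala
// Sketch of a sparse-sparse dot product: march both sorted index arrays
// in step and accumulate products only where the indices match.
object SparseDotSketch {
  def dot(xIdx: Array[Int], xVal: Array[Double],
          yIdx: Array[Int], yVal: Array[Double]): Double = {
    var kx = 0
    var ky = 0
    var sum = 0.0
    while (kx < xIdx.length && ky < yIdx.length) {
      if (xIdx(kx) == yIdx(ky)) {
        sum += xVal(kx) * yVal(ky) // indices match: accumulate the product
        kx += 1; ky += 1
      } else if (xIdx(kx) < yIdx(ky)) {
        kx += 1 // advance the vector with the smaller current index
      } else {
        ky += 1
      }
    }
    sum
  }

  def main(args: Array[String]): Unit = {
    // x = [1, 0, 2, 0], y = [0, 0, 3, 4] in sparse form
    val d = dot(Array(0, 2), Array(1.0, 2.0), Array(2, 3), Array(3.0, 4.0))
    println(d) // 6.0 (only index 2 overlaps: 2.0 * 3.0)
  }
}
```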
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14132
Hi, @hvanhovell , @rxin , @gatorsmile .
Now it's ready for review again. The following are updated:
- Add a note about not supporting `database_name.table_name` like Hive.
-
Github user ahmed-mahran commented on a diff in the pull request:
https://github.com/apache/spark/pull/14234#discussion_r71090897
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -65,11 +51,13 @@ val words = lines.as[String].flatMap(_.split(" "))
val wordCounts
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14241
**[Test build #62438 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62438/consoleFull)**
for PR 14241 at commit
Github user ericl commented on a diff in the pull request:
https://github.com/apache/spark/pull/14241#discussion_r71090902
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
@@ -205,17 +209,17 @@ private[sql] trait DataSourceScanExec extends
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14176
**[Test build #62440 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62440/consoleFull)**
for PR 14176 at commit
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14235#discussion_r71092375
--- Diff: sql/hive/src/test/resources/sqlgen/agg1.sql ---
@@ -0,0 +1,3 @@
+SELECT COUNT(value) FROM parquet_t1 GROUP BY key HAVING MAX(key) > 0
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14116
Retest this please.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14116
**[Test build #62434 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62434/consoleFull)**
for PR 14116 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14234#discussion_r71087402
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -14,29 +14,13 @@ Structured Streaming is a scalable and fault-tolerant
stream processing
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14234#discussion_r71087415
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -14,29 +14,13 @@ Structured Streaming is a scalable and fault-tolerant
stream processing
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
@felixcheung I plan to merge this into master but skip branch-2.0 as I don't
want to introduce new test errors if we have another RC. Let me know if that
sounds good
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14116
Hi, @rxin .
Please let me know if there is something to do more for
`INFORMATION_SCHEMA`.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14116
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14116
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62436/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14229
**[Test build #62439 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62439/consoleFull)**
for PR 14229 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14243
**[Test build #62441 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62441/consoleFull)**
for PR 14243 at commit
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
cc @felixcheung
---
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/14243
[SPARK-10683][SPARK-16510][SPARKR] Move SparkR include jar test to
SparkSubmitSuite
## What changes were proposed in this pull request?
This change moves the include jar test from R to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14240
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14240
**[Test build #62433 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62433/consoleFull)**
for PR 14240 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14240
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62433/
Test PASSed.
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14234#discussion_r71087433
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -65,11 +51,13 @@ val words = lines.as[String].flatMap(_.split(" "))
val wordCounts =
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14116
**[Test build #62436 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62436/consoleFull)**
for PR 14116 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14086
Just rebased to the master.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14132
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62435/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14132
Merged build finished. Test PASSed.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14086
According to the code history, this PR changes code written by @rxin .
Hi, @rxin .
Could you give us some opinion about supporting `truncate` option with
SaveMode.Overwrite for JDBC sources?
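A sketch of the usage being asked about, assuming the `truncate` option lands with the name and semantics proposed in this PR (the JDBC URL and table name are placeholders):

```scala
import java.util.Properties
import org.apache.spark.sql.{SaveMode, SparkSession}

// Hypothetical sketch: with truncate=true, Overwrite would TRUNCATE the
// existing table and reuse its schema rather than DROP/CREATE-ing it.
object TruncateOverwriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sketch").getOrCreate()
    val df = spark.range(100).toDF("id")
    df.write
      .mode(SaveMode.Overwrite)
      .option("truncate", "true") // assumed option name from this PR
      .jdbc("jdbc:postgresql://host/db", "target_table", new Properties())
    spark.stop()
  }
}
```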
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14086
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62437/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14086
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14086
**[Test build #62437 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62437/consoleFull)**
for PR 14086 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14241
**[Test build #62438 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62438/consoleFull)**
for PR 14241 at commit
Github user wilson-lauw commented on the issue:
https://github.com/apache/spark/pull/14219
Modifications are also needed in module mllib-local, file
org.apache.spark.ml.linalg.Vectors.scala. Once the changes are okay, I will
apply them there as well.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14116
**[Test build #62436 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62436/consoleFull)**
for PR 14116 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14086
The following are my opinions, in priority order.
* First of all, `TRUNCATE` requires a lower privilege than DROP/CREATE.
- You know that the DROP/CREATE privilege can do arbitrary harm.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14235#discussion_r71092240
--- Diff: sql/hive/src/test/resources/sqlgen/agg1.sql ---
@@ -0,0 +1,3 @@
+SELECT COUNT(value) FROM parquet_t1 GROUP BY key HAVING MAX(key) > 0
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14235#discussion_r71092270
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/catalyst/LogicalPlanToSQLSuite.scala
---
@@ -76,7 +85,34 @@ class LogicalPlanToSQLSuite extends
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14235
@dongjoon-hyun can you take a look at @gatorsmile's suggestion to check
optimized logical plan and see if it is feasible?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14086
Yep. I totally agree with @srowen 's opinions, too.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14116
Retest this please.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14179
LGTM.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14086
**[Test build #62437 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62437/consoleFull)**
for PR 14086 at commit
GitHub user ericl opened a pull request:
https://github.com/apache/spark/pull/14241
[SPARK-16596] [SQL] Refactor DataSourceScanExec to do partition discovery
at execution instead of planning time
## What changes were proposed in this pull request?
Partition discovery is
Github user ahmed-mahran commented on the issue:
https://github.com/apache/spark/pull/14234
Note, "img/structured-streaming-stream-as-a-table.png" needs to be
regenerated; someone changed "new rows appended to **a** unbounded table" to
"new rows appended to **an** unbounded table". I
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14054
`READ_UNCOMMITTED` is applicable to read-only transactions. Normally, the
RDBMS will promote it to a higher/more restrictive isolation level if users
try to apply `READ_UNCOMMITTED` to insert,
GitHub user kzhang28 opened a pull request:
https://github.com/apache/spark/pull/14242
Add a comment
## What changes were proposed in this pull request?
(comment added, no source code changed)
## How was this patch tested?
(unit test)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14229
**[Test build #62439 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62439/consoleFull)**
for PR 14229 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14229
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62439/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14229
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14241
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14241
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62438/
Test FAILed.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14235#discussion_r71092452
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/catalyst/LogicalPlanToSQLSuite.scala
---
@@ -76,7 +85,34 @@ class LogicalPlanToSQLSuite
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14229
**[Test build #62442 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62442/consoleFull)**
for PR 14229 at commit
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/14241
You should be able to add those filter constraints in
FileDataSourceStrategy. I don't think it matters too much whether that code is
located within buildScan(), or in the operator itself.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14086
I always appreciate your intensive reviews. Those always make my PRs
meaningful and stronger. Thank you, @srowen and @gatorsmile .
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14116
Hi, @gatorsmile .
Could you review this PR too?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14229
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14229
**[Test build #62442 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62442/consoleFull)**
for PR 14229 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14229
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62442/
Test PASSed.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14054#discussion_r71093448
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -284,9 +286,17 @@ object JdbcUtils extends
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14235
@dongjoon-hyun i'm not against merging this first. Just want to see if we
can improve it further.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14086
To identify further work, I'll rephrase your comments.
You mean the case `SaveMode.Overwrite` should fail because current
Spark uses `statement.executeUpdate(s"DROP TABLE