ming95 commented on PR #40688:
URL: https://github.com/apache/spark/pull/40688#issuecomment-1500796913
The CI build failure doesn't seem to be caused by this patch; could you take a look?
@dongjoon-hyun @viirya
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
ming95 commented on PR #40688:
URL: https://github.com/apache/spark/pull/40688#issuecomment-1500796714
> Maybe, no? If this is not working properly before, we cannot enable this
configuration at Apache Spark 3.5.0. Since we need to wait for one release
cycle, we may be able to do that
dongjoon-hyun commented on PR #40663:
URL: https://github.com/apache/spark/pull/40663#issuecomment-1500756781
No problem at all. Thank you always, @LuciferYang !
HeartSaVioR commented on PR #40561:
URL: https://github.com/apache/spark/pull/40561#issuecomment-1500752714
The last update is to rebase with master branch - just to make sure CI is
happy with the change before merging this.
github-actions[bot] closed pull request #38896: [WIP][SQL] Replace `require()`
by an internal error in catalyst
URL: https://github.com/apache/spark/pull/38896
github-actions[bot] closed pull request #38893: [Spark-40099][SQL] Merge
adjacent CaseWhen branches if their values are the same
URL: https://github.com/apache/spark/pull/38893
github-actions[bot] commented on PR #39021:
URL: https://github.com/apache/spark/pull/39021#issuecomment-1500737503
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] closed pull request #39219: [WIP][SPARK-41277] Auto infer
bucketing info for shuffled actions
URL: https://github.com/apache/spark/pull/39219
github-actions[bot] commented on PR #39259:
URL: https://github.com/apache/spark/pull/39259#issuecomment-1500737477
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
HyukjinKwon commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1161032682
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##
@@ -0,0 +1,198 @@
+/*
+ * Licensed to the
gengliangwang commented on PR #40711:
URL: https://github.com/apache/spark/pull/40711#issuecomment-1500727241
cc @xinrong-meng it would be great to include this in the doc of Spark
3.4.0.
(Document changes won't fail RC vote)
gengliangwang commented on PR #40711:
URL: https://github.com/apache/spark/pull/40711#issuecomment-1500726183
I will come up with screenshots from branch-3.4.
The markdown tables in the master branch are not showing properly. cc
@grundprinzip
gengliangwang opened a new pull request, #40711:
URL: https://github.com/apache/spark/pull/40711
### What changes were proposed in this pull request?
There are important syntax rules about Cast/Store assignment/Type precedent
list in the [ANSI Compliance
dtenedor commented on code in PR #40710:
URL: https://github.com/apache/spark/pull/40710#discussion_r1161013565
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -91,6 +90,25 @@ case class ResolveDefaultColumns(catalog:
dongjoon-hyun closed pull request #40709: [SPARK-43070][BUILD] Upgrade
`sbt-unidoc` to 0.5.0
URL: https://github.com/apache/spark/pull/40709
dongjoon-hyun commented on PR #40709:
URL: https://github.com/apache/spark/pull/40709#issuecomment-1500709472
Merged to master for Apache Spark 3.5. Thank you, @huaxingao and @amaliujia
dongjoon-hyun commented on PR #40709:
URL: https://github.com/apache/spark/pull/40709#issuecomment-1500709184
Thank you so much!
dtenedor commented on code in PR #40710:
URL: https://github.com/apache/spark/pull/40710#discussion_r1161011837
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -91,6 +90,25 @@ case class ResolveDefaultColumns(catalog:
dongjoon-hyun commented on PR #40709:
URL: https://github.com/apache/spark/pull/40709#issuecomment-1500708179
Could you review this PR, @huaxingao ?
dongjoon-hyun commented on PR #40709:
URL: https://github.com/apache/spark/pull/40709#issuecomment-1500708097
Documentation generation GitHub Action job passed.
(screenshot attached: "Screenshot 2023-04-07 at 3 56 39", showing the passing job)
gengliangwang commented on code in PR #40710:
URL: https://github.com/apache/spark/pull/40710#discussion_r1161010584
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -91,6 +90,25 @@ case class
gengliangwang commented on code in PR #40710:
URL: https://github.com/apache/spark/pull/40710#discussion_r1161009871
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -91,6 +90,25 @@ case class
dtenedor opened a new pull request, #40710:
URL: https://github.com/apache/spark/pull/40710
### What changes were proposed in this pull request?
This PR extends column default support to allow the ORDER BY, LIMIT, and
OFFSET clauses at the end of a SELECT query in the INSERT source
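The statement shape this PR enables can be illustrated outside Spark; the sketch below uses SQLite (not Spark) purely to show an INSERT whose SELECT source ends with ORDER BY and LIMIT while a column DEFAULT fills the unlisted column. All table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Target table with a column DEFAULT, and a source table to select from.
cur.execute("CREATE TABLE target (id INTEGER, note TEXT DEFAULT 'n/a')")
cur.execute("CREATE TABLE source (id INTEGER)")
cur.executemany("INSERT INTO source VALUES (?)", [(3,), (1,), (2,)])
# INSERT whose SELECT source ends with ORDER BY and LIMIT -- the clause
# combination the PR allows in the INSERT source query.
cur.execute("INSERT INTO target (id) SELECT id FROM source ORDER BY id LIMIT 2")
rows = cur.execute("SELECT id, note FROM target ORDER BY id").fetchall()
print(rows)  # [(1, 'n/a'), (2, 'n/a')]
```

The two smallest ids are inserted, and the unlisted `note` column is filled from its default.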
ueshin commented on code in PR #40692:
URL: https://github.com/apache/spark/pull/40692#discussion_r1160985748
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkResult.scala:
##
@@ -60,13 +61,19 @@ private[sql] class SparkResult[T](
zhenlineo closed pull request #40274: [SPARK-42215][CONNECT] Simplify Scala
Client IT tests
URL: https://github.com/apache/spark/pull/40274
jiangxb1987 commented on code in PR #40690:
URL: https://github.com/apache/spark/pull/40690#discussion_r1160971328
##
core/src/main/scala/org/apache/spark/MapOutputTracker.scala:
##
@@ -157,22 +164,29 @@ private class ShuffleStatus(
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1160970094
##
python/pyspark/sql/dataframe.py:
##
@@ -3928,6 +3928,71 @@ def dropDuplicates(self, subset: Optional[List[str]] =
None) -> "DataFrame":
jdf =
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1160967676
##
python/pyspark/sql/dataframe.py:
##
@@ -3928,6 +3928,71 @@ def dropDuplicates(self, subset: Optional[List[str]] =
None) -> "DataFrame":
jdf =
jiangxb1987 commented on PR #40690:
URL: https://github.com/apache/spark/pull/40690#issuecomment-1500652566
This happens on a benchmark job generating a large number of very tiny blocks. When the job is finished, the cluster tries to shut down the idle executors and migrate all the blocks
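For context, the block migration described here is governed by Spark's decommissioning settings; a rough sketch of the relevant flags (property names assumed from Spark 3.1+, values illustrative only):

```properties
# spark-defaults.conf (sketch)
spark.decommission.enabled                          true
spark.storage.decommission.enabled                  true
spark.storage.decommission.rddBlocks.enabled        true
spark.storage.decommission.shuffleBlocks.enabled    true
```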
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1160964896
##
python/pyspark/sql/dataframe.py:
##
@@ -3928,6 +3928,71 @@ def dropDuplicates(self, subset: Optional[List[str]] =
None) -> "DataFrame":
jdf =
amaliujia commented on code in PR #40692:
URL: https://github.com/apache/spark/pull/40692#discussion_r1160953776
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkResult.scala:
##
@@ -60,13 +61,19 @@ private[sql] class SparkResult[T](
dongjoon-hyun commented on PR #40709:
URL: https://github.com/apache/spark/pull/40709#issuecomment-1500625015
Yes, correct. Apache Spark 3.2.0+ uses SBT 1.5.0+ via SPARK-34959.
WweiL commented on PR #40691:
URL: https://github.com/apache/spark/pull/40691#issuecomment-1500623778
CC @rangadi @pengzhon-db
warrenzhu25 commented on code in PR #39280:
URL: https://github.com/apache/spark/pull/39280#discussion_r1160949314
##
core/src/main/scala/org/apache/spark/internal/config/package.scala:
##
@@ -2242,6 +2242,16 @@ package object config {
.checkValue(_ >= 0, "needs to be a
dongjoon-hyun commented on PR #39280:
URL: https://github.com/apache/spark/pull/39280#issuecomment-1500617216
Gentle ping @Ngone51 once more.
dongjoon-hyun commented on code in PR #39280:
URL: https://github.com/apache/spark/pull/39280#discussion_r1160945224
##
core/src/main/scala/org/apache/spark/internal/config/package.scala:
##
@@ -2242,6 +2242,16 @@ package object config {
.checkValue(_ >= 0, "needs to be
dongjoon-hyun opened a new pull request, #40709:
URL: https://github.com/apache/spark/pull/40709
### What changes were proposed in this pull request?
This PR aims to upgrade `sbt-unidoc` to 0.5.0.
### Why are the changes needed?
Since v0.5.0, organization has moved from
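For reference, the upgrade amounts to a one-line sbt plugin change, roughly as below (coordinates assumed from the plugin's move to the `com.github.sbt` organization in 0.5.0):

```scala
// project/plugins.sbt (sketch)
addSbtPlugin("com.github.sbt" % "sbt-unidoc" % "0.5.0")
```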
dongjoon-hyun commented on PR #40708:
URL: https://github.com/apache/spark/pull/40708#issuecomment-1500594750
I tested this manually. Merged to master/3.4/3.3/3.2.
dongjoon-hyun closed pull request #40708: [SPARK-43069][BUILD] Use
`sbt-eclipse` instead of `sbteclipse-plugin`
URL: https://github.com/apache/spark/pull/40708
dongjoon-hyun commented on PR #40708:
URL: https://github.com/apache/spark/pull/40708#issuecomment-1500587401
Thank you, @viirya . The description is fixed now.
viirya commented on PR #40708:
URL: https://github.com/apache/spark/pull/40708#issuecomment-1500583121
> This PR aims to use set-eclipse instead of sbteclipse-plugin.
One typo `set-eclipse` in the description.
ueshin commented on code in PR #40015:
URL: https://github.com/apache/spark/pull/40015#discussion_r1160908837
##
python/pyspark/sql/connect/plan.py:
##
@@ -1830,14 +1831,24 @@ def plan(self, session: "SparkConnectClient") ->
proto.Relation:
class CacheTable(LogicalPlan):
dongjoon-hyun commented on PR #40708:
URL: https://github.com/apache/spark/pull/40708#issuecomment-1500561170
Could you review this, @viirya ?
Although the build system seems to be recovering now, I want to reduce the
chance of failures in the future by switching the repo.
anishshri-db commented on PR #40696:
URL: https://github.com/apache/spark/pull/40696#issuecomment-150022
@HeartSaVioR - all tests passed. Please merge when you get a chance. Thx
dongjoon-hyun opened a new pull request, #40708:
URL: https://github.com/apache/spark/pull/40708
### What changes were proposed in this pull request?
This PR aims to use `set-eclipse` instead of `sbteclipse-plugin`.
### Why are the changes needed?
Thanks to SPARK-34959,
amaliujia commented on PR #40315:
URL: https://github.com/apache/spark/pull/40315#issuecomment-1500531162
LGTM
amaliujia commented on PR #40656:
URL: https://github.com/apache/spark/pull/40656#issuecomment-1500518103
late LGTM!
RyanBerti commented on PR #40615:
URL: https://github.com/apache/spark/pull/40615#issuecomment-1500518140
@dtenedor FYI, I updated the tests and am just missing one for empty input
table, and one for merging sparse/dense sketches. Once I get the build to be
green, I'm going to remove the
dongjoon-hyun commented on PR #40688:
URL: https://github.com/apache/spark/pull/40688#issuecomment-1500503549
Maybe, no? If this is not working properly before, we cannot enable this
configuration at Apache Spark 3.5.0. Since we need to wait for one release
cycle, we may be able to do that
clownxc closed pull request #40703: [SPARK-43033][SQL] Avoid task retries due
to AssertNotNull checks
URL: https://github.com/apache/spark/pull/40703
clownxc opened a new pull request, #40707:
URL: https://github.com/apache/spark/pull/40707
## What changes were proposed in this pull request?
This PR updates the task retry logic to not retry if the exception has an error class indicating a user error.
## Why are the changes
ming95 commented on PR #40688:
URL: https://github.com/apache/spark/pull/40688#issuecomment-1500479527
One more question: is it time to make the default value of `SQLConf.COALESCE_BUCKETS_IN_JOIN_ENABLED` true?
ming95 commented on code in PR #40688:
URL: https://github.com/apache/spark/pull/40688#discussion_r1160843998
##
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/InsertAdaptiveSparkPlan.scala:
##
@@ -60,6 +61,7 @@ case class InsertAdaptiveSparkPlan(
anishshri-db commented on PR #40696:
URL: https://github.com/apache/spark/pull/40696#issuecomment-1500446559
> Could you please rebase so that CI is retriggered? If the new trial fails
again, maybe good to post to dev@ and see whether someone encountered this
before, and/or someone is
aokolnychyi commented on PR #40308:
URL: https://github.com/apache/spark/pull/40308#issuecomment-1500441091
Failures don't seem to be related.
rangadi closed pull request #40373: [Draft] Streaming Spark Connect POC
URL: https://github.com/apache/spark/pull/40373
rangadi commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1160800555
##
python/pyspark/sql/dataframe.py:
##
@@ -3928,6 +3928,71 @@ def dropDuplicates(self, subset: Optional[List[str]] =
None) -> "DataFrame":
jdf =
warrenzhu25 commented on PR #38852:
URL: https://github.com/apache/spark/pull/38852#issuecomment-1500375839
@holdenk @dongjoon-hyun @Ngone51 Help take a look?
warrenzhu25 commented on code in PR #39280:
URL: https://github.com/apache/spark/pull/39280#discussion_r1160765918
##
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala:
##
@@ -102,6 +103,15 @@ class
cloud-fan commented on PR #40701:
URL: https://github.com/apache/spark/pull/40701#issuecomment-1500331931
https://github.com/apache/spark/pull/40437 might be related. We want to
remove `hiveResultString` from CLI and only use it in hive compatibility tests.
itholic opened a new pull request, #40706:
URL: https://github.com/apache/spark/pull/40706
### What changes were proposed in this pull request?
This PR proposes to migrate TypeError from DataFrame(Reader|Writer) into
error class
### Why are the changes needed?
Improve
cloud-fan commented on code in PR #40697:
URL: https://github.com/apache/spark/pull/40697#discussion_r1160503755
##
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala:
##
@@ -750,37 +750,29 @@ case class WholeStageCodegenExec(child:
HeartSaVioR commented on PR #40705:
URL: https://github.com/apache/spark/pull/40705#issuecomment-1500261827
This was introduced in 3.4, so it would be ideal to land the fix in 3.4, but the possibility of triggering the bug is very low, hence probably not urgent.
HeartSaVioR opened a new pull request, #40705:
URL: https://github.com/apache/spark/pull/40705
### What changes were proposed in this pull request?
This PR moves the error class resource file in Kafka connector from test to
src, so that error class works without test artifacts.
MaxGekk opened a new pull request, #40704:
URL: https://github.com/apache/spark/pull/40704
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
clownxc opened a new pull request, #40703:
URL: https://github.com/apache/spark/pull/40703
## What changes were proposed in this pull request?
This PR updates the task retry logic to not retry if the exception has an error class indicating a user error.
## Why are the changes needed?
LuciferYang commented on code in PR #40352:
URL: https://github.com/apache/spark/pull/40352#discussion_r1160631080
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1073,6 +1074,91 @@ class SparkConnectPlanner(val
LuciferYang commented on code in PR #40352:
URL: https://github.com/apache/spark/pull/40352#discussion_r1160620933
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/DataFrameStatFunctions.scala:
##
@@ -584,6 +585,86 @@ final class DataFrameStatFunctions
beliefer commented on PR #40697:
URL: https://github.com/apache/spark/pull/40697#issuecomment-1500156450
> @beliefer This is not a performance feature. It's just to avoid people
making mistakes referencing extra objects in the closure, which can slow down
task serialization and increase
LuciferYang commented on PR #40352:
URL: https://github.com/apache/spark/pull/40352#issuecomment-1500147099
GA failure is not related to the current PR
LuciferYang commented on PR #40352:
URL: https://github.com/apache/spark/pull/40352#issuecomment-1500146157
In the last commit, I made `BloomFilterAggregate` explicitly support `IntegerType`/`ShortType`/`ByteType` and added corresponding updaters, then removed passing `dataType` and adding cast
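The idea of widening narrow integral types to one canonical representation before hashing can be sketched in plain Python (illustration only; `TinyBloomFilter` is made up and is not Spark's `BloomFilterAggregate`):

```python
import hashlib

class TinyBloomFilter:
    """Plain-Python bloom-filter sketch (not Spark's implementation)."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _positions(self, value):
        # Widen any integral value (byte/short/int-sized) to a canonical
        # 8-byte long before hashing, so 5 stored as a byte and 5 stored
        # as an int probe the same bit positions.
        data = int(value).to_bytes(8, "little", signed=True)
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(bytes([seed]) + data).digest()
            yield int.from_bytes(digest[:8], "little") % self.num_bits

    def put(self, value):
        for pos in self._positions(value):
            self.bits |= 1 << pos

    def might_contain(self, value):
        return all(self.bits >> pos & 1 for pos in self._positions(value))

bf = TinyBloomFilter()
for v in (1, 300, 70000):  # values fitting byte, short, and int ranges
    bf.put(v)
assert all(bf.might_contain(v) for v in (1, 300, 70000))  # no false negatives
```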
HeartSaVioR opened a new pull request, #40702:
URL: https://github.com/apache/spark/pull/40702
### What changes were proposed in this pull request?
This PR proposes to add test for dropDuplicates in JavaDatasetSuite.
### Why are the changes needed?
The API dropDuplicates
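The contract such a test exercises can be sketched in plain Python (an illustration only, not Spark code; Spark keeps an arbitrary row per duplicate key, while this sketch keeps the first seen):

```python
def drop_duplicates(rows, subset=None):
    """Keep one row per distinct key: dedupe on all columns by default,
    or on the given subset of columns (first-seen row wins here)."""
    seen = set()
    kept = []
    for row in rows:
        cols = subset if subset is not None else sorted(row)
        key = tuple(row[c] for c in cols)
        if key not in seen:
            seen.add(key)
            kept.append(row)
    return kept

rows = [
    {"user": "a", "event": 1},
    {"user": "a", "event": 2},  # dropped: same "user" key as the first row
    {"user": "b", "event": 1},
]
assert drop_duplicates(rows, subset=["user"]) == [
    {"user": "a", "event": 1},
    {"user": "b", "event": 1},
]
```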
Yikf commented on PR #40437:
URL: https://github.com/apache/spark/pull/40437#issuecomment-150019
After code validation, ThriftServerQueryTestSuite and SQLQueryTestSuite depend on golden files; if the golden file follows the format of df.show (the format of df.show depends on the
yaooqinn commented on PR #40697:
URL: https://github.com/apache/spark/pull/40697#issuecomment-1500115645
`PartitionEvaluator` looks better to me, although I don't have a strong opinion either way.
yaooqinn commented on PR #40437:
URL: https://github.com/apache/spark/pull/40437#issuecomment-1500113298
Adjusting `df.show` may need to change the output of `show` first. Some data
values do not have a nice string representation yet
wangyum commented on PR #40114:
URL: https://github.com/apache/spark/pull/40114#issuecomment-1500110950
Date | No. of queries optimized by this patch | No. of total queries
-- | -- | --
2023/4/5 | 62 | 167608
2023/4/4 | 139 | 203393
2023/4/3 | 62 | 191147
2023/4/2 | 14 |
AngersZh commented on PR #40315:
URL: https://github.com/apache/spark/pull/40315#issuecomment-1500104571
@amaliujia Like the current version? Also pinging @HyukjinKwon
LuciferYang commented on code in PR #40352:
URL: https://github.com/apache/spark/pull/40352#discussion_r1160566049
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/BloomFilterAggregate.scala:
##
@@ -78,7 +79,7 @@ case class
LuciferYang commented on code in PR #40352:
URL: https://github.com/apache/spark/pull/40352#discussion_r1160563946
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1154,6 +1155,91 @@ class SparkConnectPlanner(val
Hisoka-X commented on code in PR #40632:
URL: https://github.com/apache/spark/pull/40632#discussion_r1160562342
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala:
##
@@ -1404,8 +1404,8 @@ private[sql] object QueryExecutionErrors extends
AngersZh commented on code in PR #40701:
URL: https://github.com/apache/spark/pull/40701#discussion_r1160561169
##
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLDriver.scala:
##
@@ -65,8 +66,13 @@ private[hive] class SparkSQLDriver(val
MaxGekk commented on code in PR #39937:
URL: https://github.com/apache/spark/pull/39937#discussion_r1160535300
##
sql/core/src/test/scala/org/apache/spark/sql/sources/InsertSuite.scala:
##
@@ -776,38 +808,62 @@ class InsertSuite extends DataSourceTest with
SharedSparkSession {
LuciferYang commented on code in PR #40352:
URL: https://github.com/apache/spark/pull/40352#discussion_r1160557436
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1073,6 +1074,91 @@ class SparkConnectPlanner(val
LuciferYang commented on code in PR #40605:
URL: https://github.com/apache/spark/pull/40605#discussion_r1160556535
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/CheckConnectJvmClientCompatibility.scala:
##
@@ -62,15 +62,29 @@ object
cloud-fan commented on PR #40437:
URL: https://github.com/apache/spark/pull/40437#issuecomment-1500082745
also cc @AngersZh
cloud-fan commented on code in PR #40701:
URL: https://github.com/apache/spark/pull/40701#discussion_r1160551428
##
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLDriver.scala:
##
@@ -65,8 +66,13 @@ private[hive] class SparkSQLDriver(val
cloud-fan commented on code in PR #40701:
URL: https://github.com/apache/spark/pull/40701#discussion_r1160551276
##
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLDriver.scala:
##
@@ -65,8 +66,13 @@ private[hive] class SparkSQLDriver(val
AngersZh commented on PR #40314:
URL: https://github.com/apache/spark/pull/40314#issuecomment-1500073974
ping @dongjoon-hyun @HyukjinKwon @attilapiros @srowen
Yikf commented on PR #40437:
URL: https://github.com/apache/spark/pull/40437#issuecomment-1500073818
> I'm looking for consistency. `df.show` is what users see, and
`hiveResultString` is for golden files. Shouldn't the golden file match what
users really see? Why do we test something that
LuciferYang commented on code in PR #40605:
URL: https://github.com/apache/spark/pull/40605#discussion_r1160542658
##
dev/connect-jvm-client-mima-check:
##
@@ -34,20 +34,18 @@ fi
rm -f .connect-mima-check-result
-echo "Build sql module, connect-client-jvm module and