aokolnychyi commented on PR #40655:
URL: https://github.com/apache/spark/pull/40655#issuecomment-1496932417
@gengliangwang, got it. I was initially concerned as well but I believe this
is the right thing to do after we discussed it.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
wankunde commented on code in PR #40523:
URL: https://github.com/apache/spark/pull/40523#discussion_r1158021949
##
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala:
##
@@ -1036,8 +1036,17 @@ case class SortMergeJoinExec(
val
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157988309
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -271,32 +271,37 @@ case class
HyukjinKwon closed pull request #40669: [SPARK-42983][CONNECT][PYTHON] Fix
createDataFrame to handle 0-dim numpy array properly
URL: https://github.com/apache/spark/pull/40669
HyukjinKwon commented on PR #40669:
URL: https://github.com/apache/spark/pull/40669#issuecomment-1496856632
Merged to master and branch-3.4.
cloud-fan commented on code in PR #40662:
URL: https://github.com/apache/spark/pull/40662#discussion_r1157966718
##
sql/core/src/test/resources/tpcds-plan-stability/approved-plans-modified/q27.sf100/explain.txt:
##
@@ -209,208 +209,208 @@ Aggregate Attributes [4]:
cloud-fan commented on code in PR #40662:
URL: https://github.com/apache/spark/pull/40662#discussion_r1157966389
##
sql/core/src/test/resources/tpcds-plan-stability/approved-plans-modified/q27.sf100/explain.txt:
##
@@ -209,208 +209,208 @@ Aggregate Attributes [4]:
cloud-fan commented on code in PR #40662:
URL: https://github.com/apache/spark/pull/40662#discussion_r1157965959
##
sql/core/src/test/resources/sql-tests/analyzer-results/subquery/in-subquery/in-with-cte.sql.out:
##
@@ -198,23 +198,20 @@ WithCTE
:: : :
gengliangwang commented on PR #40655:
URL: https://github.com/apache/spark/pull/40655#issuecomment-1496846899
@aokolnychyi Yes I got it. My concern was around the behavior change. I am
OK with the idea and merging this one.
HyukjinKwon commented on PR #40671:
URL: https://github.com/apache/spark/pull/40671#issuecomment-1496816167
cc @allanf-db @zhengruifeng @ueshin FYI
cloud-fan commented on PR #40124:
URL: https://github.com/apache/spark/pull/40124#issuecomment-1496815727
thanks, merging to master!
HyukjinKwon opened a new pull request, #40671:
URL: https://github.com/apache/spark/pull/40671
### What changes were proposed in this pull request?
This PR clarifies Spark Connect option to be consistent with other sections.
### Why are the changes needed?
To be
cloud-fan closed pull request #40124: [SPARK-37980][SQL] Access row_index via
_metadata if possible in tests
URL: https://github.com/apache/spark/pull/40124
cloud-fan commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157929474
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/percentiles.scala:
##
@@ -155,7 +156,7 @@ abstract class PercentileBase
}
HyukjinKwon closed pull request #40670: [MINOR][PYTHON][CONNECT][DOCS]
Deduplicate versionchanged directive in Catalog
URL: https://github.com/apache/spark/pull/40670
HyukjinKwon commented on PR #40670:
URL: https://github.com/apache/spark/pull/40670#issuecomment-1496811319
Merged to master and branch-3.4.
hvanhovell commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157925662
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/percentiles.scala:
##
@@ -155,7 +156,7 @@ abstract class PercentileBase
}
cloud-fan commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157924616
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/percentiles.scala:
##
@@ -155,7 +156,7 @@ abstract class PercentileBase
}
hvanhovell commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157922948
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/percentiles.scala:
##
@@ -155,7 +156,7 @@ abstract class PercentileBase
}
cloud-fan commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157921993
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/percentiles.scala:
##
@@ -155,7 +156,7 @@ abstract class PercentileBase
}
cloud-fan commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157911591
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala:
##
@@ -17,53 +17,204 @@
package org.apache.spark.sql.catalyst.types
amaliujia commented on code in PR #40611:
URL: https://github.com/apache/spark/pull/40611#discussion_r1157907417
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowSerializer.scala:
##
@@ -0,0 +1,529 @@
+/*
+ * Licensed to the Apache
HyukjinKwon commented on PR #40670:
URL: https://github.com/apache/spark/pull/40670#issuecomment-1496775494
cc @zhengruifeng @ueshin FYI
HyukjinKwon opened a new pull request, #40670:
URL: https://github.com/apache/spark/pull/40670
### What changes were proposed in this pull request?
This PR proposes to deduplicate versionchanged directive in Catalog.
### Why are the changes needed?
All API is implemented
aokolnychyi commented on PR #40655:
URL: https://github.com/apache/spark/pull/40655#issuecomment-1496774141
@gengliangwang, this PR is based on the consensus we reached in
[this](https://github.com/apache/spark/pull/40308#discussion_r1127081206)
thread. Each approach has its own pros/cons.
HyukjinKwon closed pull request #40666: [SPARK-43009][SQL][3.4] Parameterized
`sql()` with `Any` constants
URL: https://github.com/apache/spark/pull/40666
HyukjinKwon commented on PR #40666:
URL: https://github.com/apache/spark/pull/40666#issuecomment-1496753586
Merged to branch-3.4.
HyukjinKwon commented on code in PR #40664:
URL: https://github.com/apache/spark/pull/40664#discussion_r1157883873
##
dev/infra/Dockerfile:
##
@@ -64,8 +64,8 @@ RUN Rscript -e "devtools::install_version('roxygen2',
version='7.2.0', repos='ht
# See more in SPARK-39735
ENV
HyukjinKwon commented on PR #40665:
URL: https://github.com/apache/spark/pull/40665#issuecomment-1496750694
cc @itholic @zhengruifeng @xinrong-meng @Yikun if you find some time to
review.
hvanhovell commented on PR #40649:
URL: https://github.com/apache/spark/pull/40649#issuecomment-1496739368
@Hisoka-X thanks for the write up. We should be able to support most of this
at the moment. GRPC supports this type of execution out of the box. The reason
we did not really go for
zhengruifeng commented on PR #40607:
URL: https://github.com/apache/spark/pull/40607#issuecomment-1496734980
In my local env, the failed test can pass even with a bigger model size,
but let me try to reduce the model size for GA to see what will happen.
gengliangwang commented on PR #40655:
URL: https://github.com/apache/spark/pull/40655#issuecomment-1496724854
@aokolnychyi @cloud-fan I am +0 for changing the behavior since I haven't
heard complaints about this from end-users. Instead, relaxing the strict
compiler check can bring
gengliangwang commented on code in PR #40655:
URL: https://github.com/apache/spark/pull/40655#discussion_r1157855435
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TableOutputResolver.scala:
##
@@ -130,38 +128,93 @@ object TableOutputResolver {
}
ueshin opened a new pull request, #40669:
URL: https://github.com/apache/spark/pull/40669
### What changes were proposed in this pull request?
Fix `createDataFrame` to handle 0-dim numpy array properly.
### Why are the changes needed?
When 0-dim numpy array is passed to
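The 0-dim case referenced above can be seen directly in numpy; a minimal sketch, independent of Spark (`np.atleast_1d` is shown only as one plausible normalization, not the fix the PR actually uses):

```python
import numpy as np

# A 0-dimensional numpy array wraps a single scalar but is not a plain
# Python scalar, which is the shape of input that tripped up createDataFrame.
zero_dim = np.array(42)

print(zero_dim.ndim)    # 0
print(zero_dim.shape)   # ()
# .item() extracts the underlying Python scalar.
print(zero_dim.item())  # 42
# np.atleast_1d promotes it to a one-element 1-D array.
print(np.atleast_1d(zero_dim).shape)  # (1,)
```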
WweiL commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1157836737
##
python/pyspark/sql/connect/streaming/readwriter.py:
##
@@ -0,0 +1,484 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
aokolnychyi commented on PR #40655:
URL: https://github.com/apache/spark/pull/40655#issuecomment-1496675818
Ok, all tests have been adapted. This PR is ready for a detailed review.
pengzhon-db commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1157815748
##
python/pyspark/sql/connect/streaming/query.py:
##
@@ -0,0 +1,161 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
dtenedor commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157804913
##
sql/core/src/test/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumnsSuite.scala:
##
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software
shardulm94 commented on PR #40637:
URL: https://github.com/apache/spark/pull/40637#issuecomment-1496640034
Thanks @ShreyeshArangath for this! I think it helps clear a lot of
unnecessary noise from user logs and keeps the logs manageable.
One thing I noticed is that we set
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r115619
##
sql/core/src/test/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumnsSuite.scala:
##
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r115170
##
sql/core/src/test/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumnsSuite.scala:
##
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software
justaparth closed pull request #40668: spark protobuf: add materializeDefaults
option to spark-protobuf
URL: https://github.com/apache/spark/pull/40668
justaparth opened a new pull request, #40668:
URL: https://github.com/apache/spark/pull/40668
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
dtenedor commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157765220
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -271,32 +271,33 @@ case class ResolveDefaultColumns(catalog:
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157763648
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -271,33 +271,45 @@ case class
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157763473
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -271,32 +271,33 @@ case class
dtenedor commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157749910
##
sql/core/src/test/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumnsSuite.scala:
##
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software
dtenedor commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157749445
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -271,32 +271,33 @@ case class ResolveDefaultColumns(catalog:
dtenedor commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157747931
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -271,32 +271,33 @@ case class ResolveDefaultColumns(catalog:
aokolnychyi commented on code in PR #40655:
URL: https://github.com/apache/spark/pull/40655#discussion_r1157744682
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TableOutputResolver.scala:
##
@@ -130,38 +128,93 @@ object TableOutputResolver {
}
}
Kimahriman commented on PR #34558:
URL: https://github.com/apache/spark/pull/34558#issuecomment-1496583248
> There seems to be a lot of repetition. Wish it could be avoided somehow
but can't help though (beside nit-picking).
Thanks for the review! I tried to get as much common code
Kimahriman commented on code in PR #34558:
URL: https://github.com/apache/spark/pull/34558#discussion_r1157743929
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/higherOrderFunctions.scala:
##
@@ -101,6 +101,14 @@ case class NamedLambdaVariable(
Kimahriman commented on code in PR #34558:
URL: https://github.com/apache/spark/pull/34558#discussion_r1157743684
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala:
##
@@ -172,6 +172,40 @@ class CodegenContext extends Logging {
dongjoon-hyun commented on code in PR #40655:
URL: https://github.com/apache/spark/pull/40655#discussion_r1157717949
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TableOutputResolver.scala:
##
@@ -130,38 +128,93 @@ object TableOutputResolver {
}
dongjoon-hyun commented on PR #40645:
URL: https://github.com/apache/spark/pull/40645#issuecomment-1496541323
Sorry for misleading you. You are right about timezone. What I imagined was
more like the following case.
```
$ docker run -it --rm --cap-add SYS_TIME openjdk:latest bash
ksumit opened a new pull request, #40667:
URL: https://github.com/apache/spark/pull/40667
### What changes were proposed in this pull request?
Building the project against jdk11 in an IDE shows errors because
`Platform.java` depends on `sun.misc` which is in `jdk.unsupported` module in
amaliujia commented on PR #40586:
URL: https://github.com/apache/spark/pull/40586#issuecomment-1496448770
The proto side overall looks good.
amaliujia commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1157644345
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,126 @@ message WriteOperationV2 {
// (Optional) A condition for
amaliujia commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157641944
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala:
##
@@ -17,53 +17,234 @@
package org.apache.spark.sql.catalyst.types
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157637643
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -271,32 +271,33 @@ case class
hvanhovell commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157632963
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala:
##
@@ -17,53 +17,234 @@
package org.apache.spark.sql.catalyst.types
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157631790
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -271,32 +271,33 @@ case class
amaliujia commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157631385
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala:
##
@@ -17,53 +17,234 @@
package org.apache.spark.sql.catalyst.types
amaliujia commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157629600
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala:
##
@@ -17,53 +17,234 @@
package org.apache.spark.sql.catalyst.types
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157627009
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##
@@ -271,32 +271,33 @@ case class
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157622700
##
sql/core/src/test/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumnsSuite.scala:
##
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software
dtenedor commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157620766
##
sql/core/src/test/scala/org/apache/spark/sql/ResolveDefaultColumnsSuite.scala:
##
Review Comment:
NP, moved to this package instead.
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157616471
##
sql/core/src/test/scala/org/apache/spark/sql/ResolveDefaultColumnsSuite.scala:
##
Review Comment:
I meant
```
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157615804
##
sql/core/src/test/scala/org/apache/spark/sql/ResolveDefaultColumnsSuite.scala:
##
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
dtenedor commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157610703
##
sql/core/src/test/scala/org/apache/spark/sql/ResolveDefaultColumnsSuite.scala:
##
Review Comment:
Yes, good point! Moved.
##
Kimahriman commented on PR #32987:
URL: https://github.com/apache/spark/pull/32987#issuecomment-1496399849
Threw together a quick script to get some rough numbers. Did two types of
queries, one doing a `sqrt` and one doing a `regexp_extract` to test a simple
numeric thing and a more
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157602703
##
sql/core/src/test/scala/org/apache/spark/sql/ResolveDefaultColumnsSuite.scala:
##
Review Comment:
This should be under `sql/catalyst`, right?
gengliangwang commented on code in PR #40652:
URL: https://github.com/apache/spark/pull/40652#discussion_r1157603360
##
sql/core/src/test/scala/org/apache/spark/sql/ResolveDefaultColumnsSuite.scala:
##
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
hvanhovell commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157602019
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala:
##
@@ -17,53 +17,234 @@
package org.apache.spark.sql.catalyst.types
hvanhovell commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157601361
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala:
##
@@ -17,53 +17,234 @@
package org.apache.spark.sql.catalyst.types
amaliujia commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157600655
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala:
##
@@ -17,53 +17,234 @@
package org.apache.spark.sql.catalyst.types
amaliujia commented on code in PR #40651:
URL: https://github.com/apache/spark/pull/40651#discussion_r1157598354
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala:
##
@@ -17,53 +17,234 @@
package org.apache.spark.sql.catalyst.types
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1157597665
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,126 @@ message WriteOperationV2 {
// (Optional) A condition for
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1157385141
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,118 @@ message WriteOperationV2 {
// (Optional) A condition for
tgravescs commented on PR #40622:
URL: https://github.com/apache/spark/pull/40622#issuecomment-1496383011
definitely looks like a typo, thanks for catching and fixing
MaxGekk commented on PR #40623:
URL: https://github.com/apache/spark/pull/40623#issuecomment-1496368421
Here is the backport to `branch-3.4`:
https://github.com/apache/spark/pull/40666
yliou commented on PR #35939:
URL: https://github.com/apache/spark/pull/35939#issuecomment-1496353590
@dependabot reopen
MaxGekk opened a new pull request, #40666:
URL: https://github.com/apache/spark/pull/40666
### What changes were proposed in this pull request?
In the PR, I propose to change API of parameterized SQL, and replace type of
argument values from `string` to `Any` in Scala/Java/Python and
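The "Any constants" idea above can be illustrated generically: the engine turns native values into typed SQL literals instead of accepting only pre-rendered strings. The helper below is hypothetical, for illustration only, and is not Spark's implementation:

```python
def to_sql_literal(value):
    """Hypothetical sketch: render a native Python value as a SQL literal."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):  # check bool before int (bool subclasses int)
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, str):
        # Escape embedded single quotes by doubling them.
        return "'" + value.replace("'", "''") + "'"
    raise TypeError(f"unsupported parameter type: {type(value).__name__}")

print(to_sql_literal(42))         # 42
print(to_sql_literal("O'Brien"))  # 'O''Brien'
```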
dtenedor commented on PR #40652:
URL: https://github.com/apache/spark/pull/40652#issuecomment-1496340472
Hi @gengliangwang here is the correctness bug fix
shrprasa commented on PR #40258:
URL: https://github.com/apache/spark/pull/40258#issuecomment-1496311839
Thanks a lot @cloud-fan for the guidance and support in getting this issue
fixed.
aokolnychyi commented on PR #40655:
URL: https://github.com/apache/spark/pull/40655#issuecomment-1496292594
@dongjoon-hyun, let me look into test failures.
dzhigimont opened a new pull request, #40665:
URL: https://github.com/apache/spark/pull/40665
### What changes were proposed in this pull request?
Add inclusive parameter for pd.date_range to support the pandas 2.0.0
### Why are the changes needed?
When pandas 2.0.0 is released,
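For context on the `inclusive` parameter mentioned above, a minimal sketch (dates are illustrative; `inclusive` replaced the older `closed` argument, which pandas 2.0 removed):

```python
import pandas as pd

# `inclusive` controls whether the range endpoints are kept.
both = pd.date_range("2023-01-01", "2023-01-05", inclusive="both")
left = pd.date_range("2023-01-01", "2023-01-05", inclusive="left")
neither = pd.date_range("2023-01-01", "2023-01-05", inclusive="neither")

print(len(both))     # 5: both endpoints included
print(len(left))     # 4: right endpoint dropped
print(len(neither))  # 3: both endpoints dropped
```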
aokolnychyi commented on code in PR #40655:
URL: https://github.com/apache/spark/pull/40655#discussion_r1157516765
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TableOutputResolver.scala:
##
@@ -130,38 +128,93 @@ object TableOutputResolver {
}
}
srielau commented on code in PR #40641:
URL: https://github.com/apache/spark/pull/40641#discussion_r1157492500
##
core/src/main/resources/error/error-classes.json:
##
@@ -542,6 +542,12 @@
],
"sqlState" : "22003"
},
+ "ARRAY_INSERT_BY_INDEX_ZERO" : {
+"message"
dzhigimont opened a new pull request, #40664:
URL: https://github.com/apache/spark/pull/40664
### What changes were proposed in this pull request?
The PR proposes to upgrade pandas to 2.0.0
### Why are the changes needed?
Support latest pandas for pandas API on Spark
tanvn commented on PR #38053:
URL: https://github.com/apache/spark/pull/38053#issuecomment-1496253469
@HyukjinKwon @wForget
Hi, may I know the status of this PR?
Would like to take part in this issue as we are facing this while reading
data from an orc partitioned table and do not
Hisoka-X commented on code in PR #40609:
URL: https://github.com/apache/spark/pull/40609#discussion_r1157482628
##
sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala:
##
@@ -625,6 +625,21 @@ class QueryExecutionErrorsSuite
}
}
+
Hisoka-X commented on code in PR #40609:
URL: https://github.com/apache/spark/pull/40609#discussion_r1157481984
##
sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala:
##
@@ -625,6 +625,21 @@ class QueryExecutionErrorsSuite
}
}
+
MaxGekk commented on PR #40623:
URL: https://github.com/apache/spark/pull/40623#issuecomment-1496220564
@cloud-fan I am working on the backport ...
Hisoka-X commented on code in PR #40632:
URL: https://github.com/apache/spark/pull/40632#discussion_r1157457567
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala:
##
@@ -1404,8 +1404,8 @@ private[sql] object QueryExecutionErrors extends
srielau commented on code in PR #38867:
URL: https://github.com/apache/spark/pull/38867#discussion_r1157449697
##
sql/core/src/test/resources/sql-tests/results/array.sql.out:
##
@@ -431,6 +431,104 @@ struct
NULL
+-- !query
+select array_insert(array(1, 2, 3), 3, 4)
+--
srielau commented on code in PR #38867:
URL: https://github.com/apache/spark/pull/38867#discussion_r1157447487
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -4600,3 +4600,155 @@ case class ArrayExcept(left:
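The diffs above exercise `array_insert`, and the `ARRAY_INSERT_BY_INDEX_ZERO` error class elsewhere in this digest shows that position 0 is rejected. A rough Python sketch of the 1-based, positive-position semantics only (illustrative, not Spark's Scala code; negative-position handling is omitted because its semantics are version-dependent):

```python
def array_insert(arr, pos, item):
    """Insert item at 1-based position pos; position 0 is an error."""
    if pos == 0:
        raise ValueError("array_insert position must not be zero")
    if pos > 0:
        # Convert the 1-based SQL position to a 0-based slice index.
        return arr[:pos - 1] + [item] + arr[pos - 1:]
    raise NotImplementedError("negative positions omitted in this sketch")

# Mirrors the query in the test diff: select array_insert(array(1, 2, 3), 3, 4)
print(array_insert([1, 2, 3], 3, 4))  # [1, 2, 4, 3]
```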
cloud-fan commented on PR #40623:
URL: https://github.com/apache/spark/pull/40623#issuecomment-1496177514
It has conflicts with 3.4, @MaxGekk can you create a backport PR? Thanks!
cloud-fan closed pull request #40623: [SPARK-43009][SQL] Parameterized `sql()`
with `Any` constants
URL: https://github.com/apache/spark/pull/40623