[GitHub] [spark] srielau commented on a diff in pull request #38531: [SPARK-40755][SQL] Migrate type check failures of number formatting onto error classes

2022-11-15 Thread GitBox
srielau commented on code in PR #38531: URL: https://github.com/apache/spark/pull/38531#discussion_r1022941891 ## core/src/main/resources/error/error-classes.json: ## @@ -290,6 +290,46 @@ "Null typed values cannot be used as arguments of ." ] }, +

[GitHub] [spark] tgravescs commented on a diff in pull request #38622: [SPARK-39601][YARN] AllocationFailure should not be treated as exitCausedByApp when driver is shutting down

2022-11-15 Thread GitBox
tgravescs commented on code in PR #38622: URL: https://github.com/apache/spark/pull/38622#discussion_r1022918575 ## resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala: ## @@ -815,6 +815,7 @@ private[spark] class ApplicationMaster( c

[GitHub] [spark-docker] Yikun commented on pull request #23: [SPARK-40519] Add "Publish" workflow to help release apache/spark image

2022-11-15 Thread GitBox
Yikun commented on PR #23: URL: https://github.com/apache/spark-docker/pull/23#issuecomment-1315319292 @HyukjinKwon @martin-g Thanks! Merge to master. Looks like no more feedback about publishing. Maybe let's publish after docker official image review completed.

[GitHub] [spark-docker] Yikun closed pull request #23: [SPARK-40519] Add "Publish" workflow to help release apache/spark image

2022-11-15 Thread GitBox
Yikun closed pull request #23: [SPARK-40519] Add "Publish" workflow to help release apache/spark image URL: https://github.com/apache/spark-docker/pull/23

[GitHub] [spark] peter-toth commented on pull request #38640: [SPARK-41124][SQL][TEST] Add DSv2 PlanStabilitySuites

2022-11-15 Thread GitBox
peter-toth commented on PR #38640: URL: https://github.com/apache/spark/pull/38640#issuecomment-1315310978 > Ah I just realized that there is no way to use v2 parquet table today. Shall we support it first before benchmarking it? I'm ok with switching this benchmark to parquet v2 when

[GitHub] [spark] cloud-fan commented on pull request #38640: [SPARK-41124][SQL][TEST] Add DSv2 PlanStabilitySuites

2022-11-15 Thread GitBox
cloud-fan commented on PR #38640: URL: https://github.com/apache/spark/pull/38640#issuecomment-1315299552 Ah I just realized that there is no way to use v2 parquet table today. Shall we support it first before benchmarking it?

[GitHub] [spark] peter-toth commented on pull request #38640: [SPARK-41124][SQL][TEST] Add DSv2 PlanStabilitySuites

2022-11-15 Thread GitBox
peter-toth commented on PR #38640: URL: https://github.com/apache/spark/pull/38640#issuecomment-1315293892 > @peter-toth there is an easy way to enable parquet v2: set `spark.sql.sources.useV1SourceList` to empty. I thought that config only controls sources when `spark.read.parquet(..

[GitHub] [spark] Ngone51 commented on pull request #38467: [SPARK-40987][CORE] Avoid creating a directory when deleting a block, causing DAGScheduler to not work

2022-11-15 Thread GitBox
Ngone51 commented on PR #38467: URL: https://github.com/apache/spark/pull/38467#issuecomment-1315276964 > BlockInfoManager#blockInfoWrappers block info and lock not removed. Can't we catch the exception from `BlockManager#removeBlockInternal` and release the lock when caught the execp
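
For illustration only, a self-contained Scala sketch of the catch-and-release-lock pattern suggested in this comment. The names here (`BlockId`, `BlockStore`, `removeBlockInternal`, `blockLock`) are hypothetical stand-ins, not the actual `BlockManager`/`BlockInfoManager` code:

```scala
import java.util.concurrent.locks.ReentrantLock
import scala.util.control.NonFatal

// Hypothetical stand-ins for the internals discussed above.
final case class BlockId(name: String)

class BlockStore {
  private val blockLock = new ReentrantLock()

  // Stand-in for a removal step that may throw (e.g. on I/O errors).
  private def removeBlockInternal(id: BlockId): Unit =
    throw new java.io.IOException(s"cannot remove ${id.name}")

  def removeBlock(id: BlockId): Unit = {
    blockLock.lock() // acquire the lock guarding the block
    try {
      removeBlockInternal(id)
      blockLock.unlock() // removal succeeded, release the lock
    } catch {
      case NonFatal(e) =>
        // Failure path: still release the lock so other threads
        // are not blocked on a block that failed to delete.
        blockLock.unlock()
        throw e
    }
  }
}
```

The point of the pattern is simply that the lock acquired for the removal is released on both the success and the failure path, rather than only on success.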

[GitHub] [spark] cloud-fan closed pull request #38627: [SPARK-40875] [CONNECT] [FOLLOW] Retain Group expressions in aggregate.

2022-11-15 Thread GitBox
cloud-fan closed pull request #38627: [SPARK-40875] [CONNECT] [FOLLOW] Retain Group expressions in aggregate. URL: https://github.com/apache/spark/pull/38627

[GitHub] [spark] cloud-fan commented on pull request #38627: [SPARK-40875] [CONNECT] [FOLLOW] Retain Group expressions in aggregate.

2022-11-15 Thread GitBox
cloud-fan commented on PR #38627: URL: https://github.com/apache/spark/pull/38627#issuecomment-1315270191 thanks, merging to master!

[GitHub] [spark] cloud-fan commented on a diff in pull request #38627: [SPARK-40875] [CONNECT] [FOLLOW] Retain Group expressions in aggregate.

2022-11-15 Thread GitBox
cloud-fan commented on code in PR #38627: URL: https://github.com/apache/spark/pull/38627#discussion_r1022752267 ## connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala: ## @@ -441,11 +441,14 @@ class SparkConnectPlanner(session: SparkS

[GitHub] [spark] grundprinzip commented on a diff in pull request #38605: [SPARK-41103][CONNECT][DOC] Document how to add a new proto field of messages

2022-11-15 Thread GitBox
grundprinzip commented on code in PR #38605: URL: https://github.com/apache/spark/pull/38605#discussion_r1022745612 ## connector/connect/docs/adding-proto-messages.md: ## @@ -0,0 +1,41 @@ +# Required, Optional and default values + +Connect adopts proto3, which does not support `

[GitHub] [spark] cloud-fan commented on pull request #38640: [SPARK-41124][SQL][TEST] Add DSv2 PlanStabilitySuites

2022-11-15 Thread GitBox
cloud-fan commented on PR #38640: URL: https://github.com/apache/spark/pull/38640#issuecomment-1315256115 @peter-toth there is an easy way to enable parquet v2: set `spark.sql.sources.useV1SourceList` to empty.
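
For context, a minimal Scala sketch of the suggestion above, assuming a local session; the scratch path `/tmp/parquet_v2_demo` is made up for illustration:

```scala
import org.apache.spark.sql.SparkSession

// Clearing spark.sql.sources.useV1SourceList routes the built-in file
// sources listed in its default value (including parquet) through their
// DataSource V2 implementations for path-based DataFrame reads/writes.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("parquet-v2-sketch")
  .config("spark.sql.sources.useV1SourceList", "")  // empty => use V2 sources
  .getOrCreate()

spark.range(10).write.mode("overwrite").parquet("/tmp/parquet_v2_demo")
spark.read.parquet("/tmp/parquet_v2_demo").show()
```

As the adjacent comments in this thread note, this switches the path-based `spark.read.parquet(...)` code path; whether catalog-backed parquet tables can use V2 at all is exactly the open question being discussed.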

[GitHub] [spark] Ngone51 commented on a diff in pull request #38441: [SPARK-40979][CORE] Keep removed executor info due to decommission

2022-11-15 Thread GitBox
Ngone51 commented on code in PR #38441: URL: https://github.com/apache/spark/pull/38441#discussion_r1022732173 ## core/src/main/scala/org/apache/spark/internal/config/package.scala: ## @@ -2024,6 +2024,16 @@ package object config { .stringConf .createOptional +

[GitHub] [spark] Ngone51 commented on a diff in pull request #38441: [SPARK-40979][CORE] Keep removed executor info due to decommission

2022-11-15 Thread GitBox
Ngone51 commented on code in PR #38441: URL: https://github.com/apache/spark/pull/38441#discussion_r1022731167 ## core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala: ## @@ -2193,9 +2193,11 @@ private[spark] class DAGScheduler( * Return true when: * 1. Wai

[GitHub] [spark] Ngone51 commented on pull request #38441: [SPARK-40979][CORE] Keep removed executor info due to decommission

2022-11-15 Thread GitBox
Ngone51 commented on PR #38441: URL: https://github.com/apache/spark/pull/38441#issuecomment-1315242107 Seems like this PR is addressing my concern at https://github.com/apache/spark/pull/37924#discussion_r990925620. I actually think the original PR won't work in most cases without this PR.

[GitHub] [spark] Ngone51 commented on a diff in pull request #38441: [SPARK-40979][CORE] Keep removed executor info due to decommission

2022-11-15 Thread GitBox
Ngone51 commented on code in PR #38441: URL: https://github.com/apache/spark/pull/38441#discussion_r1022725414 ## core/src/main/scala/org/apache/spark/internal/config/package.scala: ## @@ -2024,6 +2024,16 @@ package object config { .stringConf .createOptional +

[GitHub] [spark] AmplabJenkins commented on pull request #38663: [SPARK-41143][SQL] Add named argument function syntax support

2022-11-15 Thread GitBox
AmplabJenkins commented on PR #38663: URL: https://github.com/apache/spark/pull/38663#issuecomment-1315236539 Can one of the admins verify this patch?

[GitHub] [spark] MaxGekk commented on a diff in pull request #38531: [SPARK-40755][SQL] Migrate type check failures of number formatting onto error classes

2022-11-15 Thread GitBox
MaxGekk commented on code in PR #38531: URL: https://github.com/apache/spark/pull/38531#discussion_r1022706881 ## core/src/main/resources/error/error-classes.json: ## @@ -290,6 +290,46 @@ "Null typed values cannot be used as arguments of ." ] }, +

[GitHub] [spark] peter-toth commented on a diff in pull request #38640: [SPARK-41124][SQL][TEST] Add DSv2 PlanStabilitySuites

2022-11-15 Thread GitBox
peter-toth commented on code in PR #38640: URL: https://github.com/apache/spark/pull/38640#discussion_r1022701848 ## sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala: ## @@ -351,6 +353,62 @@ class TPCDSModifiedPlanStabilityWithStatsSuite extends PlanStabili

[GitHub] [spark] MaxGekk closed pull request #38656: [SPARK-41140][SQL] Rename the error class `_LEGACY_ERROR_TEMP_2440` to `INVALID_WHERE_CONDITION`

2022-11-15 Thread GitBox
MaxGekk closed pull request #38656: [SPARK-41140][SQL] Rename the error class `_LEGACY_ERROR_TEMP_2440` to `INVALID_WHERE_CONDITION` URL: https://github.com/apache/spark/pull/38656

[GitHub] [spark] MaxGekk commented on pull request #38656: [SPARK-41140][SQL] Rename the error class `_LEGACY_ERROR_TEMP_2440` to `INVALID_WHERE_CONDITION`

2022-11-15 Thread GitBox
MaxGekk commented on PR #38656: URL: https://github.com/apache/spark/pull/38656#issuecomment-1315205891 Merging to master. Thank you, @LuciferYang @cloud-fan @srielau @itholic for review.

[GitHub] [spark] grundprinzip commented on a diff in pull request #38631: [SPARK-40809] [CONNECT] [FOLLOW] Support `alias()` in Python client

2022-11-15 Thread GitBox
grundprinzip commented on code in PR #38631: URL: https://github.com/apache/spark/pull/38631#discussion_r1022651386 ## python/pyspark/sql/connect/dataframe.py: ## @@ -44,7 +44,7 @@ from pyspark.sql.connect.typing import ColumnOrString, ExpressionOrString from pyspark.s

[GitHub] [spark] grundprinzip commented on a diff in pull request #38631: [SPARK-40809] [CONNECT] [FOLLOW] Support `alias()` in Python client

2022-11-15 Thread GitBox
grundprinzip commented on code in PR #38631: URL: https://github.com/apache/spark/pull/38631#discussion_r1022651011 ## python/pyspark/sql/connect/dataframe.py: ## @@ -44,7 +44,7 @@ from pyspark.sql.connect.typing import ColumnOrString, ExpressionOrString from pyspark.s

[GitHub] [spark] itholic commented on pull request #38664: [SPARK-41147][SQL] Assign a name to the legacy error class `_LEGACY_ERROR_TEMP_1042`

2022-11-15 Thread GitBox
itholic commented on PR #38664: URL: https://github.com/apache/spark/pull/38664#issuecomment-1315140725 Sure, let me integrate into `DATATYPE_MISMATCH.WRONG_NUM_ARGS`

[GitHub] [spark] fred-db commented on pull request #38497: [SPARK-40999] Hint propagation to subqueries

2022-11-15 Thread GitBox
fred-db commented on PR #38497: URL: https://github.com/apache/spark/pull/38497#issuecomment-1315136114 @allisonwang-db Incorporated all the changes requested, lmk what you think! :)

[GitHub] [spark] MaxGekk commented on a diff in pull request #38531: [SPARK-40755][SQL] Migrate type check failures of number formatting onto error classes

2022-11-15 Thread GitBox
MaxGekk commented on code in PR #38531: URL: https://github.com/apache/spark/pull/38531#discussion_r1022613029 ## core/src/main/resources/error/error-classes.json: ## @@ -290,6 +290,46 @@ "Null typed values cannot be used as arguments of ." ] }, +

[GitHub] [spark] zhengruifeng commented on pull request #38653: [SPARK-41128][CONNECT][PYTHON] Implement `DataFrame.fillna ` and `DataFrame.na.fill `

2022-11-15 Thread GitBox
zhengruifeng commented on PR #38653: URL: https://github.com/apache/spark/pull/38653#issuecomment-1315103593 merged into master, thank you guys

[GitHub] [spark] zhengruifeng closed pull request #38653: [SPARK-41128][CONNECT][PYTHON] Implement `DataFrame.fillna ` and `DataFrame.na.fill `

2022-11-15 Thread GitBox
zhengruifeng closed pull request #38653: [SPARK-41128][CONNECT][PYTHON] Implement `DataFrame.fillna ` and `DataFrame.na.fill ` URL: https://github.com/apache/spark/pull/38653

[GitHub] [spark] WweiL commented on a diff in pull request #38503: [SPARK-40940] Remove Multi-stateful operator checkers for streaming queries.

2022-11-15 Thread GitBox
WweiL commented on code in PR #38503: URL: https://github.com/apache/spark/pull/38503#discussion_r1022251949 ## sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationSuite.scala: ## @@ -190,20 +190,25 @@ class StreamingDeduplicationSuite extends StateStor

[GitHub] [spark] ulysses-you commented on pull request #38619: [SPARK-41112][SQL] RuntimeFilter should apply ColumnPruning eagerly with in-subquery filter

2022-11-15 Thread GitBox
ulysses-you commented on PR #38619: URL: https://github.com/apache/spark/pull/38619#issuecomment-1315058947 thank you @cloud-fan @dongjoon-hyun

[GitHub] [spark] cloud-fan commented on a diff in pull request #38605: [SPARK-41103][CONNECT][DOC] Document how to add a new proto field of messages

2022-11-15 Thread GitBox
cloud-fan commented on code in PR #38605: URL: https://github.com/apache/spark/pull/38605#discussion_r1022545314 ## connector/connect/docs/adding-proto-messages.md: ## @@ -0,0 +1,41 @@ +# Required, Optional and default values + +Connect adopts proto3, which does not support `req

[GitHub] [spark] zhengruifeng commented on a diff in pull request #38653: [SPARK-41128][CONNECT][PYTHON] Implement `DataFrame.fillna ` and `DataFrame.na.fill `

2022-11-15 Thread GitBox
zhengruifeng commented on code in PR #38653: URL: https://github.com/apache/spark/pull/38653#discussion_r1022538668 ## connector/connect/src/main/scala/org/apache/spark/sql/connect/dsl/package.scala: ## @@ -226,6 +226,58 @@ package object dsl { } } +implicit cl

[GitHub] [spark] WweiL commented on a diff in pull request #38503: [SPARK-40940] Remove Multi-stateful operator checkers for streaming queries.

2022-11-15 Thread GitBox
WweiL commented on code in PR #38503: URL: https://github.com/apache/spark/pull/38503#discussion_r1021870148 ## sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationSuite.scala: ## @@ -190,20 +190,25 @@ class StreamingDeduplicationSuite extends StateStor

[GitHub] [spark] cloud-fan commented on a diff in pull request #38653: [SPARK-41128][CONNECT][PYTHON] Implement `DataFrame.fillna ` and `DataFrame.na.fill `

2022-11-15 Thread GitBox
cloud-fan commented on code in PR #38653: URL: https://github.com/apache/spark/pull/38653#discussion_r1022527827 ## connector/connect/src/main/scala/org/apache/spark/sql/connect/dsl/package.scala: ## @@ -226,6 +226,58 @@ package object dsl { } } +implicit class

[GitHub] [spark] cloud-fan commented on a diff in pull request #38495: [SPARK-35531][SQL] Update hive table stats without unnecessary convert

2022-11-15 Thread GitBox
cloud-fan commented on code in PR #38495: URL: https://github.com/apache/spark/pull/38495#discussion_r1022525354 ## sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala: ## @@ -609,6 +609,17 @@ private[hive] class HiveClientImpl( shim.alterTable(cli

[GitHub] [spark] cloud-fan commented on a diff in pull request #38595: [SPARK-41090][SQL] Throw Exception for `db_name.view_name` when creating temp view by Dataset API

2022-11-15 Thread GitBox
cloud-fan commented on code in PR #38595: URL: https://github.com/apache/spark/pull/38595#discussion_r1022521452 ## sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala: ## @@ -542,11 +542,11 @@ private[sql] object QueryParsingErrors extends QueryErr

[GitHub] [spark] cloud-fan commented on a diff in pull request #38595: [SPARK-41090][SQL] Throw Exception for `db_name.view_name` when creating temp view by Dataset API

2022-11-15 Thread GitBox
cloud-fan commented on code in PR #38595: URL: https://github.com/apache/spark/pull/38595#discussion_r1022520366 ## sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala: ## @@ -3799,13 +3798,21 @@ class Dataset[T] private[sql]( global: Boolean): CreateViewCommand =

[GitHub] [spark] wankunde commented on a diff in pull request #38495: [SPARK-35531][SQL] Update hive table stats without unnecessary convert

2022-11-15 Thread GitBox
wankunde commented on code in PR #38495: URL: https://github.com/apache/spark/pull/38495#discussion_r1022516593 ## sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertSuite.scala: ## @@ -894,12 +895,14 @@ class InsertSuite extends QueryTest with TestHiveSingleton with Befor

[GitHub] [spark] cloud-fan commented on a diff in pull request #38595: [SPARK-41090][SQL] Throw Exception for `db_name.view_name` when creating temp view by Dataset API

2022-11-15 Thread GitBox
cloud-fan commented on code in PR #38595: URL: https://github.com/apache/spark/pull/38595#discussion_r1022515695 ## core/src/main/resources/error/error-classes.json: ## @@ -933,6 +933,11 @@ ], "sqlState" : "42000" }, + "TEMP_VIEW_NAME_CONTAINS_UNSUPPORTED_NAME_PART

[GitHub] [spark] wankunde commented on a diff in pull request #38495: [SPARK-35531][SQL] Update hive table stats without unnecessary convert

2022-11-15 Thread GitBox
wankunde commented on code in PR #38495: URL: https://github.com/apache/spark/pull/38495#discussion_r1022515586 ## sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala: ## @@ -722,18 +722,15 @@ private[spark] class HiveExternalCatalog(conf: SparkConf, had

[GitHub] [spark] wankunde commented on a diff in pull request #38495: [SPARK-35531][SQL] Update hive table stats without unnecessary convert

2022-11-15 Thread GitBox
wankunde commented on code in PR #38495: URL: https://github.com/apache/spark/pull/38495#discussion_r1022515184 ## sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClient.scala: ## @@ -127,6 +127,9 @@ private[hive] trait HiveClient { */ def alterTable(dbName:

[GitHub] [spark] cloud-fan commented on a diff in pull request #38531: [SPARK-40755][SQL] Migrate type check failures of number formatting onto error classes

2022-11-15 Thread GitBox
cloud-fan commented on code in PR #38531: URL: https://github.com/apache/spark/pull/38531#discussion_r1022513821 ## core/src/main/resources/error/error-classes.json: ## @@ -290,6 +290,46 @@ "Null typed values cannot be used as arguments of ." ] }, +

[GitHub] [spark] cloud-fan closed pull request #38404: [SPARK-40956] SQL Equivalent for Dataframe overwrite command

2022-11-15 Thread GitBox
cloud-fan closed pull request #38404: [SPARK-40956] SQL Equivalent for Dataframe overwrite command URL: https://github.com/apache/spark/pull/38404

[GitHub] [spark] cloud-fan commented on pull request #38404: [SPARK-40956] SQL Equivalent for Dataframe overwrite command

2022-11-15 Thread GitBox
cloud-fan commented on PR #38404: URL: https://github.com/apache/spark/pull/38404#issuecomment-1314988679 The failed test job is flaky (network issue) and it has passed in the previous commit. Given the last commit is comment only, I'm merging this PR to master. Thanks!

[GitHub] [spark] grundprinzip commented on a diff in pull request #38631: [SPARK-40809] [CONNECT] [FOLLOW] Support `alias()` in Python client

2022-11-15 Thread GitBox
grundprinzip commented on code in PR #38631: URL: https://github.com/apache/spark/pull/38631#discussion_r1022503915 ## python/pyspark/sql/connect/dataframe.py: ## @@ -44,7 +44,7 @@ from pyspark.sql.connect.typing import ColumnOrString, ExpressionOrString from pyspark.s

[GitHub] [spark] grundprinzip commented on a diff in pull request #38631: [SPARK-40809] [CONNECT] [FOLLOW] Support `alias()` in Python client

2022-11-15 Thread GitBox
grundprinzip commented on code in PR #38631: URL: https://github.com/apache/spark/pull/38631#discussion_r1022503590 ## python/pyspark/sql/connect/dataframe.py: ## @@ -44,7 +44,7 @@ from pyspark.sql.connect.typing import ColumnOrString, ExpressionOrString from pyspark.s

[GitHub] [spark] cloud-fan closed pull request #38662: [SPARK-41144][SQL] Unresolved hint should not cause query failure

2022-11-15 Thread GitBox
cloud-fan closed pull request #38662: [SPARK-41144][SQL] Unresolved hint should not cause query failure URL: https://github.com/apache/spark/pull/38662

[GitHub] [spark] cloud-fan commented on pull request #38662: [SPARK-41144][SQL] Unresolved hint should not cause query failure

2022-11-15 Thread GitBox
cloud-fan commented on PR #38662: URL: https://github.com/apache/spark/pull/38662#issuecomment-1314981448 thanks, merging to master/3.3!

[GitHub] [spark] cloud-fan closed pull request #38619: [SPARK-41112][SQL] RuntimeFilter should apply ColumnPruning eagerly with in-subquery filter

2022-11-15 Thread GitBox
cloud-fan closed pull request #38619: [SPARK-41112][SQL] RuntimeFilter should apply ColumnPruning eagerly with in-subquery filter URL: https://github.com/apache/spark/pull/38619

[GitHub] [spark] grundprinzip commented on a diff in pull request #38631: [SPARK-40809] [CONNECT] [FOLLOW] Support `alias()` in Python client

2022-11-15 Thread GitBox
grundprinzip commented on code in PR #38631: URL: https://github.com/apache/spark/pull/38631#discussion_r1022498105 ## python/pyspark/sql/connect/column.py: ## @@ -82,6 +82,73 @@ def to_plan(self, session: "RemoteSparkSession") -> "proto.Expression": def __str__(self) -> s

[GitHub] [spark] grundprinzip commented on a diff in pull request #38631: [SPARK-40809] [CONNECT] [FOLLOW] Support `alias()` in Python client

2022-11-15 Thread GitBox
grundprinzip commented on code in PR #38631: URL: https://github.com/apache/spark/pull/38631#discussion_r1022495692 ## python/pyspark/sql/tests/connect/test_connect_basic.py: ## @@ -248,6 +248,20 @@ def test_simple_datasource_read(self) -> None: actualResult = panda

[GitHub] [spark] grundprinzip commented on a diff in pull request #38631: [SPARK-40809] [CONNECT] [FOLLOW] Support `alias()` in Python client

2022-11-15 Thread GitBox
grundprinzip commented on code in PR #38631: URL: https://github.com/apache/spark/pull/38631#discussion_r1022492699 ## python/pyspark/sql/tests/connect/test_connect_column_expressions.py: ## @@ -134,6 +134,16 @@ def test_list_to_literal(self): lit_list_plan = fun.lit([f
