[GitHub] [spark] dongjoon-hyun commented on pull request #38001: [SPARK-40562][SQL] Add `spark.sql.legacy.groupingIdWithAppendedUserGroupBy`

2022-09-27 Thread GitBox
dongjoon-hyun commented on PR #38001: URL: https://github.com/apache/spark/pull/38001#issuecomment-1259217641 Thank you again, @cloud-fan, @viirya, @thiyaga, @huaxingao, @zhengruifeng. Since the last commit is only about the docs, I'll merge this. Merged to master/3.3/3.2. cc

[GitHub] [spark] dongjoon-hyun commented on pull request #38001: [SPARK-40562][SQL] Add `spark.sql.legacy.groupingIdWithAppendedUserGroupBy`

2022-09-27 Thread GitBox
dongjoon-hyun commented on PR #38001: URL: https://github.com/apache/spark/pull/38001#issuecomment-1259077129 Thank you, @cloud-fan, @viirya, @huaxingao. Yes, as Wenchen shared, this is really Spark-specific syntax now. Let me add that to the PR description. ``` hive> SELECT version();
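For context, the legacy flag named in the PR title is a SQL conf that can be toggled per-session. A minimal sketch of how it would be used (the table and column names here are hypothetical; consult the PR description for the exact semantics the flag restores):

```sql
-- Opt back into the previous GROUPING__ID numbering, in which the
-- user-given GROUP BY columns were appended after the grouping sets.
SET spark.sql.legacy.groupingIdWithAppendedUserGroupBy=true;

SELECT k1, k2, grouping__id
FROM t
GROUP BY k1 GROUPING SETS (k1, k2);
```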

[GitHub] [spark] dongjoon-hyun commented on pull request #38001: [SPARK-40562][SQL] Add `spark.sql.legacy.groupingIdWithAppendedUserGroupBy`

2022-09-26 Thread GitBox
dongjoon-hyun commented on PR #38001: URL: https://github.com/apache/spark/pull/38001#issuecomment-1258524084 Thank you for your feedback, @thiyaga.

[GitHub] [spark] dongjoon-hyun commented on pull request #38001: [SPARK-40562][SQL] Add `spark.sql.legacy.groupingIdWithAppendedUserGroupBy`

2022-09-26 Thread GitBox
dongjoon-hyun commented on PR #38001: URL: https://github.com/apache/spark/pull/38001#issuecomment-1258433555 Before SPARK-34932, Apache Spark failed at `GROUP BY a GROUPING SETS(a, b)` and forced users to also put `b` after `GROUP BY`. SPARK-34932 allows it by working like `GROUP BY GROUPING
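The behavior change described above can be sketched as follows (the table and column names are hypothetical):

```sql
-- Before SPARK-34932, this query was rejected because b appears in the
-- grouping sets but not after GROUP BY; users had to write
--   GROUP BY a, b GROUPING SETS (a, b)
-- instead. SPARK-34932 made the shorter form legal.
SELECT a, b, count(*)
FROM t
GROUP BY a GROUPING SETS (a, b);
```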

[GitHub] [spark] dongjoon-hyun commented on pull request #38001: [SPARK-40562][SQL] Add `spark.sql.legacy.groupingIdWithAppendedUserGroupBy`

2022-09-26 Thread GitBox
dongjoon-hyun commented on PR #38001: URL: https://github.com/apache/spark/pull/38001#issuecomment-1258332867 @cloud-fan As you wrote in the PR description (https://github.com/apache/spark/pull/32022), it's not in the SQL standard, is it? > GROUP BY ... GROUPING SETS (...) is a weird

[GitHub] [spark] dongjoon-hyun commented on pull request #38001: [SPARK-40562][SQL] Add `spark.sql.legacy.groupingIdWithAppendedUserGroupBy`

2022-09-26 Thread GitBox
dongjoon-hyun commented on PR #38001: URL: https://github.com/apache/spark/pull/38001#issuecomment-1257769296 cc @cloud-fan, @wangyum, @viirya, @huaxingao, @sunchao