[GitHub] [spark] cloud-fan commented on pull request #37941: [SPARK-40501][SQL] Enhance 'SpecialLimits' to support project(..., limit(...))

2022-09-22 Thread GitBox
cloud-fan commented on PR #37941: URL: https://github.com/apache/spark/pull/37941#issuecomment-1254997104 Ah sorry, I misread the code. Let's add this rule then. I think it's beneficial, as it kind of "normalizes" the order of the project and limit operators, so that we have more chances to …
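The normalization being discussed can be sketched as a simple tree rewrite: a `Project` sitting on top of a `Limit` is turned into a `Limit` on top of a `Project`. The following toy Python sketch (not Spark's actual Catalyst implementation; all class and function names here are illustrative) shows the shape of such a rule:

```python
# Toy sketch of a rule in the spirit of PushProjectThroughLimit:
# rewrite Project(cols, Limit(n, child)) into Limit(n, Project(cols, child)),
# normalizing the relative order of the two operators.
from dataclasses import dataclass


@dataclass
class Plan:
    """Base class for toy logical plan nodes."""


@dataclass
class Relation(Plan):
    name: str


@dataclass
class Project(Plan):
    columns: list
    child: Plan


@dataclass
class Limit(Plan):
    n: int
    child: Plan


def push_project_through_limit(plan: Plan) -> Plan:
    """Recursively push Project operators below Limit operators."""
    if isinstance(plan, Project):
        child = push_project_through_limit(plan.child)
        if isinstance(child, Limit):
            # Project(cols, Limit(n, c)) -> Limit(n, Project(cols, c))
            return Limit(child.n, Project(plan.columns, child.child))
        return Project(plan.columns, child)
    if isinstance(plan, Limit):
        return Limit(plan.n, push_project_through_limit(plan.child))
    return plan


before = Project(["a"], Limit(10, Relation("t")))
after = push_project_through_limit(before)
# after == Limit(10, Project(["a"], Relation("t")))
```

With the operators in a canonical order, later planner rules (such as the special-limit handling this PR touches) only need to match one shape instead of both.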

[GitHub] [spark] cloud-fan commented on pull request #37941: [SPARK-40501][SQL] Enhance 'SpecialLimits' to support project(..., limit(...))

2022-09-22 Thread GitBox
cloud-fan commented on PR #37941: URL: https://github.com/apache/spark/pull/37941#issuecomment-1254706810 `PushProjectThroughLimit` is already in the optimizer, or did I miss something?

[GitHub] [spark] cloud-fan commented on pull request #37941: [SPARK-40501][SQL] Enhance 'SpecialLimits' to support project(..., limit(...))

2022-09-22 Thread GitBox
cloud-fan commented on PR #37941: URL: https://github.com/apache/spark/pull/37941#issuecomment-1254637958 I'm wondering why `PushProjectThroughLimit` does not optimize your query. It should push project through limit.