Github user mgaido91 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21072#discussion_r183311275
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala ---
    @@ -736,12 +736,22 @@ object EliminateSorts extends Rule[LogicalPlan] {
     }
     
     /**
    - * Removes Sort operation if the child is already sorted
    + * Removes redundant Sort operation. This can happen:
    + * 1) if the child is already sorted
    + * 2) if there is another Sort operator separated by 0...n Project/Filter operators
      */
     object RemoveRedundantSorts extends Rule[LogicalPlan] {
       def apply(plan: LogicalPlan): LogicalPlan = plan transform {
         case Sort(orders, true, child) if SortOrder.orderingSatisfies(child.outputOrdering, orders) =>
           child
    +    case s @ Sort(_, _, child) => s.copy(child = recursiveRemoveSort(child))
    +  }
    +
    +  def recursiveRemoveSort(plan: LogicalPlan): LogicalPlan = plan match {
    +    case Project(fields, child) => Project(fields, recursiveRemoveSort(child))
    +    case Filter(condition, child) => Filter(condition, recursiveRemoveSort(child))
    --- End diff --
    
    Why do you think we should check that the filter condition and the projected items are deterministic?
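
    For context, here is a toy sketch of the scenario under discussion (a hypothetical simplified plan ADT, not Spark's actual Catalyst classes): if the recursion only descends through operators flagged as deterministic, an inner Sort below a non-deterministic Project/Filter is left in place, since evaluating a non-deterministic expression over differently-ordered input could change the result.

    ```scala
    // Toy plan ADT standing in for Catalyst's LogicalPlan (hypothetical).
    sealed trait Plan
    case class Sort(child: Plan) extends Plan
    case class Project(deterministic: Boolean, child: Plan) extends Plan
    case class Filter(deterministic: Boolean, child: Plan) extends Plan
    case object Relation extends Plan

    def recursiveRemoveSort(plan: Plan): Plan = plan match {
      // Recurse only past deterministic Project/Filter: below a
      // non-deterministic operator, row order can affect the output,
      // so the inner Sort is not provably redundant.
      case Project(true, child) => Project(true, recursiveRemoveSort(child))
      case Filter(true, child)  => Filter(true, recursiveRemoveSort(child))
      case Sort(child)          => recursiveRemoveSort(child)
      case other                => other
    }
    ```

    With this sketch, `recursiveRemoveSort(Project(true, Sort(Relation)))` drops the inner Sort, while `recursiveRemoveSort(Filter(false, Sort(Relation)))` leaves the plan untouched.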

