GitHub user hvanhovell commented on the issue:

    https://github.com/apache/spark/pull/21580
  
    LGTM. It is better UX to have a more descriptive error message.
    
    However, I do like the idea of being able to use window functions in filters. I often use the following pattern:
    ```scala
    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.row_number

    val df: DataFrame = ...

    // Keep the first row per key, then drop the helper column.
    df.select($"*", row_number().over(Window.partitionBy($"key").orderBy($"seq")).as("rn"))
      .filter($"rn" === 1)
      .drop("rn")
    ```
    Teradata, for example, has the `qualify` filter clause for these cases.
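
    For comparison, a rough sketch of the same query using Teradata's `qualify`; the table `t` here is hypothetical, with the `key` and `seq` columns from the Scala example above:
    ```sql
    -- Filter on the window function directly; no helper column needed.
    SELECT key, seq
    FROM t
    QUALIFY row_number() OVER (PARTITION BY key ORDER BY seq) = 1
    ```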


