GitHub user marmbrus commented on the pull request:

    https://github.com/apache/spark/pull/10218#issuecomment-163809737
  
    More functions for the sake of more functions does not make the API easier 
to use.  We should not add functions just for the sake of consistency; we 
should only add them if they are actually going to be used.
    
    The complexity comes from the fact that users who have a Seq of strings 
have to do something like this now:
    
    ```scala
    val toDrop = Seq("col1", "col2", "col3")
    df.drop(toDrop.head, toDrop.tail: _*)
    ```
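    
    For what it's worth, assuming only the existing single-name 
`drop(colName: String)` overload, folding over the names also works, though 
it is arguably no clearer:
    
    ```scala
    val toDrop = Seq("col1", "col2", "col3")
    // Apply drop once per name; each call returns a new DataFrame.
    val dropped = toDrop.foldLeft(df)((acc, name) => acc.drop(name))
    ```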
    
    Can you construct a case where you would have Columns instead of strings 
that you wanted to drop?  In the test case it is strictly more typing to use 
this API than the one that already exists.  This doesn't seem worth 
complicating the case above.
    ```scala
    val df = src.drop("a", "b")
    val df = src.drop(src("a"), src("b"))
    ```
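
    The main case I can think of is disambiguating columns after a self-join, 
and even there a single-Column `drop(col: Column)` (assuming that overload is 
available) covers it without a varargs variant.  A rough sketch:

    ```scala
    // Hypothetical self-join where both sides carry an "id" column;
    // passing a Column picks out which side's copy to remove.
    val joined = left.join(right, left("id") === right("id"))
    val cleaned = joined.drop(right("id"))
    ```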

