HyukjinKwon commented on a change in pull request #33364:
URL: https://github.com/apache/spark/pull/33364#discussion_r671018582



##########
File path: python/pyspark/sql/dataframe.py
##########
@@ -1980,10 +1980,16 @@ def dropDuplicates(self, subset=None):
         |Alice|  5|    80|
         +-----+---+------+
         """
+        if isinstance(subset, str):
+            subset = [subset]

Review comment:
      Let's not include other changes here; keep this change targeted at the 
exception improvement alone. Strictly speaking, to be consistent with the 
Scala side we would also need other changes, such as turning the `subset` 
argument into `*subset` to match the `def dropDuplicates(col1: String, 
cols: String*)` signature.
   
   For now, let's match the support with the `dropDuplicates(colNames: 
Seq[String])` signature.
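
   The normalization the diff adds can be sketched as a small standalone helper (the function name `_normalize_subset` is illustrative, not part of the PySpark API): a lone string is wrapped in a list so downstream code can always treat `subset` as a sequence of column names.

   ```python
   def _normalize_subset(subset):
       # Mirror the diff: if the caller passed a single column name as a
       # string, wrap it in a list; lists/None pass through unchanged.
       if isinstance(subset, str):
           subset = [subset]
       return subset
   ```

   With this in place, `df.dropDuplicates("name")` and `df.dropDuplicates(["name"])` would behave the same, which is the convenience the original change was after.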




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


