nchammas commented on code in PR #45236:
URL: https://github.com/apache/spark/pull/45236#discussion_r1510015578


##########
python/pyspark/pandas/supported_api_gen.py:
##########
@@ -38,7 +38,7 @@
 MAX_MISSING_PARAMS_SIZE = 5
 COMMON_PARAMETER_SET = {"kwargs", "args", "cls"}
 MODULE_GROUP_MATCH = [(pd, ps), (pdw, psw), (pdg, psg)]
-PANDAS_LATEST_VERSION = "2.2.0"
+PANDAS_LATEST_VERSION = "2.2.1"

Review Comment:
   I'm not familiar with Development Containers, but yes, there are probably 
many ways we can improve the situation.
   
   What I advocated in #27928, and what I still believe is the best option for 
us today (with some tweaks to my original proposal), is to adopt pip-tools. 
It's a conservative approach that builds on our existing use of pip and lets 
us focus on the technology-agnostic problem of separating Spark's direct 
dependencies from our build-environment dependencies.
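   For readers unfamiliar with pip-tools, the workflow alluded to here looks 
roughly like this (file names and pins are illustrative, not Spark's actual 
layout): you hand-maintain only the direct dependencies in a `requirements.in` 
file, and pip-tools compiles that into a fully pinned lock file.

   ```
   # requirements.in — hand-maintained direct dependencies (illustrative example)
   pandas>=2.2,<2.3
   numpy

   # Compile the direct dependencies into a fully pinned requirements.txt,
   # including all transitive dependencies:
   #   pip-compile requirements.in --output-file requirements.txt
   #
   # Sync the current environment to exactly match the lock file:
   #   pip-sync requirements.txt
   ```

   The benefit is that bumps like the `PANDAS_LATEST_VERSION` change above can 
be driven by recompiling the lock file, rather than editing pins scattered 
across the build environment.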



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
