Yikun commented on code in PR #37217:
URL: https://github.com/apache/spark/pull/37217#discussion_r923947841
##########
python/pyspark/pandas/tests/test_namespace.py:
##########
@@ -334,19 +334,21 @@ def test_concat_index_axis(self):
([psdf.reset_index(), psdf], [pdf.reset_index(), pdf]),
([psdf, psdf[["C", "A"]]], [pdf, pdf[["C", "A"]]]),
([psdf[["C", "A"]], psdf], [pdf[["C", "A"]], pdf]),
- # only one Series
- ([psdf, psdf["C"]], [pdf, pdf["C"]]),
- ([psdf["C"], psdf], [pdf["C"], pdf]),
Review Comment:
Thanks for the review.
Yes, I actually also moved these to L347-L348, which means we will always
check all cases against the latest pandas to avoid regressions. I will also bump
the infra pandas version to 1.4.3 after all fixes are complete.
For pandas<1.4.3, these two cases would fail because pandas on Spark only
follows the latest pandas behavior, so I just skip them.
If you have any other concerns, feel free to comment. Thanks!
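For reference, a minimal sketch of how such a version gate could look in the test (the helper name and threshold logic here are illustrative, not the exact pyspark code):

```python
import pandas as pd


def version_tuple(version: str) -> tuple:
    # Keep only the leading numeric components, so "1.4.3" -> (1, 4, 3)
    # and "1.4.0.dev0" -> (1, 4, 0); suffixes after a non-numeric part
    # are ignored for the comparison.
    parts = []
    for piece in version.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)


# Hypothetical gate: only exercise the Series-concat cases when the
# installed pandas is new enough to match pandas-on-Spark behavior.
RUN_SERIES_CONCAT_CASES = version_tuple(pd.__version__) >= (1, 4, 3)
```

Cases guarded this way still run on CI once the infra pandas version is bumped, so the regression coverage is preserved.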
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]