[
https://issues.apache.org/jira/browse/SPARK-42618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17727307#comment-17727307
]
Haejoon Lee edited comment on SPARK-42618 at 5/30/23 2:47 AM:
--------------------------------------------------------------
I apologize for the complexity. I have simplified the tickets somewhat (54
sub-tasks are now open) by creating a new ticket that consolidates multiple
subtasks. Please let me know if you still believe further simplification is
necessary.
was (Author: itholic):
I apologize for the complexity. I have simplified the tickets somewhat (total
109 -> 54) by creating a new ticket that consolidates multiple subtasks.
Please let me know if you still believe further simplification is necessary.
> Support pandas 2.0.0
> --------------------
>
> Key: SPARK-42618
> URL: https://issues.apache.org/jira/browse/SPARK-42618
> Project: Spark
> Issue Type: Umbrella
> Components: Pandas API on Spark
> Affects Versions: 3.5.0
> Reporter: Haejoon Lee
> Priority: Major
>
> When pandas 2.0.0 is released, pandas API on Spark should match its
> behavior.
> See also
> https://pandas.pydata.org/pandas-docs/version/2.0/whatsnew/v2.0.0.html
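For context, a minimal sketch of the kind of behavior change the linked
release notes describe. This example is an illustration chosen by the editor,
not taken from the ticket: pandas 2.0.0 removed `DataFrame.append` (deprecated
since 1.4), so code targeting pandas 2.x, including pandas API on Spark, must
use `pd.concat` instead.

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1, 2]})
df2 = pd.DataFrame({"a": [3]})

# pandas < 2.0: df1.append(df2) worked (with a deprecation warning since 1.4).
# pandas >= 2.0: DataFrame.append was removed; pd.concat is the replacement.
result = pd.concat([df1, df2], ignore_index=True)
print(result["a"].tolist())  # [1, 2, 3]
```

Matching pandas 2.0.0 in pandas API on Spark means mirroring many such
removals and default changes, which is why the umbrella ticket fans out into
dozens of sub-tasks.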
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]