[
https://issues.apache.org/jira/browse/SPARK-26412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiangrui Meng updated SPARK-26412:
----------------------------------
Summary: Allow Pandas UDF to take an iterator of pd.DataFrames (was: Allow
Pandas UDF to take an iterator of pd.DataFrames or Arrow batches)
> Allow Pandas UDF to take an iterator of pd.DataFrames
> -----------------------------------------------------
>
> Key: SPARK-26412
> URL: https://issues.apache.org/jira/browse/SPARK-26412
> Project: Spark
> Issue Type: New Feature
> Components: PySpark
> Affects Versions: 3.0.0
> Reporter: Xiangrui Meng
> Assignee: Weichen Xu
> Priority: Major
>
> Pandas UDF is the ideal connection between PySpark and DL model inference
> workloads. However, users need to load the model file first to make
> predictions, and it is common to see models of size ~100MB or bigger. If
> Pandas UDF execution is limited to batch scope, users must repeatedly load
> the same model for every batch in the same Python worker process, which is
> inefficient. I created this JIRA to discuss possible solutions.
> Essentially, we need to support "start()" and "finish()" hooks in addition
> to "apply()". We can either provide those interfaces explicitly, or simply
> hand user code an iterator of batches (as pd.DataFrames or Arrow tables)
> and let it handle setup and teardown itself.
> Another benefit: combined with Python's asyncio, an iterator interface
> gives users the flexibility to implement data pipelining.
> cc: [~icexelloss] [~bryanc] [~holdenk] [~hyukjin.kwon] [~ueshin] [~smilegator]
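To illustrate the load-once-per-worker pattern the iterator interface would enable, here is a minimal plain-Python sketch. Lists stand in for pd.DataFrames / Arrow batches, and `load_model` / `Model` are hypothetical stand-ins for user code; this is not the final PySpark API.

```python
from typing import Iterator, List

def load_model():
    # Stand-in for loading a large (~100MB) model file from disk.
    # Under the iterator interface, this runs once per worker process,
    # not once per batch.
    class Model:
        def predict(self, xs: List[float]) -> List[float]:
            return [x * 2.0 for x in xs]
    return Model()

def predict_batches(batches: Iterator[List[float]]) -> Iterator[List[float]]:
    # "start()": the expensive model load happens once, before any batch.
    model = load_model()
    for batch in batches:          # "apply()": process each incoming batch
        yield model.predict(batch)
    # "finish()": per-worker cleanup would run here, after the last batch.

# Usage: the engine would feed the UDF an iterator of batches.
results = list(predict_batches(iter([[1.0, 2.0], [3.0]])))
```

Because the function receives the whole stream rather than one batch at a time, setup and teardown fall out naturally from ordinary generator control flow, without dedicated start/finish interfaces.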
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]