[ https://issues.apache.org/jira/browse/BEAM-14044?focusedWorklogId=774285&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-774285 ]
ASF GitHub Bot logged work on BEAM-14044:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 24/May/22 23:12
Start Date: 24/May/22 23:12
Worklog Time Spent: 10m
Work Description: zwestrick commented on PR #17527:
URL: https://github.com/apache/beam/pull/17527#issuecomment-1136520325
> It seems like with the API, selecting batch_elements_kwargs is up to
> implementers of `ModelLoader`/`InferenceRunner`.
>
> What if the implementer wants to enable users to set the values as
> appropriate for their model? Then each implementation would need to decide
> on a way to expose it, right?
Yes, although the motivating context here is that we have a particular
ModelLoader (in TFX-BSL) that handles pre-batched inputs, and we would like a
way to limit subsequent batching for all users of that ModelLoader. The goal
isn't really to expose knobs on a per-model basis, although you might want to
in some cases (e.g., if a model is very large).
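To make that concrete, here is a minimal sketch of the pattern, assuming the
API lands as a `batch_elements_kwargs()` hook on `ModelLoader` whose return
value is forwarded to `beam.BatchElements`. The `PreBatchedModelLoader` class
is hypothetical; `max_batch_size` is an existing `BatchElements` parameter:

```python
# Hypothetical sketch: a loader for pre-batched inputs caps downstream
# batching by overriding batch_elements_kwargs(). Only the BatchElements
# parameter name is an existing Beam API; the class itself is illustrative.
from typing import Any, Mapping


class PreBatchedModelLoader:  # would extend ModelLoader in the real API
  def batch_elements_kwargs(self) -> Mapping[str, Any]:
    # Inputs arrive already batched (e.g., from TFX-BSL), so tell
    # BatchElements not to group them any further.
    return {'max_batch_size': 1}
```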
Issue Time Tracking
-------------------
Worklog Id: (was: 774285)
Time Spent: 2h (was: 1h 50m)
> Hook In Batching DoFn APIs to RunInference
> ------------------------------------------
>
> Key: BEAM-14044
> URL: https://issues.apache.org/jira/browse/BEAM-14044
> Project: Beam
> Issue Type: Sub-task
> Components: sdk-py-core
> Reporter: Ryan Thompson
> Assignee: Brian Hulette
> Priority: P2
> Time Spent: 2h
> Remaining Estimate: 0h
>
> Hook the batching DoFn APIs into the base RunInference interface.
> We should also investigate what defaults we should set for batching, and
> perhaps make that part of the API.
> See [s.apache.org/batched-dofns|http://s.apache.org/batched-dofns] for more
> details.
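For context, a rough sketch of how RunInference could wire a loader-provided
override into batching, assuming a `batch_elements_kwargs()` hook on the
loader. `beam.BatchElements` and its `min_batch_size`/`max_batch_size`
parameters are existing Beam APIs; the helper and the default values shown
are assumptions, not the final design:

```python
# Sketch of wiring loader-provided kwargs into BatchElements; the
# _with_batching helper and its defaults are illustrative assumptions.
import apache_beam as beam


def _with_batching(pcoll, model_loader):
  # Start from pipeline-level defaults, then let the loader override them;
  # a loader for pre-batched inputs could set max_batch_size=1.
  kwargs = {'min_batch_size': 1, 'max_batch_size': 10000}
  kwargs.update(model_loader.batch_elements_kwargs())
  return pcoll | 'BatchElements' >> beam.BatchElements(**kwargs)
```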