[
https://issues.apache.org/jira/browse/BEAM-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kenneth Knowles updated BEAM-13599:
-----------------------------------
This Jira ticket has a pull request attached to it, but is still open. Did the
pull request resolve the issue? If so, could you please mark it resolved? This
will help the project have a clear view of its open issues.
> Overflow in Python Datastore RampupThrottlingFn
> -----------------------------------------------
>
> Key: BEAM-13599
> URL: https://issues.apache.org/jira/browse/BEAM-13599
> Project: Beam
> Issue Type: Bug
> Components: io-py-gcp
> Affects Versions: 2.32.0, 2.33.0, 2.34.0, 2.35.0
> Reporter: Daniel Thevessen
> Assignee: Daniel Thevessen
> Priority: P2
> Fix For: 2.36.0
>
> Time Spent: 5.5h
> Remaining Estimate: 0h
>
> {code:python}
> File
> "/usr/local/lib/python3.8/site-packages/apache_beam/io/gcp/datastore/v1new/rampup_throttling_fn.py",
> line 74, in _calc_max_ops_budget
> max_ops_budget = int(self._BASE_BUDGET / self._num_workers * (1.5**growth))
> RuntimeError: OverflowError: (34, 'Numerical result out of range') [while
> running 'Write to Datastore/Enforce throttling during ramp-up-ptransform-483']
> {code}
> The intermediate value {{1.5**growth}} is a float whose exponent grows with
> elapsed time since pipeline start, so it overflows in long-running pipelines
> (usually around the ~6th day). {{max_ops_budget}} should either clamp to
> {{float('inf')}} or INT_MAX, or short-circuit the throttling decision
> [here|#L87], since throttling is long irrelevant by that point.
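A minimal sketch of the failure mode and one possible clamping fix. The names mirror `_calc_max_ops_budget` in `rampup_throttling_fn.py`, but the budget constant and worker count here are illustrative values, not the actual Beam configuration:

```python
_BASE_BUDGET = 500   # illustrative base budget, not necessarily Beam's constant
_NUM_WORKERS = 10    # illustrative worker count


def calc_max_ops_budget_unsafe(growth: float) -> int:
    # 1.5 ** growth is computed as a float; once growth is large enough
    # (roughly 1750, reached after days of runtime) the result exceeds the
    # float range and Python raises OverflowError (errno 34).
    return int(_BASE_BUDGET / _NUM_WORKERS * (1.5 ** growth))


def calc_max_ops_budget_safe(growth: float) -> float:
    # Clamp instead of crashing: a budget of float('inf') effectively
    # disables throttling, which is the desired behavior long after ramp-up.
    try:
        return _BASE_BUDGET / _NUM_WORKERS * (1.5 ** growth)
    except OverflowError:
        return float("inf")
```

With this shape, a pipeline that has been running for days simply gets an unlimited budget rather than failing the bundle with a RuntimeError.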
--
This message was sent by Atlassian Jira
(v8.20.1#820001)