[
https://issues.apache.org/jira/browse/BEAM-4858?focusedWorklogId=148775&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-148775
]
ASF GitHub Bot logged work on BEAM-4858:
----------------------------------------
Author: ASF GitHub Bot
Created on: 27/Sep/18 15:01
Start Date: 27/Sep/18 15:01
Worklog Time Spent: 10m
Work Description: robertwb commented on issue #6375: [BEAM-4858] Clean up
division in batch size estimator.
URL: https://github.com/apache/beam/pull/6375#issuecomment-425126489
You're right: a and b were switched in computing the error term when I
copied this into the PR. That meant significantly more points were
considered outliers (though enough were retained to typically give a
reasonable regression). Unfortunately, even with this fix the estimate is
still pretty sensitive to multiple outliers...
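As a rough illustration (a hypothetical sketch, not the PR's actual code): for a
linear model of the form time ~= a + b * batch_size, swapping a and b in the
residual computation inflates the error term and misclassifies ordinary points:
{code:python}
# Hypothetical sketch: residuals for a linear model time ~= a + b * batch_size.
def residuals(data, a, b):
    # data: list of (batch_size, time) pairs.
    return [abs(t - (a + b * n)) for n, t in data]

# Computing t - (b + a * n) instead (a and b switched) produces wildly wrong
# "errors" whenever the fixed overhead a and the per-element cost b differ in
# magnitude, so many ordinary points get flagged as outliers.
{code}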
I'm trying a simpler approach: just treat the top quantile as outliers. We
have enough data to make this pretty robust. Running experiments now.
(As for computing h, I used SageMath.)
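A minimal sketch of the top-quantile idea, with an assumed function name and
cutoff (the actual heuristic in the PR may trim by residual or use a different
quantile):
{code:python}
import math

def trim_top_quantile(data, q=0.2):
    # data: list of (batch_size, time) pairs. Drop the fraction q of points
    # with the largest per-element time before fitting the regression.
    ranked = sorted(data, key=lambda kv: kv[1] / float(kv[0]))
    keep = len(ranked) - int(math.floor(q * len(ranked)))
    return ranked[:keep]
{code}
Discarding a fixed fraction of the slowest points needs no explicit error
model, which matches the observation above that there is enough data to make
this pretty robust.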
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 148775)
Time Spent: 4.5h (was: 4h 20m)
> Clean up _BatchSizeEstimator in element-batching transform.
> -----------------------------------------------------------
>
> Key: BEAM-4858
> URL: https://issues.apache.org/jira/browse/BEAM-4858
> Project: Beam
> Issue Type: Bug
> Components: sdk-py-core
> Reporter: Valentyn Tymofieiev
> Assignee: Robert Bradshaw
> Priority: Minor
> Time Spent: 4.5h
> Remaining Estimate: 0h
>
> Beam Python 3 conversion [exposed|https://github.com/apache/beam/pull/5729]
> non-trivial, performance-sensitive logic in the element-batching transform. Let's
> take a look at
> [util.py#L271|https://github.com/apache/beam/blob/e98ff7c96afa2f72b3a98426dc1e9a47224da5c8/sdks/python/apache_beam/transforms/util.py#L271].
>
> Due to Python 2 language semantics, the result of {{x2 / x1}} will depend on
> the type of the keys, i.e. whether they are integers or floats.
> The keys of key-value pairs contained in {{self._data}} are added as integers
> [here|https://github.com/apache/beam/blob/d2ac08da2dccce8930432fae1ec7c30953880b69/sdks/python/apache_beam/transforms/util.py#L260];
> however, when we 'thin' the collected entries
> [here|https://github.com/apache/beam/blob/d2ac08da2dccce8930432fae1ec7c30953880b69/sdks/python/apache_beam/transforms/util.py#L279],
> the keys will become floats. Surprisingly, using either integer or float
> division consistently [in the
> comparator|https://github.com/apache/beam/blob/e98ff7c96afa2f72b3a98426dc1e9a47224da5c8/sdks/python/apache_beam/transforms/util.py#L271]
> negatively affects the performance of a custom pipeline I was using to
> benchmark these changes. The performance impact likely comes from changes in
> the logic that depends on how division is evaluated, not from the
> performance of the division operation itself.
> For the Python 3 conversion, the best course of action that avoids a
> regression seems to be to preserve the existing Python 2 behavior using
> {{old_div}} from {{past.utils}}; in the medium term we should clean up the
> logic. We may want to add a targeted microbenchmark to evaluate the
> performance of this code, and perhaps cythonize it, since it seems to be
> performance-sensitive.
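A hedged sketch of the division difference described above, using {{old_div}}
from the {{future}} package's {{past.utils}} module (the values here are
illustrative only):
{code:python}
from __future__ import print_function

from past.utils import old_div  # provided by the 'future' package

# Python 2: int / int is floor division, while float operands give true
# division, so the result of x2 / x1 depends on whether the keys are ints or
# floats. Python 3 always performs true division.
print(3 / 2)            # Python 2 -> 1, Python 3 -> 1.5
print(3.0 / 2)          # 1.5 under both
print(old_div(3, 2))    # 1 under both: preserves the Python 2 integer result
print(old_div(3.0, 2))  # 1.5 under both
{code}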
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)