[
https://issues.apache.org/jira/browse/BEAM-1048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15878393#comment-15878393
]
ASF GitHub Bot commented on BEAM-1048:
--------------------------------------
GitHub user staslev opened a pull request:
https://github.com/apache/beam/pull/2073
[BEAM-1048] Added a per-batch read duration metric to SparkUnboundedSource.
Be sure to do all of the following to help us incorporate your contribution
quickly and easily:
- [ ] Make sure the PR title is formatted like:
`[BEAM-<Jira issue #>] Description of pull request`
- [ ] Make sure tests pass via `mvn clean verify`. (Even better, enable
Travis-CI on your fork and ensure the whole test matrix passes).
- [ ] Replace `<Jira issue #>` in the title with the actual Jira issue
number, if there is one.
- [ ] If this contribution is large, please file an Apache
[Individual Contributor License
Agreement](https://www.apache.org/licenses/icla.txt).
---
R: @amitsela
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/staslev/beam
BEAM-1048-reporting-batch-read-duration-metrics
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/beam/pull/2073.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #2073
----
commit 1e02ab5b6320eef82a9bed597884e6cf51770682
Author: Stas Levin <[email protected]>
Date: 2017-02-22T14:58:58Z
Added a per-batch read duration metric to SparkUnboundedSource.
----
> Spark Runner streaming batch duration does not include duration of reading
> from source
> ---------------------------------------------------------------------------------------
>
> Key: BEAM-1048
> URL: https://issues.apache.org/jira/browse/BEAM-1048
> Project: Beam
> Issue Type: Bug
> Components: runner-spark
> Affects Versions: 0.4.0
> Reporter: Kobi Salant
> Assignee: Stas Levin
>
> Spark Runner streaming batch duration does not include the duration of reading
> from the source. This is because we perform rdd.count in SparkUnboundedSource,
> which invokes a regular Spark job outside the streaming context.
> We do this to report the batch size, both for the UI and for back pressure.
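A minimal sketch of the idea described above (not the actual SparkUnboundedSource code; the helper names are hypothetical): because rdd.count() runs as its own eager Spark job outside the streaming context, the read can be timed around that call and reported as a per-batch metric, which is what the PR's per-batch read duration metric captures.

```java
// Sketch only: illustrates timing the count-based read and reporting it as a
// per-batch metric. countAndReportReadDuration/reportReadDuration are
// hypothetical helpers, not Beam or Spark APIs.
import org.apache.spark.api.java.JavaRDD;

public class ReadDurationSketch {

  /** Counts the RDD (forcing the read) and reports how long the read took. */
  static long countAndReportReadDuration(JavaRDD<?> rdd) {
    long start = System.currentTimeMillis();
    long count = rdd.count();  // eager Spark job, outside the streaming context
    long readDurationMillis = System.currentTimeMillis() - start;
    reportReadDuration(readDurationMillis);  // hypothetical metrics hook
    return count;
  }

  // Placeholder for whatever metrics sink the runner exposes this through.
  static void reportReadDuration(long millis) {
    System.out.println("batch read duration (ms): " + millis);
  }
}
```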
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)