[
https://issues.apache.org/jira/browse/BEAM-5775?focusedWorklogId=236833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236833
]
ASF GitHub Bot logged work on BEAM-5775:
----------------------------------------
Author: ASF GitHub Bot
Created on: 03/May/19 12:39
Start Date: 03/May/19 12:39
Worklog Time Spent: 10m
Work Description: iemejia commented on issue #8371: [BEAM-5775] Move
(most) of the batch spark pipelines' transformations to using lazy
serialization.
URL: https://github.com/apache/beam/pull/8371#issuecomment-489081256
Did you notice any performance improvement with this PR? I like it for
consistency, but I have found both improvements and regressions depending on
the pipeline.
I also ran into a weird issue with this one, @mikekap. I was running Nexmark to
see if I could find considerable improvements due to this PR, but when I invoke
it multiple times it fails; curiously, this does not happen with current
master, for example. Would you mind taking a look to see whether it might be
some configuration issue? It is strange.
```bash
./gradlew :beam-sdks-java-nexmark:run \
-Pnexmark.runner=":beam-runners-spark" \
-Pnexmark.args="
--runner=SparkRunner
--suite=SMOKE
--streamTimeout=60
--streaming=false
--manageResources=false
--monitorJobs=true"
```
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 236833)
Time Spent: 10h 40m (was: 10.5h)
> Make the spark runner not serialize data unless spark is spilling to disk
> -------------------------------------------------------------------------
>
> Key: BEAM-5775
> URL: https://issues.apache.org/jira/browse/BEAM-5775
> Project: Beam
> Issue Type: Improvement
> Components: runner-spark
> Reporter: Mike Kaplinskiy
> Assignee: Mike Kaplinskiy
> Priority: Minor
> Fix For: 2.13.0
>
> Time Spent: 10h 40m
> Remaining Estimate: 0h
>
> Currently, for storage level MEMORY_ONLY, Beam does not coder-ify the data.
> This lets Spark keep the data in memory, avoiding the serialization round
> trip. Unfortunately, the logic is fairly coarse: as soon as you switch to
> MEMORY_AND_DISK, Beam coder-ifies the data even though Spark might have
> chosen to keep the data in memory, incurring the serialization overhead.
>
> Ideally, Beam would serialize the data lazily, as Spark chooses to spill to
> disk. This would be a change in behavior when using Beam, but luckily Spark
> has a solution for folks who want data serialized in memory:
> MEMORY_AND_DISK_SER will keep the data serialized.
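For context, the gist of the lazy approach can be sketched as a Serializable
wrapper that holds the live object and only invokes the Beam coder when Java
serialization is actually triggered, i.e. when Spark spills the block to disk
or ships it over the network. This is only an illustrative sketch, not the
code from #8371: the class name LazilyCodedValue is hypothetical, and it
assumes the wrapped Coder is itself Serializable (Beam coders are) and that
Spark's Java serializer is in use.
```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import org.apache.beam.sdk.coders.Coder;

/** Hypothetical sketch: defers coder-ification until Spark serializes the value. */
class LazilyCodedValue<T> implements Serializable {
  // The live object; transient so default Java serialization never touches it.
  private transient T value;
  // Beam coders implement Serializable, so the coder travels via defaultWriteObject.
  private final Coder<T> coder;

  LazilyCodedValue(T value, Coder<T> coder) {
    this.value = value;
    this.coder = coder;
  }

  T get() {
    return value;
  }

  // Invoked only when the block is actually spilled or shuffled, so values
  // cached with MEMORY_AND_DISK stay un-encoded while they remain in memory.
  private void writeObject(ObjectOutputStream out) throws IOException {
    out.defaultWriteObject(); // writes the coder
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    coder.encode(value, bytes);
    byte[] encoded = bytes.toByteArray();
    out.writeInt(encoded.length);
    out.write(encoded);
  }

  private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
    in.defaultReadObject(); // restores the coder
    byte[] encoded = new byte[in.readInt()];
    in.readFully(encoded);
    value = coder.decode(new ByteArrayInputStream(encoded));
  }
}
```
With a wrapper along these lines, MEMORY_AND_DISK behaves like MEMORY_ONLY for
blocks that never leave memory, while MEMORY_AND_DISK_SER remains available for
users who want the encoded form held in memory. A production version would also
need to handle Spark's Kryo serializer, which bypasses writeObject/readObject.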
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)