[
https://issues.apache.org/jira/browse/BEAM-5500?focusedWorklogId=149445&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-149445
]
ASF GitHub Bot logged work on BEAM-5500:
----------------------------------------
Author: ASF GitHub Bot
Created on: 28/Sep/18 20:12
Start Date: 28/Sep/18 20:12
Worklog Time Spent: 10m
Work Description: aaltay closed pull request #6517: [BEAM-5500] Fix
memory leak in pickler.
URL: https://github.com/apache/beam/pull/6517
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
diff --git a/sdks/python/apache_beam/internal/pickler.py b/sdks/python/apache_beam/internal/pickler.py
index 211430bd60f..f93c5341594 100644
--- a/sdks/python/apache_beam/internal/pickler.py
+++ b/sdks/python/apache_beam/internal/pickler.py
@@ -165,7 +165,9 @@ def new_save_module_dict(pickler, obj):
if obj_id not in known_module_dicts:
for m in sys.modules.values():
try:
- if m and m.__name__ != '__main__':
+ if (m
+ and m.__name__ != '__main__'
+ and isinstance(m, dill.dill.ModuleType)):
d = m.__dict__
known_module_dicts[id(d)] = m, d
except AttributeError:
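A minimal standalone sketch (not part of the PR) of why the added isinstance check matters: sys.modules is not guaranteed to hold only real module objects, since code can register arbitrary entries (lazy-import proxies, None placeholders, test mocks). The fake entry below is purely hypothetical, and types.ModuleType is the same object dill exposes as dill.dill.ModuleType:

```python
import sys
import types

# Hypothetical non-module entry in sys.modules, for illustration only.
class _FakeModule(object):
    __name__ = 'beam_demo_fake_module'

sys.modules['beam_demo_fake_module'] = _FakeModule()

# Re-implementation of the patched loop: only genuine modules have their
# __dict__ cached, so non-module entries cannot pin growing state alive.
known_module_dicts = {}
for m in list(sys.modules.values()):
    try:
        if (m
            and m.__name__ != '__main__'
            and isinstance(m, types.ModuleType)):
            d = m.__dict__
            known_module_dicts[id(d)] = m, d
    except AttributeError:
        pass

# The fake entry's dict was not cached.
assert all(isinstance(m, types.ModuleType)
           for m, _ in known_module_dicts.values())

del sys.modules['beam_demo_fake_module']
```

Without the check, the original loop cached the __dict__ of any object that happened to live in sys.modules, and those cached dicts accumulated indefinitely.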
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 149445)
Time Spent: 0.5h (was: 20m)
> Portable python sdk worker leaks memory in streaming mode
> ---------------------------------------------------------
>
> Key: BEAM-5500
> URL: https://issues.apache.org/jira/browse/BEAM-5500
> Project: Beam
> Issue Type: Bug
> Components: sdk-py-harness
> Reporter: Micah Wylde
> Assignee: Robert Bradshaw
> Priority: Major
> Labels: portability-flink
> Attachments: chart.png
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> When using the portable python sdk with flink in streaming mode, we see that
> the python worker processes steadily increase memory usage until they are OOM
> killed. This behavior is consistent across various kinds of streaming
> pipelines, including those with fixed windows and global windows.
> A simple wordcount-like pipeline demonstrates the issue for us (note this is
> run on the [Lyft beam fork|https://github.com/lyft/beam/], which provides
> access to kinesis as a portable streaming source):
> {code:java}
> counts = (p
> | 'Kinesis' >> FlinkKinesisInput().with_stream('test-stream')
> | 'decode' >> beam.FlatMap(decode) # parses from json into python objs
> | 'pair_with_one' >> beam.Map(lambda x: (x["event_name"], 1))
> | 'window' >> beam.WindowInto(window.GlobalWindows(),
> trigger=AfterProcessingTime(15 * 1000),
> accumulation_mode=AccumulationMode.DISCARDING)
> | 'group' >> beam.GroupByKey()
> | 'count' >> beam.Map(count_ones)
> | beam.Map(lambda x: logging.warn("count: %s", str(x)) or x))
> {code}
> When run, we see a steady increase in memory usage in the sdk_worker process.
> Using [heapy|http://guppy-pe.sourceforge.net/#Heapy] I've analyzed the memory
> usage over time and found that it's largely dicts and strings (see attached
> chart).
>
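As a hedged aside, the kind of growth described above can also be observed with the standard library's tracemalloc module (an alternative to the heapy tool the reporter used). The id-keyed cache below is purely illustrative; it mimics the never-pruned known_module_dicts cache in pickler.py:

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Illustrative leak: an id-keyed cache that never evicts entries pins
# every dict it has ever seen, so heap usage grows monotonically.
cache = {}
for i in range(1000):
    d = {'payload': 'x' * 100}
    cache[id(d)] = d  # entries are never removed

after = tracemalloc.take_snapshot()
stats = after.compare_to(before, 'lineno')
# The top entry attributes the growth to the cache-filling lines above.
print(stats[0])
```

In a long-running streaming worker, any such unbounded cache of dicts and strings produces exactly the "largely dicts and strings" heap profile attached to this issue.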
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)