[ https://issues.apache.org/jira/browse/BEAM-10200?focusedWorklogId=469468&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469468 ]
ASF GitHub Bot logged work on BEAM-10200:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 12/Aug/20 00:36
Start Date: 12/Aug/20 00:36
Worklog Time Spent: 10m
Work Description: y1chi commented on a change in pull request #12537:
URL: https://github.com/apache/beam/pull/12537#discussion_r468939630
##########
File path: sdks/python/apache_beam/runners/worker/worker_status.py
##########
@@ -152,7 +170,11 @@ def generate_status_response(self):
     all_status_sections = [
         _active_processing_bundles_state(self._bundle_process_cache)
     ] if self._bundle_process_cache else []
+
     all_status_sections.append(thread_dump())
+    if self._enable_heap_dump:
+      all_status_sections.append(heap_dump())
Review comment:
By default it is only the top 10 objects.
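
For context on the comment above: the heap dump status section summarizes the
most common live objects. The snippet below is only a rough, hypothetical
sketch of how a "top 10 objects" summary could be produced with the Python
standard library (gc and collections.Counter); it is not the actual
heap_dump() implementation from worker_status.py, whose default cutoff and
output format may differ.

import gc
from collections import Counter


def sample_heap_summary(top_n=10):
  # Hypothetical sketch, not Beam's heap_dump(): count live objects tracked
  # by the garbage collector, grouped by type, and report the top_n types.
  counts = Counter(type(obj).__name__ for obj in gc.get_objects())
  lines = ['--- heap summary (top %d object types) ---' % top_n]
  for type_name, count in counts.most_common(top_n):
    lines.append('%s: %d instances' % (type_name, count))
  return '\n'.join(lines)


if __name__ == '__main__':
  print(sample_heap_summary())

A default cutoff like top_n=10 keeps the status response small, which matches
the reviewer's note that only the top 10 objects are reported by default.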
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 469468)
Time Spent: 40m (was: 0.5h)
> Improve memory profiling for users of Portable Beam Python
> ----------------------------------------------------------
>
> Key: BEAM-10200
> URL: https://issues.apache.org/jira/browse/BEAM-10200
> Project: Beam
> Issue Type: Bug
> Components: sdk-py-harness
> Reporter: Valentyn Tymofieiev
> Assignee: Yichi Zhang
> Priority: P2
> Labels: stale-P2, starter
> Time Spent: 40m
> Remaining Estimate: 0h
>
> We have a Profiler [1] that is integrated with the SDK worker [1a]; however,
> it only saves CPU metrics [1b].
> We have a MemoryReporter utility [2] that can log heap dumps; however, it is
> not documented on the Beam website and does not respect the --profile_memory
> and --profile_location options [3]. The --profile_memory flag currently works
> only for Dataflow Runner users who run non-portable batch pipelines, and
> profiles are saved only if memory usage between samples exceeds 1000 MB.
> We should improve the memory profiling experience for Portable Python users
> and consider adding a guide to the Beam website on how users can investigate
> OOMing pipelines (a usage sketch of the existing profiling options follows
> the references below).
>
> [1]
> https://github.com/apache/beam/blob/095589c28f5c427bf99fc0330af91c859bb2ad6b/sdks/python/apache_beam/utils/profiler.py#L46
> [1a]
> https://github.com/apache/beam/blob/095589c28f5c427bf99fc0330af91c859bb2ad6b/sdks/python/apache_beam/runners/worker/sdk_worker_main.py#L157
> [1b]
> https://github.com/apache/beam/blob/095589c28f5c427bf99fc0330af91c859bb2ad6b/sdks/python/apache_beam/utils/profiler.py#L112
> [2]
> https://github.com/apache/beam/blob/095589c28f5c427bf99fc0330af91c859bb2ad6b/sdks/python/apache_beam/utils/profiler.py#L124
> [3]
> https://github.com/apache/beam/blob/095589c28f5c427bf99fc0330af91c859bb2ad6b/sdks/python/apache_beam/options/pipeline_options.py#L846
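
For reference, below is a minimal, hedged usage sketch (not part of the
original issue) showing how the --profile_memory and --profile_location
options mentioned in the description can be passed to a Beam Python pipeline.
The runner choice and output path are illustrative placeholders; as the
description notes, the memory profile is currently only honored on the
non-portable Dataflow batch path.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Illustrative flags only; see [3] for where these options are defined.
options = PipelineOptions([
    '--runner=DirectRunner',  # placeholder runner
    '--profile_memory',  # request memory profiling
    '--profile_location=/tmp/beam_profiles',  # placeholder output path
])

with beam.Pipeline(options=options) as pipeline:
  _ = (
      pipeline
      | beam.Create(range(10))
      | beam.Map(lambda x: x * x))

Improving the Portable Python experience would presumably mean making flags
like these take effect for portable runners as well.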
--
This message was sent by Atlassian Jira
(v8.3.4#803005)