[
https://issues.apache.org/jira/browse/BEAM-12515?focusedWorklogId=617215&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617215
]
ASF GitHub Bot logged work on BEAM-12515:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 30/Jun/21 18:31
Start Date: 30/Jun/21 18:31
Worklog Time Spent: 10m
Work Description: TheNeuralBit commented on a change in pull request
#15104:
URL: https://github.com/apache/beam/pull/15104#discussion_r661718852
##########
File path: sdks/python/apache_beam/options/pipeline_options_test.py
##########
@@ -215,6 +216,7 @@ def _add_argparse_args(cls, parser):
     parser.add_argument(
         '--fake_multi_option', action='append', help='fake multi option')
+  @pytest.mark.no_xdist
Review comment:
I think it's very unlikely that parallelization causes flakiness through
shared state and a race condition here. xdist works by starting up multiple
separate Python processes, each running a partition of the tests; the GIL
would eliminate any benefit of a threaded approach.
My guess is that xdist makes this flaky by sometimes executing some other
test that modifies state in the same Python process/worker. I'm not sure what
exactly no_xdist does; presumably it just causes the test to be run in the
main process? It makes sense that that would help, since the problematic test
is then guaranteed to run in a separate process.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 617215)
Time Spent: 5h 10m (was: 5h)
> Python PreCommit flaking in PipelineOptionsTest.test_display_data
> -----------------------------------------------------------------
>
> Key: BEAM-12515
> URL: https://issues.apache.org/jira/browse/BEAM-12515
> Project: Beam
> Issue Type: Bug
> Components: sdk-py-core, test-failures
> Reporter: Brian Hulette
> Priority: P1
> Labels: flake, sdk-py-core
> Time Spent: 5h 10m
> Remaining Estimate: 0h
>
> Seeing this failure pretty frequently on PreCommit since yesterday (06/17).
> The first failure in precommit cron was here:
> https://ci-beam.apache.org/job/beam_PreCommit_Python_cron/4327/
> Seems related to BEAM-10006, but I'm not sure what changed to make this start
> flaking recently.
> {code}
> self = <apache_beam.options.pipeline_options_test.PipelineOptionsTest testMethod=test_display_data>
> def test_display_data(self):
> for case in PipelineOptionsTest.TEST_CASES:
> options = PipelineOptions(flags=case['flags'])
> dd = DisplayData.create_from(options)
> > hc.assert_that(dd.items, hc.contains_inanyorder(*case['display_data']))
> apache_beam/options/pipeline_options_test.py:222:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> target/.tox-py36-cloud/py36-cloud/lib/python3.6/site-packages/hamcrest/library/collection/issequence_containinginanyorder.py:68: in describe_mismatch
> self.matches(item, mismatch_description)
> target/.tox-py36-cloud/py36-cloud/lib/python3.6/site-packages/hamcrest/library/collection/issequence_containinginanyorder.py:64: in matches
> .describe_mismatch(sequence, mismatch_description)
> target/.tox-py36-cloud/py36-cloud/lib/python3.6/site-packages/hamcrest/core/base_matcher.py:34: in describe_mismatch
> mismatch_description.append_text('was ').append_description_of(item)
> target/.tox-py36-cloud/py36-cloud/lib/python3.6/site-packages/hamcrest/core/base_description.py:34: in append_description_of
> description = str(value)
> apache_beam/transforms/display.py:359: in __repr__
> return 'DisplayDataItem({})'.format(json.dumps(self._get_dict()))
> /usr/lib/python3.6/json/__init__.py:231: in dumps
> return _default_encoder.encode(obj)
> /usr/lib/python3.6/json/encoder.py:199: in encode
> chunks = self.iterencode(o, _one_shot=True)
> /usr/lib/python3.6/json/encoder.py:257: in iterencode
> return _iterencode(o, 0)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> self = <json.encoder.JSONEncoder object at 0x7fef54dd19b0>
> o = <apache_beam.options.value_provider.RuntimeValueProvider object at 0x7feeb2ed5c50>
> def default(self, o):
> """Implement this method in a subclass such that it returns
> a serializable object for ``o``, or calls the base implementation
> (to raise a ``TypeError``).
>
> For example, to support arbitrary iterators, you could
> implement default like this::
>
> def default(self, o):
> try:
> iterable = iter(o)
> except TypeError:
> pass
> else:
> return list(iterable)
> # Let the base class default method raise the TypeError
> return JSONEncoder.default(self, o)
>
> """
> > raise TypeError("Object of type '%s' is not JSON serializable" % o.__class__.__name__)
> E TypeError: Object of type 'RuntimeValueProvider' is not JSON serializable
> /usr/lib/python3.6/json/encoder.py:180: TypeError
> {code}
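> The trace shows the TypeError surfacing through __repr__: describing the
> mismatch calls str() on the item, whose repr json-encodes a dict holding a
> RuntimeValueProvider. A minimal sketch of that chain, using a hypothetical
> stand-in class (not Beam's actual implementation), where passing
> default=str to json.dumps is one hedged way to keep repr from raising:
> {code}
import json

class FakeValueProvider:
    """Stand-in for RuntimeValueProvider: not JSON serializable by default."""

class DisplayDataItemSketch:
    """Minimal sketch of a repr that json-encodes its contents."""
    def __init__(self, value):
        self._value = value

    def _get_dict(self):
        return {'value': self._value}

    def __repr__(self):
        # Mirrors the failing line in display.py: json.dumps raises
        # TypeError for an object it cannot encode. Falling back to
        # str() for unknown types avoids the crash during reporting.
        return 'DisplayDataItem({})'.format(
            json.dumps(self._get_dict(), default=str))
> {code}
> With default=str the repr succeeds even when the dict holds a value
> provider, while json.dumps without it still raises, reproducing the error
> above.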
--
This message was sent by Atlassian Jira
(v8.3.4#803005)