[ 
https://issues.apache.org/jira/browse/BEAM-11666?focusedWorklogId=621015&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-621015
 ]

ASF GitHub Bot logged work on BEAM-11666:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 09/Jul/21 15:13
            Start Date: 09/Jul/21 15:13
    Worklog Time Spent: 10m 
      Work Description: AlikRodriguez commented on a change in pull request 
#15118:
URL: https://github.com/apache/beam/pull/15118#discussion_r667024903



##########
File path: sdks/python/apache_beam/runners/interactive/recording_manager_test.py
##########
@@ -463,14 +463,13 @@ def test_clear(self):
     recording.wait_until_finish()
 
     # Assert that clearing only one recording clears that recording.
-    self.assertGreater(rm_1.describe()['size'], 0)
-    self.assertGreater(rm_2.describe()['size'], 0)
-    rm_1.clear()
-    self.assertEqual(rm_1.describe()['size'], 0)
-    self.assertGreater(rm_2.describe()['size'], 0)
-
-    rm_2.clear()
-    self.assertEqual(rm_2.describe()['size'], 0)
+    if rm_1.describe()['size'] > 0 and rm_2.describe()['size'] > 0:

Review comment:
       You are right, I changed it. As far as I can see, this is not really a 
problem but expected behavior in some scenarios: there is a flag that indicates 
when the recording stopped, which can happen when it is not necessary to record 
the same data twice.
    I added another test that checks a single clear on one recording, so this 
case is covered separately.
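The guard in the diff above can be illustrated with a minimal sketch. 
`FakeRecordingManager` is a hypothetical stand-in (not the real Beam 
`RecordingManager`, which needs a pipeline) that only models the 
`describe()['size']` / `clear()` contract the test relies on:

```python
class FakeRecordingManager:
  """Hypothetical stand-in for RecordingManager: tracks a cached byte
  count that clear() resets to zero."""

  def __init__(self, size):
    self._size = size

  def describe(self):
    return {'size': self._size}

  def clear(self):
    self._size = 0


rm_1 = FakeRecordingManager(size=10)
rm_2 = FakeRecordingManager(size=10)

# Guard against the "recording skipped" scenario described above: only
# assert on clearing when both recordings actually cached data.
if rm_1.describe()['size'] > 0 and rm_2.describe()['size'] > 0:
  rm_1.clear()
  assert rm_1.describe()['size'] == 0  # cleared manager is now empty
  assert rm_2.describe()['size'] > 0   # the other manager is untouched
```

When a recording was skipped (size already 0), the guard simply bypasses the 
assertions instead of failing flakily.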




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 621015)
    Time Spent: 0.5h  (was: 20m)

> apache_beam.runners.interactive.recording_manager_test.RecordingManagerTest.test_basic_execution
>  is flaky
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: BEAM-11666
>                 URL: https://issues.apache.org/jira/browse/BEAM-11666
>             Project: Beam
>          Issue Type: Bug
>          Components: test-failures
>            Reporter: Valentyn Tymofieiev
>            Assignee: Irwin Alejandro Rodirguez Ramirez
>            Priority: P1
>              Labels: flake
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Happened in: https://ci-beam.apache.org/job/beam_PreCommit_Python_Commit/16819
> {noformat}
> self = 
> <apache_beam.runners.interactive.recording_manager_test.RecordingManagerTest 
> testMethod=test_basic_execution>
>     @unittest.skipIf(
>         sys.version_info < (3, 6, 0),
>         'This test requires at least Python 3.6 to work.')
>     def test_basic_execution(self):
>       """A basic pipeline to be used as a smoke test."""
>     
>       # Create the pipeline that will emit 0, 1, 2.
>       p = beam.Pipeline(InteractiveRunner())
>       numbers = p | 'numbers' >> beam.Create([0, 1, 2])
>       letters = p | 'letters' >> beam.Create(['a', 'b', 'c'])
>     
>       # Watch the pipeline and PCollections. This is normally done in a 
> notebook
>       # environment automatically, but we have to do it manually here.
>       ib.watch(locals())
>       ie.current_env().track_user_pipelines()
>     
>       # Create the recording objects. By calling `record` a new 
> PipelineFragment
>       # is started to compute the given PCollections and cache to disk.
>       rm = RecordingManager(p)
> >     numbers_recording = rm.record([numbers], max_n=3, max_duration=500)
> apache_beam/runners/interactive/recording_manager_test.py:331: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> apache_beam/runners/interactive/recording_manager.py:435: in record
>     self._clear(pipeline_instrument)
> apache_beam/runners/interactive/recording_manager.py:319: in _clear
>     self._clear_pcolls(cache_manager, set(to_clear))
> apache_beam/runners/interactive/recording_manager.py:323: in _clear_pcolls
>     cache_manager.clear('full', pc)
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> self = 
> <apache_beam.runners.interactive.testing.test_cache_manager.InMemoryCache 
> object at 0x7fa3903ac208>
> labels = ('full', 
> 'ee5c35ce3d-140340882711664-140340882712560-140340476166608')
>     def clear(self, *labels):
>       # type (*str) -> Boolean
>     
>       """Clears the cache entry of the given labels and returns True on 
> success.
>     
>       Args:
>         value: An encodable (with corresponding PCoder) value
>         *labels: List of labels for PCollection instance
>       """
> >     raise NotImplementedError
> E     NotImplementedError
> {noformat}
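The traceback above ends in `NotImplementedError` because the test's 
`InMemoryCache.clear` is a stub. A minimal sketch of what a working in-memory 
`clear` could look like (hypothetical `SimpleInMemoryCache`, not Beam's actual 
`InMemoryCache` implementation; `write` is an assumed helper for populating it):

```python
class SimpleInMemoryCache:
  """Hypothetical in-memory cache keyed by a tuple of labels, whose
  clear() removes the entry instead of raising NotImplementedError."""

  def __init__(self):
    self._cache = {}

  def write(self, values, *labels):
    # Append values under the given label tuple, creating it if absent.
    self._cache.setdefault(labels, []).extend(values)

  def clear(self, *labels):
    """Clears the cache entry of the given labels; True on success."""
    if labels in self._cache:
      del self._cache[labels]
      return True
    return False


cache = SimpleInMemoryCache()
cache.write([0, 1, 2], 'full', 'some-pcoll-key')
assert cache.clear('full', 'some-pcoll-key') is True
assert cache.clear('full', 'some-pcoll-key') is False  # already cleared
```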



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
