KevinGG commented on a change in pull request #14437:
URL: https://github.com/apache/beam/pull/14437#discussion_r608035343



##########
File path: sdks/python/apache_beam/runners/interactive/utils_test.py
##########
@@ -182,6 +183,9 @@ def test_child_module_logger_can_override_logging_level(self, mock_emit):
 @unittest.skipIf(
     not ie.current_env().is_interactive_ready,
     '[interactive] dependency is not installed.')
[email protected](

Review comment:
   > In such cases, I would rather the test failed with a descriptive error message. If dependencies are not installed correctly, that indicates a real problem with either our test setup or the actual code, and we should be aware of it. But I'm not sure what is the best way to differentiate between intentional skips (as described above) and accidental skips.
   
   Makes sense; let me change the logging to the error level. That way, if the test is not skipped but the dependency later turns out to be missing (or gets removed), it will error out and expose the problem.
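   
   For illustration only, a minimal sketch of that direction; the helper name and messages below are assumptions, not the actual change in this PR:
   
   ```python
   import logging
   
   _LOGGER = logging.getLogger(__name__)
   
   def _interactive_ready():
     # Hypothetical helper: probes whether the [interactive] extra is usable.
     try:
       import IPython  # noqa: F401
       return True
     except ImportError as e:
       # Error level (instead of warning) so an unexpectedly missing dependency
       # is loud in CI rather than silently skipping or degrading the test.
       _LOGGER.error('[interactive] dependency is not installed: %s', e)
       return False
   ```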
   
   
   > How do you know?
   
   Based on the stack trace in the reported JIRA ticket, the actual call is:
   
   ```
   actual = [call('\n            <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min...";>You have limited Interactive Beam features since your ipython kernel is not connected any notebook frontend.</div>')]
   ```
   This indicates `is_in_ipython and not is_in_notebook`, which contradicts the setup in the test. It means the global instance `ie.current_env()` was modified during the test by other tests. By mocking the IPython environment and every usage of the global instance, this test is isolated from such potential race conditions.
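   
   As a rough sketch of that isolation (the fake flags and assertions are illustrative, not this PR's exact test code):
   
   ```python
   import unittest
   from unittest.mock import MagicMock, patch
   
   from apache_beam.runners.interactive import interactive_environment as ie
   
   class IsolatedMessagingTest(unittest.TestCase):
     def test_message_outside_any_notebook_frontend(self):
       # A fake environment whose flags other tests cannot flip mid-run.
       fake_env = MagicMock()
       fake_env.is_in_ipython = False
       fake_env.is_in_notebook = False
   
       # Pin every lookup of the global environment to the fake so concurrent
       # tests mutating the real singleton cannot leak state into this test.
       with patch(
           'apache_beam.runners.interactive.interactive_environment.current_env',
           return_value=fake_env):
         self.assertFalse(ie.current_env().is_in_ipython)
         self.assertFalse(ie.current_env().is_in_notebook)
   ```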



