This is probably a little too specific for your particular case, but I'll 
suggest it anyway, because it's a set-up I've found works well.

We've developed a number of post-processing validators at my employer, all of 
them Python scripts built with Click 
(https://click.palletsprojects.com/en/8.0.x/). They have a large number of 
options and parameters that interact with each other, and they perform an 
important role, so we have to test them thoroughly.

We use a 2-layered approach in the build pipeline:

* Firstly, a simple Make recipe calls the script with `--help` (both bare and 
for each subcommand) and checks the output for some key phrases. This is 
effectively a smoke test to ensure that Click is wired up correctly.
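In case the shape of that smoke test is useful: the recipe boils down to running the command with `--help` and checking the output for expected phrases. A minimal Python sketch of the same idea is below - the command and phrases are placeholders (it's demonstrated against the Python interpreter itself so the example is self-contained; in the real pipeline you'd point it at your own script):

```python
import subprocess
import sys


def help_smoke_test(cmd, key_phrases):
    """Invoke `cmd --help` and check the output for some key phrases."""
    result = subprocess.run(
        cmd + ["--help"], capture_output=True, text=True
    )
    # A working CLI should exit cleanly when asked for help...
    assert result.returncode == 0, result.stderr
    # ...and its help text should mention the phrases we expect.
    for phrase in key_phrases:
        assert phrase in result.stdout, f"missing phrase: {phrase!r}"
    return "smoke test passed"


# Placeholder invocation: the interpreter stands in for your script.
print(help_smoke_test([sys.executable], ["usage:"]))
```

The same check is trivial to express as a Make recipe with `grep -q`; the point is just that it exercises the wiring, not the logic.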

* Secondly, we have "integration" tests built with pytest using the 
pytest-click plugin (https://pypi.org/project/pytest-click/). This gives easy 
access to the `CliRunner` documented here: 
https://click.palletsprojects.com/en/8.0.x/testing/ . It's in this layer that 
we test the permutations and combinations of options and parameters in broad 
strokes. For example, we have a happy-path test which ensures that a valid 
report passes checking - it looks like this:

import pathlib

# `cli_runner` is the fixture provided by pytest-click; `main` is the
# Click entry point of the script under test (import omitted here).
def test_happy_path(cli_runner, data_good_path: pathlib.Path) -> None:
    """
    Set of data in 'all_test' dir is valid, taken from audit snapshots.
    """
    in_dir = data_good_path / "all_test"

    result = cli_runner.invoke(main, ["all", str(in_dir), "201901"])

    assert result.exit_code == 0, result.output
    assert "--- Cross checking ---" in result.output
    assert "🎉 🎉 🎉" in result.output

Hope that's helpful,

James



On Fri, 11 Jun 2021, at 3:49 AM, Jeremy Bowman wrote:
> Maybe pexpect does what you want? https://pypi.org/project/pexpect/

> We recently started using it to test make targets in the Open edX development 
> environment, those tests are at 
> https://github.com/edx/devstack/tree/master/tests if they're useful for 
> reference.

> Jeremy Bowman

> 

> On 2021-06-10 13:43, Dan Stromberg wrote:

>>  
>> Hi folks.
>>  
>> Are there any tools available for system testing a large, shell-callable 
>> python script with many different command line options? 
>>  
>> I'm aware of pytest for unit tests, but what about running a shell command 
>> with some options, and checking its stdout for appropriate content?
>>  
>> Thanks.
>> 
>>  
>> -- 
>> 
>> Dan Stromberg
>> 
>> _______________________________________________
>> code-quality mailing list -- code-quality@python.org
>> To unsubscribe send an email to code-quality-le...@python.org
>> https://mail.python.org/mailman3/lists/code-quality.python.org/
>> Member address: code-qual...@portabase.org
> 
