[
https://issues.apache.org/jira/browse/MINIFICPP-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119716#comment-17119716
]
James Medel edited comment on MINIFICPP-1233 at 5/29/20, 3:39 PM:
------------------------------------------------------------------
[~phrocker] I have examined some of the test programs in the
[nifi-minifi-cpp/libminifi/test/|https://github.com/apache/nifi-minifi-cpp/tree/master/libminifi/test] folder.
For example, TensorFlowTests.cpp uses the Catch C++ test library to create test
cases for each TensorFlow C++ processor. Let's look at the first test case:
[TEST_CASE("TensorFlow: Apply Graph",
"[tfApplyGraph]")|https://github.com/apache/nifi-minifi-cpp/blob/master/libminifi/test/tensorflow-tests/TensorFlowTests.cpp].
A testController object of type
[TestController|https://github.com/apache/nifi-minifi-cpp/blob/master/libminifi/test/TestBase.h]
is created, and a plan object is obtained by calling testController's
createPlan. The input and output file paths are then defined as strings. The
plan object is used to build the MiNiFi data flow, a.k.a. the processing graph,
which includes adding the processors and setting their properties. The test
then builds a test TensorFlow graph, reads that graph into TFApplyGraph, writes
a test input tensor, and reads the test output tensor.
It looks like they are testing the TFApplyGraph C++ processor by building a
MiNiFi flow in C++ code, providing input data and analyzing the expected output
data.
Should I follow a similar approach for the H2O Python processors? I was looking
into building and testing a hybrid Python/C++ program:
[https://www.benjack.io/2017/06/12/python-cpp-tests.html]
Or should I just focus on testing the two individual H2O Python processors
without building the MiNiFi flow in Python/C++ code?
> Create Regression Tests for H2O Processors using pytest-mock
> ------------------------------------------------------------
>
> Key: MINIFICPP-1233
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1233
> Project: Apache NiFi MiNiFi C++
> Issue Type: Bug
> Reporter: James Medel
> Assignee: James Medel
> Priority: Blocker
> Fix For: 0.8.0
>
>
> Per https://issues.apache.org/jira/browse/MINIFICPP-1214
> We should resolve Marc's and Szasz's concerns about whether
> [https://github.com/apache/nifi-minifi-cpp/pull/784] breaks these two new
> processors by adding regression tests. We plan to use the pytest-mock
> ([https://github.com/pytest-dev/pytest-mock]) testing framework to
> implement the regression tests for the new processors, which should make
> them easier to merge.
>
> regression test concerns
--
This message was sent by Atlassian Jira
(v8.3.4#803005)