jomarko commented on issue #874:
URL: https://github.com/apache/incubator-kie-issues/issues/874#issuecomment-1923469868

   # What
   We should introduce a new e2e test suite for the [dmn-editor](https://github.com/apache/incubator-kie-tools/tree/main/packages/dmn-editor) module. This means we should focus on its key features, while not repeating logic already tested as part of other test suites.
   
   ## Strategy
   
   The most important question is probably the number of tests and the level of detail we should focus on.
   
   We should probably avoid short test scenarios like:
   - open diagram, move node, close diagram
   - open diagram, rename node, save
   - open diagram, move node, undo, close diagram
   
   The reasoning behind this is:
   - similar actions are usually also part of more complex scenarios. If we imagine modeling a large diagram from a real domain, we can be sure all of the actions above will be included.
   - the shorter a scenario is, the higher the chance it should rather be a unit test or a mocked test instead of an end-to-end test.
   
   What, then, are the scenarios for end-to-end testing? We can split them into three categories.
   
   ### 01 Model from scratch
   We live in the age of AI; still, some DMN models need to be created from scratch by real human users. But from the **dmn-editor** test suite point of view, what is such a set of models? Or let's reword the question: what is the set of models the **dmn-editor** test suite will guarantee can be created? Is it the set of models present:
   - in [kogito-examples](https://github.com/apache/incubator-kie-kogito-examples)?
   - in the community [documentation](https://www.drools.org/learn/documentation.html)?
   - in new, for now non-existent Storybook documentation?
   - in the [ticket](https://github.com/IBM/bamoe-issues/issues/196) where new templates are discussed?
   - …?
   
   Regardless of which set of models we agree is needed for testing, we should not try to test from scratch 'every model in the universe' that we want to support, because that would mean duplicating tests of the same operations:
   - adding nodes
   - connecting nodes
   - creating new data types
   - …
   
   We should minimize the set of tested models while making sure that:
   - every node type is used in the test suite.
   - a simple custom data type is used in the test suite.
   - a data type with a constraint is used in the test suite.
   - a structured data type is used in the test suite.
   - …
   - and more from the 'Key Features' list.
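The "minimal set of models, full feature coverage" idea could be enforced with a simple checklist. Everything below is illustrative: the feature tags and model names (borrowed loosely from kogito-examples) are assumptions, not an agreed list.

```typescript
// Illustrative sketch only: feature tags and model names are assumptions,
// not an agreed list. Each candidate "supported" model declares which key
// features it exercises, and we verify the union covers everything required.
const requiredCoverage = new Set<string>([
  "node:decision",
  "node:inputData",
  "node:bkm",
  "node:decisionService",
  "node:knowledgeSource",
  "dataType:simpleCustom",
  "dataType:withConstraint",
  "dataType:structured",
]);

const supportedModels: Record<string, string[]> = {
  "loan-pre-qualification": ["node:decision", "node:inputData", "dataType:structured"],
  "traffic-violation": ["node:decision", "node:inputData", "node:bkm", "dataType:withConstraint"],
  "routing-decision-service": ["node:decisionService", "node:knowledgeSource", "dataType:simpleCustom"],
};

// Features that are required but not exercised by any planned model.
const covered = new Set(Object.values(supportedModels).flat());
const missingFeatures = [...requiredCoverage].filter((f) => !covered.has(f));
// An empty `missingFeatures` means the minimized model set still exercises
// every required feature; any entry points at a gap to fill.
```

A check like this could run in CI, so that shrinking the model set can never silently drop coverage of a key feature.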
   
   We should also minimize the overlap with other test suites. For example, when modeling node content, i.e., expressions, we should not create expressions from scratch in the **dmn-editor** test suite. We should probably simplify this part using a feature like Ctrl+C / Ctrl+V.
   The reasoning behind this is that creating expressions from scratch should be part of the **boxed-expression-component** test suite.
   
   ### 02 Reusing model
   The second category is a complement to the first one. Or let's say it depends on the first category. Here, users reuse the logic of other models. At a high level, we can say it is a two-phase procedure:
   - import a model with logic to reuse.
   - reuse the imported model's logic.

   However, as for the first category, what are examples of such models that we want the **dmn-editor** test suite to guarantee can always be created?
   Do we find them:
   - in [kogito-examples](https://github.com/apache/incubator-kie-kogito-examples)?
   - in the community Drools [documentation](https://www.drools.org/learn/documentation.html)?
   - in new, for now non-existent Storybook documentation?
   - in the [ticket](https://github.com/IBM/bamoe-issues/issues/196) where new templates are discussed?
   - …?
   
   Regardless of the answer to the above, the situation is more complicated compared to **01**, because one test now needs to operate with more models. Right now, the entire API of the DMN Editor is defined by the props of the `DmnEditor` component, which include some properties related to external model integration, i.e., `onRequestExternalModelByPath`, `onRequestExternalModelsAvailableToInclude`, and `externalModelsByNamespace`. The last one, `externalModelsByNamespace`, really is where the `DmnEditor` component gets everything it needs from external models. This should allow us to test the inclusion of models as part of the **dmn-editor** test suite; however, we will not be able to avoid mocking.
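As a rough illustration of that mocking, a test fixture could pre-build the map the component expects. The prop name `externalModelsByNamespace` comes from the `DmnEditor` props above, but the value shape and the namespace below are simplified assumptions for illustration, not the real types.

```typescript
// Simplified stand-in for the real external-model type used by the editor;
// the actual type in the dmn-editor package is richer than this.
type ExternalModelStub = { content: string };

// The namespace key is an invented example; real tests would use the
// namespaces of the actual fixture models being included.
const externalModelsByNamespace: Record<string, ExternalModelStub> = {
  "https://example.org/dmn/reused-logic": {
    content: "<definitions>...</definitions>", // fixture DMN XML would go here
  },
};

// A test could then render the editor with the mock, conceptually:
//   <DmnEditor externalModelsByNamespace={externalModelsByNamespace} ... />
const reused = externalModelsByNamespace["https://example.org/dmn/reused-logic"];
```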
   
   That is why we also need to introduce tests for including models into other test suites:
   - the **vscode-extension** test suite
   - the **online-editor** test suite
   
   ### 03 Non-functional tests
   **01** and **02** are both functional tests. However, we also need to test non-functional aspects as part of the **dmn-editor** test suite.
   - We should check that components are not unnecessarily re-rendered. A debug mechanism that should help us do this was introduced as part of a [PR](https://github.com/apache/incubator-kie-tools/pull/2130).
   - We should check that opening large models doesn't take too long.
   - We should check that opening other vendors' models is not an issue (still an open question).
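The "doesn't take too long" check could be phrased as an explicit time budget, along these lines. The helper and the 10-second budget are placeholders, not measured or agreed numbers.

```typescript
// Sketch of a time-budget assertion. `action` stands in for whatever the
// test harness does to open a large model; the budget value is an arbitrary
// placeholder, not an agreed threshold.
function assertWithinBudget(action: () => void, budgetMs: number): number {
  const start = Date.now();
  action();
  const elapsed = Date.now() - start;
  if (elapsed > budgetMs) {
    throw new Error(`Action took ${elapsed}ms, budget was ${budgetMs}ms`);
  }
  return elapsed;
}

// Example usage with a stand-in for opening a large DMN model:
const elapsedMs = assertWithinBudget(() => {
  // placeholder: load a large fixture model into the editor here
}, 10_000);
```

Keeping the budget explicit makes a slow regression fail loudly instead of just stretching the suite's runtime.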
   
   ## Supported models
   We described some models in the **01 Model from scratch** and **02 Reusing model** sections. Let's start calling them **supported** models. We need to merge and optimize both sets of models from **01** and **02** and create the final set of supported models.
   
   ## Consequences of supported models
   Once we agree on the set of models that needs to be supported by the DMN editor, we need to use them as inputs for other test suites, especially for:
   - the **scesim-editor** test suite
   - the **dmn-runner** test suite
   - the **dev-deployment** test suite
   - the **vscode-extension** test suite (**02 Reusing model** models only)
   - the **online-editor** test suite (**02 Reusing model** models only)
   
   We also need to keep in mind that any change in the supported models we produce as part of the **dmn-editor** test suite may affect other test suites; all of the listed test suites should be rechecked and potentially synced with the **dmn-editor** test suite.
   
   ## Supported environments
   In the context of the **dmn-editor** test suite, we probably need to decide what the supported environments for the test suite are. Is it some matrix of [browsers] x [operating systems]?
   
   ## Further restrictions
   - We should never store users' DMN models in the kie-tools repository, unless those models were sanitized, anonymized, and randomized.
   - We should not exceed a reasonable execution time for the **dmn-editor** test suite.
   
   ## Key Features
   - Putting nodes on the diagram, connecting them, and removing them
   - Changing node properties; the most important seem to be names and data types
   - Creating custom data types
   - - - - - - - - - - - - - - - - - - - -
   - Autolayout feature
   - Keyboard accessibility
   - Undo/Redo operations
   - Copy and Paste feature
   - Bend points
   - DRD
   - Validation (e.g., node names)
   
   If we have a look at the key features, it is highly recommended that we organize the tests into two phases:
   - execute the crucial operations first, those above the delimiting line
   - execute the nice-to-have features afterwards, those below the delimiting line
   
   If we imagine a test with these steps:
   - Create node A
   - Create node B
   - Undo
   - Rename node A
   - Paste node C
   - Make connection A->B, A->C
   - Add data type A
   - Add data type B
   - Move node C
   - Add bend point to the A->C edge
   - Add constraint to data type A <--- **and here the test starts to fail**
   - Add B->C edge
   - Rename node C
   
   The scenario above makes the test more similar to real user interaction; however, it makes the test almost impossible to maintain and its results hard to analyze.

   If the highlighted failure occurs, is it a blocker for the whole release? Is it some minor corner-case bug? It is hard to say, as we mix a lot of different operations.
   
   If we reorder the steps according to the recommendation, while still testing that the same model can be created:
   - Create node A
   - Create node B
   - Paste node C
   - Move node C
   - Make connection A->B, A->C
   - Add B->C edge
   - Add data type A
   - Add data type B
   - Rename node A
   - Rename node C
   - -------------------------------------------------------
   - Undo
   - Add bend point to the A->C edge
   - Add constraint to data type A <--- **and here the test starts to fail**
   
   This should be more readable and more effective from the testing-time point of view, as we do not switch the editor views so often.
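The two-phase ordering can also be made explicit in the test code itself, for example by keeping the two step lists separate. The step names are the illustrative ones from the scenario above; how they map to real test actions (e.g. Playwright `test.step` calls) is left open.

```typescript
// Sketch: crucial steps and nice-to-have steps kept as separate phases, so a
// failure below the delimiting line cannot mask a broken crucial operation.
const crucialSteps: string[] = [
  "Create node A",
  "Create node B",
  "Paste node C",
  "Move node C",
  "Make connection A->B, A->C",
  "Add B->C edge",
  "Add data type A",
  "Add data type B",
  "Rename node A",
  "Rename node C",
];

const niceToHaveSteps: string[] = [
  "Undo",
  "Add bend point to the A->C edge",
  "Add constraint to data type A",
];

// A runner would execute the crucial steps first, then the nice-to-have ones;
// a failure's position in the order immediately tells us how severe it is.
const executionOrder: string[] = [...crucialSteps, ...niceToHaveSteps];
```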
   
   ## Features that should be tested in other modules
   - Modeling DMN node expressions - should be tested as part of the **boxed-expression-component** test suite
   - Testing DMN models using SceSim - should be tested as part of the **scesim-editor** test suite
   - Executing DMN models using the DMN Runner - should be tested as part of the **dmn-runner** test suite
   - Executing DMN models once they are deployed to the cloud - should be tested as part of the **dev-deployment** test suite
   - Reusing models, as described in **02**, should be somehow repeated, or let's say sanity-checked, in the **online-editor** and **vscode-extension** test suites
   - Creating custom DMN data types from Java files - should be part of the **vscode-extension** test suite, as it requires Java files to be present; those are supported only in the **vscode-extension**, while the **online-editor** doesn't support Java files. However, this is currently blocked by a **[ticket](https://github.com/apache/incubator-kie-issues/issues/782)**
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
