Re: [Bf-committers] [GSoC 2018] Questions regarding tests in Python

2018-03-02 Thread Brecht Van Lommel
Hi Łukasz,

I've updated the wiki page with more detail since it was quite vague:
https://wiki.blender.org/index.php/Dev:Ref/GoogleSummerOfCode/2018/Ideas#Tests_for_Regressions

On Fri, Mar 2, 2018 at 2:46 PM, Łukasz Hryniuk  wrote:

> 1. There is a list of areas to be tested. How will they be chosen?
>

It's up to you to pick some areas for your proposals. Testing multiple
areas from the wiki page seems doable, but you can also propose others
if you want.

> 2. What should tests look like?
>


> I don't understand what exactly "we don't get 1:1 match with
> bmesh" means, but this comment is from 2012, so I assume it's no longer true.
>

These tests are a bit outdated, but this is referring to differences
between the Carve and BMesh boolean implementations. We only have BMesh
booleans now, so something there should be updated.


> The goal of the "Tests for Regressions" project is to actually check results,
> so I've started writing a test for the Array modifier: I created an object,
> then another one as the expected result, and in the test I applied the
> modifier and compared the result with the expected mesh using
> bpy.types.Mesh.unit_test_compare(), which, as far as I can see, compares data
> like the vertices, edges and so on of two meshes (I haven't found many uses
> of that method in tests).
>
> Should a test in this project look like this?
>

What you are describing is more of a unit test for the "Tests for Core
Libraries" idea. Both can be useful, but the main idea I had in mind for
regression testing was to do it in a way that tests can be created quickly
and are easy for developers to use and maintain. It could already be used,
for example, to check that master and blender2.8 give the same results.

See the description on the wiki page.
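
For example, a rough sketch of how that check could be driven from the
outside, by running the same test script headlessly in both builds; all
paths and file names below are just placeholders, not existing files:

import subprocess

# Placeholder paths to two Blender builds and to a hypothetical test setup.
BUILDS = {
    "master":     "/path/to/master/blender",
    "blender2.8": "/path/to/blender2.8/blender",
}
BLEND = "modifier_tests.blend"       # input .blend with the test scene
SCRIPT = "modifier_regression.py"    # script that exits non-zero on mismatch

for name, blender in BUILDS.items():
    # Blender's --background/--python flags run the script without a UI.
    proc = subprocess.run([blender, "--background", BLEND, "--python", SCRIPT])
    print(name, "passed" if proc.returncode == 0 else "FAILED")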


> 3. Where should they be placed: .blend or .py?
>

I think it's best to create a .blend for the input data, and Python scripts
to test it with multiple modifiers, nodes, tools and settings.
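
For example, a minimal sketch of such a script; the object names "TestCube"
and "ExpectedCube" are placeholders for whatever is in the .blend, and it
uses the 2.7x-style to_mesh() API (blender2.8 evaluates through the
dependency graph instead):

import bpy

def test_array_modifier():
    scene = bpy.context.scene
    obj = bpy.data.objects["TestCube"]           # input object (placeholder name)
    expected = bpy.data.objects["ExpectedCube"]  # hand-modelled expected result

    # Add and configure the modifier under test.
    mod = obj.modifiers.new(name="Array", type='ARRAY')
    mod.count = 3

    # Evaluate the modifier stack into a new mesh (Blender 2.7x API).
    result = obj.to_mesh(scene, apply_modifiers=True, settings='PREVIEW')

    # unit_test_compare() returns "Same" when the meshes match.
    report = result.unit_test_compare(mesh=expected.data)
    if report != "Same":
        raise AssertionError("Array modifier output mismatch: " + report)

test_array_modifier()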


> 4. How will it be evaluated, i.e. how much is a participant supposed to
> achieve by each evaluation?
>
> Will it be measured by number of tests/coverage (I've got no idea how to
> check that)? Is it up to me to set milestones with a mentor, based on my
> intuition of how much time testing each area will take?
>

As part of the proposal you create a plan, and mentors can give
feedback on it after the proposal is submitted to make it better. It's
not so much about a specific number of tests; for evaluation we look at how
you are doing overall.

Regards,
Brecht.


Re: [Bf-committers] [GSoC 2018] Questions regarding tests in Python

2018-03-02 Thread Howard Trickey
On Fri, Mar 2, 2018 at 8:47 AM Łukasz Hryniuk  wrote:

> Hi,
>
> I've been messing around with the Blender sources for a while, getting
> familiar with the code and trying to find an area where I could be most
> effective during GSoC. I'd like to ask about the "Tests for Regressions" idea:
>
>
> 1. There is a list of areas to be tested. How will they be chosen?
>
> Are there any usage statistics? Further development plans? From the most
> basic ones to the most complicated? Is it up to me to test e.g.
> modifiers first?
>
>
> 2. What should tests look like?
>
> E.g. under /lib/tests/modifier_stack/ there are .blend files that just show
> how a modifier is supposed to affect a given mesh (e.g. curve_modifier.blend,
> which is in fact a subsurf + curve combination), and others that prepare a
> scene, apply a modifier and call validate() on the resulting mesh
> (array_test.blend). There is also the
> /blender/tests/python/bl_mesh_modifiers.py file, with a comment:
>
> # Currently this script only generates images from different modifier
> # combinations and does not validate they work correctly,
> # this is because we don't get 1:1 match with bmesh.
> #
> # Later, we may have a way to check the results are valid.
>
> I don't understand what exactly "we don't get 1:1 match
> with bmesh" means, but this comment is from 2012, so I assume it's no longer
> true.
>
> The goal of the "Tests for Regressions" project is to actually check
> results, so I've started writing a test for the Array modifier: I created
> an object, then another one as the expected result, and in the test I applied
> the modifier and compared the result with the expected mesh using
> bpy.types.Mesh.unit_test_compare(), which, as far as I can see, compares data
> like the vertices, edges and so on of two meshes (I haven't found many uses
> of that method in tests).
>
> Should a test in this project look like this?
>
>
I have found that a convenient way to test mesh operations end-to-end
is to do what you said here (using unit_test_compare).
I wrote mesh_ops.test.py, which is in lib/tests/modeling and is a kind
of framework for specifying mesh ops to apply to different objects
with different elements selected, along with the expected output meshes;
it then uses unit_test_compare to compare them.
Examples of its use are in bevel_regression.blend and
bool_regression.blend in that directory (the test specs are in
a text window).
The 'make test' target for Blender can specify running Blender
on these files and calling a function so that success / failure
is reported in the usual way.
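
Just to illustrate the idea, a sketch of that kind of spec-driven loop; the
spec format below is invented for this example (it is not the actual format
mesh_ops.test.py uses) and it assumes the needed elements are already
selected in the .blend:

import bpy

# Invented spec format: (input object, mesh operator, expected-result object).
SPECS = [
    ("Cube",     "bevel",             "Cube_beveled_expected"),
    ("CubeBool", "intersect_boolean", "CubeBool_expected"),
]

def run_spec(obj_name, op_name, expected_name):
    scene = bpy.context.scene
    obj = bpy.data.objects[obj_name]
    scene.objects.active = obj            # 2.7x-style active object
    bpy.ops.object.mode_set(mode='EDIT')
    getattr(bpy.ops.mesh, op_name)()      # run the operator on the selection
    bpy.ops.object.mode_set(mode='OBJECT')
    expected = bpy.data.objects[expected_name].data
    return obj.data.unit_test_compare(mesh=expected)

for spec in SPECS:
    result = run_spec(*spec)
    assert result == "Same", "mismatch for %s: %s" % (spec[1], result)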


>
> 3. Where should they be placed: .blend or .py?
>
> For some tests I can create the expected object using Python, by specifying
> vertices or joining primitives. They could also, probably faster, be created
> using the GUI.
>
> What's recommended?
>
> Creating a .blend file is much easier and more convenient for reviewing
> what's happening in Blender, but it makes it harder to check the actual
> code/scene details (like object positions or modifier parameters) and to
> search for them (I haven't found any tool to grep text inside a .blend
> file, only blendfile.py, which as far as I can see could be used for that
> with no small effort). Moreover, I think it's easier to organize tests
> in a .py file. In a .blend, one idea is to use layers (?) to separate tests
> for different parameters (e.g. for the Array modifier I'd like to test the
> merge option and constant offset separately).
>
>
> 4. How will it be evaluated, i.e. how much is a participant supposed to
> achieve by each evaluation?
>
> Will it be measured by number of tests/coverage (I've got no idea how to
> check that)? Is it up to me to set milestones with a mentor, based on my
> intuition of how much time testing each area will take?
>
>
> Regards,
> Łukasz Hryniuk


[Bf-committers] [GSoC 2018] Questions regarding tests in Python

2018-03-02 Thread Łukasz Hryniuk

Hi,

I've been messing around with the Blender sources for a while, getting
familiar with the code and trying to find an area where I could be most
effective during GSoC. I'd like to ask about the "Tests for Regressions" idea:



1. There is a list of areas to be tested. How will they be chosen?

Are there any usage statistics? Further development plans? From the most 
basic ones to the most complicated? Is it up to me to test e.g. 
modifiers first?



2. What should tests look like?

E.g. under /lib/tests/modifier_stack/ there are .blend files that just show
how a modifier is supposed to affect a given mesh (e.g. curve_modifier.blend,
which is in fact a subsurf + curve combination), and others that prepare a
scene, apply a modifier and call validate() on the resulting mesh
(array_test.blend). There is also the
/blender/tests/python/bl_mesh_modifiers.py file, with a comment:


# Currently this script only generates images from different modifier
# combinations and does not validate they work correctly,
# this is because we don't get 1:1 match with bmesh.
#
# Later, we may have a way to check the results are valid.

I don't understand what exactly "we don't get 1:1 match
with bmesh" means, but this comment is from 2012, so I assume it's no longer
true.


The goal of the "Tests for Regressions" project is to actually check
results, so I've started writing a test for the Array modifier: I created
an object, then another one as the expected result, and in the test I applied
the modifier and compared the result with the expected mesh using
bpy.types.Mesh.unit_test_compare(), which, as far as I can see, compares data
like the vertices, edges and so on of two meshes (I haven't found many uses
of that method in tests).


Should a test in this project look like this?


3. Where should they be placed: .blend or .py?

For some tests I can create the expected object using Python, by specifying
vertices or joining primitives. They could also, probably faster, be created
using the GUI.


What's recommended?

Creating a .blend file is much easier and more convenient for reviewing
what's happening in Blender, but it makes it harder to check the actual
code/scene details (like object positions or modifier parameters) and to
search for them (I haven't found any tool to grep text inside a .blend
file, only blendfile.py, which as far as I can see could be used for that
with no small effort). Moreover, I think it's easier to organize tests
in a .py file. In a .blend, one idea is to use layers (?) to separate tests
for different parameters (e.g. for the Array modifier I'd like to test the
merge option and constant offset separately).



4. How will it be evaluated, i.e. how much is a participant supposed to
achieve by each evaluation?


Will it be measured by number of tests/coverage (I've got no idea how to
check that)? Is it up to me to set milestones with a mentor, based on my
intuition of how much time testing each area will take?



Regards,
Łukasz Hryniuk