Hi Anil,

On 18/06/2016 at 22:21, Anil Patel wrote:
Jacques and others,

Thanks for your kind support to the team of developers working on so many tickets at 
the same time.

I'm very happy to see such activity, and I must confess it's hard to follow 
all the progress, but it's definitely exciting!
Some of the active engineers from HW are also skilled in QA practices and tools 
to automate testing, and they will be super happy to start contributing Selenium 
tests.

Looking forward :)

Before we can start writing Selenium tests, it would be good to have the community 
agree on the expected system behavior and document it. We do have a good start on 
it:
https://cwiki.apache.org/confluence/display/OFBIZ/Business+Process+Stories+and+Use+Cases+Library

Is the fact that the use cases are posted on the OFBiz Confluence enough to say 
that these are the expected system behavior?

OR

Do we need to take time to review them and declare them accepted? (We could use a 
Jira ticket to track the review activity and then close the ticket when we have 
agreement.)

I think we can trust this work, and complete it or step in when necessary.

The developer team at HW is excited to do whatever it takes to increase OFBiz 
adoption, and they will be happy to develop Selenium tests.

Sounds like a start, great!

Jacques



Thanks and Regards
Anil Patel
COO
Hotwax Systems
http://www.hotwaxsystems.com/
Cell: +1 509 398 3120

On Jun 18, 2016, at 12:10 AM, Jacques Le Roux <[email protected]> 
wrote:


On 17/06/2016 at 14:03, Ron Wheeler wrote:
On 17/06/2016 5:19 AM, Jacques Le Roux wrote:
On 16/06/2016 at 22:53, Ron Wheeler wrote:
One of the side benefits of having a small number of committers is that you 
prevent bad designs and poorly tested code from getting into the trunk.
The disadvantage is that the committers are easily overwhelmed by an active 
contributor community.
Would you say that with 31 committers (most active) we are currently a small 
number of committers?
Are the committers able to verify the code committed?
I believe so

How many of the regressions should have been detected before the code was 
committed?
I have no idea

How many of the regressions were caused by lack of documentation of existing features so 
that people broke things that were "hidden" relationships?
One part of the project which sorely lacks documentation is the UI of the 
content component. But the problems appeared mostly with changes related to 
FOP and Birt, because upgrading/refactoring/improving code is not always as easy 
a task as it may look.

It is hard to build and maintain a bullet-proof integration test suite so human 
engineering is still a big part of the solution.
Right, I'm still convinced some high-level Selenium tests would help.

You may want to put in some rules about unit tests, so that code without 
adequate test coverage cannot be updated until the unit tests are sufficient 
for the committer to feel confident about accepting it. This may cause people 
to work on tests for stuff that they did not write but that is considered key 
functionality in the modules being updated.
There is no free ride and if you allow people to build up the technical debt of 
the project in order to meet their own deadlines, you will eventually have to 
face a large debt that comes due.

Taher is paying off the debt in the framework which is a great contribution.
It may be that others are going to have to take up the challenge in the 
application side.
You may have to have a short moratorium on enhancements until the debt is 
reduced to a manageable level.

There may also be the issue of people modifying too many layers at once, so that 
changes affect a lot of different services and unpleasant side-effects are 
easier to generate.

Are the regressions caused by a small group of contributors or from updates 
going through a few committers?
As I said, fortunately these have recently been small things. For now it's hard 
to answer your question, because the HW effort is in full swing. I guess when it 
settles down a lot of things will be better/fixed; in the meantime we will certainly 
face some uncertainty.
My question was not about pointing fingers but about how to prevent issues. Hence my 
question about Selenium, because our current set of tests is obviously not 
enough.
Your suggestion about more unit tests and rules is certainly worth considering. I'd 
wait for the end of the intense HW effort, for things to stabilise, and then 
try to introduce more constraints. Or should we discuss it right away, 
community?

When you are in a hole, the first thing to do is to stop digging!
I'm not sure how to interpret this injunction ;)

Jacques


It is an open source project, so there has to be some sensitivity about asking 
people to do a bit more to clean up old debt; but if that is a problem and it is 
not addressed, it can become a big mess.
I see a lot of skilled goodwill and clear success over the last few years. I 
think we can achieve it; OFBiz is here to stay!

Jacques


Ron


On 16/06/2016 3:48 PM, Taher Alkhateeb wrote:
Hi Jacques,

Selenium tests cannot be unit tests in OFBiz because they require firing up
the server. You can consider them part of the integration tests (existing
functionality). In fact, I would consider Selenium tests to be functional
tests (higher than integration) ->
https://en.wikipedia.org/wiki/Functional_testing

So yeah we can add them, but I don't think we can do that to the raw
unit-tests (at least in the context discussed in the other proposal thread)
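
To make the distinction concrete, here is a minimal sketch of what such a functional test could look like. This is illustrative only: it assumes a locally running OFBiz instance, a browser plus the Selenium WebDriver library on the classpath, and it uses the standard OFBiz demo credentials; the URL and form field names are assumptions for the sake of the example, not verified identifiers.

```java
// Sketch of a Selenium functional test: it exercises the running server
// through the browser, which is exactly why it cannot be a unit test.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            // Assumed local instance and login form field names (illustrative).
            driver.get("https://localhost:8443/webtools/control/main");
            driver.findElement(By.name("USERNAME")).sendKeys("admin");
            driver.findElement(By.name("PASSWORD")).sendKeys("ofbiz");
            driver.findElement(By.name("PASSWORD")).submit();
            // A functional test asserts on visible behaviour, not internals.
            if (driver.getPageSource().contains("login")) {
                throw new AssertionError("still on the login screen after submit");
            }
        } finally {
            driver.quit();
        }
    }
}
```

Because such a test depends on a started server, a seeded database, and a browser, it belongs in a separate functional suite rather than in the fast unit-test run.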

Taher Alkhateeb

On Thu, Jun 16, 2016 at 10:40 PM, Jacques Le Roux <
[email protected]> wrote:

Hi,

With the considerable HW effort, a lot of things are going on recently,
and it's hard to follow. I have noticed, though, that we experience more and more
regressions (not all related to the HW effort, far from it).

Fortunately it's so far mostly minor points, often related to the UI,
OFBIZ-7346 and OFBIZ-7363 being counter-examples (OFBIZ-7346 can be
critical).

 From my experience, without a QA person or team, it's very hard to detect
those side effects at the UI level when you refactor or fix things. I remember
the (ex) Neogia team (mostly Erwan) tried to maintain a Selenium/Webdriver
set of tests. I don't know if they continued.

Since we spoke about JUnit and unit tests recently: some prefer TestNG, at
least coupled with Selenium http://testng.org/doc/selenium.html
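
For reference, a TestNG-flavoured Selenium test along the lines of that page might be structured as below. This is only a sketch under assumptions: TestNG and Selenium on the classpath, a running OFBiz instance, and a hypothetical screen URL chosen for illustration.

```java
// Sketch: TestNG lifecycle annotations wrapping a Selenium WebDriver session.
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class OrderScreenSmokeTest {
    private WebDriver driver;

    @BeforeClass
    public void startBrowser() {
        // One browser per test class keeps the suite reasonably fast.
        driver = new FirefoxDriver();
    }

    @Test
    public void orderListPageLoads() {
        // Hypothetical screen; real tests would target the agreed
        // business scenarios from the use-case library.
        driver.get("https://localhost:8443/ordermgr/control/orderlist");
        Assert.assertFalse(driver.getTitle().isEmpty(), "page should have a title");
    }

    @AfterClass
    public void stopBrowser() {
        driver.quit();
    }
}
```

TestNG's `@BeforeClass`/`@AfterClass` hooks make the browser setup and teardown explicit, which is one reason it is often paired with Selenium.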

Does it make sense, or do you think it's just a utopia?

Thanks

Jacques

