Re: Setting up a Quality Team within KDE

2012-04-21 Thread Alexander Neundorf
On Saturday 14 April 2012, Laszlo Papp wrote:
  Afterwards it runs cppcheck on the source code...
 
 That is a really nice advantage, if it can be integrated with handy
 tools. That said, I would personally like to use much more advanced
 and useful tools than cppcheck.
 
 [...]
 
  But overall these are two very useful solutions which do not compete with
  each other and we should make use of them in a useful combined way, e.g.
  by submitting build results from build.kde.org to CDash.
 
 Fair enough, thanks.
 
 One additional comment on my side is that, if I read these lines on
 the jenkins website, I have the impression Jenkins can cover the use
 case of CDash in the sense of running the build from cron jobs:

I think CDash can do more than you (and I) know.
I know there is the so-called CDash@Home feature, which can kind of trigger 
builds on machines.
It can also be integrated with gerrit.
Here Marcus explains the setup Kitware is using:
http://www.kitware.com/source/home/post/62

Alex


Re: Setting up a Quality Team within KDE

2012-04-17 Thread Andreas Pakulat
On 16.04.12 23:52:00, Andras Mantia wrote:
 On Monday, April 16, 2012 08:59:13 AM Volker Krause wrote:
  I don't think UI is necessarily easier to change than internal API,
  the key  difference is that you'll get a compile error for the unit
  test, while you won't notice a broken Squish test immediately.
 
 I was thinking about such innocent things as changing the title of a 
 message box/dialog. If the object is found by an exact match on the title, 
 then changing it will likely break the testcase whenever the dialog or 
 any child widget of the dialog is accessed. E.g. changing "Error" to 
 "Error doing X" can break your test.
  I agree, it is also easy to break a unit test, but the problem is the 
 above: for a regular C++ developer a user-visible string change is 
 safe, while from a Squish test's point of view it might not be.
  The same is true if you reorganize the UI, e.g. in Designer, and the 
 object hierarchy is changed. With a test that is not carefully set up this 
 will cause major problems. With a not carefully set up test suite and a 
 dialog from a library it will cause a lot of problems in many of the 
 tests, and that will be time consuming to fix.

Well, but you can have similar problems with badly designed unit tests
(and that's what you describe above: a badly designed UI test). If the
API/class it tests changes behaviour slightly, you may end up fixing up a
lot of places too, if you didn't take care to avoid code/data
duplication in the beginning.

Andreas



Re: Setting up a Quality Team within KDE

2012-04-17 Thread Alexander Neundorf
On Monday 16 April 2012, Andras Mantia wrote:
 On Monday, April 16, 2012 08:59:13 AM Volker Krause wrote:
  I don't think UI is necessarily easier to change than internal API,
  the key  difference is that you'll get a compile error for the unit
  test, while you won't notice a broken Squish test immediately.
 
 I was thinking about such innocent things as changing the title of a
 message box/dialog. If the object is found by an exact match on the title,
 then changing it will likely break the testcase whenever the dialog or
 any child widget of the dialog is accessed. E.g. changing "Error" to
 "Error doing X" can break your test.

In most cases it is good enough to identify widgets via their name and the 
window they are contained in. Adding the type to it is usually no problem.
You can then easily leave out containers like groupboxes etc.
We use a macro
#define SET_OBJECT_NAME(object) object->setObjectName(#object)
to help with setting the object names.
Is there an easier way to do this?

...
 Still I think we need a dedicated team who can create initial (nicely
 designed) tests, assist the developers when code changes
 require fixes that are not straightforward to do, and keep the
 tests up-to-date when new features are added.
 
 Honestly I don't see KDE developers all learning Squish and writing and
 maintaining test cases. 

I think so too.

Alex


Re: Setting up a Quality Team within KDE

2012-04-16 Thread Andras Mantia
On Thursday, April 12, 2012 10:49:10 PM Alexander Neundorf wrote:
 Yes, how well Squish works for you depends on at least two things:

We also use Squish, and it found bugs and regressions in our code. 
Still, there is a big problem with it: the tests need to be maintained 
constantly. If they are not, things will quickly escalate in the wrong 
direction, i.e. you will end up with test cases that just fail and are 
hard to adapt.
The difference between maintaining unit tests and Squish tests is that 
Squish tests the code through the UI, and it is very easy to change the 
UI in a way that breaks the tests, unless the test case is well set up 
(which means it is not just record and save). 

 * how well you set up your squish scripts, object map, etc. If done ad
 hoc,  you end up in an unmaintainable mess. If done carefully, it
 works.

Exactly. :) 

 * and of course, what you actually test. We don't really test that
 dialogs  etc. still look the same, we use squish to drive our
 software, see that it doesn't crash while doing that, that the
 expected actions work properly, and the produced output data is
 correct. We actually try to do things directly in many cases to stay
 independent from GUI details, i.e. we call slots and check Qt
 properties directly from the python scripts.

This is something we might not agree on, as the point of Squish is to test 
the app through the UI. If you test through slots, in most cases you 
might as well write unit tests.

I think Squish tests would improve KDE's code, but this will need a 
dedicated team (which we want to set up, I know :) ). The point is that 
Squish is not magic: you will need to think, design, and write code, just 
not in C++ but in Python, for example.

Andras



Re: Setting up a Quality Team within KDE

2012-04-16 Thread Volker Krause
On Monday 16 April 2012 08:02:51 Andras Mantia wrote:
 On Thursday, April 12, 2012 10:49:10 PM Alexander Neundorf wrote:
  Yes, how well Squish works for you depends on at least two things:
 We also use Squish, and it found bugs and regressions in our code.
 Still, there is a big problem with it: the tests need to be maintained
 constantly. If they are not, things will quickly escalate in the wrong
 direction, i.e. you will end up with test cases that just fail and are
 hard to adapt.
 The difference between maintaining unit tests and Squish tests is that
 Squish tests the code through the UI, and it is very easy to change the
 UI in a way that breaks the tests, unless the test case is well set up
 (which means it is not just record and save).

I don't think UI is necessarily easier to change than internal API; the key 
difference is that you'll get a compile error for the unit test, while you 
won't notice a broken Squish test immediately. Therefore it's IMHO essential 
to have continuous or at least nightly runs of the Squish tests; fixing them 
while you still know what you changed is usually quick and easy.

  * how well you set up your squish scripts, object map, etc. If done ad
  hoc,  you end up in an unmaintainable mess. If done carefully, it
  works.
 
 Exactly. :)
 
  * and of course, what you actually test. We don't really test that
  dialogs  etc. still look the same, we use squish to drive our
  software, see that it doesn't crash while doing that, that the
  expected actions work properly, and the produced output data is
  correct. We actually try to do things directly in many cases to stay
  independent from GUI details, i.e. we call slots and check Qt
  properties directly from the python scripts.
 
 This is something we might not agree on, as the point of Squish is to test
 the app through the UI. If you test through slots, in most cases you
 might as well write unit tests.
 
 I think Squish tests would improve KDE's code, but this will need a
 dedicated team (which we want to set up, I know :) ). The point is that
 Squish is not magic: you will need to think, design, and write code, just
 not in C++ but in Python, for example.

While a dedicated team can help to get this started, I think it has to be the 
developers' job to keep the tests working when they break as a result of 
changing code (which is exactly how it works with unit tests right now). 
Otherwise there's the risk of ending up with rotting tests which are useless.

regards,
Volker




Re: Re: Setting up a Quality Team within KDE

2012-04-14 Thread Laszlo Papp
 Afterwards it runs cppcheck on the source code...

That is a really nice advantage, if it can be integrated with handy
tools. That said, I would personally like to use much more advanced and
useful tools than cppcheck.

[...]
 But overall these are two very useful solutions which do not compete with each
 other and we should make use of them in a useful combined way, e.g. by
 submitting build results from build.kde.org to CDash.

Fair enough, thanks.

One additional comment on my side is that, if I read these lines on
the Jenkins website, I have the impression Jenkins can cover the use
case of CDash in the sense of running the build from cron jobs:

"Monitoring executions of externally-run jobs, such as cron jobs and
procmail jobs, even those that are run on a remote machine. For
example, with cron, all you receive is regular e-mails that capture
the output, and it is up to you to look at them diligently and notice
when it broke. Jenkins keeps those outputs and makes it easy for you
to notice when something is wrong."

As far as I have been told, there are no limitations on involving
(certain?) playground projects in the future. Although, since it is still
just an experiment (even though the outcome looks quite impressive
from what I can tell), it is probably better to involve KDE SC and
Extragear projects first because of the limited resources.

Best Regards,
Laszlo Papp


Re: Setting up a Quality Team within KDE

2012-04-12 Thread dE .

On 04/05/12 17:12, Anne-Marie Mahfouf wrote:

Hi,

We would like to setup a Quality Testing Team within KDE in order to 
better test our next KDE SC and make the beta process more efficient.
Attached is the first draft of the ideas I brainstormed. Source .odt 
of this document is available at
http://quickgit.kde.org/index.php?p=scratch%2Fannma%2Fquality-testing-doc.git 



(We can make it a public Google doc if it is more convenient or an 
etherpad).


The document roughly describes 2 areas of action:
- reinforcement (labelled "Reinforce") of existing structures (mainly 
targeted at developers and mainly relevant before the beta stage of 
the release). This could
be used as guidelines for new developers. Of course it won't be 
newbies who will develop Unit Tests or check the code quality. But 
some guidelines can

be useful for libs outside Frameworks.
An area of relevant reinforcement that can currently be done is the 
Review process for new integration.
- new infrastructures: this is labelled "New" in the doc: this is 
what I would like to see happen for the 4.9 release.


A new mailing list has been set up in order to discuss all this: 
please subscribe to it if you would like to be part of this

https://mail.kde.org/mailman/listinfo/kde-testing
An IRC channel also was created on Freenode:
#kde-quality

Please join the mailing list and the IRC channel so we can setup a 
plan to start putting all this in gear!


Thanks in advance,

Anne-Marie and Myriam


Apart from whatever's being discussed here, the testing phase should be 
publicized and testers called for (e.g. the front page should talk 
about this and present testing as a way to contribute).


Instructions for testing on every popular distro should be given; we 
should go to each major distro's forum and publicize this.


Distros like Arch, Gentoo etc. should be hot targets since they 
contain experienced and advanced users, administrators and developers 
(who are users in this case). They also tend to have a lot of patience 
with bugs.


Re: Setting up a Quality Team within KDE

2012-04-12 Thread Alexander Neundorf
On Sunday 08 April 2012, Albert Astals Cid wrote:
 On Sunday, 8 April 2012 at 17:13:54, Pau Garcia i Quiles wrote:
...
  I can tell you TestComplete's support for Qt is pretty limited. I have
  not tested LDTP because we needed support for Windows and Linux for our
  Qt projects.
  
  Squish is the best tool we evaluated at work for Qt, it does support Qt
  Quick and there is a company maintaining it (Froglogic, founded by KDE
  developers and employing many KDE developers). A few more arguments
  pro-Squish: it's cross-platform (which means we can run the same tests on
  Linux, Windows, Mac and any other platform we support) and the
  client-server architecture is very useful when testing client-server
  applications, actual environments and/or using virtualization to run the
  UI tests after each daily build.
 
 Did you guys ever try Testability? I've been using it lately and it works
 pretty well, and it has the added value of being Free Software.


We use Squish at work and it works well.

Besides that, there is a new project called QtTesting from Kitware which can be 
used kind of like Squish.

It's still very new, and I haven't tried it myself.
It's available for download here:
http://paraview.org/gitweb?p=QtTesting.git;a=summary

Quoting Marcus: "git clone git://paraview.org/QtTesting.git should get it, we
use it as a submodule in ParaView, and there is some documentation/an example
in the repository."

Maybe somebody wants to give it a try.

Alex


Re: Setting up a Quality Team within KDE

2012-04-12 Thread Myriam Schweingruber
Hi dE

On Thu, Apr 12, 2012 at 15:25, dE . de.tec...@gmail.com wrote:
...
 Apart from whatever's being discussed here, the testing phase should be
 publicized and called for testers (for e.g. the first page should talk about
 this and call for testing as a contribution).

 Instructions for testing on every popular distro should be given; we should
 go to each major distro's forum and publicize about this.

I totally agree with you, but keep in mind that beta releases will
give additional work to the distributions.

 Distros like arch, Gentoo etc... should be hot targets since they contain
 experienced and advanced users, administrators and developers (who are users
 in this case). They also tend to have a lot of patience with bugs.

Sorry to contradict you on that, but my experience with both Arch and
Gentoo users is not the same: they have a lot of inexperienced users
who think of themselves as experts when they simply aren't, and their
patience with bugs is close to zero. This experience comes from user
support in @amarok, @kde and forum.kde.org. A distro's reputation
doesn't make its users experts automatically.

On the contrary, I think we should target mass-market distributions
like Fedora, Kubuntu and openSUSE in the same way; they have a much
bigger user base and most likely no fewer experts than Arch or Gentoo.

But back to the beta testing: to have efficient beta testing we need
to tell people what to test specifically and under what conditions,
otherwise its benefit is not very high, as some cases we see as important
are seen as corner cases by the testers, and what the user thinks is
essential is a corner case for us.

So it is very important that we work out a list of what should be
tested and how.

Regards, Myriam

-- 
Proud member of the Amarok and KDE Community
Protect your freedom and join the Fellowship of FSFE:
http://www.fsfe.org
Please don't send me proprietary file formats,
use ISO standard ODF instead (ISO/IEC 26300)


Re: Setting up a Quality Team within KDE

2012-04-12 Thread Thomas Zander
On Thursday 12 April 2012 21.01.53 Alexander Neundorf wrote:
 We use Squish at work and it works well.

I've been on many projects and often the project manager thinks they need 
Squish, but in the end it just doesn't have any positive impact on the product 
or the quality.
The most recent project I was on for almost a year had a whole testing team, 
and they tried to get our Qt app tested using Squish; this was a project that 
followed very specific UI specifications, so it's a dream to have the Squish 
kind of testing for it.
In the end it was hardly used, for mostly technical reasons, and we never got 
to a point where we had a positive report of a regression. And we certainly 
had regressions ;)  The tool just never found them.

It's not Squish itself that's a problem per se, it's the concept of testing 
the way the application looks which is broken by design.  I think it would be 
good to avoid spending resources on this.
-- 
Thomas Zander


Re: Setting up a Quality Team within KDE

2012-04-12 Thread Alexander Neundorf
On Thursday 12 April 2012, Thomas Zander wrote:
 On Thursday 12 April 2012 21.01.53 Alexander Neundorf wrote:
  We use Squish at work and it works well.
 
 I've been on many projects and often the project manager thinks they need
 Squish, but in the end it just doesn't have any positive impact on the
 product or the quality.
 The most recent project I was on for almost a year had a whole testing team,
 and they tried to get our Qt app tested using Squish; this was a
 project that followed very specific UI specifications, so it's a dream to
 have the Squish kind of testing for it.
 In the end it was hardly used, for mostly technical reasons, and we never
 got to a point where we had a positive report of a regression. And we
 certainly had regressions ;)  The tool just never found them.
 
 It's not Squish itself that's a problem per se, it's the concept of testing
 the way the application looks which is broken by design.  I think it would
 be good to avoid spending resources on this.

Yes, how well Squish works for you depends on at least two things:

* how well you set up your squish scripts, object map, etc. If done ad hoc, 
you end up in an unmaintainable mess. If done carefully, it works.

* and of course, what you actually test. We don't really test that dialogs 
etc. still look the same, we use squish to drive our software, see that it 
doesn't crash while doing that, that the expected actions work properly, and 
the produced output data is correct. We actually try to do things directly in 
many cases to stay independent from GUI details, i.e. we call slots and check 
Qt properties directly from the python scripts.

Alex


Re: Setting up a Quality Team within KDE

2012-04-11 Thread Andreas Pakulat
On 10.04.12 06:45:17, Laszlo Papp wrote:
 [...]
  all I care about is that it's easy to get a project set up to
  be built continuously (and the unit tests executed) and whether it
  provides more than just build errors/warnings and test results, since
  some of these things are handy - especially when working on libraries.
 
 Could you please precisely enumerate the technical pros and cons so
 that we can all understand ? Thank you in advance!

As I said before, I gave up pretty quickly on CDash when I tried a few
years back. So I don't have an in-depth view on what it can and cannot
do.

What I can say is that it takes me considerably more effort and
guesswork to find trend graphs and the like in CDash; this is
more easily visible for the projects I'm interested in in Jenkins.

Andreas



Re: Re: Setting up a Quality Team within KDE

2012-04-11 Thread Martin Gräßlin
On Tuesday 10 April 2012 06:45:17 Laszlo Papp wrote:
  all I care about is that it's easy to get a project set up to
  be built continuously (and the unit tests executed) and whether it
  provides more than just build errors/warnings and test results, since
  some of these things are handy - especially when working on libraries.
 
 Could you please precisely enumerate the technical pros and cons so
 that we can all understand ? Thank you in advance!
There is actually nothing to compare; Jenkins and CDash solve different 
problems. CDash relies on someone running a build and submitting the result. 
Jenkins can be used to run a build after a push to the repository.

What I use Jenkins for is just not possible with CDash. I have a private 
Jenkins installation in addition to build.kde.org to get more or less instant 
results whenever kde-workspace changes. KWin for example has multiple build 
flags to e.g. hide some parts for Plasma Active. Whenever there is a commit, 
Jenkins nicely compiles it for me with the default build flags and, if 
successful, with the other build flags. If it fails I get a mail telling me in 
most cases exactly which commit broke the build.

Afterwards it runs cppcheck on the source code and shows the change in a nice 
graph. That means I see exactly in which build, and by that in which commit, an 
issue was introduced. I at least plan to write a plugin for Krazy2 to do the 
same, so that I see exactly with which commit an issue was introduced. The 
whole process takes about one hour on my system, which means I have more or 
less instant feedback about introduced issues.

Now that is just something which is not possible with CDash, but CDash seems 
to solve other problems which Jenkins cannot: crowd-sourcing the running of 
the tests.

But overall these are two very useful solutions which do not compete with each 
other and we should make use of them in a useful combined way, e.g. by 
submitting build results from build.kde.org to CDash.

Cheers
Martin




Re: Setting up a Quality Team within KDE

2012-04-11 Thread Allen Winter
On Tuesday 10 April 2012 6:38:52 PM Michael Jansen wrote:
 On Tuesday, April 10, 2012 09:20:57 AM Allen Winter wrote:
  On Thursday 05 April 2012 7:42:48 AM Anne-Marie Mahfouf wrote:
   Hi,
   
   
   A new mailing list has been set up in order to discuss all this: please
   subscribe to it if you would like to be part of this
   https://mail.kde.org/mailman/listinfo/kde-testing
   An IRC channel also was created on Freenode:
   #kde-quality
   
   Please join the mailing list and the IRC channel so we can setup a plan
   to start putting all this in gear!
  
  Let's please end this discussion about the various tools and stuff here on
  k-c-d.
  
  If you want to talk about those things, please use the new kde-testing
  mailing list or the IRC channel which are dedicated to the broader topic.
 
 I am starting to get annoyed by that "please follow me to the secluded place 
 over there so no one can listen in" thing that is creeping up in KDE.
 
 I do not consider this ML a too-high-volume one, and this stuff is in my opinion 
 exactly what this list should be about. By moving the discussion out of here 
 you just guarantee that whatever is chosen will sit in obscurity like before.
 
 Because I won't follow you. And I guess some others won't either.
 

I don't get your objection.

Well, I sorta do.

I agree that CDash vs. Jenkins vs. Foo is a valid topic for core-devel.

But "KDE Quality Testing" is a valid subproject, not of interest
to everyone, and therefore should be discussed in separate channels,
as per our time-honored tradition.

Partly this is a matter of not knowing exactly what KDE Quality Testing is all 
about yet.




Re: Setting up a Quality Team within KDE

2012-04-11 Thread Alexander Neundorf
On Wednesday 11 April 2012, Matthew Dawson wrote:
 On April 10, 2012 06:45:17 AM Laszlo Papp wrote:
   That having said, CDash was designed with CMake in mind. We already
   depend on CMake and CTest.
   
   We actually do not depend on CTest, that is an optional tool, one can
   run the tests without ctest (in fact I've never used CTest on my
   machines).
  
  I stand corrected, you are right about that. You can use make test
  without getting involved with CTest.
 
 Actually, make test invokes CTest behind the scenes[1].  That being said,
 if we aren't uploading results anywhere, CTest only runs the test
 executables and offers nothing else (in this case).  If tomorrow we drop
 CMake/CTest, the tests would still run fine as is.

Yes.
If you use ctest directly instead of running "make test", you get more 
features, mainly for selecting which tests you want to run.
E.g. you can do "ctest -R SomeRegExp" to run only the tests whose names match 
the given regular expression, or select tests by their numbers, etc.
This is useful if you have a large number of tests and want to run only some 
of them, without having to manually go into the subdir or something like that.

Alex



Re: Setting up a Quality Team within KDE

2012-04-11 Thread Alexander Neundorf
On Wednesday 11 April 2012, Michael Jansen wrote:
 On Tuesday, April 10, 2012 09:20:57 AM Allen Winter wrote:
  On Thursday 05 April 2012 7:42:48 AM Anne-Marie Mahfouf wrote:
   Hi,
   
   
   A new mailing list has been set up in order to discuss all this: please
   subscribe to it if you would like to be part of this
   https://mail.kde.org/mailman/listinfo/kde-testing
   An IRC channel also was created on Freenode:
   #kde-quality
   
   Please join the mailing list and the IRC channel so we can setup a plan
   to start putting all this in gear!
  
  Let's please end this discussion about the various tools and stuff here
  on k-c-d.
  
  If you want to talk about those things, please use the new kde-testing
  mailing list or the IRC channel which are dedicated to the broader topic.
 
 I am starting to get annoyed by that "please follow me to the secluded place
 over there so no one can listen in" thing that is creeping up in KDE.
 
 I do not consider this ML a too-high-volume one, and this stuff is in my
 opinion exactly what this list should be about. By moving the discussion
 out of here you just guarantee that whatever is chosen will sit in
 obscurity like before.
 
 Because I won't follow you. And I guess some others won't either.

+1 

Alex


Re: Setting up a Quality Team within KDE

2012-04-11 Thread Alexander Neundorf
On Wednesday 11 April 2012, Allen Winter wrote:
 On Tuesday 10 April 2012 6:38:52 PM Michael Jansen wrote:
  On Tuesday, April 10, 2012 09:20:57 AM Allen Winter wrote:
   On Thursday 05 April 2012 7:42:48 AM Anne-Marie Mahfouf wrote:
Hi,


A new mailing list has been set up in order to discuss all this:
please subscribe to it if you would like to be part of this
https://mail.kde.org/mailman/listinfo/kde-testing
An IRC channel also was created on Freenode:
#kde-quality

Please join the mailing list and the IRC channel so we can setup a
plan to start putting all this in gear!
   
   Let's please end this discussion about the various tools and stuff here
   on k-c-d.
   
   If you want to talk about those things, please use the new kde-testing
   mailing list or the IRC channel which are dedicated to the broader
   topic.
  
  I am starting to get annoyed by that "please follow me to the secluded
  place over there so no one can listen in" thing that is creeping up in
  KDE.
  
  I do not consider this ML a too-high-volume one, and this stuff is in my
  opinion exactly what this list should be about. By moving the discussion
  out of here you just guarantee that whatever is chosen will sit in
  obscurity like before.
  
  Because I won't follow you. And I guess some others won't either.
 
 I don't get your objection.
 
 Well, I sorta do.
 
 I agree that CDash vs. Jenkins vs. Foo is a valid topic for core-devel.
 
 But "KDE Quality Testing" is a valid subproject, not of interest
 to everyone, and therefore should be discussed in separate channels,
 as per our time-honored tradition.

IMO this is not so much a time-honored tradition; it has just become more 
common lately.
It may also be worth considering whether it is really a good idea to start 
from scratch with a new mailing list, instead of reusing an existing one in 
the beginning and switching to a separate list once a certain critical mass 
has been reached. By using an already existing list in the beginning more 
people will follow, offer advice, and maybe help.

Alex


Re: Setting up a Quality Team within KDE

2012-04-11 Thread Laszlo Papp
Okay, back to this list then from kde-testing. I have at least tried. :)

 Actually, make test invokes CTest behind the scenes[1].

Actually, yes. :-)

 That being said, if
 we aren't uploading results anywhere, CTest only runs the test executables and
 offers nothing else (in this case).  If tomorrow we drop CMake/CTest, the
 tests would still run fine as is.

Yes, they can be run separately. I believe that is also my workflow
when a test fails: I run the binary in question to see which
test methods pass and fail, to localize the issue further. I think
"make test", and hence ctest, is a good and handy service after all.
I am not worried about that, and it comes with cmake (at
least on the distributions I use).

Best Regards,
Laszlo Papp

PS.: Plus what Alex said about other ctest usages.


Re: Setting up a Quality Team within KDE

2012-04-11 Thread Percy Camilo Triveño Aucahuasi
Anne,

On Sun, Apr 8, 2012 at 9:03 AM, Anne-Marie Mahfouf wrote:
 Yes this is maybe the best free tool to do the job.
 Have you or anybody used it already?

 Thanks for your input,

Unfortunately I have no experience with this tool, but seeing all the
following emails I'm glad there is a lot of feedback.

Percy





On Sun, Apr 8, 2012 at 9:03 AM, Anne-Marie Mahfouf 
annemarie.mahf...@free.fr wrote:

 On 04/08/2012 07:11 AM, Percy Camilo Triveño Aucahuasi wrote:

 Hi Anne, hi all

 On Fri, Apr 6, 2012 at 12:03 PM, Anne-Marie Mahfouf
 This is addressed for 4.9 as putting in place a few
 selected areas of functional testing and hopefully
 we will assess some automated UI testing tools
 and start using them in the following releases.
 I hope we can gather enough beta testers and
 make this work.

 Indeed, nice idea. I think this is the right focus to (auto)test the
 functionality/features of the app. I've searched for some info about this
 topic and found this:

 http://ldtp.freedesktop.org/wiki/Home

 It has full support for KDE/Qt (4.x) apps and the scripts (for
 autotesting) can be written with Python.

 My 0.5 cents :)

 Cheers,
 Percy

 Yes this is maybe the best free tool to do the job. Have you or anybody
 used it already?

 Thanks for your input,

 Anne-Marie


 On Fri, Apr 6, 2012 at 12:03 PM, Anne-Marie Mahfouf 
 annemarie.mahf...@free.fr wrote:

  On 04/06/2012 02:23 PM, Aleix Pol wrote:

 On Thu, Apr 5, 2012 at 1:42 PM, Anne-Marie Mahfouf
 annemarie.mahf...@free.fr  wrote:

 Hi,

 We would like to setup a Quality Testing Team within KDE in order to
 better
 test our next KDE SC and make the beta process more efficient.
 Attached is the first draft of the ideas I brainstormed. Source .odt of
 this
 document is available at

 http://quickgit.kde.org/index.php?p=scratch%2Fannma%2Fquality-testing-doc.git

 (We can make it a public Google doc if it is more convenient or an
 etherpad).

 The document roughly describes 2 areas of action:
 - reinforcement (labelled "Reinforce") of existing structures (mainly
 targeted at developers and mainly relevant before the beta stage of the
 release). This could

 be used as guidelines for new developers. Of course it won't be newbies
 who will develop Unit Tests or check the code quality. But some
 guidelines
 can
 be useful for libs outside Frameworks.
 An area of relevant reinforcement that can be currently done is the
 Review
 process for new integration.
 - new infrastructures: this is labelled "New" in the doc: this is what
 I
 would like to see happen for the 4.9 release.


 A new mailing list has been set up in order to discuss all this: please
 subscribe to it if you would like to be part of this
 https://mail.kde.org/mailman/listinfo/kde-testing
 An IRC channel also was created on Freenode:
 #kde-quality

 Please join the mailing list and the IRC channel so we can setup a plan
 to
 start putting all this in gear!

 Thanks in advance,

 Anne-Marie and Myriam

 Hi!
 I think this project is a very interesting idea and definitely
 something we need. That said, I'd like to propose some small changes
 to how this should be done.

 In the document there are some actions to be taken that sound a bit
 too technical, for example it specifies reinforcing the unit testing.
 This is something that we should have been doing already and I don't
 see someone from the Quality Team going to a development team and
 improving that.

  Making unit tests is the developer's task (and the document mentions it)
 and we do have solid Unit Tests in kdelibs and Frameworks and kdepimlibs.
 However there are other libs within KDE where maybe unit tests are not as
 present as they should be (I did not research that though). Also, about unit
 tests, it's not only writing them, it's also running them. This is not done
 on a regular basis; it needs to be automated in the future and the failures
 need to be fixed.
 No newbie will ever touch any Unit Tests of course. And any action
 will be discussed with the gurus in this field (/me does not point to
 David).

 We have many tools for developers which are not fully used: the latest great
 tool is Jenkins, which I became aware of only recently. My label Reinforce is
 to take full advantage of those existing tools.

 (It would be cool if at Akademy there are some talks focusing on using
 these tools.)


  What I'd like to see is this new team testing KDE and its applications
 from a user perspective, specifying the different perspectives the KDE
 end user will face (different OS, different form factors, etc) and
 reporting problems (not necessarily bugs, as in crashes) and proposing
 new solutions.

  This is addressed for 4.9 as putting in place a few selected areas of
 functional testing and hopefully we will assess some automated UI testing
 tools and start using them in the following releases. I hope we can gather
 enough beta testers and make this work.

  I'm really hopeful about such a team, I think it's a good opportunity
 for KDE to be able to reach 

Re: Setting up a Quality Team within KDE

2012-04-10 Thread Allen Winter
On Thursday 05 April 2012 7:42:48 AM Anne-Marie Mahfouf wrote:
 Hi,
 

 A new mailing list has been set up in order to discuss all this: please 
 subscribe to it if you would like to be part of this
 https://mail.kde.org/mailman/listinfo/kde-testing
 An IRC channel also was created on Freenode:
 #kde-quality
 
 Please join the mailing list and the IRC channel so we can set up a plan 
 to start putting all this in gear!
 

Let's please end this discussion about the various tools and stuff here on 
k-c-d.

If you want to talk about those things, please use the new kde-testing mailing
list or the IRC channel, which are dedicated to the broader topic.



Re: Setting up a Quality Team within KDE

2012-04-10 Thread Michael Jansen
On Tuesday, April 10, 2012 09:20:57 AM Allen Winter wrote:
 On Thursday 05 April 2012 7:42:48 AM Anne-Marie Mahfouf wrote:
  [...]
 
 Let's please end this discussion about the various tools and stuff here on
 k-c-d.
 
 If you want to talk about those things, please use the new kde-testing
 mailing list or the IRC channel which are dedicated to the broader topic.

I am starting to get annoyed by that "please follow me to the secluded place
over there so no one can listen in" thing that is creeping up in KDE.

I do not consider this ML a too-high-volume one, and this stuff is in my opinion
exactly what this list should be about. By moving the discussion out of here
you just guarantee that whatever is chosen will sit in obscurity like before.

Because I won't follow you. And I guess some others won't either.

My 2 cent.


-- 
Michael Jansen
http://michael-jansen.biz


Re: Setting up a Quality Team within KDE

2012-04-10 Thread Myriam Schweingruber
On Wed, Apr 11, 2012 at 00:38, Michael Jansen k...@michael-jansen.biz wrote:
 On Tuesday, April 10, 2012 09:20:57 AM Allen Winter wrote:
 On Thursday 05 April 2012 7:42:48 AM Anne-Marie Mahfouf wrote:
  [...]

 Let's please end this discussion about the various tools and stuff here on
 k-c-d.

 If you want to talk about those things, please use the new kde-testing
 mailing list or the IRC channel which are dedicated to the broader topic.

 I am starting to get annoyed by that "please follow me to the secluded place
 over there so no one can listen in" thing that is creeping up in KDE.

It is neither secret nor secluded; the archives are public and it was
announced all over the place.

 I do not consider this ML a too-high-volume one, and this stuff is in my opinion
 exactly what this list should be about. By moving the discussion out of here
 you just guarantee that whatever is chosen will sit in obscurity like before.

 Because I won't follow you. And I guess some others won't either.

The aim of the Quality team stretches way beyond kde-core as it
concerns the whole KDE community, and I am sure you would soon be
annoyed enough if testing of a game or Amarok started showing up in here,
together with questions from dozens of beta-testers on how to test one
particular feature you are not interested in and which doesn't concern
kde-core-devel.

Regards, Myriam
-- 
Proud member of the Amarok and KDE Community
Protect your freedom and join the Fellowship of FSFE:
http://www.fsfe.org
Please don't send me proprietary file formats,
use ISO standard ODF instead (ISO/IEC 26300)


Re: Setting up a Quality Team within KDE

2012-04-10 Thread Matthew Dawson
On April 10, 2012 06:45:17 AM Laszlo Papp wrote:
  That having said, CDash was designed with CMake in mind. We already
  depend on CMake and CTest.
  
  We actually do not depend on CTest, that is an optional tool, one can
  run the tests without ctest (in fact I've never used CTest on my
  machines).
 
 I stand corrected, you are right about that. You can use make test
 without getting involved with CTest.
 
Actually, "make test" invokes CTest behind the scenes [1].  That being said, if 
we aren't uploading results anywhere, CTest only runs the test executables and 
offers nothing else (in this case).  If tomorrow we dropped CMake/CTest, the 
tests would still run fine as is.


Matthew

[1] http://www.cmake.org/Wiki/CMake/Testing_With_CTest
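The mechanics behind this are simple: calling enable_testing() makes CMake
generate the "test" target, which then drives CTest over everything registered
with add_test(). A minimal sketch (project, target and file names below are
hypothetical):

```cmake
# CMakeLists.txt (minimal sketch)
cmake_minimum_required(VERSION 2.8)
project(example)

# Generates the "test" target; "make test" then invokes ctest.
enable_testing()

add_executable(example_test example_test.cpp)

# Register the executable with CTest; the test passes or fails
# based on the program's exit code.
add_test(NAME example_test COMMAND example_test)
```

So dropping CMake/CTest would indeed leave the test binaries runnable on their
own; only the registration and result-aggregation layer would go away.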


Re: Setting up a Quality Team within KDE

2012-04-09 Thread Andreas Pakulat
On 09.04.12 12:58:02, Alexander Neundorf wrote:
 [...]
 
 There is not only Jenkins, but also CDash, which integrates nicely with our 
 cmake-based build system.
 
 I was trying to set up nightly builds for the KDE modules two years ago or so 
 (hosted on http://my.cdash.org, some of our projects are actually using it), 
 gave a talk at Akademy in Tampere and tried really hard, but there was not 
 much response to it.

Well, I'd say Jenkins has a lot more to offer than CDash. It's also a lot
simpler to use, set up and understand for newcomers, in my opinion. With
Jenkins I can have a shell-script job which runs "cmake && make && make
test" and be done. Setting up a build for submission to CDash takes a lot
more effort - at least it did when I tried to do it; maybe things have got
easier meanwhile.

Andreas

PS: Personally I find the Web-GUI also a bit nicer, but that's really just a
personal preference.
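For context, the extra effort of a CDash submission is typically captured in a
CTest dashboard script run from cron, e.g. via "ctest -S nightly.cmake". A
minimal sketch; all paths and names below are placeholders, and the project
must additionally ship a CTestConfig.cmake pointing at the CDash server:

```cmake
# nightly.cmake -- run via: ctest -S nightly.cmake
set(CTEST_SITE             "mybuildbox")         # machine name shown on CDash
set(CTEST_BUILD_NAME       "Linux-gcc-default")  # build name shown on CDash
set(CTEST_SOURCE_DIRECTORY "$ENV{HOME}/src/myproject")
set(CTEST_BINARY_DIRECTORY "$ENV{HOME}/build/myproject")
set(CTEST_CMAKE_GENERATOR  "Unix Makefiles")

ctest_start(Nightly)   # begin a Nightly dashboard run
ctest_configure()      # run cmake
ctest_build()          # run make, collecting errors and warnings
ctest_test()           # run the registered tests
ctest_submit()         # upload the results to the CDash server
```

(VCS update via ctest_update() and any site-specific settings are left out of
this sketch.)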



Re: Re: Setting up a Quality Team within KDE

2012-04-09 Thread Martin Gräßlin
On Monday 09 April 2012 13:15:26 Andreas Pakulat wrote:
  
  There is not only Jenkins, but also CDash, which integrates nicely with
  our
  cmake-based build system.
  
  I was trying to set up nightly builds for the KDE modules two years ago or
  so (hosted on http://my.cdash.org, some of our projects are actually
  using it), gave a talk at Akademy in Tampere and tried really hard, but
  there was not much response to it.
 
 Well, I'd say Jenkins has a lot more to offer than CDash. It's also a lot
 simpler to use, set up and understand for newcomers, in my opinion. With
 Jenkins I can have a shell-script job which runs "cmake && make && make
 test" and be done. Setting up a build for submission to CDash takes a lot
 more effort - at least it did when I tried to do it; maybe things have got
 easier meanwhile.
I quite agree. Jenkins offers, in my opinion, much, much more. I have been using 
my own custom Jenkins installation for at least half a year now, and that 
allows me to easily build KWin with various compile flags, etc. Also the 
community around Jenkins is very active and there are lots of plugins 
available to improve the testing (e.g. cppcheck, which was recently added to 
our build.kde.org instance).
 
 Andreas
 
 PS: Personally I find the Web-GUI also a bit nicer, but that's really just a
 personal preference.
+1 

Cheers
Martin




Re: Setting up a Quality Team within KDE

2012-04-09 Thread Michael Jansen
On Monday, April 09, 2012 02:05:28 PM Martin Gräßlin wrote:
 On Monday 09 April 2012 13:15:26 Andreas Pakulat wrote:
  [...]
 
 I quite agree. Jenkins offers in my opinion much, much more. I have been
 using my own custom Jenkins installation for at least half a year now and
 that allows me to easily build KWin with various compile flags, etc. etc.
 Also the community around Jenkins is very active and there are lots of
 plugins available to improve the testing (e.g. cppcheck which got recently
 added to our build.kde.org instance)

As far as I understand things here (which probably is not much), the two
complement each other.

CDash is in no way a substitute or alternative to Jenkins, but another way to
aggregate and present the unit test results from different builds, machines and
configurations side by side.

Jenkins, as far as I know, only presents the results of one build. But I know it
can aggregate the test results of more than one build. At least it can for
Java. But I am not sure it can do it for builds running on different Jenkins
instances.

CDash can. So each Jenkins build could run the tests and hand them to CDash.

Best of both worlds.

Mike

-- 
Michael Jansen
http://michael-jansen.biz


Re: Setting up a Quality Team within KDE

2012-04-09 Thread Andreas Pakulat
On 09.04.12 13:28:31, Anne-Marie Mahfouf wrote:
 On 04/09/2012 01:15 PM, Andreas Pakulat wrote:
 On 09.04.12 12:58:02, Alexander Neundorf wrote:
 [...]
 There is not only Jenkins, but also CDash, which integrates nicely with our
 cmake-based build system.
 
 I was trying to set up nightly builds for the KDE modules two years ago or 
 so
 (hosted on http://my.cdash.org, some of our projects are actually using it),
 gave a talk at Akademy in Tampere and tried really hard, but there was not
 much response to it.
 Well, I'd say Jenkins has a lot more to offer than CDash. It's also a lot
 simpler to use, set up and understand for newcomers, in my opinion. With
 Jenkins I can have a shell-script job which runs "cmake && make && make
 test" and be done. Setting up a build for submission to CDash takes a lot
 more effort - at least it did when I tried to do it; maybe things have got
 easier meanwhile.
 
 What's a bit sad is that nobody talks very much about Jenkins

There have been several blog posts in the past weeks/months about
build.kde.org, but indeed it's more of a developer resource in that
it builds KDE modules and runs their unit tests to inform developers
ASAP when something breaks.

 Andreas, your script should be public.

Uhm I'd use this as a start:

cd build && cmake ../ && make && make test

But I don't own any of the jobs at build.kde.org, so you should
probably talk to the admins of the jenkins server to find out how the
builds are actually run.

Andreas
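In Jenkins such a job is usually just an "Execute shell" build step wrapping
that one-liner. A minimal sketch of what such a step could look like (the
build-directory layout and -j value are arbitrary assumptions, not how
build.kde.org actually configures its jobs):

```shell
#!/bin/sh
# Jenkins "Execute shell" build step for a CMake project;
# runs in the job's workspace (the checked-out source tree).
set -e           # fail the build on the first error

mkdir -p build   # out-of-source build directory
cd build
cmake ..         # configure
make -j4         # compile
make test        # run the unit tests (via ctest)
```

Any non-zero exit code marks the build as failed, which is why the one-liner
is already enough for a basic CI job.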



Re: Setting up a Quality Team within KDE

2012-04-09 Thread Andreas Pakulat
On 09.04.12 14:29:27, Michael Jansen wrote:
 [...]
 
 As far as I understand things here (which probably is not much), the two
 complement each other.
 
 CDash is in no way a substitute or alternative to Jenkins, but another way to
 aggregate and present the unit test results from different builds, machines and
 configurations side by side.
 
 Jenkins, as far as I know, only presents the results of one build. But I know it
 can aggregate the test results of more than one build. At least it can for
 Java. But I am not sure it can do it for builds running on different Jenkins
 instances.

You usually have only 1 jenkins instance and a so-called jenkins-slave
on each machine that actually runs builds. The build.kde.org server is
its own slave atm, presumably because there's no further hardware
available.

In jenkins each configuration would be 1 job, each job has a number of
past builds and associated test results as well as output from various
other plugins (if configured, like static code-checkers). The website
can aggregate the information and generate graphs (or rather the various
plugins do this) to see things like test-trends etc. It doesn't matter
on which of the slaves the build actually runs.

If you don't depend on volunteers submitting their own local builds to
cdash, then this is similar to cdash, except that cdash only does the
aggregation of results. So CDash leaves setting up build infrastructure,
automatic VCS-check etc. up to you to do. Relying on volunteers to
submit their own local builds has proven not to work well for us;
there are few if any builds regularly submitted.

 CDash can. So each Jenkins build could run the tests and hand them to CDash.
 
Except this would have absolutely no benefit compared to just
displaying the data in Jenkins itself :)

Andreas



Re: Setting up a Quality Team within KDE

2012-04-09 Thread Andreas Pakulat
On 09.04.12 16:27:17, Laszlo Papp wrote:
  [...]
 
 It was simpler than I thought when I set the functionality
 up with a little bit of guidance last summer. I think we could
 probably even make the setup easier with some automated script or so
 by solving the boilerplate part of the relevant cmake files. As far as
 I can see, the admin setting this up needs to know some parameters, if I am
 not too mistaken.

 I do not have any opinions about the web interface differences because
 I do not visit it. I have been getting the warnings, errors and unit
 test failures via email, if any.

Hmm, that may work if your project usually has no warnings, but I find
this too much noise for warnings. The CI mails should immediately
tell me whether CI is considered broken (and warnings are often not
considered that) or not, and if it's broken, show me the error so I don't
need to go to the website to fix it - ideally.

 I will also try out Jenkins in the future once I find the time
 for that, but I have been a happy CDash user for about ten months
 now. :-)

build.kde.org can give you a pretty good idea of how jenkins looks and
can be used to do CI.

Andreas



Re: Setting up a Quality Team within KDE

2012-04-09 Thread Laszlo Papp
 Hmm, that may work if your project usually has no warnings, but I find
 this too much noise for warnings.

We have also had many warnings back then. :-)

 The CI mails should immediately
 tell me whether CI is considered broken (and warnings are often not
 considered that) or not, and if it's broken, show me the error so I don't
 need to go to the website to fix it - ideally.

That can be done as far as I know. It also shows the warnings, but it
might be that you need to click on the link you receive in the email,
if you have many warnings. It redirects you to the list so that you
can go through your codebase to fix those. I have not found the web
interface disadvantageous for this purpose yet.

 I will also try out this Jenkins in the future once I find the time
 for that, but I have been a happy CDash user for about ten months by
 now. :-)

 build.kde.org can give you a pretty good idea of how jenkins looks and
 can be used to do CI.

I do not personally find the aforementioned page to have as professional a
layout as the CDash site, but that might be just me.
Also, I personally find the CDash site simpler to use.

My personal worry is that I would not like to refactor a setup that
already exists and has been working just fine for a while, unless there is a
compelling reason and the switch is simple. Also, I do not see
Playground projects on the build.kde.org site. In addition, if it is
just for KDE, it would result in me using different services for Qt
Playground and KDE projects. That would be a bit unhandy. I could use
CDash for both.

That having been said, CDash was designed with CMake in mind. We already
depend on CMake and CTest. Those three projects are maintained by the
same company, if I am not mistaken, called Kitware (and probably
also by community volunteers). Therefore, we can be sure about the
integrity and so forth. On the other hand, I can somewhat understand the
need for a community-driven site, even if it is quite unlikely that Kitware
just goes away. They have not been doing that with CMake either for
many years. They respect KDE, and Alex has always been very helpful
with those topics. :-)

The current setup is comfortable and good enough to me, but I am open,
thus I will check out Jenkins.

Best Regards,
Laszlo Papp


Re: Setting up a Quality Team within KDE

2012-04-09 Thread Andreas Pakulat
On 09.04.12 23:29:17, Laszlo Papp wrote:
  I will also try out this Jenkins in the future once I find the time
  for that, but I have been a happy CDash user for about ten months by
  now. :-)
 
  build.kde.org can give you a pretty good idea of how jenkins looks and
  can be used to do CI.
 
 I do not personally find the aforementioned page to have as professional a
 layout as the CDash site, but that might be just me.
 Also, I personally find the CDash site simpler to use.

I noted my personal impression only as a PS since I'm well aware that
there's much personal preference here. One thing I do like about
Jenkins: I see the status of the projects immediately when opening the
page. With some better filtering/views set up this can be quite handy.

 My personal worry is that I would not like to refactor a setup that
 already exists and has been working just fine for a while, unless there is a
 compelling reason and the switch is simple.

I'm not sure what exactly the setup is on build.kde.org; looking at the
console output, some envvars are set and then, as I said in another reply,
all it takes is 

cd build && cmake ../ && make && make install

No further setup needed (unless the unit-tests have some specific
requirements). If that's encoded into the ctest script you could also
just run ctest.

 Also, I do not see Playground projects on the build.kde.org site.

The service is comparatively new, but as can be seen, it already has a lot
more builds on it than were ever done for CDash. Also, IIRC there was a
blog post inviting people to submit requests for the addition of their
projects.

I also don't see any KDE playground apps on my.cdash.org :)

 In addition, if it is just for KDE, it would result me using different
 services for Qt Playground and KDE projects. That would be a bit
 unhandy. I could use CDash for both.

If one is involved with different projects outside of Qt/KDE then it's
quite normal to have to deal with various types of web services. So
personally I don't think that's a good enough reason to ignore the
benefits that Jenkins does provide.

In addition, as was said elsewhere in the thread, the Jenkins jobs could
submit their data to CDash too, if the projects are set up for that.

 That having said, CDash was designed with CMake in mind. We already
 depend on CMake and CTest.

We actually do not depend on CTest, that is an optional tool, one can
run the tests without ctest (in fact I've never used CTest on my
machines).

 Those three projects are maintained by the
 same company, if I am not mistaken, called Kitware (and probably
 also by community volunteers). Therefore, we can be sure about the
 integrity and so forth. On the other hand, I can somewhat understand the
 need for a community-driven site, even if it is quite unlikely that Kitware
 just goes away. They have not been doing that with CMake either for
 many years. They respect KDE, and Alex has always been very helpful
 with those topics. :-)

I'm not questioning the usefulness of CDash or whether the free hosting
ever goes away. I don't actually care who hosts the CI server for KDE
projects; all I care about is that it's easy to get a project set up to
be built continuously (and the unit-tests executed) and whether it
provides more than just build errors/warnings and test results, since
some of these things are handy - especially when working on libraries.

Andreas



Re: Setting up a Quality Team within KDE

2012-04-09 Thread Laszlo Papp
 Also IIRC there was a blog-post inviting people to submit requests for the 
 addition of their
 projects.

I believe, I missed that.

 I also don't see any KDE playground apps on my.cdash.org :)

http://my.cdash.org/index.php?project=Gluon
http://my.cdash.org/index.php?project=QtOpenAL (You do not find it in
playground since it was deleted 1-2 days ago after migrating to Qt
Playground, but it has been hosted there).

Unfortunately, I had some server issues recently that I did not
have time to deal with. At any rate, playground projects should be
able to have quality services on request as well, IMO.

 If one is involved with different projects outside of Qt/KDE then it's
 quite normal to have to deal with various types of web services. So
 personally I don't think that's a good enough reason to ignore the
 benefits that Jenkins does provide.

That is not so simple. Once you get used to it and settle down with
using the same service for multiple projects, it is a bit
difficult to persuade yourself it is worth changing in order to have
more services to get familiar with. Like I said, for me it needs
a compelling reason, with a smooth transition and simple usage in
the future as well. In my case, I would just not personally like
to get involved with two different services if I am fine with one.

 In addition, as was said elsewhere in the thread, the Jenkins jobs could
 submit their data to CDash too, if the projects are set up for that.

:-) While I trust you that it is possible to do that, I prefer to
avoid the multi-layering, if possible.

 That having said, CDash was designed with CMake in mind. We already
 depend on CMake and CTest.

 We actually do not depend on CTest, that is an optional tool, one can
 run the tests without ctest (in fact I've never used CTest on my
 machines).

I stand corrected, you are right about that. You can use "make test"
without getting involved with CTest.

[...]
 all I care about is that it's easy to get a project set up to
 be built continuously (and the unit-tests executed) and whether it
 provides more than just build errors/warnings and test results, since
 some of these things are handy - especially when working on libraries.

Could you please precisely enumerate the technical pros and cons so
that we can all understand? Thank you in advance!

Best Regards,
Laszlo Papp


Re: Setting up a Quality Team within KDE

2012-04-09 Thread Laszlo Papp
 I also don't see any KDE playground apps on my.cdash.org :)

 http://my.cdash.org/index.php?project=Gluon
 http://my.cdash.org/index.php?project=QtOpenAL (You will not find it in
 playground since it was deleted 1-2 days ago after migrating to Qt
 Playground, but it was hosted there).

Ah yes, and also my playground dictionary I worked on:
http://my.cdash.org/index.php?project=Mula

Best Regards,
Laszlo Papp


Re: Setting up a Quality Team within KDE

2012-04-08 Thread Aleix Pol
On Fri, Apr 6, 2012 at 7:03 PM, Anne-Marie Mahfouf
annemarie.mahf...@free.fr wrote:
 On 04/06/2012 02:23 PM, Aleix Pol wrote:

 On Thu, Apr 5, 2012 at 1:42 PM, Anne-Marie Mahfouf
 annemarie.mahf...@free.fr  wrote:

 Hi,

 We would like to set up a Quality Testing Team within KDE in order to
 better
 test our next KDE SC and make the beta process more efficient.
 Attached is the first draft of the ideas I brainstormed. Source .odt of
 this
 document is available at

 http://quickgit.kde.org/index.php?p=scratch%2Fannma%2Fquality-testing-doc.git

 (We can make it a public Google doc if it is more convenient or an
 etherpad).

 The document roughly describes 2 areas of action:
 - reinforcement (labelled Reinforce) of existing structures (mainly
 targeted to developers and mainly relevant before the beta stage of the
 release). This could

 be used as guidelines for new developers. Of course it won't be newbies
 who will develop Unit Tests or check the code quality. But some
 guidelines
 can
 be useful for libs outside Frameworks.
 An area of relevant reinforcement that can be currently done is the
 Review
 process for new integration.
 - new infra-structures: this is labelled New in the doc: this is what I
 would see to happen for the 4.9 release.


 A new mailing list has been set up in order to discuss all this: please
 subscribe to it if you would like to be part of this
 https://mail.kde.org/mailman/listinfo/kde-testing
 An IRC channel also was created on Freenode:
 #kde-quality

 Please join the mailing list and the IRC channel so we can set up a plan
 to
 start putting all this in gear!

 Thanks in advance,

 Anne-Marie and Myriam

 Hi!
 I think this project is a very interesting idea and definitely
 something we need. That said, I'd like to propose some small changes
 on how this should be done.

 In the document there are some actions to be taken that sound a bit
 too technical, for example it specifies reinforcing the unit testing.
 This is something that we should have been doing already and I don't
 see someone from the Quality Team going to a development team and
 improving that.

 Making unit tests is the developer task (and the document mentions it) and
 we do have solid Unit Tests in kdelibs and Frameworks and kdepimlibs.
 However there are other libs within KDE where maybe unit tests are not as
 present as they should (I did not research that though). Also, about unit
 tests, it's not only writing them, it's also running them. This is not done
 on a regular basis and needs to be automated in the future and the failures
 need to be fixed.
 No newbie will ever touch any Unit Tests of course. And any action will
 be discussed with the gurus in this field (/me does not point to David).

 We have many tools for developers which are not fully used: the latest great
 tool is Jenkins, which I was aware of only recently. My label Reinforce is
 to take full advantage of those existing tools.

 (It would be cool if at Akademy there are some talks focusing on using these
 tools.)


 What I'd like to see is this new team testing KDE and its applications
 from a user perspective, specifying the different perspectives the KDE
 end user will face (different OS, different form factors, etc) and
 reporting problems (not necessarily bugs, as in crashes) and proposing
 new solutions.

 This is addressed for 4.9 as putting in place a few selected areas of
 functional testing and hopefully we will assess some automated UI testing
 tools and start using them in the following releases. I hope we can gather
 enough beta testers and make this work.

 I'm really hopeful about such a team, I think it's a good opportunity
 for KDE to be able to reach contributions for a less common sector of
 our community and keep working together for a greater KDE experience.

 Cheers!
 Aleix

 Thanks for your input!

 Anne-Marie


Well, FWIW, I proposed a talk about some code checking tool I worked
on based on KDevelop. In any case, I think that we should define very
clearly what this Quality Team's responsibilities are, because we cannot
put the weight of every kind of quality on this team. That's why I said
that it shouldn't be so much about code as about the final user
impression.

Anyhow, is the mailing list created already?

Aleix


Re: Setting up a Quality Team within KDE

2012-04-08 Thread Myriam Schweingruber
Hi Aleix,

On Sat, Apr 7, 2012 at 20:34, Aleix Pol aleix...@kde.org wrote:

 Anyhow, is the mailing list created already?

Yes, it was set up the same day as the mail was sent out:
https://mail.kde.org/mailman/listinfo/kde-testing

Regards, Myriam

-- 
Proud member of the Amarok and KDE Community
Protect your freedom and join the Fellowship of FSFE:
http://www.fsfe.org
Please don't send me proprietary file formats,
use ISO standard ODF instead (ISO/IEC 26300)


Re: Setting up a Quality Team within KDE

2012-04-08 Thread Anne-Marie Mahfouf

On 04/08/2012 07:11 AM, Percy Camilo Triveño Aucahuasi wrote:

Hi Anne, hi all

On Fri, Apr 6, 2012 at 12:03 PM, Anne-Marie Mahfouf wrote:
This is addressed for 4.9 as putting in place a few
selected areas of functional testing and hopefully
we will assess some automated UI testing tools
and start using them in the following releases.
I hope we can gather enough beta testers and
make this work.

Indeed, nice idea, I think this is the right focus to (auto)test the 
functionality/features of the app. I've searched some info about this 
topic and found this:


http://ldtp.freedesktop.org/wiki/Home

It has full support for KDE/Qt (4.x) apps and the scripts (for 
autotesting) can be written with Python.


My 0.5 cents :)

Cheers,
Percy
Yes, this is maybe the best free tool to do the job. Have you or anybody 
used it already?


Thanks for your input,

Anne-Marie


On Fri, Apr 6, 2012 at 12:03 PM, Anne-Marie Mahfouf 
annemarie.mahf...@free.fr wrote:


On 04/06/2012 02:23 PM, Aleix Pol wrote:

On Thu, Apr 5, 2012 at 1:42 PM, Anne-Marie Mahfouf
annemarie.mahf...@free.fr
 wrote:

Hi,

We would like to set up a Quality Testing Team within KDE
in order to better
test our next KDE SC and make the beta process more efficient.
Attached is the first draft of the ideas I brainstormed.
Source .odt of this
document is available at

http://quickgit.kde.org/index.php?p=scratch%2Fannma%2Fquality-testing-doc.git

(We can make it a public Google doc if it is more
convenient or an
etherpad).

The document roughly describes 2 areas of action:
- reinforcement (labelled Reinforce) of existing
structures (mainly
targeted to developers and mainly relevant before the
beta stage of the
release). This could

be used as guidelines for new developers. Of course it
won't be newbies
who will develop Unit Tests or check the code quality.
But some guidelines
can
be useful for libs outside Frameworks.
An area of relevant reinforcement that can be currently
done is the Review
process for new integration.
- new infra-structures: this is labelled New in the doc:
this is what I
would see to happen for the 4.9 release.


A new mailing list has been set up in order to discuss all
this: please
subscribe to it if you would like to be part of this
https://mail.kde.org/mailman/listinfo/kde-testing
An IRC channel also was created on Freenode:
#kde-quality

Please join the mailing list and the IRC channel so we can
set up a plan to
start putting all this in gear!

Thanks in advance,

Anne-Marie and Myriam

Hi!
I think this project is a very interesting idea and definitely
something we need. That said, I'd like to propose some small
changes
on how this should be done.

In the document there are some actions to be taken that sound
a bit
too technical, for example it specifies reinforcing the unit
testing.
This is something that we should have been doing already and I
don't
see someone from the Quality Team going to a development team and
improving that.

Making unit tests is the developer task (and the document mentions
it) and we do have solid Unit Tests in kdelibs and Frameworks and
kdepimlibs. However there are other libs within KDE where maybe
unit tests are not as present as they should (I did not research
that though). Also, about unit tests, it's not only writing them,
it's also running them. This is not done on a regular basis and
needs to be automated in the future and the failures need to be fixed.
No newbie will ever touch any Unit Tests of course. And any
action will be discussed with the gurus in this field (/me does
not point to David).

We have many tools for developers which are not fully used: the latest
great tool is Jenkins, which I was aware of only recently. My label
Reinforce is to take full advantage of those existing tools.

(It would be cool if at Akademy there are some talks focusing on
using these tools.)


What I'd like to see is this new team testing KDE and its
applications
from a user perspective, specifying the different perspectives
the KDE
end user will face (different OS, different form factors, etc) and
reporting problems (not necessarily bugs, as in crashes) and
proposing
new solutions.

This is addressed for 4.9 as putting in place a 

Re: Setting up a Quality Team within KDE

2012-04-08 Thread Pau Garcia i Quiles
On Sun, Apr 8, 2012 at 4:03 PM, Anne-Marie Mahfouf 
annemarie.mahf...@free.fr wrote:



 Indeed, Nice idea, I think this is the right focus to (auto)test the
 functionality/features of the app. I've searched some info about this topic
 and found this:

 http://ldtp.freedesktop.org/wiki/Home

 It has full support for KDE/Qt (4.x) apps and the scripts (for
 autotesting) can be written with Python.

 My 0.5 cents :)

 Cheers,
 Percy

 Yes, this is maybe the best free tool to do the job. Have you or anybody
 used it already?


Does that tool support QML? Is there an active team behind it?

Writing UI tests (functional tests) is a hell of a lot of work and
choosing the wrong tool means in 2 years we may need to maintain the tool
ourselves or rewrite all the tests for another tool.

I can tell you TestComplete's support for Qt is pretty limited. I have not
tested LDTP because we needed support for Windows and Linux for our Qt
projects.

Squish is the best tool we evaluated at work for Qt, it does support Qt
Quick and there is a company maintaining it (Froglogic, founded by KDE
developers and employing many KDE developers). A few more arguments
pro-Squish: it's cross-platform (which means we can run the same tests on
Linux, Windows, Mac and any other platform we support) and the
client-server architecture is very useful when testing client-server
applications, actual environments and/or using virtualization to run the UI
tests after each daily build.

Talking about virtualization, what we do at work is we have daily builds
for master and stabilization branches and we run tests in virtual machines.
We are currently testing on 12 platforms. We have several testing
profiles (test suites) so that we can quickly say the equivalent of "this
version of kdelibs is broken, do not bother performing any further testing:
just flag this build as broken". Everything is automated and launched from
the continuous integration tool as soon as the build finishes.
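As a rough illustration of that gating idea (a hypothetical sketch, not the actual setup described above; smoke_test and full_suite are placeholder names for real commands), a driver script could run a cheap smoke suite first and only fan out to the per-platform suites if it passes:

```shell
#!/bin/sh
# Hypothetical gating sketch: run a cheap smoke suite first; only if it
# passes do we fan out to the expensive per-platform test profiles.
smoke_test() { true; }                              # placeholder, e.g. "ctest -L smoke"
full_suite() { echo "running full suite on $1"; }   # placeholder, e.g. start a VM job

if smoke_test; then
    for platform in linux windows mac; do
        full_suite "$platform"
    done
else
    echo "build broken: skipping further testing"
    exit 1
fi
```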

I'm a bit out of the testing stuff at work, and very very busy, but if we
are serious about it, I can still give some advice and ask some of the
verification and validation people if they are interested in joining.

PS: Let's continue the discussion in kde-testing@

-- 
Pau Garcia i Quiles
http://www.elpauer.org
(Due to my workload, I may need 10 days to answer)


Re: Setting up a Quality Team within KDE

2012-04-08 Thread Albert Astals Cid
On Sunday, 8 April 2012, at 17:13:54, Pau Garcia i Quiles wrote:
 On Sun, Apr 8, 2012 at 4:03 PM, Anne-Marie Mahfouf 
 
 annemarie.mahf...@free.fr wrote:
  
  
  Indeed, Nice idea, I think this is the right focus to (auto)test the
  functionality/features of the app. I've searched some info about this
  topic
  and found this:
  
  http://ldtp.freedesktop.org/wiki/Home
  
  It has full support for KDE/Qt (4.x) apps and the scripts (for
  autotesting) can be written with Python.
  
  My 0.5 cents :)
  
  Cheers,
  Percy
  
  Yes, this is maybe the best free tool to do the job. Have you or anybody
  used it already?
 
 Does that tool support QML? Is there an active team behind it?
 
 Writing UI tests (functional tests) is a hell of a lot of work and
 choosing the wrong tool means in 2 years we may need to maintain the tool
 ourselves or rewrite all the tests for another tool.
 
 I can tell you TestComplete's support for Qt is pretty limited. I have not
 tested LDTP because we needed support for Windows and Linux for our Qt
 projects.
 
 Squish is the best tool we evaluated at work for Qt, it does support Qt
 Quick and there is a company maintaining it (Froglogic, founded by KDE
 developers and employing many KDE developers). A few more arguments
 pro-Squish: it's cross-platform (which means we can run the same tests on
 Linux, Windows, Mac and any other platform we support) and the
 client-server architecture is very useful when testing client-server
 applications, actual environments and/or using virtualization to run the UI
 tests after each daily build.

Did you guys ever try Testability? I've been using it lately and it works
pretty well, and it has the added value of being Free Software.

Albert

 
 Talking about virtualization, what we do at work is we have daily builds
 for master and stabilization branches and we run tests in virtual machines.
 We are currently testing on 12 platforms. We have several testing
 profiles (test suites) so that we can quickly say the equivalent of this
 version of kdelibs is broken, do not bother performing any further testing:
 just flag this build as broken. Everything is automated and launched from
 the continuous integration tool as the build finishes.
 
 I'm a bit out of the testing stuff at work, and very very busy, but if we
 are serious about it, I can still give some advice and ask some of the
 verification and validation people if they are interested in joining.
 
 PS: Let's continue the discussion in kde-testing@


Re: Setting up a Quality Team within KDE

2012-04-08 Thread Pau Garcia i Quiles
On Sun, Apr 8, 2012 at 5:21 PM, Albert Astals Cid aa...@kde.org wrote:


 Did you guys ever try Testability? I've been using it lately and it works
 pretty well and has the added value of being Free Software.


Do you mean this tool?

http://code.google.com/p/testability-explorer/

TestComplete, Squish, LDTP, etc. are completely different tools with a
completely different scope.


-- 
Pau Garcia i Quiles
http://www.elpauer.org
(Due to my workload, I may need 10 days to answer)


Re: Setting up a Quality Team within KDE

2012-04-08 Thread Albert Astals Cid
On Sunday, 8 April 2012, at 17:48:03, Pau Garcia i Quiles wrote:
 On Sun, Apr 8, 2012 at 5:21 PM, Albert Astals Cid aa...@kde.org wrote:
  Did you guys ever try Testability? I've been using it lately and it works
  pretty well and has the added value of being Free Software.
 
 Do you mean this tool?
 
 http://code.google.com/p/testability-explorer/
 
 TestComplete, Squish, LDTP, etc. are completely different tools with a
 completely different scope.

No, I mean this one.

http://projects.developer.nokia.com/Testabilitydriver/

To be honest it's not well documented, but it's basically a Free Squish
replacement (albeit a bit worse): it has a client/server separation, the
ability to introspect Qt objects, etc.

It's what the Unity-Qt project uses for their automated testing.

Cheers,
  Albert


Re: Setting up a Quality Team within KDE

2012-04-07 Thread Percy Camilo Triveño Aucahuasi
Hi Anne, hi all

On Fri, Apr 6, 2012 at 12:03 PM, Anne-Marie Mahfouf wrote:
This is addressed for 4.9 as putting in place a few
selected areas of functional testing and hopefully
we will assess some automated UI testing tools
and start using them in the following releases.
I hope we can gather enough beta testers and
make this work.

Indeed, nice idea, I think this is the right focus to (auto)test the
functionality/features of the app. I've searched some info about this topic
and found this:

http://ldtp.freedesktop.org/wiki/Home

It has full support for KDE/Qt (4.x) apps and the scripts (for
autotesting) can be written with Python.

My 0.5 cents :)

Cheers,
Percy


On Fri, Apr 6, 2012 at 12:03 PM, Anne-Marie Mahfouf 
annemarie.mahf...@free.fr wrote:

 On 04/06/2012 02:23 PM, Aleix Pol wrote:

 On Thu, Apr 5, 2012 at 1:42 PM, Anne-Marie Mahfouf
 annemarie.mahf...@free.fr  wrote:

 Hi,

 We would like to set up a Quality Testing Team within KDE in order to
 better
 test our next KDE SC and make the beta process more efficient.
 Attached is the first draft of the ideas I brainstormed. Source .odt of
 this
 document is available at
 http://quickgit.kde.org/index.php?p=scratch%2Fannma%2Fquality-testing-doc.git

 (We can make it a public Google doc if it is more convenient or an
 etherpad).

 The document roughly describes 2 areas of action:
 - reinforcement (labelled Reinforce) of existing structures (mainly
 targeted to developers and mainly relevant before the beta stage of the
 release). This could

 be used as guidelines for new developers. Of course it won't be newbies
 who will develop Unit Tests or check the code quality. But some
 guidelines
 can
 be useful for libs outside Frameworks.
 An area of relevant reinforcement that can be currently done is the
 Review
 process for new integration.
 - new infra-structures: this is labelled New in the doc: this is what I
 would see to happen for the 4.9 release.


 A new mailing list has been set up in order to discuss all this: please
 subscribe to it if you would like to be part of this
 https://mail.kde.org/mailman/listinfo/kde-testing
 An IRC channel also was created on Freenode:
 #kde-quality

 Please join the mailing list and the IRC channel so we can set up a plan
 to
 start putting all this in gear!

 Thanks in advance,

 Anne-Marie and Myriam

 Hi!
 I think this project is a very interesting idea and definitely
 something we need. That said, I'd like to propose some small changes
 on how this should be done.

 In the document there are some actions to be taken that sound a bit
 too technical, for example it specifies reinforcing the unit testing.
 This is something that we should have been doing already and I don't
 see someone from the Quality Team going to a development team and
 improving that.

 Making unit tests is the developer task (and the document mentions it) and
 we do have solid Unit Tests in kdelibs and Frameworks and kdepimlibs.
 However there are other libs within KDE where maybe unit tests are not as
 present as they should (I did not research that though). Also, about unit
 tests, it's not only writing them, it's also running them. This is not done
 on a regular basis and needs to be automated in the future and the failures
 need to be fixed.
 No newbie will ever touch any Unit Tests of course. And any action will
 be discussed with the gurus in this field (/me does not point to David).

 We have many tools for developers which are not fully used: the latest great
 tool is Jenkins, which I was aware of only recently. My label Reinforce is
 to take full advantage of those existing tools.

 (It would be cool if at Akademy there are some talks focusing on using
 these tools.)


  What I'd like to see is this new team testing KDE and its applications
 from a user perspective, specifying the different perspectives the KDE
 end user will face (different OS, different form factors, etc) and
 reporting problems (not necessarily bugs, as in crashes) and proposing
 new solutions.

 This is addressed for 4.9 as putting in place a few selected areas of
 functional testing and hopefully we will assess some automated UI testing
 tools and start using them in the following releases. I hope we can gather
 enough beta testers and make this work.

  I'm really hopeful about such a team, I think it's a good opportunity
 for KDE to be able to reach contributions for a less common sector of
 our community and keep working together for a greater KDE experience.

 Cheers!
 Aleix

  Thanks for your input!

 Anne-Marie




Re: Setting up a Quality Team within KDE

2012-04-06 Thread Aleix Pol
On Thu, Apr 5, 2012 at 1:42 PM, Anne-Marie Mahfouf
annemarie.mahf...@free.fr wrote:
 Hi,

 We would like to set up a Quality Testing Team within KDE in order to better
 test our next KDE SC and make the beta process more efficient.
 Attached is the first draft of the ideas I brainstormed. Source .odt of this
 document is available at
 http://quickgit.kde.org/index.php?p=scratch%2Fannma%2Fquality-testing-doc.git

 (We can make it a public Google doc if it is more convenient or an
 etherpad).

 The document roughly describes 2 areas of action:
 - reinforcement (labelled Reinforce) of existing structures (mainly
 targeted to developers and mainly relevant before the beta stage of the
 release). This could

 be used as guidelines for new developers. Of course it won't be newbies
 who will develop Unit Tests or check the code quality. But some guidelines
 can
 be useful for libs outside Frameworks.
 An area of relevant reinforcement that can be currently done is the Review
 process for new integration.
 - new infra-structures: this is labelled New in the doc: this is what I
 would see to happen for the 4.9 release.


 A new mailing list has been set up in order to discuss all this: please
 subscribe to it if you would like to be part of this
 https://mail.kde.org/mailman/listinfo/kde-testing
 An IRC channel also was created on Freenode:
 #kde-quality

 Please join the mailing list and the IRC channel so we can set up a plan to
 start putting all this in gear!

 Thanks in advance,

 Anne-Marie and Myriam

Hi!
I think this project is a very interesting idea and definitely
something we need. That said, I'd like to propose some small changes
on how this should be done.

In the document there are some actions to be taken that sound a bit
too technical, for example it specifies reinforcing the unit testing.
This is something that we should have been doing already and I don't
see someone from the Quality Team going to a development team and
improving that.

What I'd like to see is this new team testing KDE and its applications
from a user perspective, specifying the different perspectives the KDE
end user will face (different OS, different form factors, etc) and
reporting problems (not necessarily bugs, as in crashes) and proposing
new solutions.

I'm really hopeful about such a team, I think it's a good opportunity
for KDE to be able to reach contributions for a less common sector of
our community and keep working together for a greater KDE experience.

Cheers!
Aleix


Re: Setting up a Quality Team within KDE

2012-04-06 Thread Anne-Marie Mahfouf

On 04/06/2012 02:23 PM, Aleix Pol wrote:

On Thu, Apr 5, 2012 at 1:42 PM, Anne-Marie Mahfouf
annemarie.mahf...@free.fr  wrote:

Hi,

We would like to set up a Quality Testing Team within KDE in order to better
test our next KDE SC and make the beta process more efficient.
Attached is the first draft of the ideas I brainstormed. Source .odt of this
document is available at
http://quickgit.kde.org/index.php?p=scratch%2Fannma%2Fquality-testing-doc.git

(We can make it a public Google doc if it is more convenient or an
etherpad).

The document roughly describes 2 areas of action:
- reinforcement (labelled Reinforce) of existing structures (mainly
targeted to developers and mainly relevant before the beta stage of the
release). This could

be used as guidelines for new developers. Of course it won't be newbies
who will develop Unit Tests or check the code quality. But some guidelines
can
be useful for libs outside Frameworks.
An area of relevant reinforcement that can be currently done is the Review
process for new integration.
- new infra-structures: this is labelled New in the doc: this is what I
would see to happen for the 4.9 release.


A new mailing list has been set up in order to discuss all this: please
subscribe to it if you would like to be part of this
https://mail.kde.org/mailman/listinfo/kde-testing
An IRC channel also was created on Freenode:
#kde-quality

Please join the mailing list and the IRC channel so we can set up a plan to
start putting all this in gear!

Thanks in advance,

Anne-Marie and Myriam

Hi!
I think this project is a very interesting idea and definitely
something we need. That said, I'd like to propose some small changes
on how this should be done.

In the document there are some actions to be taken that sound a bit
too technical, for example it specifies reinforcing the unit testing.
This is something that we should have been doing already and I don't
see someone from the Quality Team going to a development team and
improving that.
Making unit tests is the developer task (and the document mentions it) 
and we do have solid Unit Tests in kdelibs and Frameworks and 
kdepimlibs. However there are other libs within KDE where maybe unit 
tests are not as present as they should (I did not research that 
though). Also, about unit tests, it's not only writing them, it's also 
running them. This is not done on a regular basis and needs to be 
automated in the future, and the failures need to be fixed.
No newbie will ever touch any Unit Tests of course. And any action 
will be discussed with the gurus in this field (/me does not point to 
David).


We have many tools for developers which are not fully used: the latest great 
tool is Jenkins, which I was aware of only recently. My label Reinforce 
is to take full advantage of those existing tools.


(It would be cool if at Akademy there are some talks focusing on using 
these tools.)



What I'd like to see is this new team testing KDE and its applications
from a user perspective, specifying the different perspectives the KDE
end user will face (different OS, different form factors, etc) and
reporting problems (not necessarily bugs, as in crashes) and proposing
new solutions.
This is addressed for 4.9 as putting in place a few selected areas of 
functional testing and hopefully we will assess some automated UI 
testing tools and start using them in the following releases. I hope we 
can gather enough beta testers and make this work.

I'm really hopeful about such a team, I think it's a good opportunity
for KDE to be able to reach contributions for a less common sector of
our community and keep working together for a greater KDE experience.

Cheers!
Aleix


Thanks for your input!

Anne-Marie