keep us posted before any breakthrough as well, please. I'm very interested.

and good hunting of course,
Daan

On Sat, Nov 2, 2013 at 2:35 PM, Sudha Ponnaganti
<sudha.ponnaga...@citrix.com> wrote:
> That is only for unit tests - we need to instrument code coverage for
> BVTs and regressions, i.e. integration tests.  We are pursuing this in
> our lab. If we get any breakthrough, we will post it to the forum.
> Because of the customized nature of the automation framework, there are
> a few challenges there.
>
> ________________________________________
> From: Laszlo Hornyak [laszlo.horn...@gmail.com]
> Sent: Friday, November 01, 2013 10:48 AM
> To: dev@cloudstack.apache.org
> Subject: Re: Tiered Quality
>
> I have heard about commercial tools that do more advanced coverage
> tracking, but in open source I am not sure Sonar really has an
> alternative. It is pretty cool anyway.
> Btw, the overall code coverage is about 3.6%; it is probably not worth
> trying something more advanced at that level.
>
>
> On Thu, Oct 31, 2013 at 9:12 PM, Daan Hoogland <daan.hoogl...@gmail.com> wrote:
>
>> one note on testing, guys,
>>
>> I see that the analysis site gives line coverage and branch coverage.
>> I don't see anything on distinct paths. What I mean is that the
>> program
>> if(a)
>>  A
>> else
>>  B
>> if(b)
>>  C
>> else
>>  D
>> if(c)
>>  E
>> else
>>  F
>> has eight (2^3) distinct paths. It is not enough to show that
>> A, B, C, D, E, and F are all hit, and hence every line and branch;
>> all combinations of a/!a, b/!b, and c/!c also need to be hit.
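>>
>> To make that concrete, here is a minimal sketch in Java (hypothetical
>> code, not from the CloudStack tree). Two calls, run(true, true, true)
>> and run(false, false, false), already give 100% line and branch
>> coverage, yet they exercise only 2 of the 8 paths:
>>
>>   class PathCoverageDemo {
>>       // Three independent branches: two outcomes each, 2^3 = 8 paths.
>>       static String run(boolean a, boolean b, boolean c) {
>>           String path = "";
>>           path += a ? "A" : "B";
>>           path += b ? "C" : "D";
>>           path += c ? "E" : "F";
>>           return path;
>>       }
>>
>>       public static void main(String[] args) {
>>           // Full path coverage needs all eight combinations.
>>           for (boolean a : new boolean[] {true, false})
>>               for (boolean b : new boolean[] {true, false})
>>                   for (boolean c : new boolean[] {true, false})
>>                       System.out.println(run(a, b, c));
>>       }
>>   }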
>>
>> Now I am not saying that we should not score our code if we cannot do
>> it this way, but we are kind of kidding ourselves if we don't face up
>> to the fact that coverage of lines of code or branches is not a
>> completeness criterion of any kind. I don't know whether any of the
>> mentioned tools does analysis this thorough, but if one does, we
>> should go for that one.
>>
>> €0,02
>> Daan
>>
>> On Tue, Oct 29, 2013 at 2:21 AM, Darren Shepherd
>> <darren.s.sheph...@gmail.com> wrote:
>> > Starting with the honor system might be good.  It's not so easy
>> > sometimes to relate lines of code to functionality.  Also, just
>> > because a test hits a line of code doesn't mean it's really tested.
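>> >
>> > As a quick sketch of that pitfall (a hypothetical JUnit test, not
>> > from our tree): both tests below produce identical line coverage,
>> > but only the second one verifies anything:
>> >
>> >   import org.junit.Test;
>> >   import static org.junit.Assert.assertEquals;
>> >
>> >   public class VolumeSizeTest {
>> >       // Hypothetical helper under test.
>> >       static long toBytes(long gigabytes) {
>> >           return gigabytes * 1024L * 1024L * 1024L;
>> >       }
>> >
>> >       @Test
>> >       public void hitsTheLineButTestsNothing() {
>> >           toBytes(8);  // executed and counted as covered, no assertion
>> >       }
>> >
>> >       @Test
>> >       public void actuallyTestsTheResult() {
>> >           assertEquals(8L << 30, toBytes(8));
>> >       }
>> >   }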
>> >
>> > Can't we just get people to put a check mark on some table in the
>> > wiki?
>> >
>> > Darren
>> >
>> >> On Oct 28, 2013, at 12:08 PM, Santhosh Edukulla <santhosh.eduku...@citrix.com> wrote:
>> >>
>> >> 1. It seems we already have code coverage numbers using Sonar, as
>> >> below. It currently shows only the numbers for unit tests.
>> >>
>> >> https://analysis.apache.org/dashboard/index/100206
>> >>
>> >> 2. The link below explains how to use it for both integration and
>> >> unit tests.
>> >>
>> >>
>> >> http://docs.codehaus.org/display/SONAR/Code+Coverage+by+Integration+Tests+for+Java+Project
>> >>
>> >> 3. Several articles suggest it has a good decision coverage facility
>> >> compared to other coverage tools.
>> >>
>> >>
>> >> http://onlysoftware.wordpress.com/2012/12/19/code-coverage-tools-jacoco-cobertura-emma-comparison-in-sonar/
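>> >>
>> >> As a hedged illustration of why decision coverage matters (a
>> >> hypothetical snippet, not from our code base): a single call to
>> >> canAttach(true, true) gives 100% line coverage of the method below,
>> >> while decision coverage also demands tests where each operand is
>> >> false:
>> >>
>> >>   static boolean canAttach(boolean ready, boolean healthy) {
>> >>       return ready && healthy;  // one line, several decision outcomes
>> >>   }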
>> >>
>> >> Regards,
>> >> Santhosh
>> >> ________________________________________
>> >> From: Laszlo Hornyak [laszlo.horn...@gmail.com]
>> >> Sent: Monday, October 28, 2013 1:43 PM
>> >> To: dev@cloudstack.apache.org
>> >> Subject: Re: Tiered Quality
>> >>
>> >> Sonar already tracks the unit test coverage. It is also able to track
>> >> the integration test coverage; however, this might be a bit more
>> >> sophisticated in CS, since not all hardware/software requirements are
>> >> available in the Jenkins environment. That said, this could be a
>> >> problem in any environment.
>> >>
>> >>
>> >>> On Mon, Oct 28, 2013 at 5:53 AM, Prasanna Santhanam <t...@apache.org> wrote:
>> >>>
>> >>> We need a way to check the coverage of (unit+integration) tests: how
>> >>> many lines of code are hit on a deployed system for the
>> >>> component donated/committed. We don't have that for existing tests,
>> >>> so it is hard to judge whether a feature that comes with tests
>> >>> covers enough of itself.
>> >>>
>> >>>> On Sun, Oct 27, 2013 at 11:00:46PM +0100, Laszlo Hornyak wrote:
>> >>>> Ok, makes sense, but that sounds like even more work :) Can you
>> >>>> share the plan for how this will work?
>> >>>>
>> >>>>
>> >>>> On Sun, Oct 27, 2013 at 7:54 PM, Darren Shepherd <darren.s.sheph...@gmail.com> wrote:
>> >>>>
>> >>>>> I think it can't be at a component level because components are too
>> >>>>> large. It needs to be at a feature or implementation level.  For
>> >>>>> example, live storage migration for xen and live storage migration
>> >>>>> for kvm (don't know if that's a real thing) would be two separate
>> >>>>> items.
>> >>>>>
>> >>>>> Darren
>> >>>>>
>> >>>>> On Oct 27, 2013, at 10:57 AM, Laszlo Hornyak <laszlo.horn...@gmail.com> wrote:
>> >>>>>>
>> >>>>>> I believe this will be very useful for users.
>> >>>>>> As far as I understand, someone will have to qualify the
>> >>>>>> components. What will be the method for qualification? I do not
>> >>>>>> think test coverage alone would be right, but then, if you want to
>> >>>>>> go deeper, you need a bigger effort to test the components.
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> On Sun, Oct 27, 2013 at 4:51 PM, Darren Shepherd <darren.s.sheph...@gmail.com> wrote:
>> >>>>>>
>> >>>>>>> I don't know if a similar thing has been talked about before, but
>> >>>>>>> I thought I'd just throw this out there.  The ultimate way to
>> >>>>>>> ensure quality is that we have unit test and integration test
>> >>>>>>> coverage of all functionality.  That way, somebody authors some
>> >>>>>>> code, commits it to, for example, 4.2, and then when we release
>> >>>>>>> 4.3, 4.4, etc., they aren't on the hook to manually test the
>> >>>>>>> functionality with each release.  The obvious nature of a
>> >>>>>>> community project is that people come and go.  If a contributor
>> >>>>>>> wants to ensure the long-term viability of their component, they
>> >>>>>>> should ensure that there are unit+integration tests.
>> >>>>>>>
>> >>>>>>> Now, for whatever reason, good or bad, it's not always possible
>> >>>>>>> to have full integration tests.  I don't want to throw down the
>> >>>>>>> gauntlet and say everything must have coverage, because that
>> >>>>>>> would mean some useful code or feature won't get in just because
>> >>>>>>> coverage wasn't possible at the time.
>> >>>>>>>
>> >>>>>>> What I propose is that we put every feature or function into a
>> >>>>>>> tier reflecting its quality (very similar to how OpenStack
>> >>>>>>> qualifies their hypervisor integrations).  Tier A means unit test
>> >>>>>>> and integration test coverage gates the release.  Tier B means
>> >>>>>>> unit test coverage gates the release.  Tier C means who knows, it
>> >>>>>>> compiled.  We can go through and classify the components, and
>> >>>>>>> then as a community we can try to get as much into Tier A as
>> >>>>>>> possible.
>> >>>>>>>
>> >>>>>>> Darren
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> --
>> >>>>>>
>> >>>>>> EOF
>> >>>>
>> >>>>
>> >>>>
>> >>>> --
>> >>>>
>> >>>> EOF
>> >>>
>> >>> --
>> >>> Prasanna.,
>> >>>
>> >>> ------------------------
>> >>> Powered by BigRock.com
>> >>
>> >>
>> >> --
>> >>
>> >> EOF
>>
>
>
>
> --
>
> EOF
