>> do we have a plan to integrate with a dependency injection framework?

No, we (the maintainers) have been pretty much against adding more
frameworks because of the performance overhead and dependency management
problems they bring.
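
That said, plain constructor injection gets you most of the unit-testing
benefit without adding any framework. A minimal sketch, using entirely
hypothetical names (nothing below is an actual Cassandra class):

    import java.util.function.LongSupplier;

    class Compactor
    {
        private final LongSupplier freeBytes;

        // The dependency arrives through the constructor, so a unit test
        // can hand in a stub instead of reaching for a global singleton.
        Compactor(LongSupplier freeBytes)
        {
            this.freeBytes = freeBytes;
        }

        boolean hasRoomFor(long bytesNeeded)
        {
            return freeBytes.getAsLong() >= bytesNeeded;
        }
    }

    class CompactorTest
    {
        // No framework needed: the "disk" is a lambda reporting 100 free
        // bytes. Run with assertions enabled (java -ea).
        void rejectsWhenDiskIsNearlyFull()
        {
            Compactor c = new Compactor(() -> 100L);
            assert !c.hasRoomFor(200L);
        }
    }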

On Thu, Mar 16, 2017 at 2:04 PM, Qingcun Zhou <zhouqing...@gmail.com> wrote:

> Since we're here, do we have a plan to integrate with a dependency
> injection framework like Dagger2? Otherwise it'll be difficult to write
> unit test cases.
>
> On Thu, Mar 16, 2017 at 1:16 PM, Edward Capriolo <edlinuxg...@gmail.com>
> wrote:
>
> > On Thu, Mar 16, 2017 at 3:10 PM, Jeff Jirsa <jji...@apache.org> wrote:
> >
> > >
> > >
> > > On 2017-03-16 10:32 (-0700), François Deliège <franc...@instagram.com>
> > > wrote:
> > > >
> > > > To get this started, here is an initial proposal:
> > > >
> > > > Principles:
> > > >
> > > > 1. Tests always pass.  This is the starting point. If we don't care
> > > > about test failures, then we should stop writing tests. A recurring
> > > > failing test carries no signal and is better deleted.
> > > > 2. The code is tested.
> > > >
> > > > Assuming we can align on these principles, here is a proposal for
> > > > their implementation.
> > > >
> > > > Rules:
> > > >
> > > > 1. Each new release passes all tests (no flakiness).
> > > > 2. If a patch has a failing test (a test touching the same code
> > > > path), the code or test should be fixed prior to being accepted.
> > > > 3. Bug fixes should have one test that fails prior to the fix and
> > > > passes after it (see the sketch after this list).
> > > > 4. New code should have at least 90% test coverage.
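> > > >
> > > > To make rule 3 concrete, here is a minimal regression-test sketch in
> > > > JUnit 4 style. Everything in it is hypothetical and self-contained;
> > > > it is not taken from any actual patch:
> > > >
> > > >     import org.junit.Test;
> > > >     import static org.junit.Assert.assertEquals;
> > > >
> > > >     public class FreeSpaceRegressionTest
> > > >     {
> > > >         // Stand-in for the code under test: clamp free space at 0.
> > > >         static long usableBytes(long total, long used)
> > > >         {
> > > >             return Math.max(0, total - used);
> > > >         }
> > > >
> > > >         // Fails before the clamping fix (the buggy version returned
> > > >         // a negative value) and passes after it; committing the test
> > > >         // pins the bug so it cannot silently return.
> > > >         @Test
> > > >         public void neverReportsNegativeFreeSpace()
> > > >         {
> > > >             assertEquals(0, usableBytes(100, 150));
> > > >         }
> > > >     }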
> > > >
> > > I agree with all of these and hope they become codified and followed.
> > > I don't know anyone who believes we should be committing code that
> > > breaks tests - but we should be more strict with requiring green test
> > > runs, and perhaps more strict with reverting patches that break tests
> > > (or cause them to be flaky).
> > >
> > > Ed also noted on the user list [0] that certain sections of the code
> > > itself are difficult to test because of singletons - I agree with the
> > > suggestion that it's time to revisit CASSANDRA-7837 and CASSANDRA-10283
> > >
> > > Finally, we should also recall Jason's previous notes [1] that the
> > > actual test infrastructure available is limited - the system provided
> > > by DataStax is not generally open to everyone (and not guaranteed to
> > > be permanent), and the infrastructure currently available to the ASF
> > > is somewhat limited (much slower, at the very least). If we require
> > > tests passing (and I agree that we should), we need to define how
> > > we're going to be testing (or how we're going to be sharing test
> > > results), because the ASF hardware isn't going to be able to do dozens
> > > of dev branch dtest runs per day in its current form.
> > >
> > > 0: https://lists.apache.org/thread.html/f6f3fc6d0ad1bd54a6185ce7bd7a2f6f09759a02352ffc05df92eef6@%3Cuser.cassandra.apache.org%3E
> > > 1: https://lists.apache.org/thread.html/5fb8f0446ab97644100e4ef987f36e07f44e8dd6d38f5dc81ecb3cdd@%3Cdev.cassandra.apache.org%3E
> > >
> > >
> > >
> > Ed also noted on the user list [0] that certain sections of the code
> > itself are difficult to test because of singletons - I agree with the
> > suggestion that it's time to revisit CASSANDRA-7837 and CASSANDRA-10283
> >
> > Thanks for the shout out!
> >
> > I was just looking at a patch about compaction. The patch was to
> > calculate free space correctly in case X. Compaction is not something
> > that requires multiple nodes to test. The logic on the surface seems
> > simple: find tables of similar size, select them, and merge them. The
> > reality turns out not to be that way. The coverage itself, both branch
> > and line, may be very high, but what the code does not do is directly
> > account for a wide variety of scenarios. Without direct tests you end
> > up with a mental approximation of what the code does; that
> > approximation varies from person to person and accounts only for the
> > cases that fit in your mind. For example, you personally are only
> > running LevelDB-inspired compaction.
> >
> > Being that this is not a multi-node problem, you should be able to
> > refactor this heavily: pull everything out to a static method where all
> > the parameters are arguments, or inject mocks into the current code, and
> > develop some scenario-based coverage, as sketched below.
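> >
> > A rough sketch of that extraction, with made-up names and a made-up
> > similarity heuristic (this is the shape of the refactor, not the actual
> > compaction code):
> >
> >     import java.util.ArrayList;
> >     import java.util.List;
> >
> >     final class CompactionMath
> >     {
> >         private CompactionMath() {}
> >
> >         // Candidate selection as a pure function: given sstable sizes
> >         // sorted ascending, pick the leading run that stays within
> >         // `tolerance` of the smallest. No singletons, no cluster, and
> >         // no mocks are needed to exercise it.
> >         static List<Long> similarSized(List<Long> sortedSizes, double tolerance)
> >         {
> >             List<Long> picked = new ArrayList<>();
> >             if (sortedSizes.isEmpty())
> >                 return picked;
> >             double limit = sortedSizes.get(0) * (1 + tolerance);
> >             for (long size : sortedSizes)
> >             {
> >                 if (size > limit)
> >                     break;
> >                 picked.add(size);
> >             }
> >             return picked;
> >         }
> >     }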
> >
> > That is how I typically "rescue" code I take over. I look at the
> > nightmare and say, "damn, I am really afraid to touch this". I construct
> > 8 scenarios that test green. Then I force some testing into it through
> > careful refactoring. Now I probably know -something- about it. Now you
> > are fairly free to do a wide-ranging refactor, because you have at least
> > accounted for 8 scenarios and you have put unit test traps in place so
> > that some rules are enforced (or the person changing the code has to
> > actively REMOVE your tests, asserting that what they covered was never
> > valid or no longer is). Later on you (or someone else) __STILL__ might
> > screw the entire thing up, but at least you can now build forward.
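> >
> > A few of those green scenarios against the hypothetical helper above
> > might look like this (JUnit 4 style):
> >
> >     import static org.junit.Assert.assertEquals;
> >     import static org.junit.Assert.assertTrue;
> >
> >     import java.util.List;
> >     import org.junit.Test;
> >
> >     public class CompactionScenarioTest
> >     {
> >         @Test
> >         public void emptyInputSelectsNothing()
> >         {
> >             assertTrue(CompactionMath.similarSized(List.of(), 0.5).isEmpty());
> >         }
> >
> >         @Test
> >         public void similarSizesAreGroupedTogether()
> >         {
> >             // 100 and 120 are within 50% of 100; 500 is not.
> >             assertEquals(List.of(100L, 120L),
> >                          CompactionMath.similarSized(List.of(100L, 120L, 500L), 0.5));
> >         }
> >     }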
> >
> > Anyway, that patch on compaction was great and I am sure it improved
> > things. That being said, it did not add any tests :). So it can easily
> > be undone by the next person who does not understand the specific issue
> > being addressed. Inline comments almost scream "we need a test" to me,
> > but not everyone believes that.
> >
>
>
>
> --
> Thank you & Best Regards,
> --Simon (Qingcun) Zhou
>
