Sounds like everyone's on the same page here, but I thought I'd add that I ran into the same thing when making CLI changes: many of the tests break when they shouldn't.
Want to give a big +1 to deleting tests that test implementation rather than results. Having such tests is a real drain on productivity. I think we can still say that new code should have tests, but if your new code causes 10 unrelated tests to fail when they shouldn't, then it's fair to write one new test for your code and delete the 10 others.
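
To make that concrete, the difference I mean looks roughly like the sketch below (Jasmine-style pseudocode; the require paths, the cordova.plugin() signature, and the fixture project path are made up for illustration, not the real CLI API):

    var fs = require('fs');
    var path = require('path');
    var shell = require('shelljs');       // assumed available to the specs
    var cordova = require('../cordova');  // hypothetical require path

    var project = path.join(__dirname, 'fixtures', 'example-project'); // hypothetical fixture

    describe('plugin add', function () {
        // Style 1: an audit of the implementation. Breaks on any refactor of the
        // copy logic, even when `plugin add` still works for users.
        // (Async handling elided here to keep the sketch short.)
        it('should copy the plugin files and write plugins.json', function () {
            spyOn(shell, 'cp');
            spyOn(fs, 'writeFileSync');
            cordova.plugin('add', 'org.apache.cordova.device');
            expect(shell.cp).toHaveBeenCalled();
            expect(fs.writeFileSync).toHaveBeenCalled();
        });

        // Style 2: a check of the result. Only breaks when the user-visible
        // behavior breaks.
        it('should install the plugin into the project', function (done) {
            cordova.plugin('add', 'org.apache.cordova.device', function (err) {
                expect(err).toBeFalsy();
                var installed = path.join(project, 'plugins', 'org.apache.cordova.device', 'plugin.xml');
                expect(fs.existsSync(installed)).toBe(true);
                done();
            });
        });
    });

The first spec is exactly the kind I'd delete; the second is the one new test I'd write in its place.
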
On Fri, Nov 8, 2013 at 2:17 PM, Braden Shepherdson <bra...@chromium.org> wrote:

> On Fri, Nov 8, 2013 at 1:48 PM, Lorin Beer <lorin.beer....@gmail.com> wrote:
>
>>> What we're aiming for here is that we mock as little as is practical,
>>
>> I think that kind of defeats the idea of unit tests in the first place. I like the clean delineation between unit tests and integration tests, and it sounds like both need to be improved. Mocking things at the boundary of the CLI is in line with this.
>
> Who said I was writing a unit test? We're writing a separate directory of integration tests, and we're hoping to keep them fast enough to call every time.
>
>>> We'll do it first, and then solve the speed problems later if any arise.
>>
>> Agreed, and we can mitigate potential problems by keeping unit tests small and within the boundaries of the CLI.
>>
>>> My main frustration with the current state of the tests is how fragile some of them are with respect to even minor changes in the CLI. As I see it, this is closely related to mocking the file system calls.
>>
>> Yeah, mocking file system calls is a little goofy, especially considering the scope of the CLI as a project. We can reasonably expect a user to have an HD, no?
>>
>> My philosophy on testing is not to waste time optimizing tests, it's to keep the tests small and lightweight. Each test deals with the smallest unit of code or functionality possible. That's what makes them fast. Integration tests deal with functionality that encompasses inter-process/inter-project/network communication, and where latency is expected.
>
> Many of our existing "unit" tests, at least of the main workflows like platform.add and plugin.add, are more like audits of the implementation than real unit tests. They're not testing the behavior of the functions ("should install the plugin") but the way the function works ("should check this file exists, then copy this file, then exec this command"). That's fragile and lame, and doesn't test as much as we'd like.
>
> The tests for things like the lazy-loader or hooker are much better, testing things like "I'm going to fire the hooks for some event and it had better call my registered hooks".
>
> I hope that eventually these "integration" tests are (a) fast enough to run every time, and (b) a replacement for the existing `platform` and `plugin` tests.
>
> Braden
>
>> On Thu, Nov 7, 2013 at 4:32 PM, Mark Koudritsky <kam...@google.com> wrote:
>>
>>> My main frustration with the current state of the tests is how fragile some of them are with respect to even minor changes in the CLI. As I see it, this is closely related to mocking the file system calls. The CLI is all about massaging the files around; most of the state we care about lives on the file system. So once we mock the fs calls and can't check for the existence and contents of files, we don't have much to check for in the tests, so we start checking implementation details like specific function calls, and then it gets fragile. As far as I understand, the kind of relationship that the CLI has with the file system is not so common in other Node.js based projects, and the distinction between unit and integration tests is somewhat blurred here.
>>>
>>> Anyway, as Braden said, we'll see what parts of the end-to-end tests will be slow and work on speeding some of them up and mocking others away. I've fallen prey to premature optimization traps way too many times by now :)
>>>
>>> - Mark
>>>
>>> On Thu, Nov 7, 2013 at 3:06 PM, Braden Shepherdson <bra...@chromium.org> wrote:
>>>
>>>> What we're aiming for here is that we mock as little as is practical, and ideally only mock things at the boundaries of the CLI: calls to the platform scripts, and some calls to Plugman (generally we want to let these end-to-end tests call the real Plugman, but there are exceptions).
>>>>
>>>> We'll do it first, and then solve the speed problems later if any arise. There are some options here, like I mentioned above: use a fake fs implementation [1] or a ramdisk with (much!) better performance than the real disk. And just make the CLI faster in general.
>>>>
>>>> Braden
>>>>
>>>> [1] https://github.com/eldargab/node-fake-fs
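
To sketch what that "simplest form" might look like in a spec: each test gets a scratch directory on the real filesystem, and the scratch root is the one thing you'd later point at a ramdisk (or swap for a fake fs) if speed becomes a problem. The cordova.create() call and the require paths below are placeholders, not the actual API.

    var fs = require('fs');
    var os = require('os');
    var path = require('path');
    var shell = require('shelljs');       // assumed available to the specs
    var cordova = require('../cordova');  // hypothetical require path

    describe('end-to-end: create', function () {
        var tmp;

        beforeEach(function () {
            // Unique scratch dir per spec. Point this at a ramdisk later (for
            // example via the TMPDIR env var) if disk I/O becomes the bottleneck.
            tmp = path.join(os.tmpdir(), 'cordova-e2e-' + Date.now());
            shell.mkdir('-p', tmp);
        });

        afterEach(function () {
            shell.rm('-rf', tmp);
        });

        it('lays a new project down on disk', function (done) {
            // create() here stands in for whatever entry point the end-to-end suite drives.
            cordova.create(path.join(tmp, 'hello'), 'com.example.hello', 'Hello', function (err) {
                expect(err).toBeFalsy();
                expect(fs.existsSync(path.join(tmp, 'hello', 'www', 'config.xml'))).toBe(true);
                done();
            });
        });
    });
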
>>>> On Thu, Nov 7, 2013 at 2:57 PM, Anis KADRI <anis.ka...@gmail.com> wrote:
>>>>
>>>>> The reason why every FS call is mocked is speed, but speed is subjective in my opinion. Given the features of the CLI, my opinion is that anything < 1 minute is acceptable. When I run tests, I am not actively watching them execute.
>>>>>
>>>>> I think the only calls that should be mocked are network calls, because you should be able to run tests offline.
>>>>>
>>>>> I think we should convert (trash?) existing tests back to what they were instead of adding more end-to-end tests. We should also pay more attention to Windows.
>>>>>
>>>>> On Thu, Nov 7, 2013 at 10:31 AM, Braden Shepherdson <bra...@chromium.org> wrote:
>>>>>
>>>>>> Alright, Mark and I have discussed this further, and we will be beginning the effort with some end-to-end tests that will supplement the existing tests.
>>>>>>
>>>>>> To some extent this duplicates things that go on in the CI, since it checks out various plugins using the tools. But we think it's still a worthwhile effort, since it will make it much easier to catch any problems at commit time.
>>>>>>
>>>>>> The main pain point in the existing tests that we'll want to fix is their fragility. The worst offenders here are the platform parsers, especially wp7+8. So Steve, if you're looking for ways to help, I'd suggest looking at those (wp in particular, but all of them). These are some of the worst, most vacuous tests we have. They're not providing sample inputs and checking the outputs; they are checking that the right functions get called. I propose that they should operate on a real, checked-in example project in the spec/fixtures directory. The tests should run the parsers and make sure all the files are in the right places and have the right contents.
>>>>>>
>>>>>> Braden
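
A fixture-driven parser spec along those lines could look roughly like this. Everything named below (the fixture layout, the parser module path, the config argument, the asserted file paths) is an assumption for illustration rather than the actual parser API:

    var fs = require('fs');
    var os = require('os');
    var path = require('path');
    var shell = require('shelljs');

    var android_parser = require('../src/metadata/android_parser'); // hypothetical path
    var fixture = path.join(__dirname, 'fixtures', 'projects', 'android'); // hypothetical fixture

    describe('android parser (fixture-driven)', function () {
        var project;

        beforeEach(function () {
            // Work on a throwaway copy so specs never dirty the checked-in fixture.
            project = path.join(os.tmpdir(), 'parser-spec-' + Date.now());
            shell.mkdir('-p', project);
            shell.cp('-R', path.join(fixture, '*'), project);
        });

        afterEach(function () {
            shell.rm('-rf', project);
        });

        it('copies www assets and updates the native config', function (done) {
            var parser = new android_parser(project); // hypothetical constructor
            parser.update_project(null /* hypothetical config argument */, function () {
                expect(fs.existsSync(path.join(project, 'assets', 'www', 'index.html'))).toBe(true);
                var strings = fs.readFileSync(path.join(project, 'res', 'values', 'strings.xml'), 'utf-8');
                expect(strings).toMatch('HelloCordova'); // app name assumed set in the fixture's config.xml
                done();
            });
        });
    });
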
>>>>>> On Thu, Nov 7, 2013 at 1:02 PM, Steve Gill <stevengil...@gmail.com> wrote:
>>>>>>
>>>>>>> I don't think we should scrap the current tests. I am totally in favor of having new end-to-end tests. We need to catch regressions better.
>>>>>>>
>>>>>>> Braden, let me know how I can help.
>>>>>>>
>>>>>>> On Nov 7, 2013, at 9:02 AM, Michal Mocny <mmo...@chromium.org> wrote:
>>>>>>>
>>>>>>>> Discussing locally with Braden: these tests seem to be testing internal details instead of expected functionality. It's quite common for valid patches to break the tests and for invalid patches to leave the tests passing. There have been several occurrences recently where things landed even though "cordova create" or "plugin add" were hosed, which seems fairly fundamental to keep working ;)
>>>>>>>>
>>>>>>>> On Thu, Nov 7, 2013 at 11:57 AM, Michal Mocny <mmo...@chromium.org> wrote:
>>>>>>>>
>>>>>>>>> +1 to testing end-to-end, but I thought that's what BuildBot/Medic does? Likely we want to ship those end-to-end test scripts so users can test changes locally, but I think the tests that are inside the CLI now are meant to be unit tests, which, yes, have very limited usefulness in isolation, but perhaps shouldn't be entirely replaced? If they really are useless (not sure), then they should be fixed/removed.
>>>>>>>>>
>>>>>>>>> On Thu, Nov 7, 2013 at 11:40 AM, Braden Shepherdson <bra...@chromium.org> wrote:
>>>>>>>>>
>>>>>>>>>> The CLI tests are bad. I propose making them better.
>>>>>>>>>>
>>>>>>>>>> The tests are bad for two reasons:
>>>>>>>>>> 1. They're fragile, because the tests depend on exactly the right functions being called, sometimes in the right order.
>>>>>>>>>> 2. They don't test what we really want, which is that projects get created and all the files are in the right places.
>>>>>>>>>>
>>>>>>>>>> I propose letting the tests actually run filesystem and related calls, instead of always mocking them out. In the simplest form, that means running them on the real filesystem. If that's too slow, we can investigate other alternatives, like using a ramdisk, or using that emulated fs that runs everything in RAM inside Node.
>>>>>>>>>>
>>>>>>>>>> Mark and I are planning to work on this, starting with the CLI tests. The Plugman tests are better in this regard, but could probably stand to be really called as well.
>>>>>>>>>>
>>>>>>>>>> Any thoughts or objections?
>>>>>>>>>>
>>>>>>>>>> Braden
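
One last sketch, on Anis's earlier point about mocking only the network calls so the suite can run offline: stub just the download step and let everything else, fs included, run for real. The lazy_load module, its function name, and its callback signature are guesses for illustration, and the spy syntax assumes jasmine-node 1.x.

    var path = require('path');
    var lazy_load = require('../src/lazy_load'); // hypothetical path

    describe('platform add, offline', function () {
        beforeEach(function () {
            // Pretend the platform library has already been downloaded: hand back a
            // checked-in copy from spec/fixtures instead of hitting the network.
            spyOn(lazy_load, 'based_on_config').andCallFake(function (root, platform, callback) {
                callback(null, path.join(__dirname, 'fixtures', 'lib', platform));
            });
        });

        it('still installs the platform from the local copy', function () {
            // ...drive the real `platform add` flow here and assert on the files it
            // writes, exactly as in the earlier sketches.
        });
    });
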