In my opinion, the last few "omfg shit's broken" emails come mainly from a lack of testing. If a committer puts something into master, they need to do "due diligence" for it, and that boils down to one question: do tests pass on the devices and platforms affected by the commit?
On iOS this is easier than on Android, since there are fewer devices to worry about; Android is a wild west. The "gimme feedback on this feature" threads are a good opportunity to reach out to the rest of the committers and ask for a sanity test across more devices (and that should be done at a minimum if you feel you don't have good device coverage on your own).

Here are some rules of thumb I follow:

- If I want to check something into cordova-js that is common across platforms, I put in extra work to do the testing. That means I grab a couple of devices from a couple of different platforms and make sure the tests pass at a reasonable rate (at or around the previous passing % for that platform). Sometimes issues come up that are not caught by the cordova-js unit tests (especially when they are run on node), and sometimes those issues are platform- or device-specific.
- If I'm checking something into platform code, I make sure the sample app (or, better yet, mobile-spec) gets up and running by following the getting-started instructions.
- For iOS, I usually run a smoke test on an iPod, and since I got an iPhone 5 to test on, on that too.
- For Android, I run tests on either a Galaxy Nexus or a Nexus 7 (both running 4.2.1) and on a Nexus S (2.3.6).

That said, this thread has certainly uncovered some issues. As you brought up, Andrew, I cannot find a "hey, let's tag next week" email to the list for 2.4.0rc1 either. The Adobe team was well aware internally that we wanted to cut a release now-ish, but we clearly failed to communicate that on the list. That's our bad, and it definitely won't happen again. As long as everyone runs the tests accordingly, this isn't a huge issue; it is on its way to being resolved, and we'll release the rc1 of 2.4.0 in the coming days.

On 1/21/13 1:24 PM, "Joe Bowser" <bows...@gmail.com> wrote:

>On Mon, Jan 21, 2013 at 1:06 PM, Andrew Grieve <agri...@chromium.org> wrote:
>> One tough thing about this is knowing when you have agreement or not.
>> E.g. for adding File.slice(), Braden sent out an email on the 7th saying
>> that he'd like to add it for iOS & Android. Simon gave it a +1, and no
>> one else responded. He then implemented the feature in a feature branch
>> and merged it when it was working. Since there was an email with few
>> replies, I gathered that it had been discussed and that no one really
>> had anything to say about it.
>>
>> Joe - How do you propose that we make sure that things work? Or are you
>> suggesting that a code review serves this purpose?
>>
>
>I do think a code review serves this purpose. I need to see a published
>branch somewhere that I can test to make sure that things don't break on
>my end before I can say "Hey, this looks good!", and I'd prefer it if
>mobile-spec was updated as part of the feature so I don't have to figure
>out what's being done myself.
>
>> As for code reviews:
>>
>> I'd certainly be interested in more code reviews. I think it's really
>> useful to get feedback on changes. The only time it becomes a burden is
>> when the turnaround time gets too long (e.g. you submit for review and
>> no one looks at it for over a day).
>>
>> Up until now we've been using the GitHub pull-request interface to have
>> others review our changes, but this isn't done very frequently. I also
>> don't love this approach because comments made through it don't get
>> posted back to the cordova mailing list.
>
>I'm not super thrilled by this either, because our GitHub pull-request
>system is completely broken, since we can't actually close requests or
>indicate when we think things are a good idea or not. I think we should
>do what Android does with Gerrit (see
>https://android-review.googlesource.com), but that'll involve additional
>infrastructure and another war with INFRA about whether it's the Apache
>way or whatever.
>
>>
>> Another example is just using feature branches:
>> Last week I tried out the branch approach for CB-2227. I sent an email
>> saying what I wanted to do, created a branch, and sent my first "I've
>> made progress" email out on Wednesday. On Thursday, I responded to the
>> thread saying that the initial work was done and asked for feedback. I
>> got no feedback, so today I merged into master and updated the bug. I
>> did this because I feel pretty confident about the change, but also
>> because it will force everyone to test it out, and we're still early in
>> the current release cycle.
>>
>
>I think this is fine and is exactly what needs to happen. It didn't seem
>to break anything, and there was a branch we could look at. The thing is
>that I don't remember seeing a branch for ArrayBuffer that we could look
>at. I'll dig through my e-mail to find it, but I don't remember seeing it.
>
>> Maybe there's a problem with people not noticing when feedback is being
>> requested? Should we have a more structured way of asking for feedback
>> on a branch? E.g. start a new thread on the ML with the subject "Review
>> Request: ..." or something like that? And if you're proposing a new API,
>> say "API Review: ...". WDYT?
>>
>
>I think that having a proper header on the e-mail would be a good plan.
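
As an aside on the mobile-spec point above: here is a rough sketch of the kind of Jasmine test that could accompany a feature like File.slice(). It is purely illustrative, not the actual mobile-spec code - it assumes a Blob constructor is available and standard W3C slice(start, end, contentType) semantics, whereas a real mobile-spec test would exercise File objects obtained through the Cordova filesystem APIs.

    // Illustrative only: assumes a Blob constructor and W3C slice() semantics.
    // A real mobile-spec test would get its File objects from the Cordova
    // filesystem APIs instead of building a Blob by hand.
    describe('File.slice', function () {
        it('returns a Blob covering the requested byte range', function () {
            var blob = new Blob(['abcdefghij']);         // 10 ASCII bytes
            var chunk = blob.slice(2, 7, 'text/plain');  // bytes 2..6

            expect(chunk.size).toBe(5);
            expect(chunk.type).toBe('text/plain');
            expect(blob.size).toBe(10);                  // original is untouched
        });
    });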