Re: Commit loss prevention

2013-11-14 Thread Kohsuke Kawaguchi
of repositories you can do it that way, too. This has the benefit of not getting bombarded by notification e-mails for repositories you don't care about. I think this is actually tangential to commit loss prevention, as I could make the same mistake Luca did and mass-update all the remote refs

Re: Commit loss prevention

2013-11-14 Thread Kohsuke Kawaguchi
On 11/14/2013 09:54 AM, domi wrote: I think this was an exception and we should treat it as such… Yes, I agree. And we were able to recover all the commits after all, so I don't think we need to throw the baby out with the bath water. Sure, this could happen again, but by doing some

Re: Commit loss prevention

2013-11-14 Thread Kohsuke Kawaguchi
On 11/13/2013 11:58 PM, Luca Milanesio wrote: We need to run some tests on the scalability of the events API because: 1) we need to monitor over 1000 repos (one call per repo? one call for all?) 2) when monitoring the entire jenkinsci org, 300 events might not be enough in case of a catastrophic
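
For illustration, a minimal sketch of the kind of monitoring script being discussed, assuming a personal access token in GITHUB_TOKEN; the org name and log path are illustrative and this is not the project's actual tooling. It polls the org events feed once and appends every PushEvent's before/head SHAs to a local file, which is essentially the "remote audit trail" idea:

    # Minimal sketch: poll the GitHub org events feed and append every
    # PushEvent's old ("before") and new ("head") SHAs to a local audit
    # log, so commits dropped by a forced update stay discoverable.
    import json
    import os
    import urllib.request

    ORG = "jenkinsci"             # assumption: the org to watch
    AUDIT_LOG = "push-audit.log"  # illustrative path

    def fetch_org_events(org, token=None):
        req = urllib.request.Request("https://api.github.com/orgs/%s/events" % org)
        if token:
            req.add_header("Authorization", "token " + token)
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    def record_pushes(events):
        with open(AUDIT_LOG, "a") as log:
            for ev in events:
                if ev.get("type") != "PushEvent":
                    continue
                p = ev["payload"]
                log.write("%s %s %s %s -> %s\n" % (
                    ev["created_at"], ev["repo"]["name"],
                    p["ref"], p["before"], p["head"]))

    if __name__ == "__main__":
        record_pushes(fetch_org_events(ORG, os.environ.get("GITHUB_TOKEN")))

Run from cron, the poll interval would have to be short enough that the feed's limited event window (the 300-event limit Luca mentions) is never exhausted between runs.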

Re: Commit loss prevention

2013-11-13 Thread Kohsuke Kawaguchi
On 11/11/2013 11:05 PM, Luca Milanesio wrote: Seems a very good idea, it is basically a remote audit trail. The only concern is the throttling on the GitHub API: it would be better then to do the scripting on a local mirror of the GitHub repos. When you receive a forced update you do have

Re: Commit loss prevention

2013-11-13 Thread Kohsuke Kawaguchi
OK, that's a fair point. I do recall writing a daemon that cleans up access control on repositories (among other things, like disabling the issue tracker), but I'm not too sure whether we are running it regularly or not. Maybe we can extend https://jenkins-ci.org/account so that people can

Re: Commit loss prevention

2013-11-12 Thread Christopher Orr
On 12/11/13 07:25, Kohsuke Kawaguchi wrote: I still feel strongly that we should maintain the open commit access policy. This is how we've been operating for the longest time, and otherwise adding/removing developers to repositories would be prohibitively tedious. I agree that the

Re: Commit loss prevention

2013-11-12 Thread Stephen Connolly
I think part of the issue is that our canonical repositories are on GitHub... I would favour jenkins-ci.org being master of its own destiny... hence I would recommend hosting canonical repos on project-owned hardware and using GitHub as a mirror of those canonical repositories... much like the way

Re: Commit loss prevention

2013-11-12 Thread Dariusz Łuksza
At CollabNet we have already implemented a feature called History Protection. We have put some thought into this topic and came up with a solution for unintended force pushes and branch deletions. Maybe you can reuse some of our approaches. Here is a short description of the feature. History Protection it
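
For comparison, the plain-git counterpart of that kind of server-side protection is a couple of receive.* settings on a self-hosted bare repository; a minimal sketch, assuming an illustrative repository path (this only works on repositories one actually hosts, not on GitHub, and it is not CollabNet's implementation):

    # Minimal sketch: reject forced updates and branch deletions on a
    # self-hosted bare repository via standard git config keys.
    import subprocess

    REPO = "/srv/git/example.git"   # assumption: a self-hosted bare repo

    for key in ("receive.denyNonFastForwards", "receive.denyDeletes"):
        subprocess.run(["git", "config", key, "true"], cwd=REPO, check=True)

The trade-off is bluntness: unlike an audit-trail approach, this refuses legitimate force pushes too, rather than recording them for later recovery.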

Re: Commit loss prevention

2013-11-12 Thread Kevin Fleming (BLOOMBERG/ 731 LEXIN)
When you say 'canonical' in this proposal, do you mean the repositories used for making releases, or the repositories where development (and especially, pull requests) would be handled? If it's the former, I could see that being worthwhile, especially if *nobody* has permissions to push to the

Re: Commit loss prevention

2013-11-12 Thread Stephen Connolly
On 12 November 2013 14:40, Kevin Fleming (BLOOMBERG/ 731 LEXIN) kpflem...@bloomberg.net wrote: When you say 'canonical' in this proposal, do you mean the repositories used for making releases I mean that they are the official repositories and all others are just mirrors... this is the way

Re: Commit loss prevention

2013-11-12 Thread Kevin Fleming (BLOOMBERG/ 731 LEXIN)
Well, that would mean that merging a pull request on GitHub (especially the quick way, using the web UI) wouldn't update the canonical repository; the repo maintainer would need to push that change to the canonical repository, potentially dealing with a second round of merge conflicts if that

Re: Commit loss prevention

2013-11-12 Thread Stephen Connolly
I am less keen on Gerrit. If anything, this recent experience has me feeling that I don't want Gerrit anywhere near my workflow. On 12 November 2013 15:43, Kevin Fleming (BLOOMBERG/ 731 LEXIN) kpflem...@bloomberg.net wrote: Well, that would mean that merging a pull request on GitHub (especially

Re: Commit loss prevention

2013-11-12 Thread Slide
+1 On Nov 12, 2013 9:48 AM, Stephen Connolly stephen.alan.conno...@gmail.com wrote: I am less keen on Gerrit. If anything, this recent experience has me feeling that I don't want Gerrit anywhere near my workflow. On 12 November 2013 15:43, Kevin Fleming (BLOOMBERG/ 731 LEXIN)

Commit loss prevention

2013-11-11 Thread Kohsuke Kawaguchi
Now that the commits have been recovered and things are almost back to normal, I think it's time to think about how to prevent this kind of incident in the future. Our open commit access policy was partly made possible by the idea that any bad commits can always be rolled back. But where I

Re: Commit loss prevention

2013-11-11 Thread Luca Milanesio
Seems like a very good idea; it is basically a remote audit trail. The only concern is the throttling on the GitHub API: it would be better, then, to do the scripting on a local mirror of the GitHub repos. When you receive a forced update you still have all the previous commits and the full
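
A minimal sketch of that local-mirror audit trail, assuming an existing bare mirror at an illustrative path: snapshot the branch tips before fetching, and save any tip that changed under a backup ref so commits dropped by a forced update stay reachable in the mirror.

    # Minimal sketch: fetch into a local bare mirror and keep the old tip
    # of every branch that moved under refs/audit/<timestamp>/..., so a
    # forced update on GitHub never makes commits unreachable here.
    import subprocess
    import time

    MIRROR = "/var/mirrors/example.git"   # assumption: a bare mirror clone

    def branch_tips():
        out = subprocess.run(
            ["git", "for-each-ref", "--format=%(refname) %(objectname)", "refs/heads/"],
            cwd=MIRROR, capture_output=True, text=True, check=True).stdout
        return dict(line.split() for line in out.splitlines())

    before = branch_tips()
    subprocess.run(["git", "fetch", "origin", "+refs/heads/*:refs/heads/*"],
                   cwd=MIRROR, check=True)
    after = branch_tips()

    stamp = time.strftime("%Y%m%d-%H%M%S")
    for ref, old_sha in before.items():
        if after.get(ref) != old_sha:
            backup = ref.replace("refs/heads/", "refs/audit/%s/" % stamp)
            subprocess.run(["git", "update-ref", backup, old_sha],
                           cwd=MIRROR, check=True)

Running something like this from a cron job also sidesteps the GitHub API throttling concern, since it only needs ordinary git fetch traffic.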