Re: [Catalyst] Managing module regressions.
Here's what we do:

- we have a (VCS-managed) set of tarballs downloaded from CPAN
- we run a CPAN-like server providing those tarballs
- we have a rather large set of distroprefs to skip unreliable tests and apply local patches
- we usually update to the latest CPAN (and perl) releases
- sometimes we have to hold back on a set of modules because of problems (currently we can't update Catalyst, for example, due to some internal libraries exploiting undocumented behaviours; yes, we're going to fix our libraries)
- we smoke the whole set of modules whenever we update a batch of them
- every iteration, we build a package (in our case, RPM) with the version of perl and all modules that we are going to use
- we develop and run all our test suites against that package
- the application packages require the specific perl/CPAN package version they were developed on

It works, and it's not that much work.

--
Dakkar
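For readers unfamiliar with distroprefs: they are small per-distribution preference files (YAML by default) that CPAN.pm reads from its configured prefs_dir, typically ~/.cpan/prefs. A minimal sketch of the kind of file described above follows; the author, distribution name, and patch file are invented placeholders, not taken from the thread, and the patch is resolved via CPAN.pm's patches_dir setting.

    # Sketch only: drop a distroprefs file into CPAN.pm's prefs_dir.
    # SOMEAUTHOR, Foo-Bar and the patch name are invented placeholders.
    mkdir -p ~/.cpan/prefs
    cat > ~/.cpan/prefs/Foo-Bar.yml <<'EOF'
    ---
    comment: "Foo-Bar: tests need network access our build boxes lack; apply local fix"
    match:
      distribution: '^SOMEAUTHOR/Foo-Bar-\d'
    patches:
      - 'Foo-Bar-local-fix.patch'
    test:
      commandline: "echo 'skipping unreliable tests'"
    EOF

With a file like this in place, installing the matching distribution applies the local patch before building and replaces the unreliable test run, which is the same general mechanism Dakkar describes for keeping smoke runs green.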
Re: [Catalyst] Managing module regressions.
On Fri, Jun 29, 2012 at 1:12 PM, Gianni Ceccarelli wrote:
> Here's what we do:
>
> - we have a (VCS-managed) set of tarballs downloaded from CPAN
> - we run a CPAN-like server providing those tarballs
> - we have a rather large set of distroprefs to skip unreliable tests
>   and apply local patches
> - we usually update to the latest CPAN (and perl) releases
> - sometimes we have to hold back on a set of modules because of
>   problems (currently we can't update Catalyst, for example, due to
>   some internal libraries exploiting undocumented behaviours; yes,
>   we're going to fix our libraries)
> - we smoke the whole set of modules whenever we update a batch of them
> - every iteration, we build a package (in our case, RPM) with the
>   version of perl and all modules that we are going to use
> - we develop and run all our test suites against that package
> - the application packages require the specific perl/CPAN package
>   version they were developed on
>
> It works, and it's not that much work.

You have tarballs of every single dependency? How do you determine what those are? And you build a single RPM with perl and all dependencies? Do you use Perlbrew for that? I'd love to hear more about that process.

Here's the difference between my thinking and the operations manager's: he wants to make sure that the same code is used from development to production. The idea is to reduce the risk of bugs from using different versions of dependencies. (That's one reason we are stuck running very old code.)

My view, as a developer, is that I want good test coverage. When it's time to cut a release, what is important to me is to see 100% of tests passing. If the code works, well, it works.

Small sample size, but I haven't heard that regressions from CPAN are a big problem -- I know they happen, of course, but the question I'm after is whether they are significant enough to warrant building much more complex build systems instead of using CPAN in the normal way. I just don't think so. I also don't see a need to manage multiple stacks of modules for different stages of the application.

So, I'm looking at this environment, which seems about as simple as you can hope for. Anyone see any holes in this approach?

- Run a local CPAN ("DarkPAN") repo for our in-house modules, e.g. CPAN::Site with pass-through. Our cpan(m) clients are configured to fetch from the local CPAN first and, if a distribution isn't found there, fetch from the public CPAN.
- For rare CPAN regressions, install those distributions into our local DarkPAN, which clients will install in preference to the version on CPAN. Add a test to catch the regression in the future. (Hopefully a rare and short-lived situation.)
- Developers check out and install dependencies as normal locally (local::lib, perlbrew) and make sure the code has good test coverage.
- In-house modules (as well as apps) are "released" to our local DarkPAN. Dist::Zilla's "release" makes this trivial.
- Automated testing can check both "trunk" and "release" -- trunk by checking out and running tests, and "release" by installing the most recent version with a cpan client and running tests.

And the release process is very similar:

- QA team (or a developer) runs "cpan Our-App" on the target platform, letting it bring in any dependencies as normal.
- Run the unit tests to satisfy the development team that the app is working as expected with the installed dependencies. This is essentially developer "hand-off" to the QA team, confirming the app works as expected.
- Then the QA team tests the app and, if it passes, the app is moved to staging and then production.

I think the significant thing here is that I'm not really worrying about specific versions of dependencies. Sure, code depends on a minimum version of a module, but that's just so tests have a chance of passing. What's important is that the unit tests pass. It's really no different from running "cpan Catalyst::Runtime" and making sure all tests pass.

Sure, it's possible that a newer module ends up on production than on dev, but that would mean unit tests AND QA both failed to detect a bug. And let's be honest, the vast, vast majority of bugs that find their way to production are in our own code.

--
Bill Moseley
mose...@hank.org
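The "local DarkPAN first, public CPAN as fallback" client setup Bill describes can be sketched with cpanm's mirror options; the DarkPAN hostname and application name below are invented placeholders, and CPAN::Site's pass-through achieves much the same thing on the server side instead.

    # Sketch only: prefer the in-house DarkPAN, fall back to public CPAN.
    # cpanm consults --mirror entries in order; --mirror-only keeps it on
    # the listed mirrors' package indexes rather than querying elsewhere.
    cpanm --mirror http://darkpan.example.com/ \
          --mirror http://www.cpan.org/ \
          --mirror-only \
          Our::App

A distribution placed in the DarkPAN (for example an older release held back around a regression) should then win over the public CPAN copy simply because its mirror is consulted first.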
Re: [Catalyst] Managing module regressions.
On 2012-07-02 Bill Moseley wrote:
> You have tarballs of every single dependency? How do you determine
> what those are?

IIRC (and the system works well, so I don't have to remember how it works :) ) we have a wrapper for CPAN.pm that logs the downloads and adds them to the repository.

> And you build a single RPM with perl and all dependencies?

Yes.

> Do you use Perlbrew for that?

We built our system before Perlbrew existed. Some details can be seen in this presentation:

http://www.presentingperl.org/lpw2010/packaging-perl/

> [ big snip ]
>
> I think the significant thing here is that I'm not really worrying about
> specific versions of dependencies. Sure, code depends on a minimum
> version of a module, but that's just so tests have a chance of passing.
> What's important is that the unit tests pass. It's really no different
> from running "cpan Catalyst::Runtime" and making sure all tests pass.

If you have very good coverage, that may well work. What makes me feel not very safe is a scenario I'm having right now at work: the same application works on some VMs, but fails with networking problems on others. Is it a code problem, or a network problem? Two "rpm --verify" runs later, I know that the exact same code is running on all those VMs; two "puppetd" runs prove that the configuration is the same, and at least I've ruled out "different code" as a source of differences. It doesn't help me prove it's the network, but at least I don't have to write a tool to check all the installed distributions…

--
Dakkar
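For concreteness, the rpm --verify check mentioned above looks something like this; the package name is an invented placeholder.

    # Sketch: confirm the installed perl-stack package is still exactly
    # what was built (package name invented for illustration).
    rpm --verify ourapp-perl-stack
    # No output means every packaged file matches the RPM database;
    # output lines flag drift, e.g. "5" for a changed digest, "S" for size.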
Re: [Catalyst] Managing module regressions.
On Mon, Jul 2, 2012 at 3:32 PM, Gianni Ceccarelli wrote:
>> Do you use Perlbrew for that?
>
> We built our system before Perlbrew existed. Some details can be
> seen in this presentation:
>
> http://www.presentingperl.org/lpw2010/packaging-perl/

Well, "some details" is accurate. ;) It's a bit hard to follow. I'll try again later with headphones when it's quiet.

But, essentially, you build a separate Perl on a staging machine with all dependencies and then use RPM to copy it to production, correct?

Do you also use RPM for bringing in non-Perl dependencies?

I assume that by building Perl and installing your modules yourself you are not depending on any existing RPMs of Perl modules.

Thanks,

--
Bill Moseley
mose...@hank.org
Re: [Catalyst] Managing module regressions.
On 2012-07-02 Bill Moseley wrote:
> But, essentially, you build a separate Perl on a staging machine with
> all dependencies and then use RPM to copy it to production, correct?

Yes, to production and to any development VM.

> Do you also use RPM for bringing in non-Perl dependencies?

Yes, like C libraries and external utilities.

> I assume that by building Perl and installing your modules yourself you
> are not depending on any existing RPMs of Perl modules.

Precisely.

--
Dakkar
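For anyone wanting to try the general shape of this without Dakkar's in-house tooling, a rough sketch of "build a private perl plus its modules under one prefix, then package that prefix" follows. The prefix, mirror URL, and spec file name are invented placeholders, and this is not their exact process (the linked presentation describes that).

    # 1. Build a dedicated perl under an app-specific prefix
    #    (run inside an unpacked perl source tree).
    ./Configure -des -Dprefix=/opt/ourapp/perl && make && make test && make install

    # 2. Bootstrap cpanm into that perl, then install the pinned module set,
    #    e.g. from a local mirror.
    curl -L https://cpanmin.us | /opt/ourapp/perl/bin/perl - App::cpanminus
    /opt/ourapp/perl/bin/cpanm --mirror http://darkpan.example.com/ --mirror-only \
        --installdeps /path/to/OurApp

    # 3. Package the whole tree so every machine gets an identical stack.
    rpmbuild -bb ourapp-perl-stack.spec    # the spec simply bundles /opt/ourapp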
Re: [Catalyst] Managing module regressions.
I meant to reply to this a month ago but haven't had the time until now.

On Fri, Jun 29, 2012 at 8:24 AM, Bill Moseley wrote:
> This is a "how do others do this?" post.
>
> In your large (or even not so large) apps I assume at times you experience
> dependency regressions. My question is: do you manage it on a case-by-case
> basis (simply install an older version or fix internally) or is it such a
> significant issue that you have a system for managing modules outside of
> CPAN?

I've been working with a few apps where we haven't experienced a lot of dependency regressions per se, but we have been dealing with several modules that had major API changes. We have had to spend a significant amount of maintenance time in the apps to upgrade the code to handle these updated module APIs.

> The reason I ask is because at work we are considering managing different
> stacks of CPAN and in-house dependencies -- so maybe "dev", "testing", and
> "production/staging" stacks all in separate private CPAN mirrors. And now
> multiply that times the number of different apps we work on.

I would tend to stay away from this approach, as we have managed several apps which use different versions of modules. We have a few different development teams where developers upgrade modules without any set schedule. Some of them are writing modules which are used by all the apps, and the unannounced API changes in those modules have had cascading effects which have slowed down deployment a lot. Only two of these developers have modules on CPAN, so I think the others haven't experienced the real-world pain that comes with rapid API changes and are less cautious with their coding. My feedback here would be to stick with a fixed set of modules as much as possible, no matter how tempting a new module looks.

> My experience is that even with the very large number of dependencies in a
> Catalyst app it's pretty rare to have a regression. It happens, sure,
> and when it happens just deal with it. (And thinking of our own code, it's
> typically not a regression in a module but a fix in a module where our code
> was depending on some broken behavior.)
>
> So, the question is: does anyone else find it necessary to manage
> dependencies as I described above? And if so, what is your process?

We looked at implementing Pinto for module version management, but that effort never really took off. I don't think the part of the team implementing it dealt with the pain the rest of us were experiencing on a day-to-day basis.

> Another argument that is floated around is that we don't want to upgrade
> dependencies often because of potential new bugs. That seems a bit silly to
> me because it's ignoring known bug fixes for the chance that there might be
> some unknown new bug. (Yes, we have apps running Catalyst from 2010!)

This has been a big pain point for us, enough that the build engineer I work with to deploy the RPMs is now averse to upgrading modules. The config module we developed in house changed its API three times in one year. A couple of the other core modules changed significantly, causing weeks or months of delays. We had to upgrade to Catalyst 5.9 and Moose 2.0 recently, and that was quite painful (this is not a dig at either of those modules, just a reflection of our upgrade experience). We ran into this issue with the Moose 2 upgrade - it took a few days to track down:

http://freebsd.so14k.com/problems_with_perl_catalyst.shtml

So while we haven't run into unknown new bugs, the cost of upgrading modules on an ad hoc schedule has been large, and has outweighed most of the business value brought by the upgraded modules.

I'm not saying that you shouldn't upgrade modules when they are available. But such upgrades should be scheduled apart from feature releases, and not done ad hoc. This can be difficult when you have a loosely organized team and developers want to try out the latest release of a module.

Of course, this is all in the context of an application that generates revenue and has a user base that expects the app to remain largely bug free. If I'm developing something that has an alpha user base, I'll always go for the most recent release.

Also, these experiences are those of a large team developing large apps deployed to hundreds of servers. The experience of one developer deploying to a small environment will certainly be different.
Re: [Catalyst] Managing module regressions.
On 24 Jul 2012, at 19:17, Fred Moyer wrote:
> Also, these experiences are those of a large team developing large apps
> deployed to hundreds of servers. The experience of one developer deploying
> to a small environment will certainly be different.

I wonder how life would be different if you just deployed an entire perlbrew per app, so /opt/MyApp/bin/perl - this would make upgrading things much cheaper, as the cost would be per project rather than having a wider impact. It would also allow the team for each product to be strongly conservative (if that suited the team in question), or to run much newer versions of things (on younger apps / more agile teams) - rather than having to dictate a version policy organisation-wide.

I'm _not_ saying it would be better - everyone's environment and constraints are different, but thinking 'what if' about an entirely different strategy is entirely worthwhile. A lot of your pain seems to come from the fact that you can only have one version of every library on each system.

Cheers
t0m
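A minimal sketch of the per-app interpreter idea t0m describes, assuming the app lives under /opt/MyApp; the paths, perl version, and app name are placeholders, and the exact layout differs slightly from a hand-built /opt/MyApp/bin/perl.

    # Give the application its own perlbrew root and interpreter.
    export PERLBREW_ROOT=/opt/MyApp/perlbrew
    curl -L https://install.perlbrew.pl | bash     # bootstrap perlbrew itself
    source /opt/MyApp/perlbrew/etc/bashrc
    perlbrew install perl-5.16.0 --as MyApp-perl
    perlbrew use MyApp-perl
    perlbrew install-cpanm
    cpanm --installdeps /path/to/MyApp
    # The interpreter ends up at /opt/MyApp/perlbrew/perls/MyApp-perl/bin/perl,
    # so upgrading perl or any module touches nothing outside /opt/MyApp.

Either way the point is the same: each app owns its interpreter and module tree, so upgrade decisions are scoped per project rather than organisation-wide.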
Re: [Catalyst] Managing module regressions.
On Tue, Jul 24, 2012 at 1:03 PM, Tomas Doran wrote:
> On 24 Jul 2012, at 19:17, Fred Moyer wrote:
>> Also, these experiences are those of a large team developing large apps
>> deployed to hundreds of servers. The experience of one developer deploying
>> to a small environment will certainly be different.
>
> I wonder how life would be different if you just deployed an entire perlbrew
> per app, so /opt/MyApp/bin/perl - this would make upgrading things much
> cheaper, as the cost would be per project rather than having a wider impact.
> It would also allow the team for each product to be strongly conservative
> (if that suited the team in question), or to run much newer versions of
> things (on younger apps / more agile teams) - rather than having to dictate
> a version policy organisation-wide.

Agreed on the heterogeneous module approach for different apps. That part of it has worked well, but sometimes dependencies leak up the chain into your application. It's definitely not an easy problem. So far, though, I think we've had success with the current approach despite the pain points in certain parts.

> I'm _not_ saying it would be better - everyone's environment and constraints
> are different, but thinking 'what if' about an entirely different strategy is
> entirely worthwhile. A lot of your pain seems to come from the fact that you
> can only have one version of every library on each system.
>
> Cheers
> t0m