On Wed, Sep 24, 2014 at 7:34 AM, Paul Julius <[email protected]> wrote:
> Hi Ansible folks!
>
> Cross posting here from the Vagrant and Packer mailing lists, because I
> thought that people on the Ansible mailing list would probably have
> really great ideas to share with me.
>
> I am really enjoying my current workflow with Ansible. I love it
> because it models my Dev workflow, almost precisely, thereby getting me
> close to the "infrastructure as code" nirvana.
>
> We just wrapped up CITCON Zagreb [footnote:x] where I was talking to
> other DevOps folks about how they use Ansible. There were some
> interesting ideas! I wanted to ask on this mailing list what other
> people are doing. I would love any feedback.
>
> 1. Pickup story to automate deployment of something
> 2. Write broken acceptance test (in something like Cucumber)
>    1. Put acceptance test in "In Progress" bucket [footnote:xx]

If you're talking about your own unit tests, this is up to you here.

What follows may be perceived as a bit of a rant, and I don't want it to
be perceived as such, but I think most people who come from this
uber-testing culture have made things far too hard, create more work for
themselves, and as a result move slower - not really breaking less, just
doing extra work. Work that Ansible (and declarative CM systems in
general) is designed to make unnecessary.

As such, I strongly favor integration tests run against stage
environments to make sure things work, and coupling those same checks
with a rolling update against the production environment, where passing
the checks is the condition for adding each host back to the load
balanced pool there as well - ideally using the same tests, but that's
not always possible. (There's a sketch of what I mean below.)

While more of a unit test thing, I personally find Cucumber to be wasted
effort, because most product-management types (I guess I qualify) are
not going to be the ones writing test stubs, so it's usually fine to
just go straight to tests. That being said, a lot of niceness has come
out of the Ruby testing community - I just never felt Cucumber was one
of those things.

Good integration tests for ops are more important -- is this web service
responsive? Not things like "is the file permission of this file the
same as what is listed in the configuration system", as that just
duplicates configuration in two places.

> 1. Write broken functional test (in something like Serverspec)

I'm strongly not a fan of ServerSpec, because I think it fails to
understand the basis of configuration management in most places - that
you do not have to check for what the config tool already asserts. I'm
much more a fan of checking that the deployed application works. We've
written about this here:

http://docs.ansible.com/test_strategies.html
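To make that concrete, here's a minimal sketch of the rolling update
idea, where a live check of the application gates re-adding each host to
the pool. The take_out_of_pool/add_back_to_pool scripts and the /healthz
URL are hypothetical placeholders - substitute whatever your load
balancer and application actually provide:

    ---
    # rolling_update.yml - a sketch, not a drop-in playbook
    - hosts: webservers
      serial: 1                  # update one host at a time

      pre_tasks:
        - name: take host out of the load balanced pool
          command: /usr/local/bin/take_out_of_pool {{ inventory_hostname }}
          delegate_to: 127.0.0.1

      roles:
        - webserver              # the same role already proven in stage

      post_tasks:
        - name: check the deployed application actually responds
          uri: url=http://{{ inventory_hostname }}/healthz status_code=200
          delegate_to: 127.0.0.1

        # only reached if the check above passed; a failing host stays
        # out of the pool and the play stops before touching the rest
        - name: add host back to the load balanced pool
          command: /usr/local/bin/add_back_to_pool {{ inventory_hostname }}
          delegate_to: 127.0.0.1

Note the check asserts something the playbook doesn't already assert -
that the service answers - instead of re-checking file modes the roles
already declared.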
> 2. Write just enough code to make it pass - Vagrant + Ansible +
>    Virtualbox
> 3. Refactor - Good sense and Fowler's patterns

While some of his posts are useful, Fowler's refactoring suggests some
rather silly things for code - change one thing, recompile, re-run
tests - that would utterly sabotage development efficiency in most
cases. He tries to make code design a bit too mechanical, IMHO.
Somewhat related, on the Fowler-worship front:

http://perl.plover.com/yak/design/samples/slide001.html

I'm also not really sure how Design Patterns apply so much to a
configuration system :)

> 2. Run my pre-commit build - Packer + Ansible + AWS (or whatever
>    target platform)
> 3. Commit/push - Git (or VCS of choice)
> 4. Go to step 3, until acceptance test passes
> 5. Review with customer, maybe go back to step 2
> 6. Move acceptance test into the "Complete" bucket
> 7. Story complete
>
> At step 7, of course, my CI server picks up the change and sends it
> through the following stages of my pipeline:

Here is the outline of my slides from my talk to the NYC Continuous
Deployment group, which suggests a good dev -> stage -> prod workflow
and how to incorporate tests into a CD pipeline:

https://gist.githubusercontent.com/brokenthumbs/7fd7992fc1af0cfcc63d/raw/e0c750e00aeb6e62da04fd680346516cb88f8ae5/gistfile1.txt

> 1. Checkout from Git
> 2. Runs Vagrant+Ansible+AWS
>    1. Executes functional tests - Serverspec - 0% tolerance for broken
>       tests
>    2. Executes "Complete" Acceptance tests against the Vagrant
>       instance - 0% tolerance for breakages
>    3. Executes "In Progress" Acceptance tests against the Vagrant
>       instance - reporting on results and fail if a test passes
>       [footnote:xxx]

Using Vagrant to push to AWS here seems weird to me. I'd probably just
use the AWS modules in Ansible directly from Jenkins to trigger my tests
towards AWS, rather than kicking them off from a laptop. (A sketch of
that follows at the end of this mail.)

I guess the TLDR is:

(A) try to keep it simple
(B) unit tests don't usually make sense in prod - integration tests DO
    matter, and are supremely important, but spend time writing tests
    for your application, not tests for the config tool
(C) monitoring is supremely important
(D) build -> qa/stage -> prod
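P.S. Here's roughly what I mean by driving AWS straight from a CI job -
a minimal sketch using the ec2 module, where the AMI id, key pair, and
security group are placeholders you'd swap for your own:

    ---
    # ci_test_env.yml - Jenkins just runs: ansible-playbook ci_test_env.yml
    - hosts: localhost
      connection: local
      gather_facts: no

      tasks:
        - name: boot a throwaway instance for this build
          ec2:
            image: ami-xxxxxxxx       # placeholder AMI id
            instance_type: t2.micro
            key_name: ci-key          # placeholder key pair
            group: ci-test            # placeholder security group
            region: us-east-1
            wait: yes
            count: 1
          register: ec2

        - name: wait for SSH before configuring anything
          wait_for: host={{ item.public_ip }} port=22
          with_items: "{{ ec2.instances }}"

        - name: put the new instance into an in-memory group
          add_host: name={{ item.public_ip }} groups=just_launched
          with_items: "{{ ec2.instances }}"

    # configure it and run the same integration checks used elsewhere
    - hosts: just_launched
      remote_user: ec2-user
      roles:
        - webserver

Teardown can be another ec2 task with state=absent against the
registered instance ids, so nothing lingers between builds.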

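P.P.S. On (D): the simplest way I know to keep build -> qa/stage -> prod
honest is one set of playbooks with a separate inventory per
environment - a hypothetical layout, with site.yml standing in for your
top-level playbook:

    inventory/
        stage           # qa/stage hosts; integration tests gate promotion
        production      # prod hosts; same playbooks, same checks

    # promote by re-running the same playbook against the next inventory:
    #   ansible-playbook -i inventory/stage site.yml
    #   ansible-playbook -i inventory/production site.yml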