On Mon, Feb 18, 2013 at 3:21 AM, fatmcgav <[email protected]> wrote:

>
> On 18 February 2013 10:50, Andy Parker <[email protected]> wrote:
>
>> I just took a look and see that you got no responses on puppet-users.
>> That is unfortunate :(
>>
>> On Mon, Feb 18, 2013 at 12:13 AM, Gavin Williams <[email protected]> wrote:
>>
>>> Morning All
>>>
>>> I posted this on Puppet-users a few days ago, but I thought I'd post it
>>> on here as well to get a Dev's view-point...
>>>
>>> Firstly, apologies for the length of this post, however I thought it
>>> probably most useful to fully outline the challenge and the desired
>>> result...
>>>
>>> Ok, so we're in the process of Puppetizing our Oracle/NetApp platform
>>> for Live/DR running.
>>>
>>> In the current manual process, upon setting up a new database, a set of
>>> volumes is created to contain the various Oracle DB elements, and these
>>> are then SnapMirror'd to the DR site.
>>> This SnapMirror process requires a period of time to copy the base data
>>> over... This time period is directly proportional to the amount of data
>>> involved... I.e. a copy of 20GB may take an hour, 200GB may take 10
>>> hours...
>>> During this period, the SnapMirror resource is in an 'initializing'
>>> state. Once the data copy is complete, then the resource will change to an
>>> 'initialized' state.
>>> The next step in the process is then to break the relationship so that
>>> the DR end can be used in a R/W mode...
>>>
>>> Now, in order to Puppetize this, I need to be able to replicate the
>>> above behaviour...
>>> I've got Puppet to create and initialize the relationship, and that
>>> works as expected. However Puppet doesn't currently care about the
>>> relationship state. Now that's easy enough to add in as a new property
>>> against the type/provider.
>>>
>>
>> Based on how you are describing this, I'm not sure that expressing it as
>> a parameter is best. It sounds like you are describing a situation where
>> there are a few states that you care about, but transitioning between those
>> states requires sitting in other "non-interesting" states for a while.
>> Describing the "non-interesting" states pushes the management of those
>> state transitions outside of puppet and possibly makes them harder to work
>> with.
>>
>
> Ok, that makes sense... Unless I do lots of masking and mapping of the
> intermediate statuses into something that Puppet knows, but again, that adds
> complication etc...
>
>
>>
>>
>>> However what I'm struggling to understand is how, or if it's even
>>> possible, to automate the switch from 'Initialized' state to a 'Broken'
>>> state upon completion of the initialization stage???
>>>
>>>
>> Yeah. Normally puppet deals with achieving the desired state in a single
>> run of puppet. So one possible solution is to have puppet block! I really
>> don't think that in this situation that would be a good idea, since it
>> would leave everything else on the machine unmanaged for an unknown length
>> of time.
>>
>
> Yeah, we could be looking at transfer times of 24-48 hours on some of our
> larger datasets, so we wouldn't want Puppet blocking for that long a period...
>

So just to explore this a bit. An ensurable resource by default has
present and absent states, and the transition between them is pretty
straightforward:

present -> absent (def destroy)
absent -> present (def create)

I'm assuming present -> absent is short enough that you can wait for the
process to complete, so create is the only problematic transition.

For now the closest thing appears to be a transition state that fails
(intending to block dependent resources):

absent -> initializing -> present

The custom ensurable block:
  ensurable do
    newvalue(:present) do
      if provider.initialized?
        # creation already started; report how far along it is
        provider.progress
      else
        provider.create
      end
    end

    newvalue(:absent) do
      provider.destroy
    end

    newvalue(:initializing) do
      # still converging; report progress (and fail) until complete
      provider.progress
    end
  end

So when a resource is in an initializing state, just report back the
progress status and fail:
  def progress
    # Demo stand-in: a real provider would query the backend here.
    # In this example the file's size in bytes doubles as a percentage.
    percent = File.stat(resource[:name]).size / 100.0
    fail("Creation in progress #{percent}% complete.")
  end
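For a real SnapMirror provider, `initialized?` and `progress` would have to
map the controller's raw relationship states onto the handful of states the
Puppet type understands — the "masking and mapping" Gavin mentioned. A
minimal sketch of that mapping; the raw state names here are assumptions
about what the controller reports, not a verified ONTAP state list:

```ruby
# Map raw SnapMirror relationship states onto the states the Puppet
# type understands. The raw state names are illustrative assumptions.
RAW_TO_ENSURE = {
  'uninitialized' => :absent,        # relationship exists, no base copy yet
  'transferring'  => :initializing,  # base copy in progress
  'snapmirrored'  => :present,       # base copy complete, mirror established
  'broken-off'    => :broken,        # relationship broken, DR end is R/W
}.freeze

def ensure_state(raw_state)
  RAW_TO_ENSURE.fetch(raw_state.downcase, :unknown)
end
```

Anything the map doesn't recognise comes back as :unknown rather than
guessing, so the provider can fail loudly on states nobody anticipated.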

Here's the output example:

# initial create
$ puppet apply tests/transition.pp
err: /Stage[main]//Transition[/tmp/demo_a]/ensure: change from absent to
present failed: Creation in progress 0.0% complete.
notice: /Stage[main]//Notify[complete]: Dependency Transition[/tmp/demo_a]
has failures: true
warning: /Stage[main]//Notify[complete]: Skipping because of failed
dependencies
notice: Finished catalog run in 0.08 seconds

# in progress
$ puppet apply tests/transition.pp
err: /Stage[main]//Transition[/tmp/demo_a]/ensure: change from absent to
present failed: Creation in progress 12% complete.
notice: /Stage[main]//Notify[complete]: Dependency Transition[/tmp/demo_a]
has failures: true
warning: /Stage[main]//Notify[complete]: Skipping because of failed
dependencies
notice: Finished catalog run in 0.08 seconds

# finished:
$ puppet apply tests/transition.pp
notice: complete
notice: /Stage[main]//Notify[complete]/message: defined 'message' as
'complete'
notice: Finished catalog run in 0.08 seconds
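For reference, the tests/transition.pp manifest driving that output would
look roughly like this — a sketch reconstructed from the output above, so
the details of the actual manifest may differ:

```puppet
# A transition resource that keeps failing (reporting progress)
# until the underlying copy completes.
transition { '/tmp/demo_a':
  ensure => present,
}

# Skipped while the transition is still failing; runs once it converges.
notify { 'complete':
  require => Transition['/tmp/demo_a'],
}
```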

>>> Now these database definitions are currently driven from a YAML backend
>>> which maintains information such as database name, volume information,
>>> primary netapp controller, replication netapp controller, etc... Currently,
>>> this YAML file is just a file on the puppet master... However there are
>>> ambitions to move this into a more dynamic backend, such as CouchDB or
>>> similar... So that opens the possibility to automatically update the YAML
>>> resource state.. However Puppet still needs to be able to support updating
>>> that backend based on the information it gets from the actual resource...
>>>
>>> So to flow it out:
>>>
>>>    1. Create a new database in backend ->
>>>    2. Puppet creates volumes on primary ->
>>>    3. Data is added to volumes ->
>>>    4. Backend updated to indicate replication is required ->
>>>    5. Puppet creates volumes on Secondary and adds Snapmirror
>>>    relationship ->
>>>    6. Snapmirror initializes in background ->
>>>    7. Puppet periodically runs against network device and checks
>>>    resource state ->
>>>    8. Backend resource state is updated following each run? ->
>>>    9. Snapmirror initialization completes ->
>>>    10. Puppet runs, detects new resource state and then triggers break?
>>>    11. Backend resource state updated to 'broken'?
>>>
>>> Now 1 to 7 above are fine, but 8 to 11 are where I'm a bit unsure...
>>>
>> I think you have most of the picture here. Puppet manages some of the
>> transitions between states in order to get to that final "broken" state.
>> Using defined resource types or parameterized classes won't get you there
>> since the information about whether the next step of the management of the
>> resource can be taken is on the node. As you said earlier, it is once the
>> snapmirror process reaches the "initialized" state that puppet should
>> finish its job.
>>
>> Since the data needs to come from the node, there are a couple of
>> choices:
>>   * a custom fact: doesn't seem good since you would be encoding in
>> facter the presence of particular resources
>>   * an ENC that probes the Snapmirror system: seems doable, but once again
>> encodes the presence of particular resources outside the manifests
>>   * a custom type: probably the best solution, the replication itself is
>> a kind of resource that you want to manage, and what needs to be done is
>> heavily dependent on the current state and desired state of the resource
>>
>> So I would suggest creating a custom type and provider for a "replicated
>> data" resource, or even try splitting it up into several different
>> resources. Doing this will let you make the final transition without having
>> to change the catalog.
>>
>> I'll admit, though, that puppet doesn't really have a concept of an "in
>> progress" convergence of a resource, so I'm not sure how the report will
>> work out for these kinds of resources. I suspect that it would show a
>> change every time that puppet runs and the replication is still in progress.
>>
>
The problem is that failing is a bit misleading. It would certainly be an
interesting use case if we could mark the resource as pending and have
subsequent resources simply noop, but as it stands we can't do anything
like this:

$ puppet apply tests/transition.pp
warning: Could not retrieve fact fqdn
notice: /Stage[main]//Transition[/tmp/demo_a]/ensure: current_value absent,
should be present Progress: 0.0 % (pending)
notice: /Stage[main]//Notify[complete]/message: current_value absent,
should be complete (noop)
notice: Finished catalog run in 0.08 seconds
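Whatever the reporting ends up looking like, the provider's decision for
Gavin's steps 8-11 amounts to a small state machine: observe the
relationship state each run, take the one transition that applies. A sketch
with illustrative state and action names (not an actual SnapMirror API):

```ruby
# Decide what the provider should do on each Puppet run, given the
# relationship state it observes. Names are illustrative only.
def next_action(observed_state)
  case observed_state
  when :absent       then :create_and_initialize  # steps 5-6
  when :initializing then :report_progress        # steps 7-8, run to run
  when :initialized  then :break_relationship     # step 10
  when :broken       then :nothing                # step 11, converged
  else :nothing                                   # unrecognised: do nothing
  end
end
```

Each Puppet run only ever advances one step, which is what makes the
periodic-run model in steps 7-10 work without blocking.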

Andy, is this worth filing a feature request?

Thanks,

Nan

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Developers" group.