Hi Christopher,
I've spent some time getting my head around Hiera now, and would appreciate
some help with how to implement your suggestion:
>
> hiera hash with each environment and its associated mount point
> hiera lookup on a non-management node grabs its environment
> environment is used to determine which mount point via hiera_hash
> management node uses create_resources and hiera_hash to make its mount
> points
>
I've now got this :hierarchy:

:hierarchy:
  - "nodes/%{::hostname}"
  - "application_env/%{::application_env}"
  - common
So first I'm assigning a node via hiera to an environment:
hiera/nodes/box1.yaml:
---
application::env: "live"
Then I'm setting an external fact named application_env, which I pick up via
Hiera later on (not sure this construct is good practice; suggestions
welcome).
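For completeness, this is roughly how I persist the fact (assuming Facter's
external facts directory; the file name is my own choice):

$env = hiera('application::env')

# Write the hiera-assigned environment out as an external fact,
# so that %{::application_env} resolves on the next agent run.
file { '/etc/facter/facts.d/application_env.txt':
  ensure  => file,
  content => "application_env=${env}\n",
}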
Then I'm configuring unrelated environment-specific settings:
hiera/application_env/live.yaml:
---
application::setting1: true
application::setting2: false
application::setting3:
- 'foo'
- 'bar'
The thing is, as soon as I know the environment name, I know everything I
need to create an NFS mount resource inside the Puppet module:

mount { 'appnfs':
  device => "${mountip}:/application_${env}",
  fstype => 'nfs',
  name   => $mountdir,
}
The same would be true on my Management server, with the difference that
the name would be name => "${mountdir}_${env}".
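To make that concrete, here is how I imagine the Management server part
would look (the key 'application::nfs_mounts' and all values below are
invented by me):

hiera/common.yaml:
---
application::nfs_mounts:
  "appnfs_live":
    device: "192.0.2.10:/application_live"
    fstype: "nfs"
    name: "/srv/application/live"
  "appnfs_staging":
    device: "192.0.2.10:/application_staging"
    fstype: "nfs"
    name: "/srv/application/staging"

And in the manifest on the Management node:

# Look up the hash of all environments' mounts and declare each one;
# the hash keys become the resource titles.
$nfs_mounts = hiera_hash('application::nfs_mounts', {})
create_resources('mount', $nfs_mounts)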
So what I don't understand is the hiera hash per environment bit. I guess I
could create a single hash with all the environments instead of the one YAML
file per environment above, and nest the unrelated application settings in
each of those hash values. But wouldn't I then need a second, unrelated hash
for create_resources, listing all environments with all the settings for the
mount resource (device, fstype, name and so on)? At that point I'd have
duplicate configuration again: two separate hashes, one for everything else
and one for the mount resources. I actually have three NFS mounts per
environment, too; I only mentioned one for simplicity. Would those become a
third and fourth hash with mostly duplicate data? And if I need something
else per environment on the Management server in future ... a fifth hash for
that?
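Concretely, I'm afraid of ending up with parallel structures like these (all
key names invented):

application::environments:
  live:
    setting1: true
    setting2: false
  data_migration:
    setting1: false
    setting2: false

application::nfs_mounts:
  "appnfs_live":
    device: "192.0.2.10:/application_live"
    fstype: "nfs"
    name: "/srv/application/live"

where the list of environment names has to be kept in sync by hand across
every hash.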
My hypothetical virtual exported resource somehow sounds like a more
intuitive approach: an exported resource which is only virtual and can
therefore be realized more than once, namely by every application server
that exports it. This would also help when I have to configure an exception,
such as needing an NFS mount in all environments except the one called
data_migration. A hash of all environments, used with create_resources,
wouldn't pick that exception up, right? But maybe I'm not fully
understanding Hiera's possibilities here?
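The only workaround I can think of for such an exception is filtering in the
manifest, e.g. with the delete() function from puppetlabs-stdlib (the hash
key and resource title here are my own invented naming scheme):

$mounts = hiera_hash('application::nfs_mounts', {})
# Drop the data_migration entry before declaring the mounts.
$wanted = delete($mounts, 'appnfs_data_migration')
create_resources('mount', $wanted)

But then the exception lives in code rather than in data, which seems to
defeat the purpose.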
Thanks
Stephan
--
You received this message because you are subscribed to the Google Groups
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/puppet-users/79ec4486-4808-4982-9c25-9d0eddf1a844%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.