[Puppet-dev] Feature proposal: "metadata" metaparameter

2020-05-14 Thread Reid Vandewiele
[...] attaching data to Puppet resources non-operatively, and fit the use cases given above. 

https://en.wikipedia.org/wiki/Metadata

The term "annotation" was considered as well, based on similar constructs in 
Kubernetes. The definitions of both nouns more or less match the use case for 
the proposed feature, but the natural usage and rich explanations for 
"metadata" are a clear and obvious fit, whereas those for "annotation" are 
less so.

https://en.wikipedia.org/wiki/Annotation


*~ fin ~*




The corresponding ticket for this proposal is 
https://tickets.puppetlabs.com/browse/PUP-10491.

Initial discussion is best suited for the mailing list. This thread is now 
open for feedback and discussion. After any initial discussion, updates 
will be made to the ticket.

-- 
Reid Vandewiele
Puppet Solutions Architect



[Puppet-dev] Re: Has anyone already developed an Elasticsearch backend to Hiera?

2018-04-02 Thread Reid Vandewiele
Hey Nick,

A particular phrase you used caught my attention: "Elasticsearch holds the 
Hiera config for a number of nodes."

Putting the words "elasticsearch" and "hiera backend" together can sound 
scary if it's done wrong, but I have seen backends built to solve the 
"config for individual nodes" problem in a way that complements Hiera's 
default YAML backend, without noticeably sacrificing performance, by making 
a carefully limited number of calls to the external backend per catalog 
compile. Most generalized data that doesn't need to change frequently or 
programmatically is still stored in YAML files alongside the code.

When that's done, the implementing hiera.yaml file may look something like 
this:

hierarchy:
  - name: 'Per-node data'
    data_hash: elasticsearch_data
    uri: 'http://localhost:9200'
    path: "%{trusted.certname}"
  - name: 'Yaml data'
    data_hash: yaml_data
    paths:
      - "role/%{trusted.extensions.pp_role}"
      - "datacenter/%{trusted.extensions.pp_datacenter}"
      - "common"


The most important bit showcased here is that for performance, the 
*data_hash* backend type is used. Hiera can make thousands of lookup calls 
per catalog compile, so something like lookup_key can get expensive over an 
API. data_hash front-loads all the work, returning a batch of data from one 
operation which is then cached and consulted for the numerous lookups 
that'll come from automatic parameter lookup.

There's an example of how to do that 
in https://github.com/uphillian/http_data_hash.
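
For a rough sense of the shape of such a backend, a minimal data_hash 
function might look something like the sketch below. This is not the module 
linked above; the function name, the Elasticsearch index layout, and the 
assumption that the hierarchy level's uri already interpolates the certname 
(e.g. uri: "http://localhost:9200/hieradata/_doc/%{trusted.certname}") are 
all illustrative.

# lib/puppet/functions/elasticsearch_data.rb (hypothetical)
require 'net/http'
require 'json'
require 'uri'

Puppet::Functions.create_function(:elasticsearch_data) do
  dispatch :elasticsearch_data do
    param 'Hash', :options
    param 'Puppet::LookupContext', :context
  end

  def elasticsearch_data(options, context)
    uri = options['uri']
    # Cache the parsed document so the many automatic parameter lookups in a
    # single compile don't each trigger an HTTP round trip.
    return context.cached_value(uri) if context.cache_has_key(uri)

    response = Net::HTTP.get_response(URI(uri))
    data = if response.is_a?(Net::HTTPSuccess)
             # Elasticsearch wraps the stored document in a "_source" field.
             JSON.parse(response.body).fetch('_source', {})
           else
             context.explain { "No data document found at #{uri}" }
             {}
           end

    context.cache(uri, data)
  end
end

The whole per-node document comes back in one request and is then consulted 
for every key Hiera asks about, which is what keeps the per-compile cost low.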

To John's point, I wouldn't hesitate to run your use case by an expert if 
you have the option.

Cheers,
~Reid

On Monday, April 2, 2018 at 7:47:37 AM UTC-7, John Bollinger wrote:
>
>
>
> On Saturday, March 31, 2018 at 5:59:12 AM UTC-5, nick@countersight.co 
> wrote:
>>
>> Thanks for your response John, 
>>
>> I appreciate you taking a quick look around to see if anyone else has 
>> already done this. I had come to the same conclusion, that if someone has 
>> already, they most likely haven't shared it. 
>>
>> You raise valid points about EL being generally pretty unsuitable as a 
>> Hiera backend. However, the project I am working on already has an 
>> Elasticsearch instance running in it, so there would be next to no 
>> performance overhead for me. It uses a web interface to write out YAML 
>> files that are fed into Hiera for a 'puppet apply' run which configures 
>> various aspects of the system. By using Elastic instead of YAML files, I 
>> can eliminate some of the issues surrounding concurrent access. It also 
>> means backups are simplified, as I'd just need to back up ES.
>>
>
>
> With an ES instance already running, I agree that you have negligible 
> additional *memory* overhead to consider, but that doesn't do anything 
> about *performance* overhead.  Nevertheless, the (speculative) 
> performance impact is not necessarily big; you might well find it entirely 
> tolerable, especially for the kind of usage you describe.  It will depend 
> in part on how, exactly, you implement the details.
>
>
>> Is writing a proof-of-concept Hiera backend something that someone with 
>> reasonable coding skills would be able to knock out in a few hours? 
>>
>>
> It depends on what degree of integration you want to achieve.  If you 
> start with the existing YAML back end, and simply hack it to retrieve its 
> target YAML objects from ES instead of from the file system, then yes, I 
> think that could be done in a few hours.  It would mean ES offering up 
> relatively few, relatively large chunks of YAML, which I am supposing would 
> be stored as whole objects in the database.  I think that would meet your 
> concurrency and backup objectives.
>
> If you want a deeper integration, such as having your back end performing 
> individual key lookups in ES, then you might hack up an initial 
> implementation in a few hours, but I would want a lot longer to test it 
> out. I would want someone with detailed knowledge of Hiera and its 
> capabilities to oversee the testing, too, or at least to review it.  Even 
> more so to whatever extent you have in mind to implement Hiera 
> prioritization, merging behavior, interpolations, and / or other operations 
> affecting what data Hiera presents to callers.  If there is an actual 
> budget for this then I believe Puppet, Inc. offers consulting services, or 
> I'm sure you could find a third-party consultant if you prefer.
>
>
> John
>
>



Re: [Puppet-dev] lookup from external script

2017-08-22 Thread Reid Vandewiele
For an abandoned experiment a while back I went to the trouble of mostly 
getting an external Ruby script working that used Hiera 5 as a library. I 
don't know that you still want to do this, since it sounds like there may be 
other options per the conversation in the thread, but I'll post the following 
link as reference code in case it's useful. (Ferreting out which specific 
parts of the code relate to using Hiera 5 as a library is left as an 
exercise for the reader.)

https://github.com/reidmv/r10k_puppetfile_ref_lookup/blob/master/r10k_puppetfile_ref_lookup.rb

~Reid

On Monday, August 21, 2017 at 8:50:57 AM UTC-7, Craig Dunn wrote:
>
>
>
> > Maybe this is a bit overkill for your requirements, but this was
>> > actually one use case for Jerakia (http://jerakia.io).  Hiera 5 can use
>> > it as a backend from your Puppet implementation, and because it runs
>> > over an HTTP API other tools can easily hook into the same data
>> > lookups... for example there is now an Ansible lookup plugin that can
>> > pull the same data as Puppet.  It also has a client library written in
>> > Ruby which would hook into your script.
>>
>> So are you reaching out to Hiera 5 from Jerakia to do that and how are
>> you doing it?
>>
>
> The other way around: Hiera reaches out to Jerakia. Jerakia is 
> standalone, but one way it can be used is as a backend to Hiera - it can 
> integrate with Puppet using a Hiera 5 backend that ships with the 
> crayfishx/jerakia Puppet module.  http://jerakia.io/integration/puppet
>
>  
>
>>
>> Are you, as Henrik suggested, doing a compilation? What I saw as a big
>> difference is the whole context/scope that Hiera 5 is aware of for
>> doing its lookups, which works for anything being called from puppet
>> code, but is kinda hard for an external query that must mimic that 
>> context.
>>
>
> No - if you are talking about looking up only data that is in your 
> hieradata path in a specific environment, it would be easy to get Jerakia 
> to read from the same file hierarchy. If you're using features inside of 
> Hiera / Puppet such as module-level Hiera lookups and Puppet variable 
> interpolation, then this would, as Henrik said, require you to be in an 
> actual Puppet environment, so it would be difficult to expose this in the 
> same way. Your original post said you are on Hiera 3 though, so it's highly 
> likely the data you have to date would be very easy to expose using Jerakia. 
> Scope (e.g. facts/variables, etc.) shouldn't be an issue; you can hook up 
> Jerakia to pull the scope data from PuppetDB when you query data from 
> another tool.
>
>
> Craig
>



[Puppet-dev] Re: Hiera Merge

2017-08-09 Thread Reid Vandewiele
If you're just trying to transform the data in Puppet code and assuming (as 
Henrik was) that you can't change how the data is stored, something like 
this might work.

# Assuming $was_data is the hash of data from Hiera
$common_data = $was_data.filter |$pair| { $pair[0] != 'was_dmgr_data' }

$hash1 = { 'esa-group-service'        => $common_data + $was_data['was_dmgr_data']['esa-group-service'] }
$hash2 = { 'esa-user-profile-service' => $common_data + $was_data['was_dmgr_data']['esa-user-profile-service'] }

~Reid

On Wednesday, August 9, 2017 at 6:15:41 AM UTC-7, ggun wrote:
>
> Thanks
>
> On Tuesday, August 8, 2017 at 7:10:13 PM UTC-4, ggun wrote:
>>
>> Hi Experts,
>>
>> I have a requirement as below.
>> I need to create a Hash from below hiera data.
>>
>> was_data:
>>   hs3sourcepath: 'glic.binaries/websphere'
>>   hdaresponse_file: /opt/software/WAS8.5.5.10_Install.xml
>>   hibmagentpath: 
>> /opt/software/agent.installer.linux.gtk.x86_64_1.8.2000.20150303_1526.zip
>>   hbase_dir: '/opt/was/was855'
>>   hinstance_name: WebSphere
>>   was_dmgr_data:
>> esa-group-service:  
>>   hgroup: websph
>>   hdmgr_profile: TST
>>   hdmgr_cell: CELL
>>   hcluster_name: CLUSTER
>>   hpptdmgrsrvport: 8080
>> esa-user-profile-service:
>>   hdmgr_profile: ABC
>>   hdmgr_cell: PQS
>>   hcluster_name: IOP
>>   hpptdmgrsrvport: 
>>
>>
>> I need a hash of above data as 
>> Hash 1 : 
>> esa-group-service:   
>>   hgroup: websph
>>   hdmgr_profile: TST
>>   hdmgr_cell: CELL
>>   hcluster_name: CLUSTER
>>   hpptdmgrsrvport: 8080
>>   hs3sourcepath: 'glic.binaries/websphere'
>>   hdaresponse_file: /opt/software/WAS8.5.5.10_Install.xml
>>   hibmagentpath: 
>> /opt/software/agent.installer.linux.gtk.x86_64_1.8.2000.20150303_1526.zip
>>   hbase_dir: '/opt/was/was855'
>>   hinstance_name: WebSphere
>>
>> Hash 2
>>   esa-user-profile-service:
>>   hdmgr_profile: ABC
>>   hdmgr_cell: PQS
>>   hcluster_name: IOP
>>   hpptdmgrsrvport: 
>>   hs3sourcepath: 'glic.binaries/websphere'
>>   hdaresponse_file: /opt/software/WAS8.5.5.10_Install.xml
>>   hibmagentpath: 
>> /opt/software/agent.installer.linux.gtk.x86_64_1.8.2000.20150303_1526.zip
>>   hbase_dir: '/opt/was/was855'
>>   hinstance_name: WebSphere
>>
>> So I am trying to merge the hash of esa-group-service to was_data and 
>> esa-user-profile-service to was_data.
>>
>> Please let me know if there is a way
>>
>



Re: [Puppet-dev] Exists? is called before prefetching

2017-07-17 Thread Reid Vandewiele


On Sunday, July 16, 2017 at 6:04:04 AM UTC-7, bert hajee wrote:
>
> Trevor, Reid,
>
> Thanks for taking the time to look at this. 
>
> Exists should be checking the @property_hash object which is populated by 
>> the instances method.
>> Something like:
>> def exists?
>>   @property_hash[:ensure] == :present
>> end
>
>
>
> The real type does implement the exists? method by looking at the property 
> hash. But when using the transition type, the property hash is not yet 
> filled correctly, because the instances and prefetch methods are not 
> (yet) called.
>
>
> On Friday, 14 July 2017 22:57:54 UTC+2, Reid Vandewiele wrote:
>>
>> I haven't dived into the code recently but depending on when prefetching 
>> happens, it might be possible the Transition type is causing an "early" 
>> invocation of #exists?. This is because Transition invokes a check of the 
>> resource it is "prior to", thusly: 
>> https://github.com/puppetlabs/puppetlabs-transition/blob/0.1.1/lib/puppet/provider/transition/ruby.rb#L68
>>
>> If it's the case that prefetch isn't called until the first instance of a 
>> given type is evaluated, that might be something that's happening *after* 
>> the Transition resource does its thing. Which could help explain why your 
>> exists?() method, which uses prefetched data, isn't working.
>>
>>  
>  This seems to be what is happening.
>  
> But what is the best way forward to solve this?
>

This could be seen as a problem with Transition, which would best be fixed 
in the Transition module. The basic idea would be to make sure a 
provider has done any prefetching it needs to before calling safe_insync?() 
on a resource. That module is effectively open source though, so it depends 
on someone being aware of the problem and having spare time to fix it.

Since that may not happen quickly, a workaround might be to ensure that at 
least one instance of the type in question is evaluated prior to the 
transition resource being evaluated. E.g.

file { 'prefetch':
  path   => '/dev/null',
  ensure => present,
}

transition { 'transition a file resource':
  require    => File['prefetch'],
  resource   => File['/path/to/real/file'],
  attributes => { ensure => absent },
  prior_to   => Service['example'],
}


This is just a mock example. The only thing it really shows is that a 
file resource called 'prefetch' exists, doesn't really do anything, but is 
guaranteed to be evaluated before a transition involving a different, real 
file.



Re: [Puppet-dev] Exists? is called before prefetching

2017-07-14 Thread Reid Vandewiele
I haven't dived into the code recently but depending on when prefetching 
happens, it might be possible the Transition type is causing an "early" 
invocation of #exists?. This is because Transition invokes a check of the 
resource it is "prior to", 
thusly: 
https://github.com/puppetlabs/puppetlabs-transition/blob/0.1.1/lib/puppet/provider/transition/ruby.rb#L68

If it's the case that prefetch isn't called until the first instance of a 
given type is evaluated, that might be something that's happening *after* 
the Transition resource does its thing. Which could help explain why your 
exists?() method, which uses prefetched data, isn't working.
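
(For context, the conventional pattern under discussion - self.instances 
discovering resources, self.prefetch seeding @property_hash, and exists? 
reading it - looks roughly like the sketch below. The type name comes from 
Bert's example; the discovery helper is hypothetical.)

# lib/puppet/provider/error_type/ruby.rb (sketch)
Puppet::Type.type(:error_type).provide(:ruby) do
  # Hypothetical helper that discovers existing things on the system.
  def self.discover_things
    []
  end

  # instances enumerates every existing resource once per run.
  def self.instances
    discover_things.map do |thing|
      new(name: thing[:name], ensure: :present, prop: thing[:prop])
    end
  end

  # prefetch matches discovered instances to catalog resources and seeds
  # each resource's @property_hash before its properties are evaluated.
  def self.prefetch(resources)
    instances.each do |prov|
      if (resource = resources[prov.name])
        resource.provider = prov
      end
    end
  end

  # exists? relies entirely on the prefetched @property_hash - which is why
  # it misbehaves if something calls it before prefetch has run.
  def exists?
    @property_hash[:ensure] == :present
  end
end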

If, however, prefetch is called for ALL providers before ANY resources are 
evaluated, you can ignore everything else I've said above because it's all 
speculation from a false premise. :)

No solutions here, just largely unresearched speculation, but maybe some 
useful thoughts. :)

~Reid

On Thursday, July 13, 2017 at 6:50:02 AM UTC-7, Trevor Vaughan wrote:
>
> Sorry for the double post but this is probably the best walkthrough on the 
> subject out there presently 
> http://garylarizza.com/blog/2013/12/15/seriously-what-is-this-provider-doing/
>
> On Thu, Jul 13, 2017 at 9:48 AM, Trevor Vaughan wrote:
>
>> Hi Bert,
>>
>> Exists should be checking the @property_hash object which is populated by 
>> the instances method.
>>
>> Something like:
>>
>> def exists?
>>   @property_hash[:ensure] == :present
>> end
>>
>> Trevor
>>
>> On Mon, Jul 10, 2017 at 9:24 AM, bert hajee wrote:
>>
>>> Hallo,
>>>
>>> I'm using the puppet module transition to help me out of a nasty puppet 
>>> definition situation. But I noticed it sometimes is not idempotent. We 
>>> noticed this on a very complex custom type. To make sure the issue is as 
>>> clear as can be, we extracted a minimal type and provider to simulate 
>>> this. Here 
>>>  
>>> you can find the type/provider code. 
>>>
>>> Here is some simple example code, based on a simple custom type:
>>>
>>>
>>> transition { 'transition':
>>>   resource   => File['/a.a'],
>>>   attributes => {
>>>     content => 'temp',
>>>   },
>>>   prior_to   => Error_type['a'],
>>> }
>>>
>>> file { '/a.a':
>>>   ensure  => 'present',
>>>   content => 'aa',
>>> }
>>>
>>> error_type { 'a':
>>>   ensure => 'present',
>>>   prop   => 'aaa',
>>> }
>>>
>>> Using this example code, I noticed that the exists? is called before 
>>> prefetching is done. Before using this module, I was under the impression 
>>> the exists? method was only called later in the sequence. Therefore my 
>>> exists? method was based on prefetched information.
>>>
>>> Do I:
>>>
>>>- register a bug/question in the module?
>>>- change the type/provider implementation to check if the specific 
>>>resource is already prefetched (would probably need to be done for a lot 
>>>more types and providers)
>>>- register a bug/question for puppet, making sure the prefetch is 
>>>ALWAYS called before exists?
>>>
>>>
>>> All suggestions are welcome!
>>>
>>> Regards,
>>>
>>> Bert 
>>>
>>>
>>>
>>>
>>
>>
>>
>> -- 
>> Trevor Vaughan
>> Vice President, Onyx Point, Inc
>> (410) 541-6699 x788
>>
>> -- This account not approved for unencrypted proprietary information --
>>
>
>
>
> -- 
> Trevor Vaughan
> Vice President, Onyx Point, Inc
> (410) 541-6699 x788
>
> -- This account not approved for unencrypted proprietary information --
>



Re: [Puppet-dev] send facts once again after catalog applied

2016-08-22 Thread Reid Vandewiele
On Monday, August 22, 2016 at 12:53:12 AM UTC-7, Craig Dunn wrote:

>
> See 
>
> # puppet help facts upload
>
> That sounds like what you want... you may need to tweak your auth.conf 
> settings too.
>

I think `puppet facts upload` was removed in Puppet 4, unfortunately. 
There's this ticket currently 
open: https://tickets.puppetlabs.com/browse/PUP-5934

Because puppetserver won't accept facts directly (it'll only accept them as 
part of a catalog request) I don't know if there's a reasonable workaround 
that's doable today.



Re: [Puppet-dev] External facts based on OS

2016-07-28 Thread Reid Vandewiele
Regardless of the fact that this is on the developers' list, it's worth 
mentioning that from a use-case perspective it may not actually be 
necessary to worry about that too much. Even though the entire contents of 
the directory may be synced (and Michael, correct me if I'm wrong about 
this), when it actually comes time to execute external facts Puppet is 
already intelligent enough to execute only certain kinds of scripts on 
Windows, and others on POSIX systems. Given that your use case sounds like 
it's just for Windows vs. Linux, this should be sufficient to run facts 
cleanly and return values only for relevant platforms, even though all 
systems will have a copy of all facts, including the ones they can't and 
won't evaluate due to platform incompatibility.

https://docs.puppet.com/facter/3.1/custom_facts.html#executable-facts-unix
https://docs.puppet.com/facter/3.1/custom_facts.html#executable-facts-windows

If you need to worry about fine-grained differences like between RHEL and 
Debian, then today at least the bash scripts themselves would need to 
contain that logic.

~Reid

On Wednesday, July 27, 2016 at 5:55:57 PM UTC+1, Michael Smith wrote:
>
> There currently isn't a way.
>
> 
>
> Since this is the developer's list, I'll go into some details about what 
> would be required for it to work:
>
> Currently pluginsync happens before any facts are returned to the system. 
> This makes it difficult to do anything based on OS version. I've had chats 
> about changing Puppet's communication a bit so it sends some core facts 
> (this might have a lot of overlap with things that could be trusted facts - 
> osfamily probably isn't going to change without requiring a new 
> certificate) before pluginsync so they can be used to determine the 
> environment during node classification; using them to do more precise 
> pluginsync also makes sense.
>
> Then you need a way to determine which plugins to sync. I see three options
> - have a way to query Facter for platform-specific criteria for external 
> facts, and send that to the master to be used in evaluating which facts to 
> sync
> - have a hierarchy in external facts, so you can explicitly target facts 
> at certain classes of machines based on the core facts sent before 
> pluginsync; that might interact in useful ways with Facter's upcoming 
> config file, but would need some way in each module to also specify what 
> facts are synced where. Something simple might be syncing based on 
> osfamily, so you'd have
> //facts.d/windows
> //facts.d/redhat
> etc.
>
>
> On Wed, Jul 27, 2016 at 1:45 AM, Aditya Gupta wrote:
>
>> Hello All,
>>
>> I have created two types of external facts :
>>
>> 1. windows based on powershell
>> 2. Linux based on bash
>>
>> And I have placed these in the //facts.d/ folder.
>>
>> I have to copy everything that is present in this folder to all the 
>> clients. So I am using pluginsync=true.
>>
>> Is there a way to transfer external fact scripts based on operating 
>> system?
>>
>> Thanks,
>> Aditya
>>
>> --
>>
>
 



Re: [Puppet-dev] implementation of faces

2015-09-22 Thread Reid Vandewiele
On Tuesday, September 22, 2015 at 11:43:30 AM UTC-7, Reid Vandewiele wrote:
>
>
> What I know about faces comes from tinkering with them on and off, and 
> writing one or two over the last couple of years (only one of which I can 
> find/remember now). I've tinkered with the `puppet node` face, the `puppet 
> node_aws`, `puppet node_gce` faces, and written 
> https://forge.puppetlabs.com/tse/nimbus.
>

Ah ha! Remembered and found another one: 
https://github.com/reidmv/puppet-module-puppet_certificate/blob/master/lib/puppet/provider/puppet_certificate/ruby.rb#L22.

Finding that one re-emphasized for me why CLI and API equivalence is the 
big win of Faces. The other half of the Face user experience is not just 
writing them, but being able to use them when writing other Puppet 
components. It's great to have that feeling of (probably false) 
knowledgeability you get from knowing things on the Puppet CLI, and to be 
able to leverage it when you branch out and start trying to write more 
automated extensions of it - e.g. types and providers, as shown in this 
example.
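
As a concrete illustration of that equivalence, a minimal Face might look 
something like the sketch below. The face name, action, and option are made 
up for illustration; it is not an existing face.

# lib/puppet/face/hello.rb (hypothetical)
require 'puppet/face'

Puppet::Face.define(:hello, '0.1.0') do
  summary 'Say hello from a Puppet Face'

  action(:greet) do
    summary 'Greet someone by name'

    option '--name NAME' do
      summary 'Who to greet'
      default_to { 'world' }
    end

    when_invoked do |options|
      "Hello, #{options[:name]}!"
    end
  end
end

Paired with a small application stub (a subclass of
Puppet::Application::FaceBase), the same action is available both from the
command line (puppet hello greet --name Reid) and from Ruby
(Puppet::Face[:hello, '0.1.0'].greet(name: 'Reid')), which is what makes
faces so convenient to reuse from types and providers.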

~Reid



Re: [Puppet-dev] implementation of faces

2015-09-22 Thread Reid Vandewiele
On Tuesday, September 22, 2015 at 9:17:22 AM UTC-7, Luke Kanies wrote:
>
> On Sep 21, 2015, at 7:52 PM, Corey Osman  > wrote: 
> > 
> > Hi, 
> > 
> > I remember when the puppet 2.7 release came out with support for faces 
> was all the rage.  The faces API seemed pretty slick as its a pluggable 
> system that allows the plugin to implement options as well.  I am curious 
> if there is any design notes or blog that someone followed in order to 
> create this system as I am looking to implement a similar pluggable feature 
> for a project I have.   
>

As an end-user of Faces I don't have insight into how they work or the 
design process, but I can share a little bit of what makes them awesome and 
what doesn't work at all. I'm a very light user and I'm sure I don't use 
the full suite of functionality but I've found writing and using them to be 
easy and enjoyable (except for the lack of documentation).

What I know about faces comes from tinkering with them on and off, and 
writing one or two over the last couple of years (only one of which I can 
find/remember now). I've tinkered with the `puppet node` face, the `puppet 
node_aws`, `puppet node_gce` faces, and 
written https://forge.puppetlabs.com/tse/nimbus.

What I like as an end-user:

   - CLI and API equivalence. This is AWESOME. This is the #1 reason I'm a 
   fan of Faces.
   - Relatively easy API for setting up my UI - subcommands and arguments.
   - Direct access to Puppet. Especially other faces! But settings and 
   utility methods are a boon as well.

What I don't like / Doesn't work:

   - Versions. I don't use them. Nobody uses them. I'm also pretty sure 
   they don't work. They don't contribute to the usability or draw of Faces.
   - I want to be able to specify more than one subcommand. E.g. I want to 
   write `puppet nimbus modules install`, but since that's the "nimbus" face 
   and two sub-commands it doesn't work well. I have to make do with `puppet 
   nimbus install_modules`.
   - Many faces are Terminus faces (`puppet certificate`, `puppet ca`, 
   etc). Basically, a kind of generalized wrapper for interacting with 
   termini. Those seem overgeneralized and often don't hold up well. Faces 
   with more intentional and specific design seem to work much better, 
   generally.

People with more experience around the full suite of Face functionality may 
be able to infer by omission other things that either don't excite people 
or need documentation to expose.

~Reid 



Re: [Puppet-dev] Re: munging the log output from a custom provider

2015-09-18 Thread Reid Vandewiele
On Fri, Sep 18, 2015 at 9:02 AM Trevor Vaughan 
wrote:

> Hi Corey,
>
> It's part of the 'property' Object:
> https://github.com/puppetlabs/puppet/blob/master/lib/puppet/property.rb#L186
>

Related references:

http://www.rubydoc.info/gems/puppet/Puppet/Property (same thing, the rdoc
view)
https://docs.puppetlabs.com/guides/custom_types.html#customizing-behaviour

As that last link notes, oftentimes the best existing references for
some of this stuff are existing types, used and referenced as examples.

The other resource I'm aware of is Dan Bode and Nan Liu's book "Puppet
Types and Providers" (late 2012). It's been a while since it was published
but not a whole lot has changed in the relevant Ruby API.
http://www.amazon.com/Puppet-Types-Providers-Dan-Bode/dp/1449339328

~Reid



[Puppet-dev] Re: munging the log output from a custom provider

2015-09-17 Thread Reid Vandewiele
On Wednesday, September 16, 2015 at 6:19:27 PM UTC-7, Corey Osman wrote:
>
>
> [...] how can I keep the password from showing up in the reports when the 
> password changes.  Basically I don’t want the following to occur.  Is there 
> a way to suppress the logging of this info?  Or is there a way to 
> “munge/encrypt” the info being logged? [...]
>
> Notice: /Stage[main]/Main/Bmcuser[testuser]/userpass: userpass changed 
> '**Hidden**' to ‘123456'
>
The user type in Puppet core mostly does this, using change_to_s(), 
is_to_s(), and should_to_s(). Check out this part of the newproperty() 
block for the password property.

https://github.com/puppetlabs/puppet/blob/master/lib/puppet/type/user.rb#L212-L225
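
Adapted to the type from your example, a hedged sketch of the same idea 
might look like this (the bmcuser/userpass names come from your snippet; 
this is not an existing published type):

# lib/puppet/type/bmcuser.rb (sketch)
Puppet::Type.newtype(:bmcuser) do
  ensurable

  newparam(:name, :namevar => true)

  newproperty(:userpass) do
    desc 'The user password (redacted in logs and reports).'

    # What gets printed for the current value in change messages.
    def is_to_s(_current_value)
      '[old password redacted]'
    end

    # What gets printed for the desired value in change messages.
    def should_to_s(_new_value)
      '[new password redacted]'
    end

    # The one-line summary logged when the property changes.
    def change_to_s(current_value, _new_value)
      current_value == :absent ? 'created password' : 'changed password'
    end
  end
end

With that in place the report shows something like "changed password" 
instead of the literal values.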




Re: [Puppet-dev] Accessing attribute value of one resource in another resource impleataion

2015-06-17 Thread Reid Vandewiele
What Michael said about the design is worth considering.

If it makes sense to reference another resource instead of a path string 
(e.g. File[myfile]) and you're just curious about how to do it, there's 
code that does similar things in the puppetlabs/transition module, as well 
as in changes pending to the 2.x version of the puppetlabs/concat module

https://forge.puppetlabs.com/puppetlabs/transition
https://github.com/puppetlabs/puppetlabs-transition/blob/master/lib/puppet/provider/transition/ruby.rb

and

https://forge.puppetlabs.com/puppetlabs/concat
https://github.com/hunner/puppetlabs-concat/blob/c34231d130591d60d122fdab8c2fe794a17666ef/lib/puppet/type/concat.rb#L222
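
The core of what those providers do can be sketched in a few lines; the 
resource title and attribute here are hypothetical:

# Inside a provider method, look up another resource in the same catalog
# and read one of its attributes.
def referenced_path
  file = resource.catalog.resource('File[myfile]')
  raise Puppet::Error, 'File[myfile] was not found in the catalog' if file.nil?
  file[:path]
end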

If you mean to do this in Puppet code rather than in a type/provider, there 
are evaluation-order considerations that make it generally inadvisable. 
However, there is a function for that.

https://forge.puppetlabs.com/puppetlabs/stdlib#getparam

It just needs to be used very, very carefully - to the point that in almost 
all cases it would be better to just use a variable instead.

~Reid

  



Re: [Puppet-dev] Default environment_timeout preference

2015-03-06 Thread Reid Vandewiele
The principle of least astonishment is absolutely what we should be 
targeting. The use of any kind of timer upon whose ticks behavior changes 
is in inarguable opposition to this, whether it's 10 seconds, 3 minutes, or 
15 minutes. However, I think the use of implementation terms like "caching" 
in describing the two proposed options muddies the water a bit, as both 
options result in clear and consistent behavior.

Described without the term caching, option 1 effectively proposes that 
puppet-server should read all configuration files on startup (puppet.conf, 
auth.conf, fileserver.conf, environments/**/*.pp, etc.), and not 
automatically re-read any of these files unless the user issues an explicit 
`service puppet-server reload` command.

Described without the term caching, option 2 effectively proposes that 
puppet-server should read some configuration files on startup (puppet.conf, 
auth.conf, fileserver.conf, etc.), and that puppet-server should read 
directly from disk all relevant files from environments/**/*.pp when 
compiling a catalog, once per agent request.

Both proposals move strongly away from the problem behavior we have today - 
a clock-based timeout. Were this a new product, either option seems like it 
would satisfy principle of least astonishment. The only difference between 
them, from an astonishment perspective, is that long-time Puppet users are 
accustomed to behavior resembling option 2 (though in the past 
implementation it's been more optimized, similar to the inotify suggestion 
put forth by Trevor).

I don't know that thinking about it this way changes anyone's opinion, but 
I do want to make sure we aren't getting hung up on implementation terms in 
considering what the actual proposed behaviors are.

Is having files live-updated (via guaranteed re-read) a value proposition in 
itself? There seem to be some minor benefits to a reload-required behavior: 
in-progress requests are guaranteed not to get half their files from one 
revision and half from another if the catalog request timing is particularly 
bad, since the user won't issue a reload command mid-file-update, and if 
they do, puppet-server will flush in-progress requests anyway. Are those 
benefits worth weighing when choosing between the two proposals?

Does this perspective make sense, or am I missing something?

~Reid

On Thursday, March 5, 2015 at 6:42:34 PM UTC-8, Adrien Thebo wrote:

 To me, following the principle of least astonishment indicates that 
 caching be disabled by default; it'll work correctly for new users and has 
 no hidden gotchas. When people want to do performance tuning they're 
 probably fairly sophisticated users and can deal with weird cache 
 invalidation issues; since they're opting into this feature they should be 
 prepared to deal with the ramifications.

 On Thu, Mar 5, 2015 at 5:19 PM, Owen Rodabaugh ow...@puppetlabs.com 
 wrote:

 To clarify, I am asking for opinions on whether the default 
 environment_timeout should be 0 or unlimited in future releases of puppet.  
 The current plan is to default to unlimited. 

 I'm concerned that shipping with this default assumes prior experience 
 and will be another hurdle to getting started with puppet. Anecdotally I've 
 heard that a common question in #puppet is "I changed my puppet code, why 
 isn't it showing when I do a test run?".

 Conversely setting environment_timeout=0 will result in lower 
 performance, but no need to restart puppet or hit the API to flush a cache 
 to see code changes. The users impacted by this are likely more experienced 
 and would already be managing, or easily able to manage this setting if 
 they had performance concerns or a pre-existing code deployment workflow.

 Thanks,

 Owen

 On Thursday, March 5, 2015 at 3:56:24 PM UTC-8, Trevor Vaughan wrote:

 Can you use inotify to invalidate the cache via the API call when 
 selected files in your infrastructure change?

 It looks like you could do this directly from the core 
 https://launchpad.net/inotify-java. You'll just want to queue them up a 
 bit to not go crazy. 10 seconds should probably do it, but you could make 
 that configurable.

 Trevor

 On Wed, Mar 4, 2015 at 4:36 PM, Owen Rodabaugh ow...@puppetlabs.com 
 wrote:

 We've been discussing what the default environment_timeout setting 
 should be. There is general agreement that the current 3 minutes is not 
 great. It's both baffling to new users and does not bring in the full 
 performance benefits.

 Two main perspectives on this:

 1. Performance should be the primary driver, and the default of 
 unlimited (cache never automatically refreshes) is preferred. This assumes 
 most users have a code deployment workflow and tooling which can be 
 adjusted to include the steps required to update the cache. These steps are 
 either hitting the puppetserver environment cache endpoint, or restarting 
 the service to cause the cache to update.

 2. New user experience should be the primary driver, and a default [...]

Re: [Puppet-dev] Default environment_timeout preference

2015-03-06 Thread Reid Vandewiele
As Eric said there seems to be clear consensus and an issue has been opened
to make the change. I think it is still useful for me to respond in
detail to John, but just to wrap up the thoughts - not to further advocate
for reload behavior. There seem to be good reasons to choose to serve files
live as opposed to read-on-start.

On Fri, Mar 6, 2015 at 11:06 AM, John Bollinger john.bollin...@stjude.org
wrote:



 On Friday, March 6, 2015 at 11:36:03 AM UTC-6, Reid Vandewiele wrote:

 The principle of least astonishment is absolutely what we should be
 targeting. The use of any kind of timer upon whose ticks behavior changes
 is in inarguable opposition to this, whether it's 10 seconds, 3 minutes, or
 15 minutes. However, I think the use of implementation terms like caching
 in describing the two proposed options clouds the water a bit, as both
 options result in clear and consistent behavior.

 Described without the term caching, option 1 effectively proposes that
 puppet-server should read all configuration files on startup (puppet.conf,
 auth.conf, fileserver.conf, environments/**/*.pp, etc.), and not
 automatically re-read any of these files unless the user issues an explicit
 `service puppet-server reload` command.

 Described without the term caching, option 2 effectively proposes that
 puppet-server should read some configuration files on startup (puppet.conf,
 auth.conf, fileserver.conf, etc.), and that puppet-server should read
 directly from disk all relevant files from environments/**/*.pp when
 compiling a catalog, once per agent request.



 It would also be clear and consistent to say that when a manifest is
 changed, those changes will start being reflected in catalogs emitted by
 the master within 3 minutes (or 1 or 20).  The exact timing is not quite as
 predictable, but the behavior can still be given as a rule, and without
 using any variant of the word cache.


I'll cede "consistent", but I continue to hold that clock-based behavior
lacks clarity. The problem is not that the behavior can't be calculated,
it's that to a new user experimenting with the product the clock-based
behavior hinders their ability to engage in adjust/observe/iterate
experimentation. If I'm trying to understand what a product is doing, I
want to make a change and observe the result to see if the change I made
did what I expected. I want confidence that if I make an adjustment and a
change is observed, it is due to my adjustment. This is a feedback loop.

The problem with a clock is that after I make my adjustment, there usually
won't be any observed change immediately. But, there will be change later.
At best, my feedback loop is delayed and it takes me longer to understand
what's going on. More realistically, this kind of inconsistent feedback
makes me frustrated with the product and leaves me without confidence that
what I'm doing is having the expected effect. After I figure it out, I can work
with it.





 Both proposals move strongly away from the problem behavior we have today
 - a clock-based timeout. Were this a new product, either option seems like
 it would satisfy principle of least astonishment.



 Both do move away from the default behavior of 3.7, but I don't see how
 you can support a claim that *either* option would provide *least*
 astonishment, particularly inasmuch as you also claim that a timeout is
 more astonishing than either of the other alternatives.  I do not find it
 self-evident, for example, that most users would be more astonished that
 Puppet eventually notices manifest changes, than that they have to perform
 some kind of manual action separate from the change itself to make Puppet
 notice changes.


My assertion that either option provides a least-astonishment experience is
indeed built on a belief that a clock-based system is Bad (TM), which I first
took as a given. I've provided more context above for why I believe this to
be the case. Even allowing that this assumption is incorrect, I still
believe that options 1 and 2 are much more similar to each other in terms of
first-time-user astonishment than either is to clock-based behavior, since
both give users consistent feedback.

Users expect to have to restart or refresh most services if their
configuration file(s) change. I have been thinking of puppet manifests as
similar to configuration files in this way. It sounds like most people
would more intuitively consider manifests to be more like *.php files, or
something to be *served*. The general opinion in this thread supports that.
Given that starting point, I can see how it would be less astonishing to
people if changes made to those files were immediately impactful, rather
than requiring a reload.




 The only difference between them, from an astonishment perspective, is
 that long-time Puppet users are accustomed to behavior resembling option 2
 (though in the past implementation it's been more optimized, similar to the
 inotify suggestion put forth by Trevor).



 I disagree.  Puppet has always

Re: [Puppet-dev] Autorequiring parent directories to the home directory in user resources

2015-03-02 Thread Reid Vandewiele
On Monday, March 2, 2015 at 7:21:55 AM UTC-8, Trevor Vaughan wrote:

 Hmm... OK, how about this:

 1) Dangling symlinks are allowed
 2) Warnings on dangling symlinks are the default (because you *probably* 
 don't want them)
 3) Setting :force => true disables the warning message (in theory, you 
 would only do this after seeing the message)

3a) For a less destructive method, something like 'dangle => true' could be 
 allowed, I suppose
 4) Autorequires happen so that you don't get spurious warning messages

 Would that work?


It still seems presumptuous to me even to emit warnings by default if 
Puppet creates a symlink which is dangling at the time of creation. The 
assumption is that potential benefit of the alert would outweigh the cost 
of the potential noise and extra parameters required to silence it when 
dangling symlinks are desired.

Besides the crazy things symlinks get used for on occasion, such as Samba's 
use of dangling symlinks to represent DFS file shares 
(https://wiki.samba.org/index.php/DFS), Puppet may legitimately be asked to 
create a link prior to installing a package or performing another action 
which will result in the target being created, and users shouldn't need to 
set :force or :dangle for their first run to log cleanly.

The potential benefit of the noise does not merit the extra complexity to 
silence it. This is an instance where Puppet cannot reasonably determine 
whether or not a dangling symlink is a problem and should not presume to do 
so.



Re: [Puppet-dev] Re: Dealing with transitional states in Puppet

2014-12-22 Thread Reid Vandewiele
On Mon, Dec 22, 2014 at 9:11 AM, John Bollinger john.bollin...@stjude.org
wrote:


 [...]

 I went looking for holes to poke in this approach, and didn't find any.  I
 like that it builds on Puppet's core concepts of resources and state, and
 that it appears to be general enough to be adapted to a wide variety of
 situations.

 Having read the docs but not examined the code, I am curious about
 management of the Transition type's own state.  How does it look ahead to
 determine whether it is already in sync?  I guess it invokes 'in_sync?' on
 each of the 'prior_to' resources, or something like that?  Also, I presume
 a Transition resource fails if it is not initially in sync and it cannot
 apply the transitional state it describes.  Is that correct?


Effectively, yes. For each prior_to resource the provider will retrieve it
from the catalog, call insync?() against each of its properties, and if any
are found to be out of sync, that triggers the transition. For specifics
see
https://github.com/puppetlabs/puppetlabs-transition/blob/master/lib/puppet/provider/transition/ruby.rb#L49-L66
(about 16 lines).
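
In rough outline, the check described above looks something like the sketch
below (a sketch of the idea, not an excerpt from the module):

# For each resource named in prior_to, compare its current system state to
# the catalog's desired state; any out-of-sync property triggers the
# transitional state.
def transition_needed?(catalog, prior_to_refs)
  prior_to_refs.any? do |ref|
    target = catalog.resource(ref.to_s)   # e.g. "Service[example]"
    next false if target.nil?
    current = target.retrieve_resource    # read current system state
    target.properties.any? do |property|
      !property.safe_insync?(current[property.name])
    end
  end
end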


 And does the transitional state copy/inherit anything from the target
 final state given for the named resource, or does it have only the
 (explicit) attributes specified in the Transition resource?


Yes. Right now all attribute defaults in the transition resource are
inherited from the target final state resource. Now that you mention it I
can imagine that this might not always be the desired behavior, so it could
make sense to introduce a parameter that affected whether or not the
transitional state had this implicit relationship to the target state. The
current behavior was mostly chosen for convenience. I'd be curious to think
through a specific scenario where the difference was important. One doesn't
spring to mind immediately for me. Obviously it's possible already to be
explicit about many/all attributes in the transition declaration.


 I'm also curious about the nature of some of the documented limitations.

 With regard to the transitioned resource being of native type, don't
 defined type instances have a representation in the catalog?  Classes do.
 For that matter, are classes excluded from being the transitioned
 resource?  The more I think about it, the more I foresee potential for
 difficulties if defined type instances or classes were eligible for
 transitioning.  On the other hand, I'd be open to the argument that it's ok
 to offer additional capability to manifest writers in exchange for opening
 (more) possibilities for shooting themselves in the foot.


Allowing defined types or classes to be eligible for transitioning
certainly makes things more complicated. The problem is that defined types
and classes could have logic in Puppet code that dictates which resources
to create and add to the graph. If statements, case statements, variables -
the provider doesn't have access to any of that Puppet code. If a property
or parameter on a defined type or a class is set to a different value in
Puppet code, it's possible that an entirely different set of resources
could be created. This is the root of the native type only limitation. At
the time transition operates, none of the Puppet code exists anymore, only
the raw catalog itself.

To the best of my knowledge, the graph-level representation of a class is a
pair of whits with relationships to each resource from the class. I've been
imagining defined types to be the same. This is likely enough to identify
which resources came from a particular class or defined type, but I suspect
it isn't enough to confidently change a defined type or class parameter. I
would appreciate a check though on that from someone more familiar with how
the catalog is put together and what information is available.


 With regard to 'noop' parameters, is it your thinking that the transition
 should not be performed if all the 'prior_to' resources have 'noop'
 enabled?  What about the transitioned resource?  I'd be inclined to say
 that the Transition resource *shouldn't* look at 'prior_to' resources'
 'noop' parameters.  If 'noop' is being applied on a per-resource basis,
 then the responsibility should be on the manifest developer to apply 'noop'
 to the Transition resource where needed.  On the other hand, I think you
 should consider whether the transitional state should automatically be
 marked with the same 'noop' value as the final state of the transitioned
 resource, at least by default.  I haven't reached a conclusion on that
 myself, but it seems more likely to be appropriate than basing the
 Transition's noop on the 'prior_to' resources'.


I think marking the transitional state with the target resource's noop
value as default would be the way to go. This is not a design limitation,
just work that has yet to be done.

 I also have to ask: will this work with Puppet 4?


It should absolutely work with Puppet 4! :-) If it doesn't, I think it'll
be because of [...]

[Puppet-dev] Dealing with transitional states in Puppet

2014-12-19 Thread Reid Vandewiele
This thread is introducing a simple workaround for an observed limitation 
in Puppet's ability to automate inelegant but real configuration 
requirements. The desired outcome is to get feedback on the suitability of 
the workaround or its approach, how well it fits with Puppet's paradigm, 
and whether or not people would find it acceptable to implement this kind 
of pattern in their own environments.

tl;dr: What do people think 
of https://forge.puppetlabs.com/puppetlabs/transition ?

The Problem:

Puppet has always been tightly focused on a single, final target state. 
When iterating over resources in the graph, Puppet examines resources one 
by one and if necessary makes configuration changes to bring them into 
compliance with that one true target state. It is expected that at the end 
of the run, every resource in the graph will still be in that target state. 
This is generally a decent way to model things, but there are some 
situations where it isn't quite enough. For example, if a running service 
locks a file (Windows often does this), that file cannot be changed unless 
the service is stopped. Procedurally, to edit the file one would be 
expected to stop the service, make the change to the file, and then start 
the service back up. Similarly, when installing software a procedure may 
say to download the installation media to a temporary directory, use it to 
install the software, and finally remove the installation media (as it is 
no longer necessary to keep it).

Unless the underlying provider has built-in logic that handles those kinds 
of temporary changes within the context of a single resource, it is very 
difficult to model these transitional states in Puppet. There are a few 
contexts where it kinda makes sense to extend a provider to encompass a 
transition state, such as a package provider downloading a source file to 
keep temporarily, but often times supporting transitional states in the 
context of a provider would feel like bloat, and not good design.

The Experiment:

What if we had the ability to model transitional states as part of the 
catalog? Chris Barker, myself and a few others were brainstorming about 
this a little while back and came up with the idea to insert a resource 
into the graph that had a kind of reverse-notify behavior, where it would 
enact a specified state on another resource, temporarily, if and only if 
another specified resource ahead of it in the graph was going to change. 
For example:

transition { 'stop myapp service':
  resource   => Service['myapp'],
  attributes => { ensure => stopped },
  prior_to   => File['/etc/myapp/myapp.cfg'],
}

file { '/etc/myapp/myapp.cfg':
  ensure  => file,
  content => 'mycontent',
  notify  => Service['myapp'],
}

service { 'myapp':
  ensure => running,
  enable => true,
}

We implemented a prototype and published it at 
https://forge.puppetlabs.com/puppetlabs/transition. It's 0.1.0 code, 
basically first cut, just enough to build out and test the idea, but not 
all the rough edges are sanded off. There's more detail in the readme on 
the Forge page.

Does this pattern or capability make sense in the general context of 
Puppet? Is this a decent interim solution for something better currently 
under development? What do people think of this?

~Reid



Re: [Puppet-dev] Re: Feedback on the behavior of +=, -=, +, and -

2014-08-10 Thread Reid Vandewiele
On Sunday, August 10, 2014 7:11:11 PM UTC-7, Trevor Vaughan wrote:

 Yeah, I know that it doesn't actually mutate. But it *feels* like it does, 
 which is the issue.

 Trevor


For this reason I would advocate omission of += and -= from the language.

The problem is not that the behavior is inconsistent or that it breaks any 
-rules-, per se. The problem is that the behavior is non-intuitive and not 
just in a difficult-to-guess-at way, but in a can-directly-confuse-users 
way. Yes, $fqdn is potentially different from $::fqdn but if we're trying 
to guide people into a mindset of variables are immutable we should not 
muddy the waters with syntax that looks contradictory to that paradigm - 
especially if all it gains us is saving a few characters being typed.

I believe this constitutes a compelling design reason to remove += and -=.

~Reid



Re: [Puppet-dev] Re: Decision: Near future of resource expressions

2014-08-05 Thread Reid Vandewiele
On Mon, Aug 4, 2014 at 3:18 PM, Henrik Lindberg 
henrik.lindb...@cloudsmith.com wrote:


 So, to summarize: The use of * => as an operator is not liked, but the
 concept of being able to set attributes from a hash is. Unfortunately, it
 is not possible to directly allow an expression at the position in
 question; there must be a syntactical marker.

 As pointed out earlier, the * => was thought to read as "any_attribute =>
 from_these_values", but I totally grok if people have an allergic reaction.

 We can do this though:

 file { default: ($hash) }

 This works because it is impossible to have an attribute name in
 parentheses.

 In use:

 file {
   default   : ($my_file_defaults + { mode => '0666' });
   '/tmp/foo': ;
   '/tmp/bar': ;
 }

 Is that better? No new operator, but you have to use parentheses around
 the expression.

 We can naturally also revert the functionality, but it seems it is liked
 conceptually.


 - henrik



I think the parentheses are far preferable to * =>. That isn't to say I
like them - I don't particularly. But the functionality is desirable, and
if it's a matter of a technical limitation then parentheses are a Good
Enough (TM) compromise from the more ideal direct use of a hash.

~Reid

-- 
You received this message because you are subscribed to the Google Groups 
Puppet Developers group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-dev/CAHNFGkO329huKu%2BZH8KsQpQs_k6Txni%2Bj-3Tuiu7QtdA6HMQ7Q%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet-dev] Re: Decision: Near future of resource expressions

2014-08-05 Thread Reid Vandewiele
On Tue, Aug 5, 2014 at 4:11 PM, Henrik Lindberg 
henrik.lindb...@cloudsmith.com wrote:

 On 2014-05-08 18:24, Andy Parker wrote:


 My argument against using parentheses is that parentheses are often
 read as seldom-necessary grouping. I believe that most programmers
 read them as usually only needed for fixing precedence problems, which
 is really what is happening here but it doesn't look like it. Based on
 that I can imagine that a common, and frustrating, mistake would be:

apache::vhost { $servername: $opts }

 And then confusion and anger and bug reports.


 Yeah, I think they are too subtle too (and hence the * =>).


 One more proposal :-)

 We could leave out the name part altogether (i.e. drop the '*').

 dalens' example would then look like this:


  apache::vhost { $servername:
    port => $port,
    ssl  => $ssl,
         => $extra_opts,
  }

 And if it is used for local defaults (or the only thing for a titled
 resource):

 file { default:    => $hash }
 file { '/tmp/foo': => $hash }

 This works best if it is restricted to being the only attribute operation
 for a title, but looks a bit odd when presented in a list where there are
 also named (i.e. name => expression) operations.

 At least it is not a new operator.

 Is this better than * => or requiring parentheses?


 - henrik



I'm still not happy with either * => or a bare =>. Both unnecessarily (imho)
complicate the structure of the most basic building block in the Puppet
language.

On Tue, Aug 5, 2014 at 11:52 AM, David Schmitt da...@dasz.at wrote:


 I like that piece of code as it is. Perhaps I would add a comment noting
 that $vhost_options is not allowed to override the base_vhost_options and
 give a reason for that. I needed to browse up to the parameter doc and
 think a bit about what that should mean.

 I do not think the whole sequence would be any better with some kind of
 special operator, except perhaps for the hash() thingy, which I
 conveniently ignore in the analysis, but assume it's doing some merging.

 Also, create_resources is google-able. To find the splat operator, one
 would either have to know it or think about the language reference and
 browse through the visual index or the operator chapter.


Maybe solving for this use case would be better handled by implementing
something that looks and feels like a metaparameter rather than trying to
come up with new syntax. That approach would have the benefit of not
complicating the language, and meet all of the functional requirements
discussed so far. It would also be google-able. There would need to be some
design around the choice of a name for the metaparameter, but it's easy
enough to demonstrate the concept with a stand-in like attribute_defaults
or attribute_hash.

Example 1 (assuming behavior wherein merging is OK, and that explicit
parameter specification takes precedence):

apache::vhost { $servername:
  port               => $port,
  ssl                => $ssl,
  attribute_defaults => $extra_opts,
}

Example 2 (assuming that merging is not OK, and that conflicts will be
treated as duplicate parameter specification):

apache::vhost { $servername:
  port           => $port,
  ssl            => $ssl,
  attribute_hash => $extra_opts,
}

My initial thought would be to choose and settle on one behavior and review
an appropriate name, though it wouldn't be objectionable to support both.

Does an operator/syntax gain us anything that this kind of
metaparameter-like approach does not?

Is taking a metaparameter-like approach still a language feature, or does
that become an actual metaparameter?

Visual review, for convenience:

file { $title: * => $attributes; }
file { $title: => $attributes; }
file { $title: ($attributes); }
file { $title: attribute_defaults => $attributes; }
file { $title: attribute_hash => $attributes; }

~Reid

-- 
You received this message because you are subscribed to the Google Groups 
Puppet Developers group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-dev/CAHNFGkOFB5GNuRQmcv7m2q9u%3DBNxAYkyT3chj4i56ZicgkR_Jw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet-dev] Re: Decision: Near future of resource expressions

2014-08-04 Thread Reid Vandewiele


On Sunday, August 3, 2014 4:32:39 PM UTC-7, henrik lindberg wrote:


 My main objection with create_resources function is that it is not a 
 natural progression from the language. When developing puppet code, you 
 start out with simple resources and use the syntax for creating them. As 
 you are building up your modules and complexity increases you reach a 
 point where you have to redo what you have already done because now you 
 have to instead construct a hash and call a function. 

 When you have reached this point several times, you are more likely to 
 always use create_resources. 

 When instead directly supported in the language, you can add the more 
 advanced things if and when they are needed. 

 Having the power to do so, does not take anything away. 

 - henrik 


I am concerned that adding additional operators to the language does in 
fact take something away. As such, I do think that proposals to do so need 
a very strong argument in order to proceed.

When I go out and introduce new teams of sysadmins to Puppet, the fact *in 
vacuo* that we have our own domain-specific language is anything but a 
strength. One of the top initial objections to Puppet today is that large 
teams will not be able, or will not want, to learn a new, complex language. 
What sets us apart from competitors like Chef, from scripting alternatives, 
is the simplicity and apparent intuitive design of our DSL. The fact that 
it is more akin to a configuration file than to a procedural program. That 
a sysadmin can look at a Puppet manifest and usually figure out the basics 
of what it does and even modify it without needing a training course or a 
pocket reference. Every operator that is added to the language has the 
potential to take away from that, and so absolutely needs to be approached 
as a design decision focused on the end-user experience.

We need the ability to better transform structured data into resources. 
From an end-user experience perspective (setting aside implementation 
considerations) I much prefer the idea of using a hash directly over 
introducing a new operator. The difference would be the two examples shown 
below.

Proposed operator:
$x.each |$title, $attributes| { file { $title: * => $attributes } }

Formal hash treatment (my preferred):
$x.each |$title, $attributes| { file { $title: $attributes } } 
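
In both forms, $x is assumed to be a hash mapping resource titles to
attribute hashes, e.g. (illustrative data only):

$x = {
  '/tmp/foo' => { ensure => file, mode => '0644' },
  '/tmp/bar' => { ensure => file, mode => '0600' },
}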

If we introduce a new operator it should be a user-experience design 
decision, not an implementation decision. If the preferred design cannot be 
implemented due to technical constraints that's something we have to deal 
with. But I hope that we're arriving at decisions to introduce new 
operators as part of an intentional end-user experience focused design.

The fact that we have a DSL is not a strength. The fact that it is simple 
and intuitive to practitioners in this space, is.

-- 
You received this message because you are subscribed to the Google Groups 
Puppet Developers group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-dev/25847473-d8a7-48e2-a4df-e430374e4f9c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet-dev] Re: Decision: Near future of resource expressions

2014-07-29 Thread Reid Vandewiele
Definitely excited to see this stuff moving forwards.

On Thursday, July 24, 2014 5:32:13 PM UTC-7, Andy Parker wrote:


 Henrik took all of the ideas and started trying to work out what we could 
 do and what we couldn't. Those are in a writeup at 
 https://docs.google.com/a/puppetlabs.com/document/d/1mlwyaEeZqCfbF2oI1F-95cochxfe9gubjfc_BXjkOjA/edit#
  


According to that doc, syntactically we are transitioning from this:

* *Instantiation*:  notify { hi: message => 'hello' }
* *Default*:        Notify { message => 'greetings' }
* *Override*:       Notify[hi] { message => 'hello there' }

To this:

* *Instantiation*:  notify { hi: message => 'hello' }
* *Default*:        Notify { default: message => 'greetings' }
* *Override*:       Notify[hi] { default: message => 'hello there' }

This mostly makes sense except for the use of the literal "default" in the 
Override syntax. The term "default" implies semantics which I don't think 
are correct. When using an override it's often the case that previously set 
values are being explicitly swapped out for different ones. It seems like 
it would make more sense to change the placeholder word to something that 
reflects that the values being set take precedence. E.g.

* *Override*:  Notify[hi] { override: message => 'hello there' }

Is it an intentional design decision to continue to use the word "default" 
in the new syntax for resource overrides?

~Reid

-- 
You received this message because you are subscribed to the Google Groups 
Puppet Developers group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-dev/1d192839-f8e3-4d0f-a5f8-1c790e96fca0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet-dev] Re: RFC2 - Resource Defaults

2014-07-17 Thread Reid Vandewiele
On Friday, July 11, 2014 7:50:47 PM UTC-7, henrik lindberg wrote:


 Here we have another problem; variables defined in classes are very 
 different from those defined elsewhere - they are really 
 attributes/parameters of the class. All other variables follow the 
 imperative flow. That has always bothered me and causes leakage from 
 classes (all the temporary variables, those used for internal purposes 
 etc). This is also the source of immutable variables, they really do 
 not have to be immutable (except in this case). 

 If we make variables be part of the lazy logic you would be able to write: 

$a = $b + 2 
$b = 2 

 I think this will confuse people greatly. 


Slightly off-topic so I'll keep it short.

I have a huge appreciation for immutable variables in the Puppet language 
as I think it helps keep people centered in the mindset of declarative 
configuration and not procedural programming. The fact that variable values 
are parse-order dependent is detrimental in that it forces users to hold 
and visualize a more complex model in order to not get tripped up by 
parse-order dependencies. Resources can only be declared once and can be 
referred to before they've been hit by the parser. I would strongly support 
variables being the same. Today they are immutable and so have one foot in 
that door. Making them part of the lazy logic sounds like it could get them 
the rest of the way.

Outside of technical implementation challenges, it would be a good thing if 
variables were immutable and lazily evaluated in such a way as to make the 
example given above work.

Is there an existing thread or Jira ticket that would be a more appropriate 
place to discuss further?

-- 
You received this message because you are subscribed to the Google Groups 
Puppet Developers group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-dev/e492657e-4deb-4f85-afa5-fc404bc9518e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet-dev] Making a Puppet custom Type 'optional'...

2014-05-23 Thread Reid Vandewiele
How about just creating a utility define to go with your custom type? E.g. create 
an rs_tag::conditional define, then use that instead of the rs_tag type 
directly. You can put the conditional logic in the define and it's about as 
clean as it gets. You also don't have to muck about with the logic at the 
provider level.
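
A minimal sketch of what such a wrapper might look like is below. The fact
name and parameter are assumptions for illustration, not something the
rs_tag type itself defines.

# Hypothetical wrapper: only declare the real rs_tag resource when the
# node looks like a RightScale instance (fact is true, otherwise undef).
define rs_tag::conditional (
  $ensure = present,
) {
  if $::is_rightscale {
    rs_tag { $title:
      ensure => $ensure,
    }
  }
}

Usage would then be rs_tag::conditional { 'mytag': } anywhere the bare
rs_tag resource would otherwise have been declared.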

-- 
You received this message because you are subscribed to the Google Groups 
Puppet Developers group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-dev/de388f38-9daf-4725-aea7-b06b315b30d2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet-dev] Making a Puppet custom Type 'optional'...

2014-05-23 Thread Reid Vandewiele
For it to make sense in the type and be consistent with the usual semantics,
I think you'd have to make it some kind of valid ensurable state. For
example, let people specify ensure => present_if_in_rightscale or some
better state name.

That also opens the door to options like ensure => $is_rightscale, or other
fact-based approaches where the fact is either true or undef. That would
probably work.

It doesn't make a lot of sense to me to build a type that doesn't fail if I
say ensure => present and it can't be present, regardless of why not. Hence
I'd say approach it from the perspective of what you're trying to ensure.

A noop provider could exist that would fail on ensure => present but succeed
on ensure => something_else_or_maybe_undef.
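
Purely as a sketch of that last idea (type name and messages illustrative,
and provider selection via confine/defaultfor omitted):

# Fallback provider: reports the tag as absent, so ensure => absent is a
# successful no-op, while ensure => present fails loudly.
Puppet::Type.type(:rs_tag).provide(:noop) do
  def exists?
    false
  end

  def create
    fail 'cannot ensure rs_tag present: not running under RightScale'
  end

  def destroy
    # nothing to remove outside of RightScale
  end
end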
On May 23, 2014 11:16 AM, Matt Wise m...@nextdoor.com wrote:

 That's definitely one way to do it... and in fact, I may do that today if I
 can't come up with another solution. I'd prefer to do it in the actual
 provider, but I could see how this is arguably cleaner. Before I do that
 though, is there no clean way to do this inside the Type or Provider?

 Matt Wise
 Sr. Systems Architect
 Nextdoor.com


 On Fri, May 23, 2014 at 11:12 AM, Reid Vandewiele r...@puppetlabs.com wrote:

 How about just creating a utility define to go with your custom type? E.g.
 create an rs_tag::conditional define, then use that instead of the rs_tag
 type directly. You can put the conditional logic in the define and it's
 about as clean as it gets. You also don't have to muck about with the logic
 at the provider level.

 --
 You received this message because you are subscribed to a topic in the
 Google Groups Puppet Developers group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/puppet-dev/GWFjYkHT-e8/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 puppet-dev+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/puppet-dev/de388f38-9daf-4725-aea7-b06b315b30d2%40googlegroups.com
 .
 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to a topic in the
 Google Groups Puppet Developers group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/puppet-dev/GWFjYkHT-e8/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 puppet-dev+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/puppet-dev/CAOHkZxMoNN6s-m0y4jgrZe9OgVP6%2BN%3DV4uQvwFA7DrObMfqD-w%40mail.gmail.com
 .
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
Puppet Developers group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-dev/CAHNFGkPCZHtfRoBB8rSt6ugj13yzkj7JQAbAqT0HQaK1SeJj-w%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


[Puppet-dev] Re: Help with Composite Namevars

2014-01-28 Thread Reid Vandewiele
I don't know the right way to do this but I've worked on a couple of 
composite namevar types at least enough that I've seen that kind of error 
before.

In effect, when using a composite namevar you must manually specify how to 
extract individual parameters from the resource title. It is assumed that 
the default title pattern is insufficient.

What happens if, for a given resource, no pattern in your title_patterns 
matches? Maybe that's what you're running into?
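
One common guard is to include a catch-all pattern so that a title without
the expected separator still maps onto something rather than matching no
pattern at all. A sketch, with parameter names mirroring the type quoted
below:

def self.title_patterns
  identity = lambda { |x| x }
  [
    # "source:destination" form
    [ /^(.+):(.+)$/, [ [ :source, identity ], [ :destination, identity ] ] ],
    # catch-all: treat the whole title as the source
    [ /^(.+)$/,      [ [ :source, identity ] ] ]
  ]
end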

Here are a few other title_patterns examples.

https://github.com/puppetlabs/puppetlabs-java_ks/blob/master/lib/puppet/type/java_ks.rb#L140
https://github.com/reidmv/puppet-module-yamlfile/blob/master/lib/puppet/type/yaml_setting.rb#L166

On Thursday, January 23, 2014 12:36:38 PM UTC-8, Leonard Smith wrote:

 To all,

 I am on puppet 2.7.3 and I'm working on a custom RabbitMQ type that will 
 use a composite namevar. I did not see any existing work out there for 
 managing RabbitMQ bindings, so I've started on one and I'm running into 
 problems with the composite namevar. I have a very basic type (below), but 
 when I run puppet as an agent I still get the error "Error 400 on 
 SERVER: Could not render to pson: you must specify title patterns when 
 there are two or more key attributes".

 Any help or pointers would be appreciated.

 # Puppet Manifest:

 rabbitmq_binding { 'testing':
   source      => src,
   destination => dest,
 }

 # Puppet Type

 Puppet::Type.newtype(:rabbitmq_binding) do

   desc 'rabbitmq_binding creates a puppet type for managing rabbitMQ binding'

   def self.title_patterns
     [ [
       /^(.*):(.*)$/,  # pattern to parse source:destination
       [
         [ :source, lambda { |x| x } ],
         [ :destination, lambda { |x| x } ]
       ]
     ] ]
   end

   newparam( :source ) do
     isnamevar
   end

   newparam( :destination ) do
     isnamevar
   end

 end


-- 
You received this message because you are subscribed to the Google Groups 
Puppet Developers group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-dev/59f4c397-fa7b-4c15-9bde-509fba2f8e4e%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.