Re: [Puppet Users] Re: Long puppet catalog run times on certain nodes after using pson is called

2013-03-03 Thread Brice Figureau
On 01/03/13 22:24, r.yeo wrote:
 Passenger settings are currently -
 
 PassengerHighPerformance on
 PassengerUseGlobalQueue on
 
 PassengerMaxPoolSize 3

This parameter means you're allowing only 3 puppet clients to talk to
your master at any time.

I can understand why you've set it this low: a puppet master is
essentially a CPU-bound process, so your master host might not be able
to compile more than 3 catalogs at a time.

Unfortunately, in a scenario where you have 4 clients checking in at
approximately the same time (A, B, C, D), D will wait in the passenger
queue, and when A's catalog is delivered and A starts applying it, all
file resources pointing back to your master will be delayed until D's
catalog is ready.

To solve this kind of issue, the best way is to dedicate part of your
passenger pool to the puppet file service (using a vhost, for instance).
File resources are theoretically much quicker to serve (especially
metadata) than compilation requests. Separating the two would allow for
smarter queueing at the master level.
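One way to sketch that separation (the ports, addresses, and exact REST
paths here are assumptions, and directive details depend on your
Apache/Passenger versions) is to route file requests to a worker pool
separate from the catalog-compiling pool at the proxy level:

```apache
# Illustrative sketch only -- route cheap file_metadata/file_content
# requests to their own backend pool so a slow catalog compile can
# never starve them. 2.7-style REST paths look like
# /{environment}/file_metadata/... and /{environment}/file_content/...
ProxyPassMatch ^(/[^/]+/file_(metadata|content).*)$ balancer://files$1
ProxyPass / balancer://compile/

<Proxy balancer://files>
    BalancerMember http://127.0.0.1:18141
</Proxy>
<Proxy balancer://compile>
    BalancerMember http://127.0.0.1:18140
</Proxy>
```

Each backend would be a small master instance; only the compile pool
then needs to be sized to your CPU budget.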

[snip]
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/

-- 
You received this message because you are subscribed to the Google Groups 
Puppet Users group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To post to this group, send email to puppet-users@googlegroups.com.
Visit this group at http://groups.google.com/group/puppet-users?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [Puppet Users] Fileserver in standalone mode.

2012-11-23 Thread Brice Figureau
On Thu, 2012-05-24 at 09:37 -0700, btimby wrote:
 I am using puppet in standalone mode (puppet apply) to test manifests
 that I also use in a client/server configuration.
 
 I have everything working as far as files included in modules. I can
 reference file source as puppet:///modules/modulename/path/to/file.
 
 However, some files are not part of a module, so for the client/server
 portion, I just set up a share called files. However, references to
 these files puppet://files/path/to/files don't work in standalone
 mode.

Because puppet tries to resolve 'files' as the hostname of the server
from which to fetch those files.
Have you tried:
puppet:///files/path/to/files
(notice the ///)

 I understand that standalone mode (puppet apply) command can find the
 module files because you tell it the path to look in (--modulepath
 argument). Why is there no argument for adding file shares
 (--fileserver=files:/path/to/files)? Is there another way to achieve
 this?
 
 My workaround for now is to simply move the files to a module named
 files and reference them as puppet:///modules/files/path/to/file,
 but it seems like there might be a better solution.

You could also set up a puppet master dedicated to serving files to
your servers, and use URLs of the form:
puppet://fileserver.domain.com/files/path/to/file

It's still masterless for compilation purposes, but uses a master for
file serving.
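In a manifest, that looks like an ordinary file resource whose source
URL names the file-serving master explicitly (the hostname and paths
here are illustrative):

```puppet
# Compiled locally with `puppet apply`, but the file content is fetched
# from a dedicated file-serving master.
file { '/etc/app/app.conf':
  ensure => file,
  source => 'puppet://fileserver.domain.com/files/app/app.conf',
}
```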
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Puppet Type Munge help...

2012-11-12 Thread Brice Figureau
Hi,

I think you should direct those e-mails to the puppet-dev mailing list;
you'll certainly get more answers there.

Also, I didn't closely read your previous posts about the NetApp
provider you're writing, but let me just say that's awesome!

On Mon, 2012-11-12 at 05:16 -0800, Gavin Williams wrote:
 Afternoon all... 
 
 I'm trying to use Munge in a custom Puppet type to set the param value
 to that of another param if the current value is null...
 
 Code I've got is:
 
 Puppet::Type.newtype(:netapp_export) do
   @doc = "Manage Netapp NFS Export creation, modification and deletion."
 
   apply_to_device
 
   ensurable do
     desc "Netapp NFS Export resource state. Valid values are: present, absent."
 
     defaultto(:present)
 
     newvalue(:present) do
       provider.create
     end
 
     newvalue(:absent) do
       provider.destroy
     end
   end
 
   newparam(:name) do
     desc "The export name."
     isnamevar
   end
 
   newparam(:persistent) do
     desc "Persistent export?"
     newvalues(:true, :false)
     defaultto(:true)
   end
 
   newparam(:path) do
     desc "The filer path to export."
     Puppet.debug("Validating path param.")
     munge do |value|
       if value.nil?
         Puppet.debug("path param is nil. Setting to #{@resource[:name]}")
         resource[:name]
       else
         Puppet.debug("path param is not nil.")
       end
     end
     #Puppet.debug("path value is: #{@resource[:path]}.")
   end
 
 end
 
 
 However I'm not having any success with the above. 

Well, what kind of error or output do you get?

 Any ideas???

Isn't it better to use a defaultto, like this:

defaultto { @resource[:name] }

(That's what I did in the network device interface type, and it was
working)
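For what it's worth, one plausible culprit in the munge block quoted
above: its else branch ends with the Puppet.debug call, and a Ruby block
returns its last expression, so a non-nil path would be munged to nil.
A plain-Ruby sketch of that pitfall (no Puppet API; names illustrative):

```ruby
# Mimics the shape of the munge block above, outside Puppet.
# The buggy version returns the last expression of each branch -- and a
# logging call like Puppet.debug returns nil.
def munge_path_buggy(value, name)
  if value.nil?
    name   # nil path: fall back to the resource name
  else
    nil    # stand-in for Puppet.debug(...), which returns nil
  end
end

# Fixed version: always return the intended value from the block.
def munge_path_fixed(value, name)
  value.nil? ? name : value
end
```

With a defaultto, as suggested above, the fallback happens before munge
even runs, which sidesteps the problem entirely.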
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] how to intercept a catalog and perform a diff

2012-10-29 Thread Brice Figureau
On Mon, 2012-10-29 at 08:25 -0700, Kevin G. wrote:
 I'm re-reading the puppet docs
 http://docs.puppetlabs.com/learning/manifests.html and just noticed
 this footnote
 
  If you drastically refactor your manifest code and want to make sure
 it still generates the same configurations, 
 you can just intercept the catalogs and use a special diff tool on
 them;
 
 
 but the footnote doesn't say how to do that or point to any
 documentation. I've been looking for a way to do exactly that for some
 time now and haven't succeeded in finding it, can anyone point me to
 documentation or give me a list of simple steps to intercept a
 catalog or what this special diff tool would be?

You can check this awesome tool by RI:
https://github.com/ripienaar/puppet-catalog-diff

It's now available as a Forge module, which makes it even easier to use.

It produces a report that lists the differences between catalogs
(old/new resources, and differences between changed resources).

It is especially useful when upgrading a puppet master to a new
version, to spot differences in behavior.
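For intuition about what such a diff does, here is a hand-rolled sketch
(not the module's actual code; the catalog layout is simplified to a
hash with a 'resources' array): each catalog is reduced to a map keyed
by "Type[title]", and the two maps are compared.

```ruby
# Toy catalog diff (illustrative only; puppet-catalog-diff is far more
# complete). Reports resources that were removed, added, or whose
# parameters changed between an old and a new catalog.
def catalog_diff(old_cat, new_cat)
  key = ->(r) { "#{r['type']}[#{r['title']}]" }
  old_res = Hash[old_cat['resources'].map { |r| [key.call(r), r] }]
  new_res = Hash[new_cat['resources'].map { |r| [key.call(r), r] }]
  {
    'removed' => old_res.keys - new_res.keys,
    'added'   => new_res.keys - old_res.keys,
    'changed' => (old_res.keys & new_res.keys).select do |k|
      old_res[k]['parameters'] != new_res[k]['parameters']
    end,
  }
end
```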
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] File optimizations

2012-10-22 Thread Brice Figureau
On 23/10/12 01:39, Nikola Petrov wrote:
 On Mon, Oct 22, 2012 at 12:09:45PM -0700, Bostjan Skufca wrote:
 Hi there,

 I'm running into slow catalog runs because of many files that are managed. 
 I was thinking about some optimizations of this functionality.

 1: On puppetmaster:
 For files with source = 'puppet:///modules...' puppetmaster should 
 already calculate md5 and send it with the catalog.

 2: On managed node:
 As md5s for files are already there once catalog is received, there is no 
 need for x https calls (x is the number of files managed with source= 
 parameter)

 3. Puppetmaster md5 cache
 This would of course put some strain on puppetmaster, which would then 
 benefit from some sort of file md5 cache:
 - when md5 is calculated, put it into the cache, keyed by filename. Also record 
 the file mtime and the time of cache insertion.
 - on each catalog request, for each file in the catalog check if mtime has 
 changed, and if so, recalculate md5 hash, else just retrieve md5 hash from 
 cache
 - some sort of stale cache entries removal, based on cache insert time, 
 maybe at the end of each puppet catalog compilation, maybe controlled with 
 probability 1:100 or something

 Do you have any comments about these optimizations? They will be greatly 
 appreciated... really :)

 b.

 Hi,
 
 When using puppet I found that it is a far better idea to serve large
 files with something else, like sftp or ssh. My conclusion came from
 the fact that we were trying to import a big dump (2GB seems ok to me,
 but who knows) and puppet just died, because it does not stream the
 file: it is loaded *fully* into memory.

This assertion has not been true since at least 2.6.0.
Also, for big files you can activate http compression on the client,
this might help (or not, YMMV).
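If I recall correctly, the knob is the agent-side http_compression
setting; check your version's configuration reference before relying on
it:

```ini
# puppet.conf on the agent -- ask the master for gzip-compressed HTTP
# responses; can help with large file transfers at some CPU cost.
[agent]
    http_compression = true
```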
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] File optimizations

2012-10-22 Thread Brice Figureau
Hi,

For development questions, feel free to post in puppet-dev :)

You're not the first to be irritated by the time those md5 computations
take. That's something I've wanted to optimize for a long time.
It's simply quite difficult.

On 22/10/12 21:09, Bostjan Skufca wrote:
 Hi there,
 
 I'm running into slow catalog runs because of many files that are
 managed. I was thinking about some optimizations of this functionality.
 
 1: On puppetmaster:
 For files with source = 'puppet:///modules...' puppetmaster should
 already calculate md5 and send it with the catalog.

That's what the static compiler does, if I'm not mistaken. The static
compiler is part of puppet since 2.7.

 2: On managed node:
 As md5s for files are already there once catalog is received, there is
 no need for x https calls (x is the number of files managed with
 source= parameter)
 
 3. Puppetmaster md5 cache
 This would of course put some strain on puppetmaster, which would then
 benefit from some sort of file md5 cache:
 - when md5 is calculated, put it into the cache, keyed by filename.
 Also record the file mtime and the time of cache insertion.
 - on each catalog request, for each file in the catalog check if mtime
 has changed, and if so, recalculate md5 hash, else just retrieve md5
 hash from cache
 - some sort of stale cache entries removal, based on cache insert time,
 maybe at the end of each puppet catalog compilation, maybe controlled
 with probability 1:100 or something

Actually, checking the mtime/size prior to doing any md5 computation
could be a big win.
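A minimal sketch of that idea (illustrative only, not Puppet's actual
code): cache digests keyed by path, and invalidate an entry when the
file's mtime or size changes, so unchanged files cost a stat() instead
of a full read.

```ruby
require 'digest/md5'

# Toy mtime/size-keyed digest cache. A cache hit avoids re-reading and
# re-hashing the file; any change to mtime or size forces a recompute.
class Md5Cache
  Entry = Struct.new(:mtime, :size, :digest)

  def initialize
    @cache = {}
  end

  def digest(path)
    stat = File.stat(path)
    entry = @cache[path]
    if entry && entry.mtime == stat.mtime && entry.size == stat.size
      entry.digest   # cache hit: no file read
    else
      d = Digest::MD5.file(path).hexdigest
      @cache[path] = Entry.new(stat.mtime, stat.size, d)
      d
    end
  end
end
```

The stale-entry sweep from the bullet list above would be a separate
pass over @cache, dropping entries older than some insertion-time cutoff.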

But that's not all: in fact there are 3 md5 computations per file
taking place during a puppet run:
* one by the master when computing file metadata
* one by the agent on the existing file (this helps to know whether the
file changed)
* and finally one after writing the change to the file, to make sure we
wrote it correctly.

A potential solution would be to implement a different checksum type
(maybe less robust than md5, but faster).

 Do you have any comments about these optimizations? They will be greatly
 appreciated... really :)

Well, I believe we're (at least myself) very aware of those issues. The
reason it never got fixed (except by the static compiler) is that it's
complex stuff. Last time I tried to fiddle with the checksumming, I
never quite got anywhere :)

As I said in the preamble, feel free to chime in on puppet-dev to talk
about this, and check the various redmine tickets regarding those issues.
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/





Re: [Puppet Users] What is the intention of thin_storeconfigs?

2012-07-13 Thread Brice Figureau
On 12/07/12 10:29, Bernd Adamowicz wrote:
 I started doing some experiments with the configuration option
 'thin_storeconfigs=true' by adding this option to one of my Puppet
 masters. However, I could not determine any change in behavior. 

As others have already explained, with thin_storeconfigs only exported
resources, facts and nodes are persisted to the DB. With regular (thick)
storeconfigs, every resource is persisted to the database.

 I expected to have the resources collected faster, but Puppet still
 takes some 15min to do the job. 

The thing is that if you ran with regular storeconfigs before
activating the thin_storeconfigs option, your database is already
populated with all the resource definitions and parameters. So the
first time you run with thin_storeconfigs you end up collecting as if
thick storeconfigs were active; then, after the first catalog run (for
a given node), puppet should remove all the un-needed resources (ie the
non-exported ones) from the database.
If that doesn't happen, I would suggest cleaning up the database for
your nodes so that only exported resources are persisted and collected.

 So maybe I misunderstood something.
 Should this option instead be placed in the client's configuration to
 make them export only the @@-resources?

No, it's a master option.
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/





Re: [Puppet Users] How to use thin_storeconfigs

2012-07-06 Thread Brice Figureau
On Fri, 2012-07-06 at 09:43 +0200, Bernd Adamowicz wrote:
 Which is the right way to use thin_storeconfigs? Currently I'm about to try 
 this:
 
 storeconfigs = true
 thin_storeconfigs = true
 
 Or should it be only a single line containing the 'thin_storeconfigs' 
 directive without 'storeconfigs=true'?

You just need:
thin_storeconfigs = true

It will automatically enable storeconfigs for you.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Reducing system load

2012-06-20 Thread Brice Figureau
On Tue, 2012-06-19 at 03:23 -0700, Duncan wrote:
 Hi folks, I'm scratching my head with a problem with system load.
 
 When Puppet checks in every hour, runs through all our checks, then
 exits having confirmed that everything is indeed as expected, the vast
 majority of the time no changes are made.  But we still load our
 systems with this work every hour just to make sure.  Our current
 configuration isn't perhaps the most streamlined, taking 5 minutes for
 a run.
 
 The nature of our system, however, is highly virtualised with hundreds
 of servers running on a handful of physical hosts.  It got me thinking
 about how to reduce the system load of Puppet runs as much as
 possible.  Especially when there may be a move to outsource to
 virtualisation hosts who charge per CPU usage (but that's a business
 decision, not mine).
 
 Is there a prescribed method for reducing Puppet runs to only be done
 when necessary?  Running an md5sum comparison on a file every hour
 isn't much CPU work, but can it be configured so that Puppet runs are
 triggered by file changes?  I know inotify can help me here, but I was
 wondering if there's anything already built-in?

It depends on what you really want to achieve. Part of the CPU
consumption is to make sure the configuration on the node is correct.

I see a possibility where:
* you don't care if there is some configuration drift on the agent (ie
manual modifications for instance)
* you run the agent on demand when you need a change

This can be done with something like MCollective where you can decide to
remotely launch some puppet runs as you see fit. 

Now you need to have a way to map from manifest modification to a set of
hosts where you need a puppet run (which might not be that trivial).
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] puppet-load forbidden request to /catalog/*

2012-06-02 Thread Brice Figureau
On 01/06/12 12:41, Matthew Burgess wrote:
 On Thu, May 31, 2012 at 6:07 PM, Brice Figureau
 brice-pup...@daysofwonder.com wrote:
 
 I'll run a test over the week-end to see if I can reproduce the issue
 with 2.7.14. It's possible something changed in the puppet codebase and
 puppet-load doesn't properly encode the facts it sends to the master,
 so they don't get deserialized as they should.

Unfortunately, I wasn't able to reproduce the issue with your fact file.

 Just as a hunch, I wonder whether, given the problems I had getting
 compatible versions of various gems installed it may be a problem
 there (I'm on an air-gapped environment, so had to download, transfer
 and install the individual .gem files).  In case it's useful, here's
 the output of 'gem list':
 
 activemodel (3.0.12)
 activerecord (3.0.12)
 activesupport (3.0.12)
 addressable (2.2.8)
 arel (2.0.10)
 builder (3.0.0, 2.1.2)
 cookiejar (0.3.0)
 daemon_controller (0.2.5)
 em-http-request (1.0.2)
 em-socksify (0.2.0)
 eventmachine (1.0.0.beta.4)
 fastthread (1.0.7)
 http_parser.rb (0.5.3)
 i18n (0.6.0, 0.5.0)
 json (1.4.3)
 mime-types (1.16)
 multi_json (1.3.5)
 mysql (2.8.1)
 passenger (3.0.12)
 rack (1.1.0)
 rake (0.8.7)
 rest-client (1.6.1)
 sqlite3-ruby (1.2.4)
 tzinfo (0.3.33)

On my side, I noticed that I'm running:
addressable (2.2.6)
cookiejar (0.3.0)
em-http-request (0.2.15)
em-socksify (0.1.0)
eventmachine (1.0.0.beta.4)
facter (1.6.2)
http_parser.rb (0.5.3)
rake (0.8.7)

So I tried to upgrade to em-http-request 1.0.2 like you, and ran into
many authentication troubles (e.g. the SSL connection wasn't opened
with the specified cert and key).

So my suggestion would be to:
gem uninstall em-http-request
gem install -v 0.2.15 em-http-request

and see if that helps.
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] puppet-load forbidden request to /catalog/*

2012-05-31 Thread Brice Figureau
On Thu, 2012-05-31 at 10:38 +0100, Matthew Burgess wrote:
 On Wed, May 30, 2012 at 6:58 PM, Brice Figureau
 brice-pup...@daysofwonder.com wrote:
 
  Probably that the stacktrace you'll get with --trace will be enough for
  the moment. Also if you can cat the facts file (feel free to obfuscate
  the private data you might have in there), that might help.
 
 OK. so another buglet here is that if you have '--debug --verbose
 --trace' on the puppetmaster command line, you get no stacktrace.  As
 soon as I take '--verbose' out, I get the attached stacktrace.
 puppet-load's logging has a similar issue in that '--verbose' seems to
 override '--debug' so I get no debug output if both options are
 specified.

Yes, the order is important, and --verbose will override --debug.

 Anyway, attached are the debug log from the puppet master and the
 facts file suitably obfuscated.
 

Can you run your master on the console (add --no-daemonize to the
command-line)?

Because due to how Puppet reports errors during compilation, the full
stacktrace is only printed on the console, and I'm missing the important
bits :)

Thanks,
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] puppet-load forbidden request to /catalog/*

2012-05-31 Thread Brice Figureau
On Thu, 2012-05-31 at 15:57 +0100, Matthew Burgess wrote:
 On Thu, May 31, 2012 at 2:09 PM, Brice Figureau
 brice-pup...@daysofwonder.com wrote:
 
  Can you run your master on the console (add --no-daemonize to the
  command-line)?
 
  Because due to how Puppet reports errors during compilation, the full
  stacktrace is only printed on the console, and I'm missing the important
  bits :)
 
 Thanks for your help and patience with this.  Here's the log as
 captured on the command line.
 
 It's a bit bigger than the last one, and looks as if it's got a bit
 more at the very start of the stack trace.   Here's hoping it's what
 you were looking for!

That's perfect!

Apparently the clientversion fact is not correct (it contains an
array).
One thing you might want to do is manually edit the
master.domain.com.yaml file to remove the clientversion fact
altogether, to see whether that fixes it.

I'm not sure what makes this fact an array, but I'll have a deeper look.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] puppet-load forbidden request to /catalog/*

2012-05-31 Thread Brice Figureau
On 31/05/12 18:34, Matthew Burgess wrote:
 On Thu, May 31, 2012 at 4:31 PM, Brice Figureau
 brice-pup...@daysofwonder.com wrote:
 
 That's perfect!

 Apparently the client_version fact is not correct (it contains an
 array).
 One thing you might want to do, is to modify manually the
 master.domain.com.yaml file to remove the clientversion fact
 altogether to see if that fixes it or not.

 I'm not sure what makes this fact an array, but I'll have a deeper look.
 
 So, hopefully I'm not barking up completely the wrong tree here but:
 
 I removed the clientversion fact, but that triggered other issues as
 puppet then decided it had to go into compatibility mode, and none of
 my manifests have been written to handle that.  I then put the
 clientversion fact back and just changed parser/resource.rb to return
 false rather than it try and check the array-based fact it was
 getting.
 
 That's now a little better, but it causes one of my manifests to fail,
 which is doing some string manipulation on the ipaddress fact.  That
 in turn appears to have been turned into an array, which funnily
 enough, my .erb wasn't expecting :-)
 
 So, it appears to my completely untrained eye as if all/most facts are
 being stored or passed as arrays when the concurrency parameter is
 increased.

Your findings are very interesting, and we're now definitely closer to
the real issue.

I'll run a test over the week-end to see if I can reproduce the issue
with 2.7.14. It's possible something changed in the puppet codebase and
puppet-load doesn't properly encode the facts it sends to the master,
so they don't get deserialized as they should.

I'll keep you posted,
Thanks,
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] puppet-load forbidden request to /catalog/*

2012-05-30 Thread Brice Figureau
Hi Matthew,

As the original author of puppet-load (and the aforementioned blog
post), I'm sorry to reply so late to this thread.

On Wed, 2012-05-30 at 16:32 +0100, Matthew Burgess wrote:
  Apologies for taking so long to get back about this, more pressing
  matters took precedence.  So, back on this, I think I must be doing
  something really daft then, as I've made that change to my auth.conf
  file and still get the same forbidden errors.
 
 Indeed, I was doing something really daft.  I'd added the changes to
 the bottom of auth.conf.  2 things were wrong in doing that:
 
 a) Adding anything below the 'path /' stanza isn't going to be picked
 up, I don't think (I noticed this when trying to get 'puppet kick' to
 work and got
 similar 403 errors when trying to access /run)
 b) There's already a 'path ~ ^/catalog/([^/]+)$' stanza in the default
 auth.conf file, so the settings there were being hit before my new
 stanza at the bottom
 of the file.  By adding 'allow master.domain.com' and 'auth any' to
 the default stanza the 403s have disappeared.
 
 Now though, is my next problem.  puppet-load works fine with
 concurrency set to 1.  As soon as I increase that number though, I get
 the following error:
 
 undefined method `' for [2.7.14, 2.7.14]:Array on node master.domain.com

Where do you get this error?
Is it from puppet-load or your current master stack?
Is there any stack trace printed?

Can you add --debug and --trace to the puppet-load command line?
This should print more information to the console and we'll certainly be
able to find what's wrong.

 Is this an issue with my Ruby version (1.8.7.299-7), or is it a bug in the 
 code?

I'd tend to say a bug in the code :)
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] puppet-load forbidden request to /catalog/*

2012-05-30 Thread Brice Figureau
On 30/05/12 19:10, Matthew Burgess wrote:
 On Wed, May 30, 2012 at 5:11 PM, Brice Figureau
 brice-pup...@daysofwonder.com wrote:
 Hi Matthew,

 As the original author of puppet-load (and the aforementioned blog
 post), I'm sorry to answer so late to this thread.

 On Wed, 2012-05-30 at 16:32 +0100, Matthew Burgess wrote:
 Apologies for taking so long to get back about this, more pressing
 matters took precedence.  So, back on this, I think I must be doing
 something really daft then, as I've made that change to my auth.conf
 file and still get the same forbidden errors.

 Indeed, I was doing something really daft.  I'd added the changes to
 the bottom of auth.conf.  2 things were wrong in doing that:

 a) Adding anything below the 'path /' stanza isn't going to be picked
 up, I don't think (I noticed this when trying to get 'puppet kick' to
 work and got
 similar 403 errors when trying to access /run)
 b) There's already a 'path ~ ^/catalog/([^/]+)$' stanza in the default
 auth.conf file, so the settings there were being hit before my new
 stanza at the bottom
 of the file.  By adding 'allow master.domain.com' and 'auth any' to
 the default stanza the 403s have disappeared.

 Now though, is my next problem.  puppet-load works fine with
 concurrency set to 1.  As soon as I increase that number though, I get
 the following error:

 undefined method `' for [2.7.14, 2.7.14]:Array on node 
 master.domain.com

 Where do you get this error?
 Is it from puppet-load or your current master stack?
 Is there any stack trace printed?
 
 The error appears in /var/log/messages and is being spit out by the
 puppet master.
 
 Can you add --debug and --trace to the puppet-load command line?
 
 I've added --debug, but there's no --trace option.

My bad, there is no --trace in puppet-load. But there is one for puppet
master. Running your master with --debug and --trace will definitely
print a stack trace.

From there, I believe I should be able to understand what really happens.

My gut feeling right now is that the facts puppet-load sends to the
master are somehow not correct, but I could be wrong, of course.

 debug just shows the facts file being picked up, then both clients
 finishing with an HTTP code 400.  No stack trace is printed.
 
 I've also added --debug to the puppetmaster's config.ru file.  That's
 printing lots of stuff about access lists, expiring the nodes and then
 caching the node, then I get the undefined method error.  Again, no
 stack trace :-(
 
 So, I took mod_passenger out of the equation and ran 'puppetmasterd
 --no-daemonize --debug --verbose --logdest /tmp/puppet-error.log' and
 get the following:
 
 Wed May 30 18:02:22 +0100 2012 Puppet (info): Expiring the node cache
 of master.domain.com
 Wed May 30 18:02:22 +0100 2012 Puppet (info): Expiring the node cache
 of master.domain.com
 Wed May 30 18:02:23 +0100 2012 Puppet (info): Not using expired node
 for master.domain.com from cache; expired at Wed May 30 18:01:22 +0100
 2012
 Wed May 30 18:02:23 +0100 2012 Puppet (info): Not using expired node
 for master.domain.com from cache; expired at Wed May 30 18:01:22 +0100
 2012
 Wed May 30 18:02:23 +0100 2012 Puppet (debug): Executing
 '/etc/puppet/enc.pl master.domain.com'
 Wed May 30 18:02:23 +0100 2012 Puppet (debug): Executing
 '/etc/puppet/enc.pl master.domain.com'
 Wed May 30 18:02:24 +0100 2012 Puppet (debug): Using cached facts for
 master.domain.com
 Wed May 30 18:02:24 +0100 2012 Puppet (debug): Using cached facts for
 master.domain.com
 Wed May 30 18:02:24 +0100 2012 Puppet (info): Caching node for 
 master.domain.com
 Wed May 30 18:02:24 +0100 2012 Puppet (info): Caching node for 
 master.domain.com
 Wed May 30 18:02:24 +0100 2012 Puppet (err): undefined method `' for
 [2.7.14, 2.7.14]:Array on node master.domain.com
 Wed May 30 18:02:24 +0100 2012 Puppet (err): undefined method `' for
 [2.7.14, 2.7.14]:Array on node master.domain.com
 Wed May 30 18:02:24 +0100 2012 Puppet (err): undefined method `' for
 [2.7.14, 2.7.14]:Array on node master.domain.com
 Wed May 30 18:02:24 +0100 2012 Puppet (err): undefined method `' for
 [2.7.14, 2.7.14]:Array on node master.domain.com
 Wed May 30 18:02:32 +0100 2012 Puppet (notice): Caught INT; calling stop
 
 Now, interestingly, every other run of puppet-load is
 triggering this issue, so adding '--repeat 2' to my puppet-load
 command line will trigger the issue consistently for me (as in 2
 requests will succeed, 2 will fail).  If you want full debug logs of
 that type of run, I'll be more than happy to provide them.

The stack trace you'll get with --trace will probably be enough for
the moment. Also, if you can cat the facts file (feel free to obfuscate
any private data you might have in there), that might help.

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/

-- 
You received this message because you are subscribed to the Google Groups 
Puppet Users group.
To post to this group, send email to puppet-users@googlegroups.com

Re: [Puppet Users] Re: Announce: PuppetDB 0.9.0 (first release) is available

2012-05-22 Thread Brice Figureau
On Mon, 2012-05-21 at 15:39 -0600, Deepak Giridharagopal wrote:
 On Mon, May 21, 2012 at 2:04 PM, Marc Zampetti
 marc.zampe...@gmail.com wrote:

Why wouldn't a DB-agnostic model be used?
 
 
 The short answer is performance. To effectively
 implement things we've
 got on our roadmap, we need things that (current)
 MySQL doesn't
 support: array types are critical for efficiently
 supporting things
 like parameter values, recursive query support is
 critical for fast
 graph traversal operations, things like INTERSECT are
 handy for query
 generation, and we rely on fast joins (MySQL's nested
 loop joins don't
 always cut it). It's much easier for us to support
 databases with
 these features than those that don't. For fairly
 divergent database
 targets, it becomes really hard to get the performance
 we want while
 simultaneously keeping our codebase manageable.
 
 
 I understand the need to not support everything. Having
 designed a number of systems that require some of the features
 you say you need, I can say with confidence that most of those
 issues can be handled without having an RDBMS that has all
 those advanced features. So I will respectfully disagree that
 you need features you listed. Yes, you may not be able to use
 something like ActiveRecord or Hibernate, and have to
 hand-code your SQL more often, but there are a number of
 techniques that can be used to at least achieve similar
 performance characteristics. I think it is a bit dangerous to
 assume that your user base can easily and quickly switch out
 their RDBMS systems as easily as this announcement seems to
 suggest. I'm happy to be wrong if the overall community thinks
 that is true, but for something that is as core to one's
 infrastructure as Puppet, making such a big change seems
 concerning.
 
 
 We aren't using ActiveRecord or Hibernate, and we are using hand-coded
 SQL where necessary to wring maximum speed out of the underlying data
 store. I'm happy to go into much greater detail about why the features
 I listed are important, but I think that's better suited to puppet-dev
 than puppet-users. We certainly didn't make this decision cavalierly;
 it was made after around a month of benchmarking various solutions
 ranging from traditional databases like PostgreSQL to document stores
 like MongoDB to KV stores such as Riak to graph databases like Neo4J.
 For Puppet's particular type of workload, with Puppet's volume of
 data, with Puppet's required durability and safety requirements...I
 maintain this was the best choice.
 
 While I don't doubt that given a large enough amount of time and
 enough engineers we could get PuppetDB working fast enough on
 arbitrary backing stores (MySQL included), we have limited time and
 resources. From a pragmatic standpoint, we felt that supporting a
 database that was available on all platforms Puppet supports, that
 costs nothing, that has plenty of modules on the Puppet Forge to help
 set it up, that has a great reliability record, that meets our
 performance needs, and that in the worst case has free/cheap hosted
 offerings (such as Heroku) was a reasonable compromise.

I haven't had a look at the code itself, but is the PostgreSQL code
isolated in its own module?

If yes, then that'd definitely help if someone (not saying I'm
volunteering :) wants to port the code to MySQL.

On a side note, Deepak, it would be terrific if you would start a thread
on puppet-dev explaining how the PostgreSQL storage was designed to
achieve this speed :)
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




[Puppet Users] Re: [Puppet-dev] Taking github noise away from puppet-dev list

2012-04-12 Thread Brice Figureau
On Mon, 2012-04-09 at 14:09 -0700, Michael Stahnke wrote:
 Since our move to github for pull requests and patches, the usefulness
 of puppet-dev has declined significantly.  puppet-dev used to be a
 great list for development discussion of puppet and the ecosystem
 around it. With the information and pull request emails from github,
 unless everybody has finely-tuned their email clients, the puppet-dev
 list has turned into mostly noise.

IMHO, that's not the root cause of the problem. Back when we were
sending patches to the list, discussion could spark easily on a topic,
based on a comment on a patch.

Nowadays, I don't even read patches because they're either one click
away or difficult to read in the e-mail. Worse: most of the time I open
the "closed" pull request e-mail by mistake (thinking it is the open
one) and struggle to find the real open one to read the patch. (I
believe we don't need this "closed" e-mail; it just adds unnecessary
noise.) One reason the patches are difficult to read is that all
sub-patches are merged into one big chunk, so you lose the author's
intent.

More generally, the move to GitHub certainly increased the development
team's velocity (which is good) and might also have increased community
contributions, but I think it also decreased community involvement on
the dev list.

Maybe nobody else feels this way, or it's just that there are many more
patches than there were before, but the dev list was a good way to stay
tuned to what happens in the Puppet code. I just feel it isn't anymore.
Maybe I'm a dinosaur and I need to follow more carefully what happens on
GitHub :)

Also, a lot of the discussion moved to the pull requests on GitHub, so
it disappeared from the dev list. To be part of the discussion now, you
need to follow the pull request discussion manually. If you weren't part
of the discussion from the start, this requires more involvement than
receiving all discussion in your mail client did.

 We have a goal to foster development discussion from the community.
 Because of that, I am proposing we move the github notifications to a
 new list, puppet-commits.  I realize this may have a consequence of
 reducing patch/commit discussion.  This should be compensated by:

Since the move to GitHub, commit discussion has happened on GitHub, so
that's a non-issue.

 1.  Still having a list where pull requests can be commented on
 2.  Ability to comment on pull requests directly on github
 3.  More forethought and discussion on the dev list prior to making a
 pull request/patch.

That'd be really great. And I noticed some attempts lately in this
direction, which is really good.

 4.  You can also watch the RSS feed for the puppet projects you have
 the most interest in.

You mean the commit/pull request feed on github, or the redmine feed?

 This decision isn't final, but I would like to get opinions on the
 idea.  I welcome feedback until Friday, April 13.

What would be utterly awesome would be a better pull request e-mail flow
on this commits (or on the dev) list:

* no "closed" e-mail
* more readable inlined patches (syntax coloring? broken into separate
e-mails per commit?)
* sending the internal discussion happening on GitHub back to the thread
on this list

Thanks,
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Puppetdoc is not playing nice

2012-01-31 Thread Brice Figureau
On 31/01/12 19:29, Dan White wrote:
 Puppet 2.6.12 on Red Hat 5.7
 
 Some background is necessary to set up the question:
 I have a class - toggledservices - where I have grouped service control.
 class toggledservices::disabled covers all the stuff I want turned off by 
 default (for hardening requirements)
 and individual services I want to turn on/configure get their own class like 
 class toggledservices::ntpd
 
 I have a total of 8 sub-classes to toggledservices.
 
 Now for the problem:
 puppetdoc rdoc output drops two of the sub-classes from the frame on the left 
 of the page it generates.
 All are listed on the module page and I can navigate to each sub-class from 
 that page, but the list of classes on the left is incomplete.
 
 Any clues for this clueless one ?

The best thing would be to reproduce the problem with a minimal
manifest, and then file a bug report that is as detailed as possible.
That way I'd be able to reproduce it and potentially fix the problem.

Alternatively you might try Puppet 2.7.10, maybe the bug was fixed.

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Compiled catalog arount 70s

2012-01-27 Thread Brice Figureau
On Fri, 2012-01-27 at 11:02 +0100, Antidot SAS wrote:
 Hi everyone,
 
 
 
 
 I am using puppet 2.7.9 and ruby 1.8.7 on a debian box. I don't have a
 lot of modules right now: just one module that creates a user + dotfiles
 + ssh key, and the compiled catalog takes around 70s. Do I have to
 worry? Is that big?

70s is very high. My biggest node has around 2000 resources, and it
takes around 12s to compile (including storeconfigs) on a 4-year-old
Dell 2950 (one 4-core 1.18GHz processor, 4GiB RAM).

Compilation time depends on two things:

* the complexity (in number of classes or resources) of your manifests
* the number of nodes concurrently asking for a catalog (the more nodes
asking for catalogs, the higher the load on your server, and processes
start to fight for CPU/RAM)

Also, if you have storeconfigs enabled, this can take quite some time,
in which case I suggest running with thin_storeconfigs instead of the
full version if you can.

You might also want to check that your server is not swapping.
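
For reference, switching to thin storeconfigs is a small change in the
master's puppet.conf. A minimal sketch (setting names as of Puppet
2.6/2.7; check your version's configuration reference):

```ini
# /etc/puppet/puppet.conf on the master
[master]
  storeconfigs      = true
  # Store only exported resources, facts and tags instead of the whole
  # catalog, which considerably reduces the database work per compile:
  thin_storeconfigs = true
```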

 Does the compiled time scale with the module number? Do I have to make
 sure I don't cross a certain limit?

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Puppet security issue?

2012-01-26 Thread Brice Figureau
On 27/01/12 02:14, Ryan Bowlby wrote:
 Hi All,
 
 I have two puppet servers using Apache with mod_proxy as the
 frontend, similar to what's described in Pro Puppet.
 Unfortunately, Apache mod_proxy is passing the puppetca requests using
 the loopback IP instead of the original source IP.

You don't mention what stack your masters are running.
But if they're running on Apache and Passenger, may I suggest using
mod_rpaf?

 This is a bit of a security concern when configuring auth.conf! An
 example stanza in auth.conf:
 
 # allow certificate management on provisioning server without cert
 path ~ /cert*
 auth no
 allow localhost

If you instead make this a certname, then it's secure again.
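
To illustrate, a certname-based rule might look like this (the hostname
is hypothetical; this is a sketch against a 2.7-era auth.conf, not a
drop-in config):

```
# /etc/puppet/auth.conf
# allow certificate management only for the provisioning host,
# identified by its certname rather than its source IP
path ~ ^/certificate
auth yes
allow provisioning.example.com
```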

 With that near the bottom of auth.conf ALL hosts can now perform any
 API calls matching that path. This is due to puppet using the
 127.0.0.1 passed by Apache.
 
 I need one of the following:
 
 1. A way to do IP passthrough in apache such that the correct
 originating IP is used.

Configure your mod_proxy to pass the IP in X-Forwarded-For.

 2. Puppet to make use of the X-Forwarded-For header if it exists and
 to fallback in instances where it doesn't.

And mod_rpaf is what you need, running in your master apache.
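
A minimal mod_rpaf setup on the master's Apache might look like the
following (module path and proxy IPs are assumptions for illustration):

```apache
# Load mod_rpaf in the puppet master's Apache and trust only the LB
LoadModule rpaf_module modules/mod_rpaf.so
RPAFenable    On
# Only rewrite the client IP for requests coming from these proxies:
RPAFproxy_ips 127.0.0.1 10.0.0.10
RPAFheader    X-Forwarded-For
```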

 Likely the latter is the best method. Please feel free to correct me
 if I am missing something. I have verified that with the above
 auth.conf stanza ALL hosts can perform all /cert* related API calls.
 Additionally here is a log line:
 
 127.0.0.1 - - [27/Jan/2012:00:32:00 +] GET /production/
 certificate_statuses/no_key HTTP/1.1 200 343 - curl/7.15.5 (x86_64-
 redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/
 0.6.5
 
 That's a request from another server. Here are the Apache configs:
 
 http://pastebin.com/rDKPSjjy
 
 
 Thanks everyone!
 Ryan Bowlby
 


-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: Puppet proxies

2012-01-13 Thread Brice Figureau
Hi Nan,

On Thu, 2012-01-12 at 00:41 -0600, Nan Liu wrote:
 On Wed, Jan 11, 2012 at 8:57 AM, Jeff Sussna j...@ingineering.it wrote:
  I'm not looking to create Puppet environments in AWS. Rather the
  opposite: use Puppet to configure AWS services, including things like
  RDS, ElastiCache, ELB, that can't have puppet agents running on them.
  I had hoped to use CloudFormation itself for that purpose. Their
  support for change management is still incomplete. Without change
  management CF is IMHO worse than useless.
 
 Yeah, at the moment that's one of the killer limitations of Cloudformation.
 
  The only thing I'm unsure of is how/where to run my proxy. Seems like
  the simplest way to do it is to run a single puppet agent somewhere
  (maybe even on the master machine). Then create the necessary AWS-
  control modules and include them in the manifest for my proxy node.
 
 How do you plan to configure EC2 services? through existing EC2
 command line tools, fog, or some other third party tool?
 
  I wonder if it might be useful to add the concept of a pseudo-node to
  puppet for cases like this.
 
 In Puppet 2.7 the puppet device application was introduced to support
 network devices that can't run puppet agent. A proxy puppet agent
 server will interact with the puppet master and enforce the catalog by
 configuring the device through the configured transport. Puppet device
 is extensible, and this is actually suitable beyond network devices for
 things like storage, and possibly your use case for configuring EC2
 resources. It would be really awesome to extend support for EC2.
 
 Here's an example how we could model EC2 resources and use puppet
 device to manage it.
 
 /etc/puppet/ec2.conf
 [amazon]
 fog = /path/to/fog.credentials

Or directly fog credentials...

 on the master specify the amazon node and puppet device will manage
 those EC2 resources.
 node amazon {
   ec2_instance {...}
   ec2_rds {...}
   ec2_s3 {...}
 }
 
 $ puppet device --deviceconfig /etc/puppet/ec2.conf
 
 Things like s3, rds are simple to map, because we can specify the name
 of the instance on creation:
 
 ec2_rds { 'demodb':
   ensure   => present,
   class    => 'db.m1.small',
   engine   => 'mysql',
   zone     => 'us/east',
   username => '...',
 ...
 }
 
 But for instances, vpc this is a bit more problematic because the
 instance_id is determined after they are launched. Also I think for
 production use cases, people probably don't care about the
 instance_id, but rather how many systems of this role I need with this
 ami-id perhaps. This is one of the areas I need some feedback and
 suggestion.
 
 For instances perhaps:
 
 ec2_instances { instance_name (???):
   instance_id => 'can't really manage this.',
 ...
 }

That's where you would use EC2 tags. We could have an EC2
'puppet_resource' tag that maps back to the ec2_instance namevar.
The ec2_instance provider would then be able to look up the instances in
EC2 and manage them appropriately.
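
To make the tag idea concrete, a hypothetical resource keyed on such a
tag could look like this (the type, its parameters and the
puppet_resource tag are all assumptions, not an existing API):

```puppet
# Hypothetical: the namevar maps to an EC2 tag, not an instance_id
ec2_instance { 'web-frontend':
  ensure        => running,
  ami           => 'ami-12345678',
  instance_type => 'm1.small',
  # the provider would look up (and create/terminate) instances
  # carrying this 'puppet_resource' tag:
  tags          => { 'puppet_resource' => 'web-frontend' },
}
```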

 ec2_vpc { ??? what uniquely identifies this ???:
 ...
 }

I don't know if you can tag a VPC, but if you can, I suggest using the
same model.

 I'm definitely looking to see if there's interest and whether the model
 above fits people's usage of EC2.

Based on discussions I had with a lot of people at the last PuppetConf,
I'm sure this would be a terrific feature.

Imagine all you can do, including setting up notification events
between EC2 resources and so on.

Granted, this might not be dynamic (in the elastic sense), but I
believe it can fit a need for people who don't want to introduce a
different tool to manage their EC2 cloud.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Mildly disconcerting problem with a 2.6.7 client and 2.7.9 master

2012-01-04 Thread Brice Figureau
Hi,

On Wed, 2012-01-04 at 11:51 +, John Hawkes-Reed wrote:
 Hello.
 
 In testing a potential upgrade from 2.6.7 to 2.7.9 I ran across the following 
 'interesting' behaviour:
 
 The relevant section of manifest (postfixconf::generic) looks like this:
 
 file { "/etc/aliases":
   mode   => "444",
   source => [ "puppet:///modules/postfixconf/$hostname-aliases",
               "puppet:///modules/postfixconf/aliases" ],

Change the first source string to:
"puppet:///modules/postfixconf/${hostname}-aliases"

   owner  => root,
   group  => root,
   ensure => file,
   before => Exec["newaliases"],
 }
 
 ... And the result of a 'puppet agent --test --debug --noop ... is this:
 
 notice: /Stage[main]/Postfixconf::Generic/File[/etc/aliases]/ensure: 
 current_value absent, should be directory (noop)
 info: /Stage[main]/Postfixconf::Generic/File[/etc/aliases]: Scheduling 
 refresh of Exec[newaliases]
 notice: /Stage[main]/Postfixconf::Generic/Exec[newaliases]: Would have 
 triggered 'refresh' from 1 events

The problem is that in 2.7.x, '-' is a valid character in variable
names. So your "postfixconf/$hostname-aliases" string was interpolated
using the (undefined) variable $hostname-aliases, yielding
"postfixconf/", which is a directory.

This is tracked in bugs:
https://projects.puppetlabs.com/issues/10146
https://projects.puppetlabs.com/issues/5268
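
A minimal illustration of the two interpolations (paths are
hypothetical):

```puppet
# In 2.7.x, '-' is allowed in variable names, so this references the
# (undefined) variable $hostname-aliases and yields ".../postfixconf/":
$bad  = "puppet:///modules/postfixconf/$hostname-aliases"

# Braces make the variable boundary explicit and give the intended URL:
$good = "puppet:///modules/postfixconf/${hostname}-aliases"
```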


-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: Seperate CA's/Master behind load balancer

2011-12-20 Thread Brice Figureau
On Tue, 2011-12-20 at 07:14 -0800, ollies...@googlemail.com wrote:
 Thanks.
 
 I assume that the section in this:- 
 http://projects.puppetlabs.com/projects/puppet/wiki/Puppet_Scalability
 
 Stating that it doesn't work for 0.25 and 2.6 also applies to 2.7.9,
 the latest release?

Yes, I believe chained CAs are still not working in 2.7.x, if that's
what you meant.

 Sharing an area via NFS/iSCSI/rsync'ing or whatever is potentially
 viable. Does anyone know how this would be possible with different
 hostnames serving the certs and the traffic being directed via a
 load-balancer?

That's easy: dedicate two hosts to be CAs only, one being the hot
standby of the other. You can either bring it up manually when the first
one fails, or use something like drbd+pacemaker to do it automatically.
Then have all your other masters run in no-CA mode. Each can have a
different server CN, or they can share the same server certificate.
This is explained at length in the Pro Puppet [1] book if you need it.
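
As a sketch, the split might look like this in puppet.conf (hostnames
are made up; ca and ca_server are the relevant 2.6/2.7 settings):

```ini
# On each load-balanced master: do not act as a CA
[master]
  ca = false

# On every agent: send all certificate traffic to the dedicated CA pair
[agent]
  server    = puppet.example.com     # the load-balanced masters
  ca_server = puppetca.example.com   # the CA-only host (with standby)
```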

 Maybe it's just not possible right now and I am flogging a dead horse
 and should accept a SPOF for a CA but can easily scale out the
 puppetmasters fine.

The simplest architecture for load-balanced puppet is the single-CA
one, provided you can live with the SPOF. BTW, the SPOF only matters at
certificate-signing time: if your CA becomes unresponsive, it won't
prevent your existing nodes from getting a catalog.

I highly recommend getting a copy of the Pro Puppet book. It contains
an extensive chapter on load balancing the puppet master (both with the
SPOF and without it).

[1]: http://www.apress.com/9781430230571
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: Seperate CA's/Master behind load balancer

2011-12-20 Thread Brice Figureau
On Tue, 2011-12-20 at 08:02 -0800, ollies...@googlemail.com wrote:
  That's easy: dedicate two host to be CAs only. One is the hot standby of
  the first one. You can either manually bring it up when the first one
  fails, or use something like drbd+pacemaker to do it automatically.
  Then have all your other masters run in no ca mode. Each can have a
  different server CN, or they can share the same server certificate.
  This is explained in length in the Pro puppet [1] book if you need.
 
   Maybe it's just not possible right now and I am flogging a dead horse
   and should accept a SPOF for a CA but can easily scale out the
   puppetmasters fine.
 
  The simplest architecture for load balanced puppet is the single CA one,
  of course that means you can live with the SPOF. BTW, the SPOF is only
  at certificate signing. In the event your CA becomes unresponsive, it
  won't prevent your actual nodes to get a catalog.
 
  I highly recommend you to get a copy of the Pro Puppet book. It
  contains an extensive chapter on load balancing puppet master (both with
  the SPOF and without it).
 
 Thanks.
 
 Have got a copy of the book and that is what I was working from. As
 per the example in the book it's fine running the CAs in the localhost
 sort of mode, but when switching from localhost to other servers off
 the load-balancer server I get the cert errors:-
 
 err: /File[/var/lib/puppet/lib]: Failed to generate additional
 resources using 'eval_generate: certificate verify failed.  This is
 often because the time is out of sync on the server or client
 
 
 Do I have to clean out the puppetmaster setup on the load-balancer
 host ?
 
 On the CA servers I removed the ssldir and ran puppet master to
 generate new ssl data.
 
 Then with a new client I get the new cert generated but then the above
 error.

That's expected, because when the client connects to one of your
load-balanced servers it receives a certificate that was
signed/generated under the previous CA. You actually need your
load-balanced masters to get certificates from your current CA; those
certificates will then be used when talking to your nodes.

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: Seperate CA's/Master behind load balancer

2011-12-20 Thread Brice Figureau
On Tue, 2011-12-20 at 08:25 -0800, ollies...@googlemail.com wrote:
 
 On Dec 20, 4:16 pm, Brice Figureau brice-pup...@daysofwonder.com
 wrote:
  On Tue, 2011-12-20 at 08:02 -0800, ollies...@googlemail.com wrote:
That's easy: dedicate two host to be CAs only. One is the hot standby of
the first one. You can either manually bring it up when the first one
fails, or use something like drbd+pacemaker to do it automatically.
Then have all your other masters run in no ca mode. Each can have a
different server CN, or they can share the same server certificate.
This is explained in length in the Pro puppet [1] book if you need.
 
 Maybe it's just not possible right now and I am flogging a dead horse
 and should accept a SPOF for a CA but can easily scale out the
 puppetmasters fine.
 
The simplest architecture for load balanced puppet is the single CA one,
of course that means you can live with the SPOF. BTW, the SPOF is only
at certificate signing. In the event your CA becomes unresponsive, it
won't prevent your actual nodes to get a catalog.
 
I highly recommend you to get a copy of the Pro Puppet book. It
contains an extensive chapter on load balancing puppet master (both with
the SPOF and without it).
 
   Thanks.
 
   Have got a copy of the book and that is what I was working from. As
   per the
   example in the book it's fine running the CA's in the localhost sort
   of mode
   but when switching from locahost to other servers off the load-
   balancer server
   I get the cert errors:-
 
   err: /File[/var/lib/puppet/lib]: Failed to generate additional
   resources using 'eval_generate: certificate verify failed.  This is
   often because the time is out of sync on the server or client
 
   Do I have to clean out the puppetmaster setup on the load-balancer
   host ?
 
   On the CA servers I removed the ssldir and ran puppet master to
   generate a
   new ssl data.
 
   Then with a new client I get the new cert generated but then the above
   error.
 
  That's expected because when the client connects to one of your
  loadbalanced server it receives a certificate that was signed/generated
  under the previous CA. You actually need your loadbalanced masters to
  get a certificate from your current CA. This certificate will then be
  used when talking to your nodes.
 
 But the Apache LB settings are sending the certificate stuff to the
 separate CA server (I can see this in the logs), and the CA has the
 signed cert in puppet cert --list --all, but the client still
 complains.

The client is supposed to validate the server's certificate. It does
this by checking the certificate the server sent against its locally
cached CA certificate.

In your case, depending on how your LB is set up, the SSL endpoint
might well be the LB itself, in which case it is the one that sends the
server certificate. Make sure it sends a certificate that was generated
by the load-balanced CA.
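
When the LB terminates SSL, the usual pattern is for the front-end
vhost to present the CA-issued certificate itself and forward the
client-cert state in headers; a sketch (paths, names and the balancer
are assumptions):

```apache
# Front-end load-balancer vhost terminating SSL for Puppet
SSLEngine on
# These must have been issued by the load-balanced CA:
SSLCertificateFile    /var/lib/puppet/ssl/certs/puppet.example.com.pem
SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet.example.com.pem
SSLCACertificateFile  /var/lib/puppet/ssl/certs/ca.pem
SSLVerifyClient optional
SSLOptions +StdEnvVars

# Pass the client certificate information on to the back-end masters:
RequestHeader set X-Client-DN     "%{SSL_CLIENT_S_DN}s"
RequestHeader set X-Client-Verify "%{SSL_CLIENT_VERIFY}s"
ProxyPass / balancer://puppetmasters/
```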

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Bug #9388 prevents us from upgrading to 2.7.x

2011-12-20 Thread Brice Figureau
On Tue, 2011-12-20 at 08:32 -0800, Dennis Jacobfeuerborn wrote:
 Hi,
 can somebody who understands the puppet codebase take a look at bug
 #9388?
 I isolated the problem and it seems that the yaml cache files are not
 properly updated when mongrel is used.
 Cody Robertson added that the switch from GET to POST/PUT between 2.6.x
 and 2.7.x might be the problem, and that the POST/PUT code might not
 update the cache files while the old GET code does.
 
 Given that 2.7.x is considered stable I'm getting a bit nervous that
 such a bug is still present and we are basically stuck on an outdated
 version.

I believe this was fixed as part of the work in:
https://projects.puppetlabs.com/issues/9109

So it was released in 2.7.8rc1, and should definitely be fixed in 2.7.8.

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Learn from MY Mistake: false != false

2011-12-19 Thread Brice Figureau
On Mon, 2011-12-19 at 16:14 +, Dan White wrote:
 Sharing my stoopid mistake in the hopes of saving someone else the same grief:
 
 I had a boolean toggle that was not performing as expected.
 
 Long story short: I had put quotes around the word "false"
 
 class { 'foo': boolFlag => "false" } was coming up TRUE
 
 To fix it, lose the quotes:
 class { 'foo': boolFlag => false }

It all depends on what is done with boolFlag in your parameterized
class. More specifically, what doesn't work is:

if "false" {
}

because a non-empty string, when (internally) converted to a boolean,
is true.

This was discussed 2 days ago (look when the thread changes name):
http://groups.google.com/group/puppet-users/browse_thread/thread/3dfba6566d97880e/c473deea3f302410?#

And this is tracked in the following bug:
http://projects.puppetlabs.com/issues/5648
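
A quick way to see the difference, e.g. with puppet apply (a sketch;
behavior as of the 2.x truthiness rules):

```puppet
# Any non-empty string is truthy, including "false":
if "false" { notice("quoted false: branch taken") }   # runs
if false   { notice("bare false: branch taken")   }   # does not run
```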

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: Seperate CA's/Master behind load balancer

2011-12-19 Thread Brice Figureau
On 19/12/11 12:05, ollies...@googlemail.com wrote:
 Thanks,
 
 On our older infrastructure, if we wanted to scale out we just copied
 the ssldir and changed the filenames to the FQDN of the new master
 server. certdnsnames would be wildcarded.

The problem with this way of scaling is that you won't be able to
revoke a certificate, because more than one certificate can end up with
the same serial.

I believe it's better to dedicate one master to being a CA-only master,
and point your clients at this CA.
If you fear the SPOF, you can use a pair of CA servers sharing the
ssldir, through rsync or anything else that allows sharing files.

 Now using 2.7.9 how do we do certificates so we could scale out
 horizontally from behind this loadbalancer ?

There's no reason you can't keep doing what you were doing before
upgrading to 2.7.9. If it doesn't work anymore, that might be a bug you
should report.

 Tring this approach leads now to this:-
 
 # puppet cert --list --all
 warning: The `certdnsnames` setting is no longer functional,
 after CVE-2011-3872. We ignore the value completely.
 
 For your own certificate request you can set `dns_alt_names` in the
 configuration and it will apply locally.  There is no configuration
 option to
 set DNS alt names, or any other `subjectAltName` value, for another
 nodes
 certificate.
 
 Alternately you can use the `--dns_alt_names` command line option to
 set the
 labels added while generating your own CSR.
 - CLIENT FQDN (FA:C4:68:C1:30:E2:95:9E:48:AB:ED:E4:A7:BF:3F:19)
 (certificate signature failure)
 
 Going around in circles somewhat trying to get a modern puppet setup
 with a potential to scale horizontally.

The command just complains about the certdnsnames option, which has been
removed. You can still use dns_alt_names to generate client and/or
server certificates with an embedded subjectAltName extension.
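For instance, a minimal puppet.conf sketch (the host names below are
hypothetical examples, not something from this thread):

```ini
# puppet.conf on the master -- host names are made-up examples
[master]
dns_alt_names = puppet,puppet.example.com,master01.example.com
```

The extra names end up as subjectAltName entries in the certificate
generated for that master.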

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Seperate CA's/Master behind load balancer

2011-12-16 Thread Brice Figureau
On Fri, 2011-12-16 at 04:40 -0800, ollies...@googlemail.com wrote:
 Hello,
 
 Attempting to setup a CA primary/standby as well as seperate
 puppetmaster servers (all running Apache/Passenger) behind another
 Apache/Passenger type load balancer.
 
 Clients are not getting certs:-
 err: Could not request certificate: Could not intern from s: nested
 asn1 error
 
 Clearly an SSL issue but not something I know a great deal about.

Your primary load-balancer is the SSL endpoint, so when the requests
arrive at your puppet_ca nodes they are in clear text.
But apparently the ca_host configuration tells the server that it will
receive SSL traffic.

 [snipped]
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] new user: need Conditional statement example within a file resource type

2011-12-16 Thread Brice Figureau
On Tue, 2011-12-13 at 10:31 -0800, Kenneth Lo wrote:
 Searching old archive I find this topic:
 
 http://groups.google.com/group/puppet-users/browse_thread/thread/187ee3897a26ae2a/32fea612e79dda80?hl=enlnk=gstq=puppet+case+statement+in+file+resource#32fea612e79dda80
 
 
 I understand that  case statements must be outside of resource
 statements per that discussion and I understand the usage for the
 selector in-statement solution, however that's just for assignment
 though.
 
 Consider this simple file resource, I just want to have a external
 variable that control whether I need the file in the system:
 
 file { "somefile":
 
 case ${hasfile} {
true: { ensure => present }
default: { ensure => absent }
 }
 source => "puppet:///somefile",
 owner => root,
 .
 .
 .
 }
 
 
 Obviously I had a syntax error here because case statement is not
 happy within the resource.

That's why the documentation says to use a selector.

 So, what's a recommended puppet way to do something like this? thx in
 advance.

file { "somefile":
  ensure => $hasfile ? {
    "true"  => present,
    default => absent,
  },
  source => "puppet:///somefile",
  owner  => root,
}

Please note that "true" is not strictly equivalent to the bareword true
in the puppet language :)
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: Seperate CA's/Master behind load balancer

2011-12-16 Thread Brice Figureau
On Fri, 2011-12-16 at 07:53 -0800, ollies...@googlemail.com wrote:
 Thanks I realised that when I sent it. Dialled back the CA to:-
 Listen 18140
 <VirtualHost *:18140>
   SSLEngine off
   ServerName CA FQDN
   RackAutoDetect On
   DocumentRoot /etc/puppet/rack/puppetmaster/public/
   <Directory /etc/puppet/rack/puppetmaster/>
 Options None
 AllowOverride None
 Order allow,deny
 allow from all
   </Directory>
 </VirtualHost>
 
 Now clients are getting cert requests signed but not going any further
 info: Creating a new SSL key for CLIENT FQDN
 warning: peer certificate won't be verified in this SSL session
 info: Caching certificate for ca
 warning: peer certificate won't be verified in this SSL session
 warning: peer certificate won't be verified in this SSL session
 info: Creating a new SSL certificate request for CLIENT FQDN
 info: Certificate Request fingerprint (md5): 51:D6:6B:58:EA:CC:
 11:14:4B:48:E1:B4:C1:8B:A5:A6
 warning: peer certificate won't be verified in this SSL session
 warning: peer certificate won't be verified in this SSL session
 info: Caching certificate for CLIENT FQDN
 info: Retrieving plugin
 err: /File[/var/lib/puppet/plugins]: Failed to generate additional
 resources using 'eval_generate: certificate verify failed.  This is
 often because the time is out of sync on the server or client
 err: /File[/var/lib/puppet/plugins]: Could not evaluate: certificate
 verify failed.  This is often because the time is out of sync on the
 server or client Could not retrieve file metadata for puppet://LOAD
 BALANCER FQDN
 /plugins: certificate verify failed.  This is often because the time
 is out of sync on the server or client
 err: Could not retrieve catalog from remote server: certificate verify
 failed.  This is often because the time is out of sync on the server
 or client
 warning: Not using cache on failed catalog

OK, so when it tried to pluginsync, it complained that the server
certificate could not be verified.

Are you sure the puppetmaster _server_ certificate has been signed by
the same CA as this node's _client_ certificate?

In other words is the following working:
openssl s_client -host puppet -port 8140 \
 -CAfile /var/lib/puppet/ssl/certs/ca.pem \
 -cert /var/lib/puppet/ssl/certs/CLIENT FQDN.pem \
 -key /var/lib/puppet/ssl/private_keys/CLIENT FQDN.pem

If not, it might give you more information (especially with -debug).

Also, it might be worth checking on the apache error log.

 I know the time is in sync OK
 
 Certs look the same.

To be really sure compare the certificate fingerprints.

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




[Puppet Users] Re: Quoting 'true' and 'false'

2011-12-16 Thread Brice Figureau
On 16/12/11 19:48, Tim Mooney wrote:
 In regard to: Re: [Puppet Users] new user: need Conditional statement...:
 
 Obviously I had a syntax error here because case statement is not
 happy within the resource.

 That's why the documentation says to use a selector.

 So, what's a recommended puppet way to do something like this? thx in
 advance.

 file {
  "somefile":
    ensure => $hasfile ? {
    "true"  => present,
    default => absent
    },
    source => "puppet:///somefile",
    owner => root,
 }

 Please note that "true" is not strictly equivalent to the bareword true
 in the puppet language :)
 
 Ah, perfect segue.  I had been meaning to follow up to John Bollinger
 when he earlier posted something similar that also had 'true' quoted.
 
 I've been through the style guide and several other areas in the
 documentation and I can't find any recommendations on whether it's better
 to use bare
 
   true
   false
 
 or whether it's better to quote them.  This is specifically for use in
 parameterized classes.  For example:
 
 foo.bar.edu.pp:
 
 node 'foo.bar.edu' {
 
 class {'rhel':
  version  => '5',
  ipv6_enabled => true,
 }
 }
 
 rhel/manifests/init.pp:
 
 class rhel($version, $ipv6_enabled='default') {
include rhel::common
 
case $ipv6_enabled {
  true: {
  class {'network': ipv6 => true }
  }
  false: {
  class {'network': ipv6 => false }
  }
  default: {
case $version {
  '5': {
  class {'network': ipv6 => false }
  }
  '6': {
  class {'network': ipv6 => true }
  }
  default: { fail("only version 5 and 6 of rhel are currently 
 supported")}
}
  }
}
 }
 
 
 In other words, our default for RHEL 5 is ipv6 disabled, on RHEL 6 it's
 ipv6 enabled, but the default can be overridden for either.
 
 The problem is that we had to be very careful to make certain that we
 didn't quote true or false in some places and leave them as barewords
 elsewhere, or it just wouldn't work.  Mixing quoted and non-quoted gave us
 unreliable and unexpected results.

Exactly. If you intend your options to be boolean, use the barewords
true and false.

 This brings me back to the questions: where in the docs is this covered,
 and what are the recommendations for whether we should (or shouldn't) be
 quoting true  false when passing them around into parameterized classes
 and testing them in selectors?

I don't know if it's covered in the documentation.

Puppet has the notion of true/false (ie the boolean). Any puppet
conditional expression can evaluate to either true or false.

On the other hand, "true" is a string containing the word true. "false"
is a string containing the word false. It is not a boolean.

But that's where things get difficult:

if "false" {
 notice("false is true")
}

This will print "false is true".

The same for:
$str = "false"
if $str {
 notice("false is true")
}

But,
case $str {
true: { notice("true") }
false: { notice("false as bool") }
"false": { notice("false as str") }
}

will print "false as str". So "false" != false and is not == to true.

But when converted to a boolean any string becomes true, and that's
what happens in our if example.

We track this issue in the following ticket:
http://projects.puppetlabs.com/issues/5648
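Until that ticket is resolved, one defensive pattern (just a sketch, not
an official recommendation) is to normalize a possibly-quoted flag back
to a real boolean before testing it:

```puppet
# Map the string forms back to booleans; pass anything else through
$ipv6_real = $ipv6_enabled ? {
  'true'  => true,
  'false' => false,
  default => $ipv6_enabled,
}
if $ipv6_real {
  notice('ipv6 enabled')
}
```

That way callers can pass either the bareword or the quoted string and
the class behaves the same.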

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] catalog compilation caching

2011-11-23 Thread Brice Figureau
On 23/11/11 19:27, Antony Mayi wrote:
 Hello,
 
 just trying to understand the workload behind the compilation of
 catalogs puppet master is doing each time the client does a request to
 the master. I understand the clients send the facts to the master and
 the master based on the facts and the manifests compiles the catalog. I
 would expect that this can be optimized with some caching so if neither
 the manifest and the set of facts doesn't change the compilation of that
 catalog results to cache hit and that request is served significantly
 faster leaving lower load footprint then the first request or request
 after some changes in manifests or in facts. In reality I don't see any
 difference that would suggest some caching in the master, eg. eachtime
 the client contact the master the compilation takes about 6 seconds
 which seams be unbelievably long.
 
 can anyone please explain the way the master caches the catalogs or what
 are the reasons it doesn't?

The master doesn't cache any compiled catalogs. One of the reasons is
that the facts change on every run (yes, uptime, I'm looking at you),
and the master doesn't know which facts matter or not (with regard to
changes in the compiled catalog, which I believe is a hard problem).

Check the "Optimize Catalog Compilation" section of this blog post for a
workaround:
http://www.masterzen.fr/2010/03/21/more-puppet-offloading/
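A related agent-side knob (a sketch only; it is not the offloading
approach from that post) is letting the agent reuse its cached catalog
instead of requesting a fresh compilation every run:

```ini
# puppet.conf on each agent -- assumes you have some other way to
# trigger a real recompilation when manifests actually change
[agent]
use_cached_catalog = true
```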

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Regenerating puppet master certificate

2011-10-26 Thread Brice Figureau
On Wed, 2011-10-26 at 10:02 +0200, Peter Meier wrote:
  Wish I could've found that in the docs.
  This will certainly get me going again.
 
 Sounds like a ticket for puppet documentation...

What would be awesome is for this hidden feature to become a first-class
feature in puppet cert (like --generateca).
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Regenerating puppet master certificate

2011-10-25 Thread Brice Figureau
Hi Tom,

On Tue, 2011-10-25 at 11:20 +0200, Tom De Vylder wrote:
 Hi all,
 
 Is there a more elegant way to regenerate the Puppet master
 certificate than what's described in the CVE-2011-3872 toolkit?

You're talking about generating a master cert or a master CA cert?

  If you can maintain a secondary shell session to the puppet master
 server, you can start a WEBrick master with puppet master
 --no-daemonize --verbose and stop it with ctrl-C.
  If you prefer to only maintain one shell session, you can start a
 WEBrick master with puppet master and stop it with kill $(cat $(puppet
 master --configprint pidfile)).
 Source: README.pdf inside the toolkit.
 
 I used to be able to do this by running 'puppetca'. But ever since
 puppetca isn't available anymore I can't seem to find any information
 on how to do it instead.

Puppetca is now called puppet cert. 

 Well other than what's described above that is. But that's not
 feasible in an automated fashion. I'd like to deploy a second puppet
 master.

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Puppet 2.7 allows dash in variable names: bug or feature?

2011-10-22 Thread Brice Figureau
On 05/10/11 19:46, Steve Snodgrass wrote:
 While testing puppet 2.7, I found that one of my manifests broke
 because of the following quoted string:
 
 "http://$yumserver/repos/vmware-$esxversion-rhel6-64"
 
 Everything in the resulting string after vmware- was blank.  After
 some experiments I found that puppet 2.7 allows dashes in variable
 names, and was interpreting $esxversion-rhel6-64 as one big
 variable.  Of course adding curly braces fixes the problem, but that
 seems like a significant change.  Was it intended?

I don't know if it was intended, but there's a ticket open right now:
http://projects.puppetlabs.com/issues/10146

Go watch it to increase the odds of it being fixed :)

I really believe '-' shouldn't be allowed at the end of any variable
name, which would then fix your problem.
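In the meantime, wrapping the variable name in braces makes its boundary
explicit and sidesteps the ambiguity entirely:

```puppet
# Braces delimit the variable, so "-rhel6-64" stays literal text
$url = "http://${yumserver}/repos/vmware-${esxversion}-rhel6-64"
```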

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Puppet 2.7 allows dash in variable names: bug or feature?

2011-10-22 Thread Brice Figureau
On 22/10/11 18:47, Brice Figureau wrote:
 On 05/10/11 19:46, Steve Snodgrass wrote:
 While testing puppet 2.7, I found that one of my manifests broke
 because of the following quoted string:

 "http://$yumserver/repos/vmware-$esxversion-rhel6-64"

 Everything in the resulting string after vmware- was blank.  After
 some experiments I found that puppet 2.7 allows dashes in variable
 names, and was interpreting $esxversion-rhel6-64 as one big
 variable.  Of course adding curly braces fixes the problem, but that
 seems like a significant change.  Was it intended?
 
 I don't know if it was intended, but there's a ticket open right now:
 http://projects.puppetlabs.com/issues/10146
 
 Go watch it to increase the odds of it being fixed :)
 
 I really believe '-' shouldn't be allowed at end of any variable name,
 which then should fix your problem.

Or not, because the lexer would tokenize $esxversion-rhel6 as the
variable name...
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] parsedfile help needed

2011-10-14 Thread Brice Figureau
Hi,

On Thu, 2011-10-13 at 17:47 -0400, Guy Matz wrote:
 Thanks for the reply!!  One more thing I can't quite figure out:  How
 to get a new entry in my target file!  I can parse, but an entry in
 manifests doesn't magically appear.  Do I need to add something to my
 provider or type to get that to happen?  Any help would be s
 appreciated; i'm getting tired of struggling with this!
 
 vncserver/manifests/init.pp
 class vncserver {
   vncserver { 'guymatz':
   port => '92',
   geometry => '1024x768',
   ensure => 'present'

Those are properties: you want to manage them.

   }
 } # class vncserver
 
 
 vncserver/lib/puppet/type/vncserver.rb
 require 'puppet/property/list'
 require 'puppet/provider/parsedfile'
 
 Puppet::Type.newtype(:vncserver) do
 
 ensurable
 
 newparam(:port) do
   desc "The vnc server's port assignment.  Will be +5900 on the
 server"
 end
 
 newparam(:name) do
   isnamevar
   desc "The user who will own the VNC session."
 end
 
 newparam(:geometry) do
   desc "Resolution for VNC, in XxY, e.g. 1024x768."
 end
 
 newparam(:password) do
   desc "Password to be put into users .vnc/passwd."
 end
 
 newparam(:args) do
   desc "Optional arguments to be added to the vncserver
 command-line."
 end

But you defined them as parameters.
Use newproperty instead of newparam.

 @doc = "Installs and manages entries for vncservers.  For
 Redhat-based
   systems, and likely many others, these entries will be in
   /etc/sysconfig/vncservers."
 
 end
 
 
 and, finally, my very unfinished provider:
 vncserver/lib/puppet/provider/vncserver/parsed.rb:
 require 'puppet/provider/parsedfile'
 
 vncservers = "/etc/sysconfig/vncservers"
 
 Puppet::Type.type(:vncserver).provide(:parsed,
   :parent => Puppet::Provider::ParsedFile,
   :default_target => vncservers,
   :filetype => :flat
   ) do
   desc "The vncserver provider that uses the ParsedFile class"
   confine :exists => vncservers
   text_line :comment, :match => /^\s*#/;
   text_line :blank, :match => /^\s*$/;
 
   record_line :parsed,
   :joiner => '',
   :fields => %w{port geometry optional_args},
   :optional => %w{port geometry},
   :match => /^VNCSERVERARGS\[(\d+)\]="-geometry (\d+x\d+)(.*)$/,
   :to_line => proc { |record|
 "VNCSERVERARGS[#{record[:port]}]=\"-geometry #{record[:geometry]} #{record[:optional_args]}\""
   }
 
   record_line :parsed_again,
   :joiner => '',
   :fields => %w{port_name},
   :optional => %w{port_name},
   :match => /^VNCSERVERS=(.*)$/,
   :to_line => proc { |record|
 "VNCSERVERS=\"#{record[:port_name]}\""
   }
 end

Also, you don't have any property in the type for port_name.

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] parsedfile help needed

2011-10-12 Thread Brice Figureau
On 12/10/11 19:12, Guy Matz wrote:
 hi! 
 I've seen it reported that there is no official doc for parsedfile; does
 anyone know if this is still true?
 
 I'm trying to make a new type and getting stuck on how parsedfile works
 . . .  Any help would be appreciated:
 
 regarding the Puppet::Type.type(:newType).provide line:
 1. What does the :parsed label do?  Are there other options?

It's the provider name. I don't think it really matters, except that
there can't be two providers with the same name for a given puppet type.

 2. Are there other types of :filetypes besides :flat?  Does labeling
 as :flat have any affect on the parsing?

It looks like there's a handful of variations of crontab formats. Use :flat.

 regarding record_line:
 1. can the name parameter be anything?  is :parsed anything
 special?  Is the name used for anything?

I believe it can be anything. I think the name is used if you ever want
to look up a given record_line.

 2. what exactly do the parenthetic groupings do, e.g. /^\s+(.*)=(.*):$/   ?

It's a regex capture.

 3.  Is there a relationship between the parenthetic groupings and the
 :fields label?

Yes. I think the first capture will end up in the first 'field', and so
on for each subsequent capture/field.

 4. what is the relationship between :fields in provider, newparams in
 type  fields in manifest/init.pp?

The role of a given provider is to fill in resource properties and,
later on, commit updated properties to the target physical resource.
So, yes, you need a field per newproperty in the type.

   4a. Why do i get the following error when I don't have stuff_1 as
 :optional: Could not evaluate: Field 'stuff_1' is required

We need to see the code to help with specific errors.

 5. What does :post_parse do?

It allows you to run some specific code after the parsing of a given
record line happens. It lets you modify what was read, or munge some values...

 6 What does :pre_gen do?

It allows you to run provider-specific code before the to_line operation
happens.

 7 Are there any other mystery parameter?

There's at least process, if you want to use your own parsing instead of
relying on the regex. And there are a bunch of options to modify how the
parsing happens.

Check for more information:
Puppet::Util::FileParsing
and
Puppet::Provider::ParsedFile

Hope that will help you :)
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: Secure Certification Authority Transfer

2011-08-25 Thread Brice Figureau
On 25/08/11 16:05, Horacio Sanson wrote:
 snip
 For me this was a problem rather than a feature and the problem was
 mainly because nginx (version < 1.0.0) did not support optional ssl
 client verification as Apache does. With nginx 1.0.5 I can set
 ssl_verify_client to optional and now my new clients get signed as
 expected. 

For what it's worth, this specific feature was added in Nginx 0.8.7 and
then backported a little later into 0.7.63 (circa Oct 2009).
I know this well because I wrote the patch, so that we could support
puppet clients with nginx without resorting to a different port
for the CA :)
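For reference, the relevant nginx directives look roughly like this (a
sketch; the certificate path assumes the usual Puppet ssldir layout):

```nginx
# Verify client certificates when they are presented, but still let
# unsigned clients through so they can reach the CA to be signed
ssl_verify_client      optional;
ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
```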

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] 2.7.1 slowness?

2011-08-24 Thread Brice Figureau
On Tue, 2011-08-23 at 11:00 -0700, Digant C Kasundra wrote:
 Is anyone else noticing slowness with 2.7.1?  When I run puppet on my
 2.6.8 box, it takes 11 seconds.  On my second box with exactly the
 same catalog, it takes 35 seconds.

Is the problem while compiling the catalog (ie the master) or when
applying it (ie puppet agent)?
If the latter, can you report what --summarize gives you on both hosts?

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] RFC: Adding implicit stages to Puppet

2011-06-10 Thread Brice Figureau
On Thu, 2011-06-09 at 18:50 -0700, Jacob Helwig wrote:
 On Thu, 09 Jun 2011 18:42:54 -0700, Nigel Kersten wrote:
  
  https://projects.puppetlabs.com/issues/7697
  
  One problem people producing modules that make use of stages are hitting is
  that it's difficult to create something reusable that integrates seamlessly
  into existing setups.
  
  This feature request is to add several more implicit stages to Puppet so we
  have:
  
  bootstrap
  pre
  main
  post
  
  existing by default, making it easier for authors to specify stages in their
  modules.
  
  Thoughts?
  
 
 The answer to the question "Which comes first, 'bootstrap' or 'pre'?" seems
 awfully ambiguous from just the names.
 
 What's the reason for separating it out?

One of the reasons would be for the bootstrap phase to happen in its own
run instead of being part of the standard run. That would allow
pre-installing stuff that plugins could use (like, for instance,
mysqladmin for the mysql types). Then the 3 other stages would happen in
the same run.

It would also be great to have this stage be optional in subsequent
runs, allowing you to use the bootstrap stage during provisioning (ie
just after a pre-seed or kickstart), but never again. This would help
bootstrapping from bare metal.
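With today's single implicit stage, a module author has to declare any
extra stage by hand, roughly like this sketch (the class and stage names
here are hypothetical):

```puppet
# Declare a custom stage that runs before the built-in 'main' stage
stage { 'bootstrap': before => Stage['main'] }

# Assign a class to it with the stage metaparameter
class { 'repos':
  stage => 'bootstrap',
}
```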

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Upgrade from 0.24.5 to 2.6.2: Could not find resource(s) M[test] for overriding

2011-06-10 Thread Brice Figureau
On Fri, 2011-06-10 at 10:00 +0200, Lorenz Schori wrote:
 Hi,
 
 I've been upgrading from Debian Lenny to Squeeze and now many of my
 puppet modules are failing with the message Could not find
 resource(s) X for overriding on node Y. I've isolated the problem case
 and apparently properties of defined resources may not be overridden
 anymore in recent versions.
 
 Consider the following two test cases:
 
 [snip]

 B: inherit-defined-resource.pp 
 class p {
   define m($message) {
 notify{"${message}": }
   }
 
   m{"test":
 message => "hello",
   }
 }
 
 class c inherits p {
   M["test"] {
 message => "overridden",
   }
 }

I think puppet is looking up M in the scope of c, but not in the scope
of p anymore.
Can you rewrite it like this and test:

class c inherits p {
  P::M["test"] {
    message => "overridden",
  }
}
 
I'm confident this will solve your issue, but I don't remember exactly
if this behavior changed in 2.6.x.
In any case, you should open a redmine ticket with your observations.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Semantic differences between selector and if/then?

2011-06-01 Thread Brice Figureau
On Tue, 2011-05-31 at 17:29 -0700, Aaron Grewell wrote:
 Should the C-style selector and if/then statements have equivalent
 true/false handling?  Maybe I'm setting this up wrong, but I expected
 these two to be the same:
 
 if $name["symlink"] { $symlink = $name["symlink"] } else
 { $symlink = undef }
 $symlink = $name["symlink"] ? { true => $name["symlink"], false
 => undef }
 
 Yet they don't return the same result.  The if/then statement sets the
 value to false as expected, whereas the selector never matches at all
 and throws an error.

There shouldn't be any difference, except there is a known issue with
selectors and arrays:
http://projects.puppetlabs.com/issues/5860

Since you're mentioning an error (which you should have included for
further analysis), I'd tend to think you're hitting this issue.

HTH,
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Going to publish custom modules : Request for comments

2011-05-17 Thread Brice Figureau
On Sun, 2011-05-15 at 21:27 +0200, Matthias Saou wrote:
 Dan Bode d...@puppetlabs.com wrote:
  [snip]
  If there are people familiar with puppetdoc here : Is it possible to
   generate clean doc for my modules with only relative links to be
   included in the repo?
  
  I do not understand this question.
 
 Let me rephrase quickly : From a checkout inside ~/puppet-modules/ when
 I run something like this :
 puppetdoc --mode rdoc --outputdir ./doc \
   --modulepath modules --manifestdir /var/empty
 
 I then get html documentation inside ./doc/ but all of the manifests
 files are referred to as /home/myuser/puppet-modules which would be
 quite ugly if included in the git repo or on a website as documentation.

Do you mean the file path of the parsed manifest mentioned when you
click on a class?
I'm afraid this is a bug nobody cared about. Can you file a redmine
ticket, please?

 I've just tested with 2.6.8 and I still get the same result. There are
 more details, like the module's main class showing up as xinetd::xinetd
 instead of just xinetd or my definition's parameters needing to be
 right after the define line (no empty line in between allowed) or the
 documented #-- not working to stop further parsing...

About xinetd::xinetd: it's by design. The module is called xinetd
itself, and we need a way to distinguish the global module space (ie
xinetd) from the class called xinetd which lives in this module. It
could well have been possible for you to have a manifests (not module)
class called xinetd, which would have collided with this one in the UI.
Thus I used xinetd::xinetd.
I could have made it xinetd::main or something akin, but I'm sure you
wouldn't have found it any better.

The #-- actually needs to be written as ##-- until I fix this offending
bug. I don't think we have a redmine entry for this, so if you feel
brave enough go add one (and even better, produce a patch to fix the
bug :))

 Are others using puppetdoc for their modules? Are there some good
 examples out there? The official documentation is useful but seems
 somewhat limited.

I think Alessandro is using puppetdoc for his own module, check Lab42's
Example42 modules on github:
https://github.com/example42/puppet-modules

Patches are more than welcome (even documentation patches) :)
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] How to manage a big cluster of 100s of node?

2011-04-15 Thread Brice Figureau
On Fri, 2011-04-15 at 03:56 -0700, Sans wrote:
 Dear all,
 
 Apparently I'll be installing Puppet on a cluster of 300+ nodes
 divided into four different types.  How do I apply a specific module/
 class to a specific client-group? For example, if I want to apply
 my_module_1 to 50 or so odd machines and my_module_1 + my_module_2
 to 80 machines, how do I do that? I don't think the only way is to add
 individual nodes one by one in nodes.pp. How do you guys do it? Your
 input/comments are much appreciated. Cheers!!

I think the best practice is to use an ENC[1] (external node classifier)
like the Foreman or Puppet Dashboard, or one of your own.

This way you can more easily program the logic of assigning modules to
your specific nodes.

[1]: http://docs.puppetlabs.com/guides/external_nodes.html
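
As a hedged illustration (the script and hostname patterns below are
invented for this example, not something from this thread), an ENC is
just an executable that receives the node name as its first argument
and prints YAML describing which classes to apply:

```shell
#!/bin/sh
# Minimal illustrative ENC: map hypothetical hostname patterns to the
# module combinations described above.
enc_classes() {
  case "$1" in
    app*) printf 'classes:\n  - my_module_1\n' ;;
    web*) printf 'classes:\n  - my_module_1\n  - my_module_2\n' ;;
    *)    printf 'classes: []\n' ;;
  esac
}

enc_classes "$1"
```

You would then point the master at the script with node_terminus = exec
and external_nodes = /path/to/script in puppet.conf.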
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] puppetdoc utf-8

2011-04-02 Thread Brice Figureau
On 02/04/11 01:15, Hugo Cisneiros (Eitch) wrote:
 Hi,
 
 Is there any support for UTF-8 in puppetdoc? I see in the HTML output
 that it generates the document as iso-8859-1, and some characters in my
 language get corrupted while displaying the pages in the browser. In the
 html's source, I can see the tags describing it as iso-8859-1, and
 replacing it with utf-8 corrects it for me.

Just use --charset utf-8 on the puppet doc command line. I think this
was added in puppet 2.6.
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: fileserver and distributing many files

2011-03-01 Thread Brice Figureau
On Mon, 2011-02-28 at 23:58 -0800, Thomas Rasmussen wrote:
 
 On Feb 28, 9:35 pm, Patrick kc7...@gmail.com wrote:
  On Feb 28, 2011, at 4:23 AM, Thomas Rasmussen wrote:
 
 
 
 
 
   hey
 
   OK, now I have tried to do it via rsync and it seems to be working...
   but the recurse bug is apparently very serious... I now have a
   manifest that does:
 
  file { "/pack/mysql-5.5.9":
    ensure  => directory,
    recurse => true,
    force   => true,
    owner   => root,
    group   => root,
    require => Exec["rsync_mysql_install"],
  }
 
   This takes about the same time as if I was copying (I need to be sure
   of permissions of rsync'ed files). Is the recurse feature really that
   bad?
 
  If the permissions you need to be sure of are all root,root,755, it will 
  be much faster to just do a chmod+chown at the end and put that and the 
  rsync in a shellscript.
 
 
 That is a hack, not a solution. Honestly, I'm really disappointed in
 puppet not handling this very efficiently. I've been used to
 cfengine2 for the past year or so, but for a particular project we
 have decided to use puppet, mainly because we like the manifests
 better here than in cfengine...

You need to understand the issue first:
when puppet manages a file (ie all aspects of it), it internally creates
a resource object (like for every other resource you manage). This
resource is then evaluated, and the said resource does whatever is
necessary so that its target (the file on disk) is modified to match
what you asked for.
To simplify recursive file resource management, puppet manages all the
files found by recursively walking the hierarchy as if they were
independent resources. This means that puppet has to create many
instances in memory (one per managed file/dir found during the walk),
which in turn exposes some scalability issues in puppet's event and
transaction systems.
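
Until that part of the code is improved, here is a sketch of the
workaround Patrick suggested above (the rsync exec name and the path
come from this thread; the exact chown/chmod commands are illustrative
assumptions, adjust them to the permissions you actually need):

```puppet
# Let rsync do the copying, then enforce ownership/permissions in one
# pass, instead of creating one file resource per entry in the tree.
exec { "fix_mysql_perms":
  command     => "/bin/chown -R root:root /pack/mysql-5.5.9 && /bin/chmod -R go-w /pack/mysql-5.5.9",
  refreshonly => true,
  subscribe   => Exec["rsync_mysql_install"],
}
```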

 So I hope that the performance issue when recursing directories gets
 attention and gets fixed soon.

I also hope this will be the case. Unfortunately each time I wanted to
address this issue, I found that this part of the code is horribly
complex, and all my attempts to do it differently were doomed to fail.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: Puppetmaster/Amazon EC2/DNS

2011-02-23 Thread Brice Figureau
On 23/02/11 21:34, donavan wrote:
 I actually made a type and provider for managing Route 53 entries a
 while back[1].
 
 I was putting off publishing it until I could rewrite it based on
 Brices network device framework. If other people people could find
 something like this useful I can clean it up to work with the current
 2.6/2.5 and push to github.

I unfortunately didn't have time to work on my network device framework
for more than a month. I expect to resume this work soon :)

I'm not sure it will be generic enough to support what you want to do,
but that'd be a great opportunity to generalize it :)
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] ssl alert: Unknown CA

2011-02-18 Thread Brice Figureau
On Fri, 2011-02-18 at 00:44 -0800, Eric Sorenson wrote:
 I have a couple of hosts which are having trouble talking to the puppet VIP:
 
 puppetd[4554]: could not retrieve catalog from remote server: ssl_connect 
 returned=1 errno=0 state=sslv3 read server certificate b: certificate verify 
 failed
 puppetd[4554]: Not using cache on failed catalog
 puppetd[4554]: Could not retrieve catalog; skipping run
 puppetd[4961]: Retrieving plugin
 puppetd[4961]: (/File[/var/lib/puppet/lib]) Failed to generate additional 
 resources using 'eval_generate': SSL_connect returned=1 errno=0 state=SSLv3 
 read server certificate B: certificate verify failed
 puppetd[4961]: (/File[/var/lib/puppet/lib]) Failed to retrieve current state 
 of resource: SSL_connect returned=1 errno=0 state=SSLv3 read server 
 certificate B: certificate verify failed Could not retrieve file metadata for 
 puppet://puppet/plugins: SSL_connect returned=1 errno=0 state=SSLv3 read 
 server certificate B: certificate verify failed

This most likely means your local node's CA cert is not able to verify
the certificate the server presented (probably because the cert the
server advertises hasn't been signed by this CA, or you use a CA chain
but don't send the full chain to the client).

 I've gone through the usual SSL troubleshooting: the clocks are in
 sync, the client cert matches the one issued to it by the server (and
 is decodable by the private_key).
 
 When I use tshark to watch the ssl traffic, I see that the client is
 rejecting the server with the following ssl error. The connection
 never makes it to the back-end server, because the client hangs up.
 (10.1.1.1 is this client, 10.0.0.1 is the puppet vip)
 
 [root@db9 /var/lib/puppet/ssl]# tshark -n -i bond0 -d tcp.port==8140,ssl 
 -s2000 'port 8140 and len > 60'
   0.00 10.1.1.1 -> 10.0.0.1 TCP 29718 > 8140 [SYN] Seq=0 Win=5840 Len=0 
 MSS=1460 TSV=1862094055 TSER=0 WS=7
   0.001585 10.1.1.1 -> 10.0.0.1 SSLv2 Client Hello
   0.001713 10.0.0.1 -> 10.1.1.1 TLSv1 Server Hello, Certificate, Certificate 
 Request, Server Hello Done
   0.002208 10.1.1.1 -> 10.0.0.1 TLSv1 Alert (Level: Fatal, Description: 
 Unknown CA)
 
 But openssl with the same cert and key that puppet is using passes 
 verification and connects successfully:
 
  openssl s_client -connect puppet:8140 -cert certs/db9.domain.com.pem -key 
 private_keys/db9.domain.com.pem -showcerts -state -verify 2

You didn't ask openssl s_client to actually check the server certificate
against the CA cert of the client.

Can you try:
openssl s_client -connect puppet:8140 -CAfile certs/ca.pem -cert 
certs/db9.domain.com.pem -key private_keys/db9.domain.com.pem -showcerts -state 
-verify 2

 103.115871 10.1.1.1 -> 10.0.0.1 TCP 40758 > 8140 [SYN] Seq=0 Win=5840 Len=0 
 MSS=1460 TSV=1862197169 TSER=0 WS=7
 103.116949 10.1.1.1 -> 10.0.0.1 SSLv2 Client Hello
 103.117078 10.0.0.1 -> 10.1.1.1 TLSv1 Server Hello, Certificate, Certificate 
 Request, Server Hello Done
 103.121057 10.1.1.1 -> 10.0.0.1 TLSv1 Certificate, Client Key Exchange, 
 Certificate Verify, Change Cipher Spec, Encrypted Handshake Message
 103.122162 10.0.0.1 -> 10.1.1.1 TLSv1 Change Cipher Spec, Encrypted Handshake 
 Message
 
 Any thoughts on what could be causing this failure? I've seen quite a
 few odd ones (#3120, #4948 for example) but I've been gnawing at this
 one all day and haven't figured it out.

For some unknown reason, your local node CA cert is not correct.
You can solve this by overwriting it with the main CA cert, or by
checking that your server certificate is indeed correctly signed by the
CA you think signed it.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Puppet T-shirt contest

2011-02-12 Thread Brice Figureau
Hi Jose,

On 11/02/11 22:33, Jose Palafox wrote:
 Our Puppet t-shirts are due for a
 redesign: http://www.puppetlabs.com/blog/tshirt-contest/
 
 Be sure you submit cool tagline for our new shirts and enter for a
 chance to win an Ar.drone
 (http://store.apple.com/us/product/H1991ZM/A?fnode=MTY1NDA3NAmco=MjA1MTA3MzY)
  
 and a ticket to Puppet
 Camp(http://www.puppetlabs.com/community/puppet-camp/)
 

Can we submit more than one entry?
I have several ones that I find quite good :)
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: high 500 error rate on file metadata operations

2011-02-11 Thread Brice Figureau
On 11/02/11 18:06, Jason Wright wrote:
 I've completed rolling out 2.6.3 and it's completely resolved our issues.

That's terrific!
Thanks for sharing this good news.
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2011-02-10 Thread Brice Figureau
On Thu, 2011-02-10 at 15:55 +0100, Udo Waechter wrote:
 Hello,
 I am one of those who have this problem. Some people suggested using Ruby 
 Enterprise. I looked at its installation, it looked a little bit 
 time-consuming, so I did not try that one out.
 I upgraded to debian squeeze (of course), and the problem persists.
 
 Thus I did some tests:
 
 1. got ruby from Ubuntu Meercat:
 libruby1.81.8.7.299-2
 ruby1.8   1.8.7.299-2
 ruby1.8-dev   1.8.7.299-2
 
 Same Problem (debian is 1.8.7.302 I think), with ruby from ubuntu lucid 
 (1.8.7.249) the problem is the same. I guess we can rule out debian's ruby 
 here.
 
 2. I reported that after stopping apache, stray master processes remain
 and eat 100% cpu. I did an strace on those processes and they do this
 (whatever that means):
 
 $ strace -p 1231
 Process 1231 attached - interrupt to quit
 brk(0xa49a000)  = 0xa49a000
 brk(0xbf51000)  = 0xbf51000
 brk(0xda09000)  = 0xda09000
 brk(0xa49a000)  = 0xa49a000
 brk(0xbf52000)  = 0xbf52000
 brk(0xda09000)  = 0xda09000
 brk(0xa49a000)  = 0xa49a000
 brk(0xbf52000)  = 0xbf52000
 brk(0xda09000)  = 0xda09000
 ^CProcess 1231 detached

This process is allocating memory like crazy :)

 3. I have now disabled reports, lets see what happens.
 
 Thanks for the effort and have a nice day.
 udo.

Are you still on puppet 2.6.3?
Can you upgrade to 2.6.5 to see if that's better as reported by one
other user?





Re: [Puppet Users] How should I use puppetdoc?

2011-02-08 Thread Brice Figureau
On Tue, 2011-02-08 at 10:16 +, Nick Moffitt wrote:
 Brice,
 
   Thanks for your helpful answers.  
 
 Brice Figureau:
   I saw someone say that it was good at finding README files, but:
   ...was unable to find ./manifests/README or ./manifests/README.rdoc or
   any other common combination. I tried using the :include: directive at
   this point, but it didn't seem to have the effect I expected.
  
  Puppetdoc doesn't support a global manifests/README file.  It is only
  able to extract documentations from your comments embedded in your
  actual manifests.
  It is able, though, to use modules/mymodule/README as the cover for
  your modules.
 
 So is there no way to provide introductory documentation visible from
 the doc/index.html?
 
  but if you add a blank line between the comment and the define
  keyword, puppetdoc will lose the comment.
 
 Ah!  Thank you, that may be one of my biggest confusions.
 
   Can I not puppetdoc a node or an individual resource?  
 
 Above you appear to have documented a define.  A define is not a class,
 and I was worried that only classes were supported.  Can I puppetdoc a
 resource within a class?
 
  I don't think any automatic tool can produce the kind of documentation
  you're trying to achieve, but if you can prove me wrong (especially in
  the form of a patch or good suggestions that would be awesome).
 
 My goal is to create complete documentation for the puppet newcomer
 within my organization.  It should all live in the same repository as
 the manifests and modules, and should include a top-level tutorial
 document of some sort that can link into the reference material easily.
 
 I'm happy with the reams-of-reference-pages structure of the puppetdoc
 output, so long as I can set up the front page to be a single
 easy-to-print essay explaining the overall architecture of the layout
 and interplay between pieces.  Currently the front page shows a __site__
 class, which appears to be magical.  

It's magical in the sense that it is the main class, i.e. the global
default namespace. For instance, if you define some resources,
definitions or variables in site.pp or any other manifest outside of
any class, then those entities will be attached to the __site__ class.
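
For instance (a hypothetical site.pp fragment, not taken from this
thread):

```puppet
# manifests/site.pp
# This resource is declared outside of any class, so puppetdoc attaches
# it to the magical __site__ class shown on the front page.
notify { "toplevel": }
```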

 I would like to be able to do
 something like... I dunno:
 
   # :include:README.rdoc
   class __site__ {}
 
 To say that README.rdoc has all of the documentation that should be
 shown first, and without need for comment delimiters.

I will check to see if :include: works correctly first.
If it does, then I think adding this as the first line of
manifests/site.pp:

# :include:README.rdoc

might just be enough. If that doesn't work, let me know and I'll try to
fix the bug. 

 I'd prefer a less hacky approach, though.  

In any case, can you file a redmine ticket?

  If you follow the best practice to encapsulate your puppet code in
  classes and modules, then I think puppetdoc can be a really good tool
  (and you don't need the --all parameter in this case).
 
 One need I have is that I keep a very strict distinction between
 manifests and modules: my modules are all mechanism and my manifests are
 all policy.  As a result, I need to provide a preamble warning newcomers
 that they need to keep this structure in future.  This needs to be among
 the first things seen when browsing to the documentation.  I don't want
 "well it was buried down in some module's documentation page!" to be
 used as an excuse.

I understand.

  In any case, what is produced is a bunch of html files, which means you
  can do whatever post-processing you want to them. If you just need to
  include some static html files on top of this, then nothing prevents you
  to do so :)
 
 My big concern here is maintainability.  If I can keep all the
 documentation in a standard puppet tool, then I don't end up having to
 further document some bag on the side.  I'd like for the newcomer to
 just type make or something, and out comes the docs.  I'd also like
 for future maintainers not to have to learn two markup systems just to
 maintain one set of docs.

That makes perfect sense.
I don't see any reasons for not fixing the problem on puppetdoc.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2011-02-07 Thread Brice Figureau
On 07/02/11 17:23, Ashley Penney wrote:
 Because I like to live dangerously I upgraded to 2.6.5 and it seems like
 this has resolved the CPU problem completely for me.

Did you upgrade the master or the master and all the nodes?

I had a discussion about this issue with Nigel over the weekend, and he
said something really interesting I hadn't thought about:
it might be possible that the reports generated by 2.6.3 are larger
than they were in previous versions.

It is then possible that the CPU time taken to unserialize and process
those larger reports is the root cause of the high CPU usage.

That'd be great if one of the people having the problem could disable
reports to see if that's the culprit.

And if this is the case, we should at least log how long it takes to
process a report on the master.
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] How should I use puppetdoc?

2011-02-07 Thread Brice Figureau
Hi,

First, I'm sorry to hear you have any issues with puppetdoc (and before
you ask yourself why I started like this, I wrote the tool :).

On 07/02/11 20:19, Nick Moffitt wrote:
 I'm having a bit of trouble figuring out the best way to do my puppetdoc
 stuff.  I hope to have actual tutorial introductory chapter type
 documentation as well as individual module documentation, but it seems
 like all the examples I can find merely re-state the parameters that
 puppetdoc already noted.
 
 I'm trying to find a good whole-tree example for this, and all the
 examples I've been shown tend to be either a single .pp file and its
 output (oh boy!  One .pp file can become a hideous tangled mess of
 frames and three-lines-of-text-per-page HTML!) or a vague hand-wave
 toward rdoc and instructions to go read the sourceforge (wow are they
 still around?) pages on rdoc.

I would recommend having a look at the Lab42 modules on the Puppet
Forge (or github). Alessandro did an amazing job of commenting his
modules properly to work with puppetdoc.

 I saw someone say that it was good at finding README files, but:
 
   puppetdoc --all --mode rdoc --modulepath ./modules/ --manifestdir 
 ./manifests/

Assuming you have modules in ./modules/ and manifests in ./manifests/,
then puppetdoc will generate an html tree in ./doc containing all your
nodes, classes and modules (and in your case resources).

 ...was unable to find ./manifests/README or ./manifests/README.rdoc or
 any other common combination. I tried using the :include: directive at
 this point, but it didn't seem to have the effect I expected.

Puppetdoc doesn't support a global manifests/README file.
It is only able to extract documentation from the comments embedded in
your actual manifests.
It is able, though, to use modules/mymodule/README as the cover page
for your modules.

In any case adding --debug to your command line will show you what
puppetdoc parsed and understood.

Note that running with --all, as you were doing, means that puppetdoc
will reference every resource it finds.

 I'm a little confused by
 http://projects.puppetlabs.com/projects/1/wiki/Puppet_Manifest_Documentation
 which seems to suggest that my puppetdoc comments need to immediately
 precede a class.  Is that a hard and fast rule?  

This is a hard rule in the sense that there shouldn't be a single blank
line between the comment and the puppet entity (be it a class, node,
resource...).

Ex:

# My super definition
# Use like this:
#   my_super_definition { "test": }
#
define my_super_definition() {
...
}

will work, but if you add a blank line between the comment and the
define keyword, puppetdoc will lose the comment.
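
For contrast, the same sketch with the fatal blank line added:

```puppet
# My super definition
# This comment is LOST by puppetdoc because of the blank line below.

define my_super_definition() {
}
```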

 Can I not puppetdoc a
 node or an individual resource?  Can I not create structure in addition
 to that provided by the manifests themselves?

Puppetdoc generates the structure for you by understanding how your
classes fit together and how your modules are laid out and used.

 Is this simply the wrong tool for my needs?  I was hoping for something
 a bit more useful than just generate reams of API printouts for the
 mandatory documentation binder software, but I'm getting worried that
 that's exactly what this is.  I want to build tutorial documentation as
 well as reference, but there appears to be a very heavy reference bias
 in the output I'm seeing.

Yes, this is a tool to produce the reference manual for your modules.

I don't think any automatic tool can produce the kind of documentation
you're trying to achieve, but if you can prove me wrong (especially in
the form of a patch or good suggestions that would be awesome).

If you follow the best practice to encapsulate your puppet code in
classes and modules, then I think puppetdoc can be a really good tool
(and you don't need the --all parameter in this case).

In any case, what is produced is a bunch of html files, which means you
can do whatever post-processing you want on them. If you just need to
include some static html files on top of this, then nothing prevents
you from doing so :)

 I'd greatly appreciate any pointers to example manifest trees that make
 effective use of puppetdoc!

Hope that helped,
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2011-02-01 Thread Brice Figureau
On Tue, 2011-02-01 at 10:30 -0500, Ashley Penney wrote:
 This is the crux of the situation for me too - Puppetlabs blame it on
 a Ruby bug that hasn't been resolved with RHEL6 (in my situation) but
 this wasn't an issue until .3 for me too.  I feel that fact that many
 of us have this problem since upgrading means it can be fixed within
 Puppet, rather than Ruby, because it was fine before.

Do you mean puppet 2.6.2 wasn't exhibiting this problem?


-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2011-02-01 Thread Brice Figureau
On 01/02/11 20:35, Ashley Penney wrote:
 Yes, it didn't happen with the earlier versions of 2.6.

If it's easy for you to reproduce the issue, you really should git
bisect it and tell puppetlabs which commit is the root cause (the
differences between 2.6.2 and 2.6.3 are not that big).
This way, they'll certainly be able to fix it.

Do we have a redmine ticket to track this issue?
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2011-01-31 Thread Brice Figureau
On 31/01/11 19:11, Udo Waechter wrote:
 Hi.
 
 I am just reading this thread, and it strikes me that we have the
 same problems with 2.6.3.
 
 Since upgrading from 2.6.2 to .3, puppetmaster shows the behaviour
 described in this thread. We have about 160 clients, and puppetmaster
 is now an 8 core, 8Gb RAM kvm instance. We had this with 4 cores and 4
 gigs of RAM; doubling the size of the VM did not change a thing!
 
 We use passenger 2.2.11debian-2 and apache 2.2.16-3, ruby1.8 from
 squeeze.

I see a pattern here. It seems Micah (see a couple of mails above in
this thread) has about the same setup, except he's using mongrels.

It would be great to try a non-debian ruby (hint: Ruby Enterprise
Edition for instance) to see if that's any better.

Do you use storeconfigs?

 Puppetmaster works fine after restart, then after about 2-3 hours it
 becomes pretty unresponsive, catalog runs go upt do 120 seconds and
 more (the baseline being something about 10 seconds).

With 160 hosts, a 30 min sleeptime, and a compilation time of 10s, you
need 1600 cpu seconds to build catalogs for your whole fleet.
With a concurrency of 8 cores (assuming you use a pool of 8 passenger
apps), that's 200s per core, which is way less than the maximum of
1800s you can accommodate in a 30 min time-frame. Of course this
assumes an evenly distributed load and perfect concurrency, but you
still have plenty of available resources. So I conclude this is not
normal.

 I need to restart apache/puppetmaster about once a day. When I do
 that I need to:
 
 * stop apache
 * kill (still running) puppetmasters (with SIGKILL!), some are always
   left running at 100% CPU
 * start apache

Does stracing/ltracing the process show something useful?

 Something is very weird there, and there were no fundamental changes
 to the manifests/modules.
 
 The only thing that really changed is the VM itself. It was XEN (for
 years), we switched to KVM with kernel 2.6.35
 
 Another strange thing:
 
 puppet-clients do run a lot longer nowadays. A machine usually took
 about 40-50 seconds for one run. When puppetmaster goes crazy it now
 takes ages (500 seconds and even more).

If your masters are busy, chances are your clients have to wait longer
to get served either catalogs or sourced files (or file metadata). This
can dramatically increase the run time.

 Something is weird there... --udo,

Indeed.

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] high 500 error rate on file metadata operations

2011-01-28 Thread Brice Figureau
On Thu, 2011-01-27 at 15:59 -0800, Jason Wright wrote:
 On Thu, Jan 27, 2011 at 1:46 PM, Brice Figureau
 brice-pup...@daysofwonder.com wrote:
 
  Regarding the first stacktrace you posted in your first e-mail, I'm sure
  this is the writelock multiprocess issue we fixed in 2.6 and that I
  referred to in a previous e-mail. This is a single (ok maybe 2) line fix
  that is safe for you to backport to your 0.25.5 master.
 
 You're referring to this patch, correct?
 
 http://projects.puppetlabs.com/projects/puppet/repository/revisions/9ba0c8a22c6f9ca856851ef6c2d38242754a7a00/diff/lib/puppet/util/file_locking.rb
 
 I've got that patched into our package and will try to get it out to
 our canary puppetmasters tomorrow.

Yes, this is the patch I was talking about. I have great hopes it will
fix the filelock related stacktrace.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] high 500 error rate on file metadata operations

2011-01-27 Thread Brice Figureau
On Wed, 2011-01-26 at 13:56 -0800, Jason Wright wrote:
 On Wed, Jan 26, 2011 at 1:17 PM, Daniel Pittman dan...@puppetlabs.com wrote:
  For what it is worth I have been looking at this quietly in the
  background, and come to the conclusion that to progress further I am
  going to have to either reproduce this myself (failed, so far), or get
  a bit of state instrumentation into that code to track down exactly
  what conditions are being hit to trigger the failure.
 
 I haven't been able to reproduce it either.  So far, I've tried
 annexing a bunch of machines and running puppetd in a tight loop
 against an otherwise idle puppetmaster VM and I can get the rate of
 API calls and catalog compiles up to the correct level for one of our
 busy VMs, but no 500s (or even 400s) so far.  If this fails, I have
 some code which fetches pluginsync metadata and then proceeds to make
 fileserver calls for every .rb listed.  I'll start using that to generate
 traffic, since these are the sorts of operations which get the most
 errors.

There's no guarantee you're exercising the exact same XMLRPC code paths
you are seeing in production (especially if it's the filebucket
handler). Chances are your test environment doesn't change, so the
filebucket is almost never used.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] high 500 error rate on file metadata operations

2011-01-27 Thread Brice Figureau
On Thu, 2011-01-27 at 10:31 +0100, Brice Figureau wrote:
 On Wed, 2011-01-26 at 13:56 -0800, Jason Wright wrote:
  On Wed, Jan 26, 2011 at 1:17 PM, Daniel Pittman dan...@puppetlabs.com 
  wrote:
   For what it is worth I have been looking at this quietly in the
   background, and come to the conclusion that to progress further I am
   going to have to either reproduce this myself (failed, so far), or get
   a bit of state instrumentation into that code to track down exactly
   what conditions are being hit to trigger the failure.
  
  I haven't been able to reproduce it either.  So far, I've tried
  annexing a bunch of machines and running puppetd in a tight loop
  against an otherwise idle puppetmaster VM and I can get the rate of
  API calls and catalog compiles up to the correct level for one of our
  busy VMs, but no 500s (or even 400s) so far.  If this fails, I have
  some code which fetches pluginsync metadata and then proceeds to make
  fileserver calls for every .rb listed.  I'll start using that to generate
  traffic, since these are the sorts of operations which get the most
  errors.
 
 There's no guarantee you're exercising the exact same XMLRPC code paths
 you are seeing in production (especially if it's the filebucket
 handler). Chances are your test environment doesn't change, so the
 filebucket is almost never used.

From the stacktrace, it is highly improbable that it's the filebucket.
It really looks like the XMLRPC fileserving API.

But, I did a quick check and to my knowledge pluginsync, factsync and
file serving are using the REST api which doesn't call any of the XMLRPC
handlers.

Which makes me think that either you still have 0.24.x clients, or I
missed some 0.25 client feature that uses XMLRPC file serving.

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] high 500 error rate on file metadata operations

2011-01-27 Thread Brice Figureau
On Wed, 2011-01-26 at 14:36 -0800, Daniel Pittman wrote:
 On Wed, Jan 26, 2011 at 13:56, Jason Wright jwri...@google.com wrote:
  On Wed, Jan 26, 2011 at 1:17 PM, Daniel Pittman dan...@puppetlabs.com 
  wrote:
 
  For what it is worth I have been looking at this quietly in the
  background, and come to the conclusion that to progress further I am
  going to have to either reproduce this myself (failed, so far), or get
  a bit of state instrumentation into that code to track down exactly
  what conditions are being hit to trigger the failure.
 
  I haven't been able to reproduce it either.  So far, I've tried
  annexing a bunch of machines and running puppetd in a tight loop
  against an otherwise idle puppetmaster VM and I can get the rate of
  API calls and catalog compiles up to the correct level for one of our
  busy VMs, but no 500s (or even 400s) so far.  If this fails, I have
  some code which fetches pluginsync metadata and then proceeds to make
  fileserver calls for every .rb listed.  I'll start using that to generate
  traffic, since these are the sorts of operations which get the most
  errors.
 
  Sounds like a good next step might be for y'all to let me know when
  you might look at being able to do that instrumentation, and I can try
  and send you a satisfactory patch to trial?
 
  What instrumentation would you be looking for?
 
 Specifically, around the not mounted fault, in the 'splitpath'
 method, identify what the value of 'mount' in the outer 'unless' is,
 and what @mounts and mount_name contain.  My hope would be to use that
 to narrow down the possible causes, and either confirm or eliminate a
 thread race or something.

There are some thread races in this codepath: 
* we currently know that all cached_attrs (and splitpath uses one
through the module accessor of the environment) are subject to a thread
race in 0.25.
* there is another one when reading fileserver.conf (in readconfig).

But since passenger should normally make sure there is only one thread
in a given running puppet process, we should be immune.

 I doubt that will be the complete data set, but it should help move
 forward.  Annoyingly, I don't have a super-solid picture of what the
 problem is at this stage, because it looks like it shouldn't be
 possible to hit the situation but, clearly, it is getting there...

Yes, so we're certainly missing something, and instrumenting this
codepath will help understand the root cause.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] high 500 error rate on file metadata operations

2011-01-27 Thread Brice Figureau
On 27/01/11 20:40, Jason Wright wrote:
 On Thu, Jan 27, 2011 at 1:42 AM, Brice Figureau
 brice-pup...@daysofwonder.com wrote:
 Which make me think that either you still have 0.24.x clients or I
 missed some 0.25 client feature that uses XMLRPC file serving.
 
 All of our OS teams upgraded to 0.25.x clients months ago.  I don't
 even think our manifests compile for 0.24.x clients any more and we do
 not support clients earlier than 0.25.x any longer.  If we do have
 0.24.x clients (and there probably are a few running around), they're
 broken machines.  In any case, the nodes which are reporting the 500s
 which caused our Ubuntu team to open the bug in the first place are
 definitely on 0.25.5.  I don't know that I've reported the correct
 stack traces, only some of the more common ones.

I believe you :)

Can you correlate the stacktrace to a given client through the access
log? With that, maybe we can find what on the client triggers this.
I did a quick grep in the 0.25.5 source-code to find what could still
use the xmlrpc fileserver, but couldn't find anything.
Adding a couple of Puppet.info() calls to
puppet/network/handler/fileserver.rb around line 232, to print both the
client and the url, might help you find who is triggering this.

Regarding the first stacktrace you posted in your first e-mail, I'm sure
this is the writelock multiprocess issue we fixed in 2.6 and that I
referred to in a previous e-mail. This is a single (ok maybe 2) line fix
that is safe for you to backport to your 0.25.5 master.

HTH,
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] high 500 error rate on file metadata operations

2011-01-26 Thread Brice Figureau
On Tue, 2011-01-25 at 20:26 +0100, Brice Figureau wrote:
 On 25/01/11 20:10, Jason Wright wrote:
  On Tue, Jan 25, 2011 at 10:48 AM, Brice Figureau
  brice-pup...@daysofwonder.com wrote:
  xmlrpc?
  Do you still have 0.24.x clients?
  
  No.  We're 0.25.5 across the board.
  
  You omitted one important piece of information which is the kind of
  exception along with the error message. Can you post it so that we can
  understand what happens?
  
  No, I can't.  As I originally stated, the actual exception isn't
  making it into the apache error log and since the stack traces aren't
  timestamped, I can't correlate to the puppetmasterd logs.  I'd love to
  understand why this is so I can provide better information to you.
  I've received some passenger pointers from a coworker and am going to
  play with the logging options to see if I can affect this.
 
 OK, I missed this fact in the discussion, sorry.
 
  Would it be possible to run passenger in a mode where it won't spawn
  more than one thread and see if the problem disappears?
  
  You mean setting PassengerMaxPoolSize to 1?  I can try that on one of
  the production VMs but I'll have to revert it immediately if it causes
  any other problems.
 
 I really don't know anything about Passenger, but reading the
 documentation I don't think that's the correct setting.
 I couldn't find in the documentation how passenger or rack use ruby
 threads, so I'm unsure about what to do (one solution would be to add
 some sync on the puppet side).

My analysis might be completely wrong, because reading the passenger
documentation, it looks like there should be only one thread entering a
given master process at any given time.

Still there can be some tricky multi-process issues, like the one we
fixed in 2.6:
http://projects.puppetlabs.com/issues/4923
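
A generic illustration (not the actual Puppet patch) of the kind of
multi-process issue being discussed: two processes doing a
read-modify-write on a shared file can interleave and lose an update
unless the critical section is serialized, e.g. with File#flock:

```ruby
# Two forked processes each increment a counter stored in a file.
# Without the flock, the read-modify-write can race and an increment
# can be lost; with LOCK_EX the accesses are serialized.
require 'tmpdir'

path = File.join(Dir.tmpdir, "writelock-demo-#{Process.pid}")
File.write(path, '0')

pids = 2.times.map do
  fork do
    File.open(path, 'r+') do |f|
      f.flock(File::LOCK_EX)        # serialize the critical section
      value = f.read.to_i
      f.rewind
      f.write((value + 1).to_s)
      f.flush
      f.truncate(f.pos)
    end
  end
end
pids.each { |pid| Process.wait(pid) }

puts File.read(path)                # both increments survive: "2"
```

This is only a sketch of the failure class; the real 0.25 writelock bug
was fixed by the one-to-two-line patch referenced earlier in the thread.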


-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2011-01-26 Thread Brice Figureau
On Tue, 2011-01-25 at 17:11 -0500, Micah Anderson wrote:
 Brice Figureau brice-pup...@daysofwonder.com writes:
 
  On 15/12/10 19:27, Ashley Penney wrote:
  This issue is definitely a problem.  I have a support ticket in with
  Puppet Labs about the same thing.  My CPU remains at 100% almost
  constantly and it slows things down significantly.  If you strace it you
  can see that very little appears to be going on.  This is absolutely not
  normal behavior.  Even when I had 1 client checking in I had all cores
  fully used.
 
  I do agree that it's not the correct behavior. I suggest you to strace
  or use any other ruby introspection techniques to find what part of the
  master is taking CPU.
 
 I'm having a similar problem with 2.6.3. At this point, I can't get
 reliable puppet runs, and I'm not sure what to do.
 
 What seems to happen is things are working fine at the
 beginning. Catalog compiles peg the CPU for the puppet process that is
 doing them and take anywhere from between 20 seconds and 75
 seconds. Then things get drastically worse after 4 compiles (note: I
 have four mongrels too, coincidence?), catalog compiles shoot up to 115,
 165, 209, 268, 273, 341, 418, 546, 692, 774, 822, then 1149
 seconds... then things are really hosed. Sometimes hosts will fail
 outright and complain about weird things, like:
 
 Jan 25 14:04:34 puppetmaster puppet-master[30294]: Host is missing hostname 
 and/or domain: gull.example.com
 Jan 25 14:04:55 puppetmaster puppet-master[30294]: Failed to parse template 
 site-apt/local.list: Could not find value for 'lsbdistcodename' at 
 /etc/puppet/modules/site-apt/manifests/init.pp:4 on node gull.example.com
 
 All four of my mongrels are constantly pegged, doing 40-50% of the CPU
 each, occupying all available CPUs. They never settle down. I've got 74
 nodes checking in now, it doesn't seem like its that many, but perhaps
 i've reached a tipping point with my puppetmaster (its a dual 1ghz,
 2gigs of ram machine)?

The puppetmaster is mostly CPU bound. Since you have only 2 CPUs, you
shouldn't try to achieve a concurrency of 4 (which your mongrel are
trying to do), otherwise what will happen is that more than one request
will be accepted by one mongrel process and each thread will contend for
the CPU. The bad news is that the ruby MRI uses green threads, so the
second thread will only run when the first one sleeps, does I/O,
or relinquishes the CPU voluntarily. In other words, it will only run
once the first thread has finished its compilation.

Now you have 74 nodes, with a worst compilation time of 75s (which is
a lot); that translates to 74*75 = 5550s of compilation time.
With a concurrency of 2, that's still 2775s of compilation time per
round of whatever your sleep time is. With the default 30 min of sleep
time and assuming perfect scheduling, that's larger than a round of
sleep time, which means you won't ever finish compiling all the nodes
before the first node asks again for a catalog.

And I'm talking only about compilation. If your manifests use file
sourcing, you must also add this to the equation.
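
The back-of-the-envelope above can be written down as a quick capacity
check (numbers taken from the thread; perfect scheduling assumed):

```ruby
# Can the master finish a full round of catalog compiles within one
# sleep interval? Figures come from the discussion above.
nodes         = 74
worst_compile = 75              # seconds per catalog, worst case
concurrency   = 2               # usable CPUs, not the mongrel count
sleep_time    = 30 * 60         # default run interval, in seconds

total_cpu = nodes * worst_compile          # 5550 s of compilation work
per_round = total_cpu / concurrency        # 2775 s wall clock, ideally

puts "need #{per_round}s per round, have #{sleep_time}s"
puts(per_round > sleep_time ? 'master falls behind' : 'master keeps up')
```

Here 2775s of needed wall-clock time exceeds the 1800s interval, so the
master can never catch up; with the 60-minute interval mentioned later
in the thread, the same arithmetic barely fits.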

Another explanation of the issue is swapping. You mention your server
has 2GiB of RAM. Are you sure your 4 mongrel processes after some times
still fit in the physical RAM (along with the other thing running on the
server)?
Maybe your server is constantly swapping.

So you can do several thing to get better performances:
* reduce the number of nodes that check in at a single time (ie increase
sleep time)

* reduce the time it takes to compile a catalog: 
  + which includes not using storeconfigs (or using puppetqd or
thin_storeconfigs instead).
  + Check the server is not swapping.
  + Reduce the number of mongrel instances, to artificially reduce the
concurrency (this is counter-intuitive, I know)
  + use a better ruby interpreter like Ruby Enterprise Edition (among
other things this one has a better GC and a smaller memory footprint).
  + Cache compiled catalogs in nginx
  + offload file content serving in nginx
  + Use passenger instead of mongrel

Note: you can use puppet-load (in the 2.6 source distribution) to
simulate concurrent nodes asking for catalogs. This is really helpful
for sizing a puppetmaster and checking the real concurrency a
stack/hardware can give.

 I've tried a large number of different things to attempt to work around
 this:
 
 
 0. reduced my node check-in times to be once an hour (and splayed
 randomly)
 
 1. turn on puppetqd/stomp queuing
 
 This didn't seem to make a difference; it's off now
 
 2. turn on thin stored configs
 
This sort of helped a little, but not enough
 
 3. tried to upgrade rails from 2.3.5 (the debian version) to 2.3.10
 
I didn't see any appreciable difference here. I ended up going back to
 2.3.5 because that was the packaged version.

Since you seem to use Debian, make sure you use either the latest ruby
lenny backports (or REE) as they fixed an issue with pthreads and CPU
consumption:
http

Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2011-01-26 Thread Brice Figureau
On Wed, 2011-01-26 at 09:44 -0500, Micah Anderson wrote:
 Felix Frank felix.fr...@alumni.tu-berlin.de writes:
 
  I propose you need to restructure your manifest so that it compiles
  faster (if at all possible) or scale up your master. What you're
  watching is probably just overload and resource thrashing.
 
 I'm interested in ideas for what are good steps for restructuring
 manifests so they can compile faster, or at least methods for
 identifying problematic areas in manifests.
 
  Do you have any idea why each individual compilation takes that long?
 
 It wasn't before. Before things start spinning, compilation times are
 between 9 seconds and 60 seconds, usually averaging just shy of 30
 seconds. 

Do you use an External Node Classifier?
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2011-01-26 Thread Brice Figureau
On Wed, 2011-01-26 at 10:11 -0500, Micah Anderson wrote:
 Brice Figureau brice-pup...@daysofwonder.com writes:
 
  On Tue, 2011-01-25 at 17:11 -0500, Micah Anderson wrote:
  Brice Figureau brice-pup...@daysofwonder.com writes:
 
  All four of my mongrels are constantly pegged, doing 40-50% of the CPU
  each, occupying all available CPUs. They never settle down. I've got 74
  nodes checking in now, it doesn't seem like its that many, but perhaps
  i've reached a tipping point with my puppetmaster (its a dual 1ghz,
  2gigs of ram machine)?
 
  The puppetmaster is mostly CPU bound. Since you have only 2 CPUs, you
  shouldn't try to achieve a concurrency of 4 (which your mongrel are
  trying to do), otherwise what will happen is that more than one request
  will be accepted by one mongrel process and each thread will contend for
  the CPU. The bad news is that the ruby MRI uses green threading, so the
  second thread will only run when the first one will either sleep, do I/O
  or relinquish the CPU voluntary. In a word, it will only run when the
  first thread will finish its compilation.
 
 Ok, that is a good thing to know. I wasn't aware that ruby was not able
 to do that.
 
  Now you have 74 nodes, with the worst compilation time of 75s (which is
  a lot), that translates to 74*75 = 5550s of compilation time.
  With a concurrency of 2, that's still 2775s of compilation time per
  round of insert here your default sleep time. With the default 30min
  of sleep time and assuming a perfect scheduling, that's still larger
  than a round of sleep time, which means that you won't ever finish
  compiling nodes, when the first node will ask again for a catalog.
 
 I'm doing 60 minutes of sleep time, which is 3600 seconds an hour, the
 concurrency of 2 giving me 2775s of compile time per hour does keep me
 under the 3600 seconds... assuming scheduling is perfect, which it very
 likely is not.
 
  And I'm talking only about compilation. If your manifests use file
  sourcing, you must also add this to the equation.
 
 As explained, I set up your nginx method for offloading file sourcing.
 
  Another explanation of the issue is swapping. You mention your server
  has 2GiB of RAM. Are you sure your 4 mongrel processes after some times
  still fit in the physical RAM (along with the other thing running on the
  server)?
  Maybe your server is constantly swapping.
 
 I'm actually doing fine on memory, not dipping into swap. I've watched
 i/o to see if I could identify either a swap or disk problem, but didn't
 notice very much happening there. The CPU usage of the mongrel processes
 is pretty much where everything is spending its time. 
 
 I've been wondering if I have some loop in a manifest or something that
 is causing them to just spin.

I don't think that's the problem. There could be some ruby internals
issues at play here, but I doubt something in your manifests creates a
loop.

What is strange is that you mentioned that the very first catalog
compilations were fine, but then the compilation time increases.

  So you can do several thing to get better performances:
  * reduce the number of nodes that check in at a single time (ie increase
  sleep time)
 
 I've already reduced to once per hour, but I could consider reducing it
 more. 

That would be interesting. It would help us know whether the problem is
too much load/concurrency from your clients or a problem in the master
itself.

BTW, what's the load on the server?

  * reduce the time it takes to compile a catalog: 
+ which includes not using storeconfigs (or using puppetqd or
  thin_storeconfig instead). 
 
 I need to use storeconfigs, and as detailed in my original message, I've
 tried puppetqd and it didn't do much for me. thin_storeconfigs did help,
 and I'm still using it, so this one has already been done too.
 
+ Check the server is not swapping. 
 
 Not swapping.

OK, good.

+ Reduce the number of mongrel instances, to artifically reduce the
  concurrency (this is counter-intuitive I know)
 
 Ok, I'm backing off to two mongrels to see how well that works.

Let me know if that changes something.

+ use a better ruby interpreter like Ruby Enterprise Edition (for
  several reasons this ones has better GC, better memory footprint).
 
 I'm pretty sure my problem isn't memory, so I'm not sure if these will
 help much.

Well, having a better GC means the ruby interpreter becomes faster at
allocating and recycling objects. In the end that means the overall
memory footprint can be better, but it also means much less time spent
on garbage collection (ie the CPU is used for your code and not for
tidying stuff).

+ Cache compiled catalogs in nginx
 
 Doing this.
 
+ offload file content serving in nginx
 
 Doing this
 
+ Use passenger instead of mongrel
 
 I tried to switch to passenger, and things were much worse. Actually,
 passenger worked fine with 0.25, but when I upgraded I couldn't get it
 to function anymore. I

Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2011-01-26 Thread Brice Figureau
On Wed, 2011-01-26 at 10:11 -0500, Micah Anderson wrote:
 http {
    default_type  application/octet-stream;
 
    sendfile  on;
   tcp_nopush  on;
   tcp_nodelay on;
 
   large_client_header_buffers 1024  2048k;
   client_max_body_size150m;
   proxy_buffers   128 4k;
   
   keepalive_timeout   65;
   
    gzip  on;
   gzip_min_length 1000;
   gzip_types  text/plain;
 
   ssl on;
   ssl_certificate /var/lib/puppet/ssl/certs/puppetmaster.pem;
   ssl_certificate_key 
 /var/lib/puppet/ssl/private_keys/puppetmaster.pem;
   ssl_client_certificate  /var/lib/puppet/ssl/ca/ca_crt.pem;
   ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
   ssl_session_cache   shared:SSL:8m;
   ssl_session_timeout 5m;
   
   proxy_read_timeout  600;
   upstream puppet_mongrel {
 fair;
  server  127.0.0.1:18140;
  server  127.0.0.1:18141;
  server  127.0.0.1:18142;
  server  127.0.0.1:18143;
   }
   log_format  noip  '0.0.0.0 - $remote_user [$time_local] '
   '$request $status $body_bytes_sent '
   '$http_referer $http_user_agent';
 
   proxy_cache_path  /var/cache/nginx/cache  levels=1:2   
 keys_zone=puppetcache:10m;

make this:
proxy_cache_path  /var/cache/nginx/cache  levels=1:2   
keys_zone=puppetcache:50m inactive=300m

The default inactive is 10 minutes, which is too low for a sleep time
of 60 minutes, making it possible for cached catalogs to be evicted.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Mirror folder with large files

2011-01-25 Thread Brice Figureau
On Mon, 2011-01-24 at 17:14 +, Daniel Piddock wrote:
 Dear list,
 
 I'm attempting to mirror a folder containing a few large files from an
 NFS location to the local drive. Subsequent runs take a lot longer than
 I'd have expected, after the first run.
 
 Using the following block a puppet apply run is currently taking 30 seconds:
 file { '/usr/share/target':
 source   = 'file:///home/archive/source/',
 recurse  = true,
 backup   = false,
 checksum = mtime,
 }
 
 There are 42 files taking up 870MB. I'd have thought stating the files
 in the source and target, comparing to each other (or a cache internal
 to puppet as it doesn't set the mtime on files) would be a lot faster
 than it is.

This is a naive view of the problem :)
The puppet file type is certainly the most complex resource abstraction
puppet embeds (just think about the fact that it handles directories,
files, links, remote recursion, local recursion, etc...).

 I was curious about what puppet was up to, so ran it in strace. It's
 reading every file every run, multiple times! Reads the target twice,
 then the source twice before reading the target again. Considering I
 wasn't expecting it to open any of the files at all this is total over kill.
 
 Is this horribly bugged or have I got a magic incantation that's causing
 this behaviour? strace is rather verbose and I haven't exactly read all
 80MB of the dump line by line.
 
 Is there a neater way of just mirroring a folder based on modification
 time? I suppose the easiest route would be an exec of rsync, at least I
 have control over that.

Yes, I think rsync is the sanest way to do this.
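
For reference, the exec-based rsync mirror can be sketched in the same
manifest style as the file resource quoted above; the paths are the
poster's, while the resource title and the dry-run `onlyif` guard (so
the exec only fires and reports a change when rsync would actually copy
something) are illustrative assumptions:

```puppet
exec { 'mirror-archive':
  command => '/usr/bin/rsync -a --delete /home/archive/source/ /usr/share/target/',
  # Dry-run first: grep -q . succeeds only if rsync would change
  # something, so the exec stays silent on no-op runs.
  onlyif  => '/usr/bin/rsync -na --delete --itemize-changes /home/archive/source/ /usr/share/target/ | /bin/grep -q .',
}
```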

Recursive file resources (and especially sourced ones) are really tough
for puppet to handle in the current way the code is working.

Puppet manages individual file resources, and for every resource it
manages it as an instance of this resource in memory.

For deep/large file hierarchies, Puppet has to create/manage an
individual resource per file/directory present in the hierarchy, which
consumes both CPU and RAM (due to the way the ruby GC is poorly
implemented and the time it takes to create a ruby object).
And I'm not even talking about the scalability issues of generating and
handling the billions of change events coming up each time a file is
changed (which happens, for instance, the first time puppet runs).

I think I remember mtime is a checksum valid only for directories, and
puppet automatically switches to md5 for files (I don't really know the
reason, but I'm sure redmine knows it).

(One of) the problems is that puppet reads the file once to compute the
md5 sum, then reads it again to perform the copy when it detects a
change. I don't know exactly why it would read more times than that, but
I'm sure you can debug this by adding debug statements in
puppet/type/file/content.rb where all the writes happen.


 I'm using Puppet 2.6.4.
 
 Dan
 I especially like the way Ruby searches for and loads the md5 library
 every time it's used. What a performant language.

This certainly comes from this code in Puppet::Util::Checksums:
  # Calculate a checksum of a file's content using Digest::MD5.
  def md5_file(filename, lite = false)
require 'digest/md5'

digest = Digest::MD5.new
checksum_file(digest, filename,  lite)
  end

Notice how the require is in the function instead of being outside.
I'd have thought that ruby would be smart enough to understand the file
has already been required and not bother, but apparently it doesn't do
that for you. Can you tell us what ruby version and what platform you're
using?
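
A quick way to see what the repeated require actually costs: Kernel#require
loads a file at most once per process, and later calls just pay a
loaded-features lookup before returning false (so the observed "loads the
md5 library every time" is really a repeated lookup, not a repeated load):

```ruby
# Kernel#require loads a file at most once; subsequent calls only pay a
# $LOADED_FEATURES lookup and return false.
require 'digest/md5'
second = require 'digest/md5'
puts second   # false: already loaded, nothing re-read from disk

# Hoisting the require to file scope, as suggested above, saves even
# that small per-call lookup on a hot path:
#   require 'digest/md5'   # once, at load time
#   def md5_file(filename)
#     Digest::MD5.file(filename).hexdigest
#   end
```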
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] high 500 error rate on file metadata operations

2011-01-25 Thread Brice Figureau
On 25/01/11 18:47, Jason Wright wrote:
 On Mon, Jan 24, 2011 at 9:24 PM, Daniel Pittman dan...@puppetlabs.com wrote:
 For the other two exceptions, do you have 'ArgumentError' Could not
 find hostname raised anywhere, or FileServerError, Fileserver module
 %s is not mounted?  They also, ultimately, lead down to a place where
 I/O subsystem errors could cause a false failure, and it would be
 interesting to know if either of those two were thrown.
 
 These two lead to a fileserver module not mounted exception:
 
 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:401:in `splitpath'
 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:236:in `convert'
 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:133:in `list'
 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:52:in `call'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:52:in `protect_service'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:85
 /usr/lib/ruby/1.8/xmlrpc/server.rb:338:in `call'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:338:in `dispatch'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:325:in `each'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:325:in `dispatch'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:368:in `call_method'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:380:in `handle'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:44:in `process'
 /usr/lib/ruby/1.8/puppet/network/http/rack/xmlrpc.rb:35:in `process'
 
 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:401:in `splitpath'
 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:236:in `convert'
 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:68:in `describe'
 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:52:in `call'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:52:in `protect_service'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:85
 /usr/lib/ruby/1.8/xmlrpc/server.rb:338:in `call'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:338:in `dispatch'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:325:in `each'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:325:in `dispatch'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:368:in `call_method'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:380:in `handle'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:44:in `process'
 /usr/lib/ruby/1.8/puppet/network/http/rack/xmlrpc.rb:35:in `process'

xmlrpc?
Do you still have 0.24.x clients?

You omitted one important piece of information which is the kind of
exception along with the error message. Can you post it so that we can
understand what happens?
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] high 500 error rate on file metadata operations

2011-01-25 Thread Brice Figureau
On 25/01/11 19:27, Nigel Kersten wrote:
 On Tue, Jan 25, 2011 at 10:25 AM, Brice Figureau
 brice-pup...@daysofwonder.com wrote:
 On 25/01/11 18:47, Jason Wright wrote:
 On Mon, Jan 24, 2011 at 9:24 PM, Daniel Pittman dan...@puppetlabs.com 
 wrote:
 For the other two exceptions, do you have 'ArgumentError' Could not
 find hostname raised anywhere, or FileServerError, Fileserver module
 %s is not mounted?  They also, ultimately, lead down to a place where
 I/O subsystem errors could cause a false failure, and it would be
 interesting to know if either of those two were thrown.

 These two lead to a fileserver module not mounted exception:

 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:401:in `splitpath'
 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:236:in `convert'
 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:133:in `list'
 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:52:in `call'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:52:in `protect_service'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:85
 /usr/lib/ruby/1.8/xmlrpc/server.rb:338:in `call'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:338:in `dispatch'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:325:in `each'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:325:in `dispatch'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:368:in `call_method'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:380:in `handle'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:44:in `process'
 /usr/lib/ruby/1.8/puppet/network/http/rack/xmlrpc.rb:35:in `process'

 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:401:in `splitpath'
 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:236:in `convert'
 /usr/lib/ruby/1.8/puppet/network/handler/fileserver.rb:68:in `describe'
 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:52:in `call'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:52:in `protect_service'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:85
 /usr/lib/ruby/1.8/xmlrpc/server.rb:338:in `call'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:338:in `dispatch'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:325:in `each'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:325:in `dispatch'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:368:in `call_method'
 /usr/lib/ruby/1.8/xmlrpc/server.rb:380:in `handle'
 /usr/lib/ruby/1.8/puppet/network/xmlrpc/processor.rb:44:in `process'
 /usr/lib/ruby/1.8/puppet/network/http/rack/xmlrpc.rb:35:in `process'

 xmlrpc?
 Do you still have 0.24.x clients?

 You omitted one important piece of information which is the kind of
 exception along with the error message. Can you post it so that we can
 understand what happens?
 
 Brice, I'm pretty sure we still had some XMLRPC left in 0.25.x, I
 don't believe we completely got rid of it until 2.6.x

Oh, I'm well aware of this. I was asking about 0.24.x clients because I
thought the only handler not yet ported to REST was the filebucket.
I was pretty sure file_metadata and file_content were handled through
full-blown REST.

BTW, I really think this is a thread race, as the first trace reminds me
of something I reported (and we fixed) for 2.6.

Looking at the 0.25.5 code of the xmlrpc fileserver handler: when
mounting, it tries to find the current node, which might trigger a call
to the ENC, if I'm not mistaken.
If this operation takes a long time, it is quite possible that another
thread triggers the same codepath. That same codepath also uses the
environment cached_attr module; I discovered a thread race in that code
as well, which was fixed in 2.6.

Would it be possible to run passenger in a mode where it won't spawn
more than one thread and see if the problem disappears?
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] high 500 error rate on file metadata operations

2011-01-25 Thread Brice Figureau
On 25/01/11 20:10, Jason Wright wrote:
 On Tue, Jan 25, 2011 at 10:48 AM, Brice Figureau
 brice-pup...@daysofwonder.com wrote:
 xmlrpc?
 Do you still have 0.24.x clients?
 
 No.  We're 0.25.5 across the board.
 
 You omitted one important piece of information which is the kind of
 exception along with the error message. Can you post it so that we can
 understand what happens?
 
 No, I can't.  As I originally stated, the actual exception isn't
 making it into the apache error log and since the stack traces aren't
 timestamped, I can't correlate to the puppetmasterd logs.  I'd love to
 understand why this is so I can provide better information to you.
 I've received some passenger pointers from a coworker and am going to
 play with the logging options to see if I can affect this.

OK, I missed this fact in the discussion, sorry.

 Would it be possible to run passenger in a mode where it won't spawn
 more than one thread and see if the problem disappears?
 
 You mean setting PassengerMaxPoolSize to 1?  I can try that on one of
 the production VMs but I'll have to revert it immediately if it causes
 any other problems.

I really don't know anything about Passenger, but reading the
documentation I don't think that's the correct setting.
I couldn't find in the documentation how Passenger or Rack uses Ruby
threads, so I'm unsure what to do (one solution would be to add some
synchronization on the Puppet side).

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Schedules. Who uses them and why?

2011-01-18 Thread Brice Figureau
On Mon, 2011-01-17 at 18:38 -0800, Nigel Kersten wrote:
 I'm trying to get a feel for the actual use cases for the Schedule
 type in Puppet.

Can you elaborate?

 Anyone care to help me out with some real world examples?

I'm using schedules so that my puppetd runs apt-get update only once
per day.
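
For reference, a minimal sketch of that pattern (the resource and
schedule names here are illustrative, not from the original setup):

```puppet
# Hypothetical names; limits the exec to at most one run per day.
schedule { 'daily_apt':
  period => daily,
  repeat => 1,
}

exec { 'apt-get update':
  command  => '/usr/bin/apt-get update',
  schedule => 'daily_apt',
}
```

On every agent run, Puppet still evaluates the exec, but only applies it
if the last application falls outside the schedule's window.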
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Schedules. Who uses them and why?

2011-01-18 Thread Brice Figureau
On 18/01/11 18:12, Nigel Kersten wrote:
 On Tue, Jan 18, 2011 at 8:50 AM, Brice Figureau
 brice-pup...@daysofwonder.com mailto:brice-pup...@daysofwonder.com
 wrote:
 
 On Mon, 2011-01-17 at 18:38 -0800, Nigel Kersten wrote:
  I'm trying to get a feel for the actual use cases for the Schedule
  type in Puppet.
 
 Can you elaborate?
 
 
 I've never used them much, people don't talk about them a lot, and I
 want to get a feel for what people use them for :)

I think this is an underrated feature that most people don't know about.

 
  Anyone care to help me out with some real world examples?
 
 I'm using schedules so that my puppetd run apt-get update only once
 per day.
 
 
 Interesting. How do you deal with pushing out a new repository? 

I'm afraid that never happened, so I never cared :)
But I suppose you could temporarily add a resource override with
schedule => undef.
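
A sketch of such an override, assuming the exec was declared in a
hypothetical apt class (class and resource names are illustrative):

```puppet
# Temporarily drop the schedule so the exec runs on every agent run,
# e.g. right after pushing out a new repository.
class apt::refresh inherits apt {
  Exec['apt-get update'] {
    schedule => undef,
  }
}
```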

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Using puppet to fix a lot of files permissions

2011-01-07 Thread Brice Figureau
On Fri, 2011-01-07 at 15:33 +0100, Sébastien Barthélémy wrote:
 Hello again,
 
 thank you for your answers. I use puppet 2.6.4.
 I don't think I have a site.pp
 
 On Wed, 5 Jan 2011, Patrick wrote:
  I'm finding that with my version of puppet (2.6.4), the checksum line 
 has no effect on the run time when run on a directory containing 10 files 
 that total 7.5GB.  I am not
  using the source parameter and it takes just over 2 seconds to run with 
 checksum = none or checksum = md5.  My test computer is a Core 2 Duo 
 running on a laptop.  I am
  not using a puppetmaster to test.
  I'm still interested to see if adding that line helps though.
 
 I think the line does not help.
 
 Here are subsequent calls to puppet.
 The 3 first runs are with the line, the last line without.
 
 The first call is really long because it had work to do
 (perms to fix), the subsequents were no-ops.
 
 $ sudo time puppet -l /tmp/puppet.log ~/puppet/respeer.pp
38779.74 real 38230.77 user   153.85 sys
 $ sudo time puppet -l /tmp/puppet.log ~/puppet/respeer.pp
388.47 real   380.06 user 4.00 sys
 $ sudo time puppet -l /tmp/puppet.log ~/puppet/respeer.pp
398.06 real   390.21 user 4.08 sys
 $ sudo time puppet -l /tmp/puppet.log ~/puppet/respeer.pp
385.04 real   376.70 user 4.00 sys

It would be interesting to try with 0.25.5 (and still use checksum =>
none), to compare times and see if we have a regression or not.

How many files do you have in total?

And if you run with --debug, is there a part of the log where it looks
like it is spending most of its time?

We used to have some problems with recursive file resources back in the
0.24/0.25 days that we fixed in the latest 0.25 releases. It is possible
some of them resurfaced in 2.6...
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Using puppet to fix a lot of files permissions

2011-01-07 Thread Brice Figureau
On 07/01/11 20:44, Jeff McCune wrote:
 -Jeff
 
 On Jan 7, 2011, at 6:51 AM, Brice Figureau
 brice-pup...@daysofwonder.com wrote:
 
 On Fri, 2011-01-07 at 15:33 +0100, Sébastien Barthélémy wrote:
 Hello again,

 thank you for your answers. I use puppet 2.6.4.
 I don't think I have a site.pp

 On Wed, 5 Jan 2011, Patrick wrote:
 I'm finding that with my version of puppet (2.6.4), the checksum line
 has no effect on the run time when run on a directory containing 10 files
 that total 7.5GB.  I am not
 using the source parameter and it takes just over 2 seconds to run with
 checksum = none or checksum = md5.  My test computer is a Core 2 Duo
 running on a laptop.  I am
 not using a puppetmaster to test.
 I'm still interested to see if adding that line helps though.

 I think the line does not help.

 Here are subsequent calls to puppet.
 The 3 first runs are with the line, the last line without.

 The first call is really long because it had work to do
 (perms to fix), the subsequents were no-ops.

 $ sudo time puppet -l /tmp/puppet.log ~/puppet/respeer.pp
   38779.74 real 38230.77 user   153.85 sys
 $ sudo time puppet -l /tmp/puppet.log ~/puppet/respeer.pp
   388.47 real   380.06 user 4.00 sys
 $ sudo time puppet -l /tmp/puppet.log ~/puppet/respeer.pp
   398.06 real   390.21 user 4.08 sys
 $ sudo time puppet -l /tmp/puppet.log ~/puppet/respeer.pp
   385.04 real   376.70 user 4.00 sys

 It would be interesting to try with 0.25.5 (and still use checksum =
 none), to compare times and see if we have a regression or not.
 
 I think checksumming is a red herring.
 
 How many files do you have in total?
 
 This is the primary issue think. Puppet models all of the files and
 Directories as resources and adds dependency relationships among them.
  It then sorts the graph.  I suspect the sorting of the resource graph
 is your performance issue here.

To my knowledge the sub-child resources are spawned during the
transaction evaluation, so even though they're part of the graph they're
never really sorted (the sort phase happens beforehand). But you're
right that the graph becomes deep and large, which consumes memory.

What might be happening is what happened pre-0.25.5: too many events are
generated (at least one for every file change) and propagated to all the
other nodes (i.e., an O(n²) problem).
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Using puppet to fix a lot of files permissions

2011-01-06 Thread Brice Figureau
On Wed, 2011-01-05 at 13:25 -0800, Patrick wrote:
 
 On Jan 5, 2011, at 10:13 AM, Brice Figureau wrote:
 
  On 05/01/11 18:11, Sébastien Barthélémy wrote:
   Hello,
   
   I store camera pictures in a git repository, which became quite
   big:
   104 GB for the whole (non-bare) repository.
   
   I wanted to fix the files permissions, and thought puppet might be
   the 
   good tool for this (I like its declarative way of simplifying my 
   life).
   
   I gave it a try, with the following statements
   
   node navi {
   file {
   /Users/seb/Pictures/pictures/:
   mode => 0640,
   owner => seb,
   group => staff,
   recurse => true,
   ignore => .git
}
   file {
   /Users/seb/Pictures/pictures/.encfs5/:
   mode => 0600,
   owner => seb,
   group => staff,
   recurse => true,
}
   file {
   /Users/seb/Pictures/pictures/.git/:
   mode => 0600,
   owner => seb,
   group => staff,
   recurse => true,
}
   file {
   /Users/seb/Pictures/pictures/.git/hooks/:
   mode => 0700,
   owner => seb,
   group => staff,
   recurse => true,
}
   }
   
   And a call to sudo puppet -l /tmp/puppet.log ~/statement.pp
   
   Well, that was 4 hours ago and since then, ruby is eating 100% of
   my CPU
   (of one core of my 2.26GHz core 2 duo). 
   From the log file, I can tell that puppet is indeed fixing perms,
   at a 
   rate lower than one file per 10 seconds.
   
   I think find, xargs and chmod would take a few minutes at most
   (will try 
   later).
   
   Why is puppet so slow at this job? Is there any way I could
   improve the 
   speed?
  
  Puppet is md5 checksumming all your files and that is a lng and
  slooow operation.
  
  If you run 2.6, you should add:
  checksum => none
  
  to your file resources, and it should be way faster.
  
 
 
 
 
 I'm finding that with my version of puppet (2.6.4), the checksum line
 has no effect on the run time when run on a directory containing 10
 files that total 7.5GB.  I am not using the source parameter and it
 takes just over 2 seconds to run with checksum = none or checksum
 = md5.  My test computer is a Core 2 Duo running on a laptop.  I am
 not using a puppetmaster to test.

Then puppet is not checksumming your files (or it would take longer than
2s). It's possible that we now default to checksum none when there is no
source or content.

 I'm still interested to see if adding that line helps though.

Clearly no. Do you think 2s is too long?
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Using puppet to fix a lot of files permissions

2011-01-06 Thread Brice Figureau
On Thu, 2011-01-06 at 10:12 -0500, Mark Stanislav wrote:
  Clearly no. Do you think 2s is too long?
  
  
  That wasn't what I meant.  I was wondering if somehow it was
 defaulting to on with all those photos the original poster had.
 
 Perhaps they have a site.pp default? Wouldn't be the first time
 someone ran into that.

Or the OP is using puppet 0.25 or 0.24 which didn't support the none
checksum.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Using puppet to fix a lot of files permissions

2011-01-05 Thread Brice Figureau
On 05/01/11 18:11, Sébastien Barthélémy wrote:
 Hello,
 
 I store camera pictures in a git repository, which became quite big:
 104 GB for the whole (non-bare) repository.
 
 I wanted to fix the files permissions, and thought puppet might be the 
 good tool for this (I like its declarative way of simplifying my 
 life).
 
 I gave it a try, with the following statements
 
 node navi {
 file {
  /Users/seb/Pictures/pictures/:
  mode => 0640,
  owner => seb,
  group => staff,
  recurse => true,
  ignore => .git
   }
 file {
  /Users/seb/Pictures/pictures/.encfs5/:
  mode => 0600,
  owner => seb,
  group => staff,
  recurse => true,
   }
 file {
  /Users/seb/Pictures/pictures/.git/:
  mode => 0600,
  owner => seb,
  group => staff,
  recurse => true,
   }
 file {
  /Users/seb/Pictures/pictures/.git/hooks/:
  mode => 0700,
  owner => seb,
  group => staff,
  recurse => true,
   }
 }
 
 And a call to sudo puppet -l /tmp/puppet.log ~/statement.pp
 
 Well, that was 4 hours ago and since then, ruby is eating 100% of my CPU
 (of one core of my 2.26GHz core 2 duo). 
 From the log file, I can tell that puppet is indeed fixing perms, at a 
 rate lower than one file per 10 seconds.
 
 I think find, xargs and chmod would take a few minutes at most (will try 
 later).
 
 Why is puppet so slow at this job? Is there any way I could improve the 
 speed?

Puppet is md5-checksumming all your files, and that is a long and slow
operation.

If you run 2.6, you should add:
checksum => none

to your file resources, and it should be way faster.
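
Applied to the original manifest, one of the resources would look like
this (a sketch, assuming Puppet 2.6 where the none checksum is
supported):

```puppet
file { '/Users/seb/Pictures/pictures/':
  mode     => 0640,
  owner    => seb,
  group    => staff,
  recurse  => true,
  ignore   => '.git',
  checksum => none,   # only stat() each file instead of md5-summing it
}
```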
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Separating puppetmaster file serving and catalogs

2010-12-16 Thread Brice Figureau
On Wed, 2010-12-15 at 20:15 -0800, Patrick wrote:
 On Dec 15, 2010, at 1:48 PM, Brice Figureau wrote:
 
  On 15/12/10 12:04, Patrick wrote:
  I'm looking for a way to run more than one puppetmaster on the same
  server under passenger.  Most of the puppet CPU load is waiting for
  the catalogs to compile.  This also seems to be mostly what takes
  large amounts of RAM.  I have storedconfigs on.
  
   If you don't need the full storedconfigs, you can use
  thin_storeconfigs
   for way better performance.
 
 Thanks.  I'm actually doing that, and misspoke in the first post.
 
  Is there a better way to do this?  What I really want is for the
  cheap file requests to stop being blocked by the expensive catalog
  requests and keep the RAM usage low on the file serving processes.
  
  You can use what I called file serving offloading:
  http://www.masterzen.fr/2010/03/21/more-puppet-offloading/
 
 The file offloading is interesting.  So if I'm reading that right,
 that only makes a difference if some of the files are not in sync?

Actually yes, because the file content is sent only if the checksum
differs (and if you provision many new nodes at the same time, that can
help). One solution would be to offload metadata computation to a native
nginx module (something easy to do once you know how to write an nginx
module).

 My original error was that I didn't set:
 SSLProxyEngine on
 
 Now I'm just getting errors that say all requests are forbidden.  I
 assume this is because the puppetmaster isn't seeing the headers from
 apache that have the SSL information.

You must set up your file-serving master exactly like your catalog (or
general) master.

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2010-12-16 Thread Brice Figureau
On Wed, 2010-12-15 at 19:47 -0500, Ashley Penney wrote:
 Just to reply to this - like I said earlier I can get this problem
 with 1 node checking in against puppetmaster.  All the puppetmasterd
 processes use maximum CPU.  It's not a scaling issue considering
 serving one node is certainly not going to max out a newish physical
 server.

This looks like a bug to me.

Do your manifests use many file sources?
And/or recursive file resources?
It's possible that those masters are spending their time checksumming
files.

Like I said earlier in the thread the only real way to know is to use
Puppet introspection:
http://projects.puppetlabs.com/projects/1/wiki/Puppet_Introspection
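
The kind of resource that triggers this server-side checksumming looks
like the following (the path and module name are hypothetical):

```puppet
# Every file under the mounted directory is md5-checksummed by the
# master on each agent run to answer the metadata requests.
file { '/etc/myapp/conf.d':
  ensure  => directory,
  recurse => true,
  source  => 'puppet:///modules/myapp/conf.d',
}
```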

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2010-12-15 Thread Brice Figureau
On Tue, 2010-12-14 at 00:24 -0800, Chris wrote:
 Hi
 
 I recently upgraded my puppet masters (and clients) from 0.24.8 to
 2.6.4
 
 Previously, my most busy puppet master would hover around about 0.9
 load  average, after the upgrade, its load hovers around 5
 
 I am running passenger and mysql based stored configs.
 
 Checking my running processes, ruby (puppetmasterd) shoots up to 99%
 cpu load and stays there for a few seconds before dropping again.
 Often there are 4 of these running simultaneously, pegging each core
 at 99% cpu.

I would say it is perfectly normal. Compiling the catalog is a hard and
complex problem and requires CPU.

The difference between 0.24.8 and 2.6 (or 0.25 for that matter) is that
some performance issues have been fixed. Those issues made the master
more I/O-bound under 0.24; in later versions it is mostly CPU-bound.

Now compare the compilation time under 0.24.8 and 2.6 and you should see
that it has dropped drastically (allowing more compilations to fit in
the same unit of time). The flip side is that your master now requires
transient high CPU usage.

I don't really see what the issue is with using 100% of the CPU.

You're paying about the same price whether your CPU is busy or idle, so
that shouldn't make a difference :)

If that's an issue, reduce the concurrency of your setup (run fewer
compilations in parallel, implement splay time, etc.).

 It seems that there has been a serious performance regression between
 0.24 and 2.6 for my configuration

I think it's the reverse that happened.

 I hop the following can help work out where...
 
 I ran puppetmasterd through a profiler to find the root cause of this
 (http://boojum.homelinux.org/profile.svg).  The main problem appears
 to be in /usr/lib/ruby/site_ruby/1.8/puppet/parser/ast/resource.rb, in
 the evaluate function.
 
 I added a few timing commands around various sections of that function
 to find the following breakdown of times spent inside it, and the two
 most intensive calls are
 ---
 paramobjects = parameters.collect { |param|
   param.safeevaluate(scope)
 }
 ---
 
 and
 ---
  resource_titles.flatten.collect { |resource_title|
    exceptwrap :type => Puppet::ParseError do
      resource = Puppet::Parser::Resource.new(
        fully_qualified_type, resource_title,
        :parameters => paramobjects,
        :file => self.file,
        :line => self.line,
        :exported => self.exported,
        :virtual => virt,
        :source => scope.source,
        :scope => scope,
        :strict => true
      )

      if resource.resource_type.is_a? Puppet::Resource::Type
        resource.resource_type.instantiate_resource(scope, resource)
      end
      scope.compiler.add_resource(scope, resource)
      scope.compiler.evaluate_classes([resource_title], scope, false) if fully_qualified_type == 'class'
      resource
    end
  }.reject { |resource| resource.nil? }
 ---

Yes, this is what the compiler does during compilation: evaluating
resources and parameters. The more resources you use, the longer and
more CPU-intensive compilation will be.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2010-12-15 Thread Brice Figureau
On Wed, 2010-12-15 at 05:28 -0800, Chris wrote:
 
 On Dec 15, 12:42 pm, Brice Figureau brice-pup...@daysofwonder.com
 wrote:
  On Tue, 2010-12-14 at 00:24 -0800, Chris wrote:
   Hi
 
   I recently upgraded my puppet masters (and clients) from 0.24.8 to
   2.6.4
 
   Previously, my most busy puppet master would hover around about 0.9
   load  average, after the upgrade, its load hovers around 5
 
   I am running passenger and mysql based stored configs.
 
   Checking my running processes, ruby (puppetmasterd) shoots up to 99%
   cpu load and stays there for a few seconds before dropping again.
   Often there are 4 of these running simultaneously, pegging each core
   at 99% cpu.
 
  I would say it is perfectly normal. Compiling the catalog is a hard and
  complex problem and requires CPU.
 
  The difference between 0.24.8 and 2.6 (or 0.25 for what matters) is that
  some performance issues have been fixed. Those issues made the master be
  more I/O bound under 0.24, but now mostly CPU bound in later versions.
 
 If we were talking about only cpu usage, I would agree with you.  But
 in this case, the load average of the machine has gone up over 5x.
 And as high load average indicates processes not getting enough
 runtime, in this case it is an indication to me that 2.6 is performing
 worse than 0.24 (previously, on average, all processes got enough
 runtime and did not have to wait for system resources, now processes
 are sitting in the run queue, waiting to get a chance to run)

Load is not necessarily an indication of a problem. It can also mean
some tasks are waiting for I/O, not only CPU.
The only real issue under load is when service time goes beyond an
acceptable value; otherwise you can't say whether it's bad or not.
If you see some hosts reporting timeouts, then that's an indication that
service time is not good :)

BTW, do you run your MySQL storedconfig instance on the same server?
You can activate thin_storeconfigs to reduce the load on the MySQL db.

 
  I don't really get what is the issue about using 100% of CPU?
 Thats not the issue, just an indication of what is causing it
 
 
  You're paying about the same price when your CPU is used and when it's
  idle, so that shouldn't make a difference :)
 Generally true, but this is a on VM which is also running some of my
 radius and proxy instances, amongst others.
 
 
  If that's an issue, reduce the concurrency of your setup (run less
  compilation in parallel, implement splay time, etc...).
 splay has been enabled since 0.24
 
 My apache maxclients is set to 15 to limit concurrency.

I think this is too many unless you have 8 cores. As Trevor said in
another e-mail in this thread, 2 Passenger processes per core is best.

Now it all depends on your number of nodes and sleep time. I suggest you
use ext/puppet-load to find your setup's real concurrency.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2010-12-15 Thread Brice Figureau
On 15/12/10 19:27, Ashley Penney wrote:
 This issue is definitely a problem.  I have a support ticket in with
 Puppet Labs about the same thing.  My CPU remains at 100% almost
 constantly and it slows things down significantly.  If you strace it you
 can see that very little appears to be going on.  This is absolutely not
 normal behavior.  Even when I had 1 client checking in I had all cores
 fully used.

I do agree that this is not the correct behavior. I suggest you strace
it or use any other Ruby introspection technique to find which part of
the master is consuming CPU.
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2010-12-15 Thread Brice Figureau
On 15/12/10 19:35, Disconnect wrote:
 me too. All the logs show nice quick compilations but the actual wall
 clock to get anything done is HUGE.
 
 Dec 15 13:10:29 puppet puppet-master[31406]: Compiled catalog for
 puppet.foo.com http://puppet.foo.com in environment production in
 21.52 seconds

This looks long.

 Dec 15 13:10:51 puppet puppet-agent[8251]: Caching catalog for
 puppet.foo.com http://puppet.foo.com
 
 That was almost 30 minutes ago. Since then, it has sat there doing
 nothing...
 $ sudo strace -p 8251
 Process 8251 attached - interrupt to quit
 select(7, [6], [], [], {866, 578560}
 
 lsof shows:
 puppetd 8251 root6u  IPv4   11016045  0t0  TCP
 puppet.foo.com:33065-puppet.foo.com:8140 http://puppet.foo.com:8140
 (ESTABLISHED)

Note: we were talking about the puppet master taking 100% CPU, but
you're apparently looking at the puppet agent, which is a different story.

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2010-12-15 Thread Brice Figureau
On 15/12/10 20:24, Disconnect wrote:
 On Wed, Dec 15, 2010 at 2:14 PM, Brice Figureau
 brice-pup...@daysofwonder.com
 wrote:

 Note: we were talking about the puppet master taking 100% CPU, but
 you're apparently looking to the puppet agent, which is a different story.

 
 The agent isn't taking cpu, it is hanging waiting for the master to do
 anything. (The run I quoted earlier eventually ended with a timeout..)
 The master has pegged the cpus, and it seems to be related to file
 resources:

Oh, I see.

 $ ps auxw|grep master
 puppet   31392 74.4  4.7 361720 244348 ?   R10:42 162:06 Rack:
 /usr/share/puppet/rack/puppetmasterd  
  
 
 puppet   31396 70.0  4.9 369524 250200 ?   R10:42 152:32 Rack:
 /usr/share/puppet/rack/puppetmasterd  
  
 
 puppet   31398 66.2  3.9 318828 199472 ?   R10:42 144:10 Rack:
 /usr/share/puppet/rack/puppetmasterd  
  
 
 puppet   31400 66.6  4.9 369992 250588 ?   R10:42 145:04 Rack:
 /usr/share/puppet/rack/puppetmasterd  
  
 
 puppet   31406 68.6  3.9 318292 200992 ?   R10:42 149:31 Rack:
 /usr/share/puppet/rack/puppetmasterd  
  
 
 puppet   31414 67.0  2.4 243800 124476 ?   R10:42 146:00 Rack:
 /usr/share/puppet/rack/puppetmasterd  
 

Note that they're all running. That means there are none left to serve
file content while they are all busy compiling catalogs for several
seconds each (around 20 in your case).

 Dec 15 13:42:23 puppet puppet-master[31406]: Compiled catalog for
 puppet.foo.com in environment production in
 30.83 seconds
 Dec 15 13:42:49 puppet puppet-agent[10515]: Caching catalog for
 puppet.foo.com
 Dec 15 14:00:18 puppet puppet-agent[10515]: Applying configuration
 version '1292438512'
 ...
 Dec 15 14:14:56 puppet puppet-agent[10515]: Finished catalog run in
 882.43 seconds
 Changes:
 Total: 6
 Events:
   Success: 6
 Total: 6
 Resources:
   Changed: 6
   Out of sync: 6
 Total: 287

That's not a big number.

 Time:
Config retrieval: 72.20

This is also suspect.

  Cron: 0.05
  Exec: 32.42
  File: 752.33

Indeed.

Filebucket: 0.00
 Mount: 0.98
   Package: 6.13
  Schedule: 0.02
   Service: 9.09
Ssh authorized key: 0.07
Sysctl: 0.00
 
 real34m56.066s
 user1m6.030s
 sys0m26.590s
 

That just means your master processes are so busy serving catalogs that
they barely have time to serve files. One possibility is to use file
content offloading (see one of my blog posts about this:
http://www.masterzen.fr/2010/03/21/more-puppet-offloading/).

How many nodes are you compiling at the same time? Apparently you have 6
master processes running at high CPU usage.

As I said earlier, I really advise people to try puppet-load (which can
be found in the ext/ directory of the source tarball since puppet 2.6)
to exercise load against a master. This will help you find your actual
concurrency limit.

But, if it's a bug, then could this be an issue with passenger?
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: puppetmaster 100%cpu usage on 2.6 (not on 0.24)

2010-12-15 Thread Brice Figureau
On 15/12/10 21:10, Disconnect wrote:
 As a datapoint, this exact config (with mongrel_cluster) was working
 great under 0.25.x. With fewer, slower cpus, slower storage (vm image
 files) and 2G of ram...

So I ask it again: could it be a problem with passenger more than an
issue with puppet itself?

It would really be interesting to use some ruby introspection[1] to find
exactly where the cpu is spent in those masters.

Could it be that with Passenger everything gets reparsed instead of just
compiled? (I simply don't know; I'm just throwing out ideas.)

I myself use nginx + mongrel, but I have only a dozen nodes, so I don't
really qualify.

 I gave puppet-load a try, but it is throwing errors that I don't have
 time to dig into today:
 debug: reading facts from: puppet.foo.com.yaml
 /var/lib/gems/1.8/gems/em-http-request-0.2.15/lib/em-http/request.rb:72:in
 `send_request': uninitialized constant EventMachine::ConnectionError
 (NameError)
 from
 /var/lib/gems/1.8/gems/em-http-request-0.2.15/lib/em-http/request.rb:59:in
 `setup_request'
 from
 /var/lib/gems/1.8/gems/em-http-request-0.2.15/lib/em-http/request.rb:49:in
 `get'
 from ./puppet-load.rb:272:in `spawn_request'
 from ./puppet-load.rb:334:in `spawn'

Could it be that you're missing EventMachine?

 Running about 250 nodes, every 30 minutes.

Did you try to use mongrel?
Do you use splay time?

Just some math (which might be totally wrong), to give an idea of how I
think we can compute the optimal scaling case:
With 250 nodes and a sleep time of 30 minutes, we need to absorb 250
compiles in every 30-minute span. If we assume a concurrency of 2 and
all nodes evenly spaced in time, each process must compile 125 nodes in
30 minutes. At about 10s per compilation, that's 1250s, or roughly 21
minutes, so you have some room for growth :)
During those ~21 minutes your 2 master processes will consume 100% CPU.
Since the CPU is busy for only about 70% of the 30-minute span, you'll
globally consume about 70% of the CPU you have available...

Hope that helps,

[1]: http://projects.puppetlabs.com/projects/1/wiki/Puppet_Introspection
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] apt-get -t lenny-backports

2010-12-09 Thread Brice Figureau
On 09/12/10 03:38, Daniel Pittman wrote:
 Sadly, no. I very much missed the feature. We ended up using the apt
 preferences file to implement that behaviour
 
 If I was doing it over I would use a define that added the package
 resource and also used concat to automatically build up the preferences
 entry.

I have a define called apt::source that at the same time adds a source
to sources.list.d and creates a fragment of apt preferences (with a
carefully set pin priority), which when all concatenated form the
central apt preferences file.

This works flawlessly :)
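
As a rough sketch of the idea (all names and parameters below are
illustrative, not the actual code, and they assume some concat-style
fragment helper is available):

```puppet
# Illustrative only: the parameters and the fragment helper are assumptions.
define apt::source($url, $release, $repos = 'main', $pin_priority = 500) {
  # The source itself goes into its own file under sources.list.d
  file { "/etc/apt/sources.list.d/${name}.list":
    content => "deb ${url} ${release} ${repos}\n",
  }

  # One apt preferences fragment per source; a concat-style define
  # assembles all fragments into the central /etc/apt/preferences file.
  apt::preferences_fragment { $name:
    content => "Package: *\nPin: release a=${release}\nPin-Priority: ${pin_priority}\n",
  }
}
```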
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] apt-get -t lenny-backports

2010-12-09 Thread Brice Figureau
On 09/12/10 19:19, Patrick wrote:
 
 On Dec 9, 2010, at 10:07 AM, Brice Figureau wrote:
 
 On 09/12/10 03:38, Daniel Pittman wrote:
 Sadly, no. I very much missed the feature. We ended up using the apt
 preferences file to implement that behaviour

 If I was doing it over I would use a define that added the package
 resource and also used concat to automatically build up the preferences
 entry.

 I have a define called apt::source that at the same time adds a source
 to sources.list.d and creates a fragment of apt preferences (with a
 carefully set pin priority), which when all concatenated form the
 central apt preferences file.

 This works flawlessly :)
 
 I would love to get my hands on that.  Any chance you could post it? 

Here it is:
https://gist.github.com/21ec1c202b7614a23086

It is not the complete apt class I'm using but should be a good start
for you. It's some old code I wrote 2 years ago, so the code style is
not perfect :)

It uses two definitions from David Schmidtt: concatenate_file, which
handles the fragment pattern, and config_file, which is just a file with
the correct perms/backup...

Oh and bonus, this also handles custom apt keys.

Hope that helps,
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] puppet fileserver

2010-12-09 Thread Brice Figureau
On 09/12/10 21:36, Chris C wrote:
 I planned on moving to Passenger very soon.
 
 What about the file server?  Is there any worth in moving from
 nfs/autofs to puppet fileserver?

The only reasons I can see are security, access control and auditing.
Every access is protected by SSL, can be logged, and passes through the
authorization layer (ie fileserver.conf).
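
For illustration, a fileserver.conf mount with access control might look
like this (the mount name, path and hostnames are placeholders):

```ini
# fileserver.conf (illustrative): one mount point with host-based ACLs
[tools]
  path /srv/puppet/tools
  allow *.example.com
  deny *
```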

In terms of performance I don't think it would have any impact on your
master and client (provided you run 2.6), though that mostly depends on
your access patterns. And you can still use file content offloading (see
one of my blog posts for more information).
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: RFC: Make file content specification methods consistent.

2010-11-02 Thread Brice Figureau
On 02/11/10 17:43, Nigel Kersten wrote:
 On Tue, Nov 2, 2010 at 4:32 PM, Patrick kc7...@gmail.com wrote:
 *) Would creating a function that says, 'return the first argument
 that doesn't throw an exception' be useful? *) Is it even feasible
 to write?
 
 maybe... I'm having trouble thinking of a decent name for this though
 :)

It could be coalesce (which is used in SQL for almost the same
function), but since this word also has the meaning of merging, it might
defeat the purpose :)

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Re: RFC: Make file content specification methods consistent.

2010-11-01 Thread Brice Figureau
On 01/11/10 16:28, jcbollinger wrote:
[ snipped full proposal ]

 To sum up:
 
 Resolving the perceived consistency issue can be done without any
 change to the File resource.  It is sufficient to
 
 a) Fix the relative path problem for the file() function, and
 b) add a string concatenation function, and
 c) deprecate passing multiple arguments to template(), preferring
 instead applying the new concatenation function to several
 single-argument invocations of template().
 
 This approach is 100% backwards-compatible, inasmuch as deprecation of
 multiple arguments to template() does not imply removal of that
 feature.  Because the new features would all be in functions, they
 would be available everywhere in manifests that functions can be
 invoked, not just in File resources.  Furthermore, the new mechanism
 for concatenating template results would be both more expressive than
 the old and, because of its reliance on a general-purpose function,
 more broadly consistent than just with file() and File/source.

+1, I think this is the best approach, and more or less what I was
thinking when I read the OP's mail.

One thing I really don't want to lose (some of the previous propositions
removed it) is the ability to use:

content => "my content"
Thanks,
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] puppetdoc and wrong comparison

2010-10-29 Thread Brice Figureau
On Fri, 2010-10-29 at 10:52 +0100, Klaus Ethgen wrote:
 
 Hello,
 
 Am Mi den 27. Okt 2010 um 15:37 schrieb Brice Figureau:
  Which means the code has no line information.
 
 Hmmm...
 
   If I output the code with puts I get the following:
  []
  []
 #<Puppet::Parser::AST::ResourceDefaults:0x7f1d0b836290>
 #<Puppet::Parser::AST::ResourceDefaults:0x7f1d0b82fa30>
  [Filebucket[local]]
   
   I think there is something wrong but I do not know what.
   
   Anybody an idea?
  
  Please open a redmine ticket and include the smallest manifests that can
  trigger the bug, so that we can reproduce the bug.
 
 So you think that _is_ a bug and not a user error? ;-)
 
 I'll try to fill the bug report.
 
  BTW, do you have the same issue when running in so-called html mode?
 
 Fast test with »--mode html« gave:
Could not parse options: Invalid output mode html
 
 And looking at the documentation showed that html is not a valid option.
 I think I misunderstood the question?

Actually I meant running in mode rdoc, but producing html (to do this,
don't give a manifest pathname on the command line; puppetdoc will then
use your current definition of --module-path and --manifest to find your
manifests and produce a bunch of html files in the doc/ directory).

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] puppetdoc and wrong comparison

2010-10-29 Thread Brice Figureau
On Fri, 2010-10-29 at 11:07 +0100, Klaus Ethgen wrote:
 
 Am Mi den 27. Okt 2010 um 15:37 schrieb Brice Figureau:
   If I output the code with puts I get the following:
  []
  []
 #<Puppet::Parser::AST::ResourceDefaults:0x7f1d0b836290>
 #<Puppet::Parser::AST::ResourceDefaults:0x7f1d0b82fa30>
  [Filebucket[local]]
   
   I think there is something wrong but I do not know what.
   
   Anybody an idea?
  
  Please open a redmine ticket and include the smallest manifests that can
  trigger the bug, so that we can reproduce the bug.
 
 Hach, I found a very simple manifest to reproduce it: a single site.pp
 with 3(!) or more imports, regardless of whether the imported
 modules/files exist, triggers the bug. So the following site.pp is
 enough:
import bla
import foo
import gna
 

Thanks, I will have a look at the issue.
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] puppetdoc and wrong comparison

2010-10-27 Thread Brice Figureau
On Tue, 2010-10-26 at 15:37 +0100, Klaus Ethgen wrote:
 
 Hello,
 
 at the moment I fight with puppetdoc and end in a ruby confusion.
 
 To the problem:
  puppetdoc --debug --trace --mode rdoc --all manifests/site.pp
 info: scanning: [manifests/site.pp]
 /usr/lib/ruby/1.8/puppet/util/rdoc.rb:82:in `output_resource_doc'
 /usr/lib/ruby/1.8/puppet/util/rdoc.rb:82:in `sort'
 /usr/lib/ruby/1.8/puppet/util/rdoc.rb:82:in `output_resource_doc'
 /usr/lib/ruby/1.8/puppet/util/rdoc.rb:77:in `output_astnode_doc'
 /usr/lib/ruby/1.8/puppet/util/rdoc.rb:67:in `output'
 /usr/lib/ruby/1.8/puppet/util/rdoc.rb:66:in `each'
 /usr/lib/ruby/1.8/puppet/util/rdoc.rb:66:in `output'
 /usr/lib/ruby/1.8/puppet/util/rdoc.rb:47:in `manifestdoc'
 /usr/lib/ruby/1.8/puppet/util/rdoc.rb:43:in `each'
 /usr/lib/ruby/1.8/puppet/util/rdoc.rb:43:in `manifestdoc'
 /usr/lib/ruby/1.8/puppet/application/doc.rb:82:in `rdoc'
 /usr/lib/ruby/1.8/puppet/application/doc.rb:59:in `send'
 /usr/lib/ruby/1.8/puppet/application/doc.rb:59:in `run_command'
 /usr/lib/ruby/1.8/puppet/application.rb:287:in `run'
 /usr/lib/ruby/1.8/puppet/application.rb:393:in `exit_on_fail'
 /usr/lib/ruby/1.8/puppet/application.rb:287:in `run'
 /usr/bin/puppetdoc:4
 Could not generate documentation: undefined method `<=>' for nil:NilClass
 
 The used version is Puppet 2.6.2 on a debian system.
 
 If I look to the code I find the following line:
code.sort { |a,b| a.line <=> b.line }.each do |stmt|

Which means the code has no line information.

 If I output the code with puts I get the following:
[]
[]
#<Puppet::Parser::AST::ResourceDefaults:0x7f1d0b836290>
#<Puppet::Parser::AST::ResourceDefaults:0x7f1d0b82fa30>
[Filebucket[local]]
 
 I think there is something wrong but I do not know what.
 
 Anybody an idea?

Please open a redmine ticket and include the smallest manifests that can
trigger the bug, so that we can reproduce the bug.

BTW, do you have the same issue when running in so-called html mode?
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] questions about certs on 2.6.1

2010-10-03 Thread Brice Figureau
On 28/09/10 15:26, Arnau Bria wrote:
 Hi all,
 
 I've a second puppet server (test) where I copied ONLY ca* from prod
 server. This server is running 2.6.1 + mongrel with SSLVerifyClient
 optional.
 
 I have 2 strange behaviours which I'd like to comment with some expert
 user.
 
 1.-)
  I'm running new clients against this new server, they request
 the sign, and then I can sign the client from master:
 [snipped]
 that's fine.

That's normal behaviour.

 But when I run an old client, which already have ca from prod server
 (which is the same a test one), it runs with no problem:

This is normal too. The client knows the CA, so it can validate the new
master server's certificate. The reverse is also true: the server can
validate the client cert because it was signed by the same CA
certificate (and it is not in the CRL).

 Old-client# puppetd --server ser01-test.pic.es --test
 info: Caching catalog at /var/lib/puppet/localconfig.yaml
 notice: Starting catalog run
 notice: Finished catalog run in 0.29 seconds
 
 And I can't see it at server side:
 #  puppetca --list  --all
 + test.pic.es (BB:1A:38:12:F8:83:EF:C6:D6:93:C2:1E:EB:FD:E2:89)
 + client.pic.es (87:08:04:8F:9B:CE:17:F6:1A:56:15:90:15:72:92:09)
 
 *notice old-client is not listed.

This is also normal. The client certificates *don't* need to be listed
for them to be validated and for the communication to be secure.
The client cert is cached on the master when it is signed, but it
doesn't have to be.

 So, seems that old clients are attached to test server and cert
 security is not considered.

It is considered and security is enforced even when the certificate is
not present on the master.

 I can clean its cert, but nothing happens:
 
 # puppetca --clean oldclient.pic.es
 notice: Revoked certificate with serial 1781

Your certificate for oldclient is now revoked on *the test master*.
It isn't revoked on your production master.


 2.-) If I revoke (clean) a cert of a client, the cert is revoke but
 client is able to run against server:
 
 Server:
 
 #  puppetca --list  --all
 + test.pic.es (BB:1A:38:12:F8:83:EF:C6:D6:93:C2:1E:EB:FD:E2:89)
 + client.pic.es (87:08:04:8F:9B:CE:17:F6:1A:56:15:90:15:72:92:09)
 
 #  puppetca --clean client.pic.es
 notice: Revoked certificate with serial 2008
 notice: Removing file Puppet::SSL::Certificate client.pic.es at 
 '/var/lib/puppet/ssl/ca/signed/client.pic.es.pem'
 notice: Removing file Puppet::SSL::Certificate client.pic.es at 
 '/var/lib/puppet/ssl/certs/client.pic.es.pem'
 
 client:
 #  puppetd --server ser01-test.pic.es --test
 info: Caching catalog for client.pic.es
 info: Applying configuration version '1285678851'
 notice: Finished catalog run in 0.01 seconds
 
 
 Is it a desired behaviour? if yes, how may I revoke certs so clients
 can't connect to master again?

It shouldn't. Check your nginx/apache configuration; it should have the
necessary statements to check the CRL.
For instance on my nginx master:
ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;

You also need an nginx version that supports the CRL (ie >= 0.7.64).
-- 
Brice Figureau
My Blog: http://www.masterzen.fr/




Re: [Puppet Users] Proposal to remove redundant info in source = parameters

2010-09-27 Thread Brice Figureau
Hi,

It looks like I missed your original e-mail to puppet-dev.

On Fri, 2010-09-24 at 11:20 -0700, Nigel Kersten wrote:
 [cross-posting as I'd like to know whether my intuition about this
 being the most common case is correct]
 
 
 class foo {
 
   file { "/etc/foo.conf":
 source => "puppet:///modules/foo/foo.conf",
   }
 
 }
 
 For me, every single one of my source specifications refers to a file
 inside the current module. My intuition is that this is the most
 common case outside my own deployment, so why don't we optimize for
 it?
 
 class foo {
 
   file { "/etc/foo.conf":
 source => "foo.conf",
   }
 
 }
 
 eg the proposal is that if you don't specify the protocol, server
 address, modules prefix, module name, it is assumed you are referring
 to a file path relative to the 'files' subdirectory of the current
 module.
 
 If you wish to fully specify the source URI, you're free to do so.

My issue with your proposal is that at first glance it will look like a
local copy (which should require an absolute path) and not a remote
copy. This certainly violates the principle of least surprise for new users.

What about a new URI scheme (ie module) which would do the same:

class foo {
   file { "/etc/foo.conf":
 source => "module://foo.conf",
   }
 }

-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] Re: Tagging / Exported Resources

2010-09-15 Thread Brice Figureau
On Tue, 2010-09-14 at 17:21 -0700, CraftyTech wrote:
 Has anyone used this feature?

Yes I do and it works fine.
What db engine are you using?
See below for more ideas of debugging the issue.

 On Sep 14, 4:08 pm, CraftyTech hmmed...@gmail.com wrote:
  btw: I'm running puppet 0.25.5
 
  On Sep 14, 3:53 pm, CraftyTech hmmed...@gmail.com wrote:
 
 
 
   Hello All,
 
I've been battling with issue all day to no avail.  I'm exporting
   all host entries this way:
    class basics::host_export  { @@host{ $fqdn:
  ip => $ipaddress,
  host_aliases => $hostname,
  tag => $group
  }}
   and I'm collecting them this way:
    class basics::host_collect { Host <<| tag == $group |>> }
 
   The values for $group is obviously defined.  The thing is that
   specifying the tag=$group doesn't work for me, I always get (from
   debug mode):
 
   debug: Scope(Class[basics::basics::host_collect]): Collected 0 Host
   resources in 0.01 seconds..
 
   It doesn't collect any nodes even though I've defined a couple.  If I
   do the same thing without the tag definition, it works fine; i.e:
 
    class basics::host_export  { @@host{ $fqdn:
  ip => $ipaddress,
  host_aliases => $hostname
  }}
    class basics::host_collect { Host <<| |>> }
 
   The thing is that I don't want to get all hosts, I want to filter per
   group.  Can anyone share how they're doing this?  Thanks,

Are you sure you already have exported some hosts? 
You can check your exported resources database for this resource, they
should have the exported column non-null (or 1).

You should also check, still in the database, that this resource is
tagged with the correct value of $group.

Check that $group is defined in both classes (use some notice statements
to debug). It is possible that, because of scope issues, $group is not
defined when you collect.

Try to collect by title (ie title == 'knownhost') to see if that works
better. That will let you know whether the system is working correctly
for you.
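
For reference, a minimal export/collect pair, first filtering by tag and
then by title (the tag source and the hostname are illustrative):

```puppet
# On every node: export a host entry tagged with the node's group.
@@host { $fqdn:
  ip           => $ipaddress,
  host_aliases => $hostname,
  tag          => $group,
}

# Collect by tag on the consumers:
Host <<| tag == $group |>>

# Debugging variant: collect one known resource by title, to confirm
# the export/collection machinery itself works.
Host <<| title == 'node1.example.com' |>>
```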

HTH, 
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!




Re: [Puppet Users] auth failure under unicorn with 2.6.1rc2

2010-08-27 Thread Brice Figureau
On Thu, 2010-08-26 at 15:09 -0600, Dan Urist wrote:
 On Thu, 26 Aug 2010 22:34:59 +0200
 Brice Figureau brice-pup...@daysofwonder.com wrote:
 
  On 26/08/10 21:55, Dan Urist wrote:
   I'm trying to set up a puppetmaster under unicorn using the ubuntu
   maverick packages (currently at version 2.6.1rc2), and I'm getting
   the following error:
   
   r...@test.puppet.cms.ucar.edu $ puppetd -t
   err: Could not retrieve catalog from remote server: Error 403 on
   SERVER: Forbidden request:
   test.puppet.cms.ucar.edu(128.117.224.193) access
   to /catalog/test.puppet.cms.ucar.edu [find] at line 98 warning: Not
   using cache on failed catalog err: Could not retrieve catalog;
   skipping run
   
   I'm using the standard auth.conf, but if I turn off auth by adding
   this to the top of the file everything works:
   
   path /
   auth no
   allow *
  
  Of course you understand the security risk if you run with this
  auth.conf :)
 
 Yes, I just tried this for testing.

OK, I prefer to check :)

   Has anyone seen this, or know of a workaround?
  
  The usual cause is that the SSL end point didn't propagate to the
  master the fact that this node's certificate validates.
  
  This is usally done by adding some HTTP headers in the request, and
  you need to tell puppet what those headers are.
  For rack you need to set:
  
  [puppetmasterd]
  ssl_client_header = SSL_CLIENT_S_DN
  ssl_client_verify_header = SSL_CLIENT_VERIFY
 
 I have this, but it's under master rather than puppetmasterd. I've
 tried it under puppetmasterd and I'm getting the same failure.

Yes, you should use master for 2.6, but puppetmasterd for 0.25.

  Off course you also need to configure the ssl endpoint to set those
  headers when the cerficate is valid (and also when it's invalid).
   You didn't mention what the SSL endpoint was in your configuration, so
   I can't really help with this.
 
 I'm using nginx, and I've followed the docs at:
 http://projects.puppetlabs.com/projects/1/wiki/Using_Unicorn
 
 The relevant parts of my nginx config, per the doc, are: 
 
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Client-Verify $ssl_client_verify; 
  proxy_set_header X-Client-DN $ssl_client_s_dn;

Note that the config snippet I sent you refers to those headers as
SSL_CLIENT_VERIFY and not X_CLIENT_VERIFY.
Either correct the Puppet configuration or the nginx one, but both must
use the same header names.

  proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
  proxy_read_timeout 120;
 
 So as far as I can see, those headers are being set. Any hints on
 debugging this?

There are several possibilities:

* check puppet uses the correct $ssldir. I've already seen people using
a different $ssldir when running the master differently, in which case
the master regenerates a CA, and client certs are not compatible
anymore.

* check that the client cert is valid (ie it was signed by your master
current $ssldir CA). This can be done with openssl

* run nginx in debug mode to check it sets correctly the upstream
headers

* use tcpdump/wireshark to capture the HTTP traffic between nginx and
unicorn and check the headers are there and correct.

* add some Puppet.notice() statements in the Puppet Ruby Rack adapter (in
lib/puppet/network/http/rack/rest.rb) around line 93 to print the
various values and see which branch of the if is taken.

Hope that helps,
-- 
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!



