Issue #3365 has been updated by Juan Pablo Daniel Borgna.
I see similar behavior: when my target directory already contains files, the whole tree seems to be evaluated. In my case a catalog run took 1200 seconds. I was using /usr/src as the target to store about 10 MB of files, but that directory already held the kernel sources and headers. Just by changing the target to an empty directory, the run time went down to 32 seconds.

HTH. Regards, Juan Pablo.

----------------------------------------
Bug #3365: 100% CPU usage
https://projects.puppetlabs.com/issues/3365#change-90026

* Author: Dieter Van de Walle
* Status: Needs More Information
* Priority: Normal
* Assignee:
* Category:
* Target version:
* Affected Puppet version: 0.25.4
* Keywords:
* Branch:
----------------------------------------

Hi,

I've been experimenting with Puppet for a few days now, and overall I'm pretty impressed by how easy Puppet makes it to manage configurations. However, one thing has been thoroughly ruining my enthusiasm: Puppet's massive CPU consumption.

At first I used Puppet to source in and manage a few hundred megabytes of data, so I presumed Puppet just wasn't made to serve such large amounts of data. I set up my own apt repository and created some custom packages as an alternative way to transfer the data. I also learned about the checksum file property, and that its default value of md5 can cause a lot of CPU consumption, so I turned checksumming off (checksum => undef). But Puppet is still happily eating 100% CPU for tens of minutes at a time, with nothing apparently happening (puppetd -tv --trace --debug prints nothing to the console while Puppet is cooking the CPU).

I believe the following resource is to blame:

    file { "/some/data/dir":
      owner    => "$username",
      group    => "$username",
      recurse  => "true",
      ensure   => "directory",
      checksum => undef,
    }

I just want this resource to make sure that all files in the directory are owned by user and group $username.
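If the goal is only ownership, not content management, a guarded exec may sidestep Puppet's recursive per-file evaluation entirely. This is only a sketch, not the reporter's manifest: the resource title is hypothetical, while the path and $username come from the resource above.

```puppet
exec { 'chown-some-data-dir':
  # A recursive chown is cheap compared to Puppet walking 6000+ files per run.
  command => "/bin/chown -R ${username}:${username} /some/data/dir",
  # Only run when find reports at least one file with the wrong owner,
  # so the exec stays idempotent from Puppet's point of view.
  onlyif  => "/usr/bin/find /some/data/dir ! -user ${username} | /bin/grep -q .",
}
```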
/some/data/dir contains 300 MB in 6000+ files. This resource executes swiftly, but after the last file has been chown'd, puppetd hogs the CPU at 100% for a very long time (30+ minutes, until I hit Ctrl-C out of impatience and frustration at seeing nothing happen). Some top output:

    9570 root 25 0 228m 151m 3664 R 99 29.7 14:31.27 puppetd

I don't really understand why this happens. Is Puppet unable to handle this request? What is going on? I'm a bit disappointed to run into such an issue while just doing some trivial tests. If I can't solve this, I can't see how Puppet can be usable for me (and there aren't that many alternatives). I don't know Ruby, and I'm not really a fan of the debug-before-use approach.

Some information about my setup:

* puppetd and puppetmasterd are both 0.25.4
* Both running on Xen Dom-U instances
* uname -a: Linux hostname 2.6.18.8 #2 SMP Wed May 27 15:54:07 CEST 2009 x86_64 GNU/Linux
* Ubuntu intrepid 8.10
* dpkg --list | grep ruby:
    ii ruby 4.2 An interpreter of object-oriented scripting
    ii ruby1.8 1.8.7.72-1 Interpreter of object-oriented scripting lan

There isn't really any logging to show, since nothing is logged. I'm aware this isn't much to go on, but I'll try to provide you with anything you may need if you just ask for it.

--
You have received this notification because you have either subscribed to it, or are involved in it. To change your notification preferences, please click here: http://projects.puppetlabs.com/my/account

--
You received this message because you are subscribed to the Google Groups "Puppet Bugs" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/puppet-bugs?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.
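Juan Pablo's data point is consistent with a recursive file resource making Puppet walk (and, with the default checksum => md5, hash) every file already under the target directory, not just the files being managed. A rough Python illustration of that cost model follows; this is not Puppet's actual code, and md5_tree is a hypothetical name.

```python
import hashlib
import os


def md5_tree(root):
    """Walk root and MD5-hash every regular file: roughly the work a
    recursive file resource with the default checksum => md5 must
    repeat on each catalog run."""
    sums = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[path] = hashlib.md5(f.read()).hexdigest()
    return sums
```

The work grows with the total bytes under the target, not with the number of files Puppet is meant to deliver, which would explain the 1200 s vs. 32 s difference above when /usr/src already held the kernel sources.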
