Hello, I'm just trying to understand the workload behind the catalog compilation the puppet master does each time a client makes a request. I understand that the client sends its facts to the master, and the master compiles the catalog based on those facts and the manifests. I would expect this could be optimized with some caching: if neither the manifests nor the set of facts has changed, the compilation for that catalog should result in a cache hit, and that request should be served significantly faster, with a lower load footprint than the first request or a request after a change in the manifests or facts. In reality I don't see any difference that would suggest caching in the master; e.g. each time the client contacts the master, the compilation takes about 6 seconds, which seems unbelievably long.
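To illustrate the kind of caching I have in mind (this is a hypothetical sketch, not Puppet's actual implementation): key the cache on a digest of the node's facts plus the manifests' modification times, and only recompile on a miss.

```ruby
require 'digest'

# Hypothetical sketch of catalog caching (not Puppet's real code):
# the cache key is a digest of the facts and the manifest mtimes,
# so an unchanged (facts, manifests) pair is served from the cache.
class CatalogCache
  def initialize
    @cache = {}
  end

  def cache_key(facts, manifest_mtimes)
    # Sort both hashes so the digest is stable regardless of insertion order.
    Digest::SHA256.hexdigest(Marshal.dump([facts.sort, manifest_mtimes.sort]))
  end

  def fetch(facts, manifest_mtimes)
    key = cache_key(facts, manifest_mtimes)
    @cache[key] ||= yield   # run the (expensive) compilation only on a miss
  end
end

cache = CatalogCache.new
compiles = 0
facts  = { 'hostname' => 'web01', 'osfamily' => 'Debian' }
mtimes = { 'site.pp' => 1_300_000_000 }

# Two identical requests: the second should be a cache hit.
2.times do
  cache.fetch(facts, mtimes) do
    compiles += 1
    "compiled catalog for #{facts['hostname']}"   # stand-in for real compilation
  end
end

puts compiles   # => 1, the second request never recompiled
```

With a scheme like this, only a changed fact or a touched manifest would force the 6-second compilation; everything else could be served from the cache.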
Can anyone please explain how the master caches catalogs, or the reasons it doesn't?

thanks, Antony.

-- 
You received this message because you are subscribed to the Google Groups "Puppet Users" group. To post to this group, send email to [email protected]. To unsubscribe from this group, send email to [email protected]. For more options, visit this group at http://groups.google.com/group/puppet-users?hl=en.
