Well, I found the cause of my 1% duplication rate. I was using the 
recommendation from this page 
(http://projects.puppetlabs.com/projects/mcollective-plugins/wiki/FactsFacterYAML)
to generate a facts.yaml file for MCollective. I got rid of that and my 
catalog duplication went up to 73%. I'm not sure what else is changing; my 
catalogs are huge and I don't know how to diff unsorted JSON files. 
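In case it helps anyone else hitting the same wall: one way to diff unsorted catalog JSON is to canonicalize both dumps first (recursively sort object keys and array elements), then run a plain `diff` on the results. A rough sketch of a throwaway script I'd try (my own helper, not a PuppetDB tool; it assumes the catalogs are plain JSON files, e.g. fetched with curl from the catalogs API):

```python
import json

def canonical(obj):
    """Recursively sort dicts and lists so two catalog dumps diff cleanly."""
    if isinstance(obj, dict):
        return {k: canonical(v) for k, v in sorted(obj.items())}
    if isinstance(obj, list):
        # Sort array elements by their canonical JSON text, so ordering
        # differences between two dumps of the same catalog disappear.
        return sorted((canonical(v) for v in obj),
                      key=lambda v: json.dumps(v, sort_keys=True))
    return obj

def canonical_dump(path_in, path_out):
    """Read one catalog JSON file and write a canonicalized copy."""
    with open(path_in) as f:
        data = json.load(f)
    with open(path_out, "w") as f:
        json.dump(canonical(data), f, indent=2, sort_keys=True)
```

Run it over two saved catalogs and diff the sorted copies; any resource that really changes between runs should then stand out instead of being buried in reordering noise.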

I also moved to a server with a 10-disk RAID10 and performance is better.  
I'm still having trouble tuning autovacuum: either vacuums never finish 
because they're constantly delayed, or they eat up all the I/O and things 
grind to a halt. And even when I/O seems low there are still times when the 
puppetdb queue swells to over 1000 before draining. 
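The middle ground I'm experimenting with in postgresql.conf is to make each autovacuum pass more aggressive (so vacuums actually finish) while capping total vacuum I/O (so they can't starve the command queue). These values are starting-point guesses for my hardware, not recommendations from the PuppetDB docs:

```ini
# Guessed starting values -- tune against your own I/O headroom.
autovacuum_max_workers = 3           # fewer concurrent vacuums, less I/O contention
autovacuum_naptime = 1min
autovacuum_vacuum_cost_delay = 10ms  # shorter delay so vacuums can finish...
autovacuum_vacuum_cost_limit = 1000  # ...but a hard cost cap so they can't eat all I/O
autovacuum_vacuum_scale_factor = 0.1
```

If anyone has tuned these against a busy PuppetDB instance I'd be curious what actually worked.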


On Tuesday, October 29, 2013 2:32:54 PM UTC-4, Ryan Senior wrote:
>
> 1.5% catalog duplication is really low and, from a PuppetDB perspective, 
> means a lot more database I/O.  I think that probably explains the problems 
> you are seeing.  A more typical duplication percentage would be something 
> over 90%.
>
> The next step here is figuring out why the duplication percentage is so 
> low.  There's a ticket I'm working on now [1] to help in debugging these 
> kinds of issues with catalogs, but it's not done yet.  One option you have 
> now is to query for the current catalog of a node after a few subsequent 
> catalog updates.  You can do this using curl and the catalogs API [2]. 
> That API call will give you a JSON representation of the catalog data from 
> PuppetDB for that node.  You can then compare the JSON files and see if you 
> maybe have a resource that is changing with each run.  If you need help 
> getting that information or want some more help troubleshooting the output, 
> head over to #puppet on IRC [3] and one of the PuppetDB folks can help you 
> out. 
>
>
> 1 - https://projects.puppetlabs.com/issues/22977
> 2 - https://docs.puppetlabs.com/puppetdb/1.5/api/query/v3/catalogs.html
> 3 - http://projects.puppetlabs.com/projects/1/wiki/Irc_Channel
>
>
> On Tue, Oct 29, 2013 at 11:50 AM, David Mesler 
> <david....@gmail.com> wrote:
>
>> Resource duplication is 98.7%, catalog duplication is 1.5%. 
>>
>> On Tuesday, October 29, 2013 9:06:37 AM UTC-4, Ken Barber wrote:
>>>
>>> Hmm. 
>>>
>>> > I reconfigured postgres based on the recommendations from pgtune and your 
>>> > document. I still had a lot of agent timeouts and eventually after running 
>>> > overnight the command queue on the puppetdb server was over 4000. Maybe I 
>>> > need a box with traditional RAID and a lot of spindles instead of the SSD. 
>>> > Or maybe I need a cluster of postgres servers (if that's possible), I don't 
>>> > know. The puppetdb docs said a laptop with a consumer grade SSD was enough 
>>> > for 5000 virtual nodes so I was optimistic this would be a simple setup. Oh 
>>> > well. 
>>>
>>> So the reality is, you are effectively running the equivalent of 5200 
>>> nodes, compared with the vague statement in the docs. This is because you 
>>> are running every 15 minutes, whereas the statement presumes running 
>>> every hour. 
>>>
>>> Can we get a look at your dashboard? In particular your catalog and 
>>> resource duplication rate? 
>>>
>>> ken. 
>>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Puppet Users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to puppet-users...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/puppet-users/46312de5-62fb-4844-9ab6-a93a01abfe24%40googlegroups.com
>> .
>>
>> For more options, visit https://groups.google.com/groups/opt_out.
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/c92a6d01-bed2-462a-a536-69f0dae33fc0%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
