Wyatt,

Thank you very much for your time and reply; I greatly appreciate it. I ran your query, and your suspicions are correct: a few of our DB servers lead the pack with a massive amount of fact data because of all the disk attached to them. We will probably just make these facts nil on all machines, since we don't need them. I assume this will relieve the strain on PuppetDB and stop the resets. Again, thank you very much. If I could buy you a beer, I would. The machines in question are a mix of RHEL 5/6/7.
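For the record, here is roughly what we plan to drop in, following the fact-precedence doc you linked: a custom fact per name that resolves to nil at weight 100 so it outranks the built-in fact. This is only a sketch; the file name and module layout are placeholders, and it assumes pluginsync distributes it to the agents.

    # <module>/lib/facter/mask_disk_facts.rb -- placeholder path
    # A custom fact with value nil and weight 100 overrides the core
    # fact of the same name, per the Facter fact-precedence doc.
    [:disks, :partitions, :mountpoints].each do |fact_name|
      Facter.add(fact_name) do
        has_weight 100    # core facts resolve at weight 0, so this wins
        setcode { nil }   # resolve to nil so nothing is sent to PuppetDB
      end
    end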
Mike

On Tuesday, April 19, 2016 at 10:53:28 AM UTC-5, Wyatt Alt wrote:
>
> Hey Mike,
>
> The unsatisfying answer is that PuppetDB handles giant facts
> (particularly array-valued facts) pretty badly right now, and facter's
> disks, partitions, and mountpoints facts can all get pretty huge in
> cases such as SANs and similar. Can you try and see if the bulk of those
> fact paths are coming from a small set of your nodes? I expect this
> query might help:
>
> https://gist.github.com/wkalt/4a58b9a97c79eee31971e5fc04dec0e4
>
> You can mask the facts on a per-node basis by creating a custom fact
> with value nil and weight 100 as described here:
>
> https://docs.puppet.com/facter/3.1/custom_facts.html#fact-precedence
>
> (this assumes you aren't using these facts for anything, but that sounds
> like the case.)
>
> Longer term, this is something we need to fix on our end. I created
> https://tickets.puppetlabs.com/browse/PDB-2631 to track the issue.
> https://tickets.puppetlabs.com/browse/FACT-1345 may also be related.
>
> If you get those nodes tracked down, would you mind telling us the
> operating system?
>
> Wyatt
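P.S. for anyone who finds this thread later: Wyatt's gist queries the PuppetDB Postgres database directly. A rough equivalent over the HTTP API, strictly a sketch (it assumes PuppetDB's v4 query endpoint on localhost:8080 and looks at mountpoints; swap in disks or partitions as needed), would be:

    require 'json'
    require 'net/http'
    require 'uri'

    # Ask PuppetDB for every node's mountpoints fact.
    uri = URI('http://localhost:8080/pdb/query/v4/facts')
    uri.query = URI.encode_www_form('query' => '["=", "name", "mountpoints"]')
    facts = JSON.parse(Net::HTTP.get(uri))

    # Rank nodes by the serialized size of the fact value; the outliers
    # are the SAN-heavy machines.
    facts.sort_by { |f| -f['value'].to_json.bytesize }.first(10).each do |f|
      puts format('%-40s %10d bytes', f['certname'], f['value'].to_json.bytesize)
    end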
