>> can you give some more detail on how the cache will be used? If a fact
>> is found on disk via the rb file and there's nothing in the cache will
>> it then simply run the slow way? and update the cache?
>
> Facter itself will not use the cache. If you have an application that needs
> facts and needs them quickly, you may read the yaml file on disk. An
> entirely separate process will update the cache. At this first step, the
> update process is planned to be a cron job, with hope of an actual facter
> daemon later.
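(As a concrete illustration of "read the yaml file on disk": a minimal Ruby
sketch of what a consuming application might do, assuming the cache lives at
the /var/lib/facter/cache.yaml path used in the cron example below; the fact
name and the slow-path fallback are illustrative only, not a settled design.)

    # Hypothetical consumer of the on-disk fact cache.
    require 'yaml'

    CACHE = '/var/lib/facter/cache.yaml'

    # Read the cached facts if present; fall back to an empty hash otherwise.
    facts = File.exist?(CACHE) ? YAML.load_file(CACHE) : {}

    # If the cache has no answer, run facter the slow way for that one fact.
    os = facts['operatingsystem'] || %x{facter operatingsystem}.chomp
    puts os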
Err ... really? So we're not changing what we do at all today then? People
have been rolling facilities like this for quite some time already (i.e.
with mcollective and the yaml plugin); it doesn't seem like we're adding
much value at all. And from memory PE already does this.

I mean, a cron job with:

    facter -y > /var/lib/facter/cache.yaml

is already doable. So what I'm failing to understand is ... what changes to
facter are we actually proposing today?

>> sounds like there would be various chicken and egg situations with
>> arranging for pluginsync to have happened before attempts to build the
>> cache, so I am looking to hear some more details to determine if that
>> might be an issue.
>
> At this time, we are not proposing Puppet use the external cache from the
> disk.

So nothing changes at all then? The problem we are setting out to solve is
more or less not solved?

>> I wouldn't go so far as saying that :) generating fact caches via cron
>> job or puppet writing out yaml files has been a recurring headache for
>> people
>
> Tell me more, please.

Atomic writes to the cache files, for one.

ken.
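(To make the atomic-write concern concrete, a minimal sketch of the
write-to-temp-then-rename pattern a cache updater would need so readers never
see a half-written cache.yaml. It assumes Ruby with the Facter library
available; the paths and the .tmp suffix are illustrative, not part of any
proposal.)

    # Sketch: update the fact cache atomically via write-then-rename.
    require 'yaml'
    require 'facter'

    cache = '/var/lib/facter/cache.yaml'
    tmp   = "#{cache}.tmp.#{Process.pid}"  # temp file on the same filesystem

    facts = Facter.to_hash                 # gathering the facts is the slow part

    File.open(tmp, 'w') do |f|
      f.write(facts.to_yaml)
      f.fsync                              # ensure the data is on disk first
    end
    File.rename(tmp, cache)                # rename(2) atomically replaces the cache

A plain `facter -y > cache.yaml` cron job, by contrast, truncates the file
before the new facts are written, so a concurrent reader can catch it empty
or partial.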