On Wed, May 23, 2012 at 7:30 PM, Sean Millichamp <[email protected]> wrote:

> On Wed, 2012-05-23 at 06:24 -0700, jcbollinger wrote:
>
>> That understanding of storeconfigs looks right, but I think the
>> criticism is misplaced. It is not Deepak's line of thinking that is
>> dangerous, but rather the posited strategy of purging (un)collected
>> resources. Indeed, I rate resource purging as a bit dangerous *any*
>> way you do it. Moreover, the consequences of a storeconfig DB blowing
>> up are roughly the same regardless of the DBMS managing it or the
>> middleware between it and the Puppetmaster. I don't see how the
>> existence of that scenario makes PuppetDB any better or worse.
>
> Indeed, it *is* dangerous, but so are many things we do as system
> administrators. The key is in gauging the risk and then choosing the
> right path accordingly. In my environment I am not always able to know
> the complete history of resources, as changes may come from unexpected
> places. It is less than ideal, but it is one aspect of my reality. In
> that situation, the selective use of purging becomes quite key in
> keeping things that need to be "cleaned up" cleaned up.
>
> I don't put anything in exported resources with purging that would be
> capable of bringing down a production application, thankfully, but there
> is quite a bit that could quite possibly cause a variety of headaches,
> alerts, and tickets on a massive scale for a while during the
> reconvergence.
>
> In addition, we are in a transition to PE, and the Compliance tool will
> allow me another way of handling that in a more manual admin-review
> approach (to catch resources that get added outside of Puppet's
> knowledge).
>
> What I really need is some tool by which I can mark exported resources
> as absent instead of purging them from the database when they are no
> longer needed (such as when deleting a host).
> That would eliminate most, if not all, of the intersections of purging
> and exported resources that I have. Right now I use a Ruby script I
> found quite a while back to delete removed nodes and all of their data.
> I'm sure there is a way to mark the resources as ensure => absent
> instead, but I've not gone digging into the DB structure.
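For readers following along, the export/collect/purge pattern under discussion looks roughly like this. This is an illustrative sketch, not Sean's actual setup; the use of nagios_host is just a common example of a resource type people export and purge:

```puppet
# On each monitored node: export a nagios_host entry describing this node.
@@nagios_host { $::fqdn:
  ensure  => present,
  address => $::ipaddress,
}

# On the monitoring server: collect every exported nagios_host from
# storeconfigs, then purge any on-disk nagios_host that storeconfigs no
# longer knows about. This purge is the risky step discussed above: if
# the storeconfigs DB is lost and still repopulating, hosts that have
# not yet re-reported get purged from the collector.
Nagios_host <<| |>>

resources { 'nagios_host':
  purge => true,
}
```

The tool Sean describes would instead flip the stored exports to ensure => absent when a host is deleted, so collectors converge to removal on their own rather than relying on purging.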
We don't yet have such a tool for PuppetDB, but it's definitely on our
radar. The current `puppet node clean --unexport` reaches directly into
the ActiveRecord storeconfigs database to make ad hoc changes to
resources, which isn't appropriate for PuppetDB and its strict catalog
lifecycle. We're working to figure out an appropriate way to provide the
same functionality.

>> If you cannot afford to wait out a repopulation of some resource, then
>> you probably should not risk purging its resource type. If you do not
>> purge, then a storeconfig implosion just leaves your resources
>> unmanaged. If you choose to purge anyway then you need to understand
>> that you thereby assume some risk in exchange for convenience;
>> mitigating that risk probably requires additional effort elsewhere
>> (e.g. DB replication and failover, backup data center, ...).
>
> Indeed, as I said above, it is about risk management. Deepak's statement
> I had responded to wasn't the first time I had read the "oh, just wait
> for it to repopulate" statement, and I wanted to be certain that wasn't
> actually something that was considered in the design with regards to
> updates, etc. on the stability of the storeconfigs data.

We definitely didn't take safe repopulation as a given. We know that
many, if not most, storeconfigs users would likely suffer at least some
inconvenience, or at worst some outages, if their data had to be
repopulated; we're not blasé about the issue, and we haven't cut any
corners in PuppetDB around safeguarding your data. It's simply a design
ideal we would like to promote: when it's reasonable to design your
exports/collects this way, it's beneficial for storeconfigs data to be
easily regenerable. After all, that's what Puppet purports to let you do
with your infrastructure, and it would be great not to let storeconfigs
disrupt that ability. And on that note, if you find a case where this
just isn't possible today, let us know. I'd love for this to be the norm.
Mostly the reason for mentioning it is that many people hear "database"
and automatically think "oh great, now I have to set up replication,
backups, failover, etc." But before going off and doing all that work,
it's important to ensure this really is data you care about replicating,
backing up, and making highly available. Depending on your needs (for
instance, if you're not a storeconfigs user at all), the answer *may* be
no.

> At some point you have to trust tools that have earned that trust
> (either via testing or real world use or both) to do the job that they
> say they are going to do. Puppet has years of earning that trust with
> me. Could something corrupt and destroy the database and cause me a lot
> of trouble? Sure, but that could be said of many tools. That's why we
> have backups, DR systems, etc. even though the "in the now" when it
> fails can be painful as heck. However, as long as Puppet Labs is
> designing it to be dependable and upgrade-safe (which it sounds like
> they are) then I'll continue to trust it (with prudent testing, of
> course) because they've earned it.
>
> Sean
>
> --
> You received this message because you are subscribed to the Google Groups
> "Puppet Users" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to
> [email protected].
> For more options, visit this group at
> http://groups.google.com/group/puppet-users?hl=en.
