Issue #4458 has been updated by Markus Roberts.
Status changed from Available In Testing Branch to Closed.
commit:3a6ca54a6e8ad353f207bc85433d1204b5ca48bc Fix #4458 - Do not dump the whole environment when instances can't be found
----------------------------------------
Bug #4458: 404 error message is getting to big
http://projects.puppetlabs.com/issues/4458

Author: Peter Meier
Status: Closed
Priority: Normal
Assigned to: Brice Figureau
Category: plumbing
Target version: 2.6.1
Affected version: 2.6.1rc1
Keywords:
Branch: http://github.com/masterzen/puppet/tree/tickets/2.6.x/4458

I'm not sure whether this is directly related, but after a mongrel gem update I started getting @502 Bad Gateway@ from my nginx -> mongrel setup on every client for certain resources. I didn't get these messages when switching to a webrick-based puppetmaster. It failed on both 0.25.5 and 2.6.1rc1. This is why I suspect the mongrel update: I updated that gem as well when starting the move to 2.6, and switched it back together with 0.25.5 and vice versa.

Specifically, it happened for all resources that were instances of this define:

http://git.puppet.immerda.ch/?p=module-common.git;a=blob;f=manifests/defines/module_dir.pp;h=0ff50eb7fe8275b1ff8d6ac07a01fc87bcf2cee8;hb=c180d27fdb319e19af59ae27a678bcf12fb74cfc

The first file source is supposed to fail for these resources, so the second one is selected. However, according to the error messages, the @502 Bad Gateway@ came from the first (failing) source.

Investigating the nginx logs revealed that the failing error message is @upstream sent too big header while reading response header from upstr...@. Some research suggested that @proxy_buffer_size@ should be increased. I tried various changes, such as increasing the number of buffers, the buffer size, or both. None helped.

After capturing the traffic between nginx and the mongrel-based puppetmasters with tcpdump and looking at it, it quickly became clear that the puppetmaster is sending a huge 404 response spanning multiple (!) TCP packets.
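For context, the kind of buffer tuning attempted above would look roughly like this in an nginx vhost (the upstream name and sizes here are illustrative only, and, as described, this did not resolve the issue):

```nginx
# Illustrative only: the sort of proxy buffer tuning tried above.
# The upstream name and sizes are made up for this sketch; increasing
# them did not fix the underlying problem.
location / {
    proxy_pass         http://puppetmaster_mongrel;
    proxy_buffer_size  64k;    # buffer for the response header
    proxy_buffers      8 64k;  # number and size of buffers for the body
}
```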
After about 10 TCP packets, nginx simply reset the connection and sent a @502 Bad Gateway@ to the client.

Investigating the code showed that in @lib/puppet/network/http/handler.rb@ we dump the whole indirector request:

<pre>
return do_exception(response, "Could not find instances in #{indirection_request.indirection_name} with '#{indirection_request.to_hash.inspect}'", 404)
</pre>

Changing that to:

<pre>
return do_exception(response, "Could not find instances in #{indirection_request.indirection_name} with '#{indirection_request.to_hash.inspect[0,30]}'", 404)
</pre>

made the bad gateway go away.

From my own experience (suddenly storing huge stack traces in Rails cookies due to serialized Error objects) I can tell that such a dump can get out of control very easily. In my opinion there are three possibilities:

1. Do nothing, and let me figure out why nginx resets the TCP connection of the response and how to make nginx accept such huge responses.
2. Cut the inspected hash, as I do now in my workaround.
3. Be more specific about what we want to have in the dumped error message.

I would vote for option 3: the puppetmaster responds with a huge dump of the indirector request, which becomes totally useless due to the information overload. That does not justify the dump for the sake of error tracking. I could write a patch for option 3, but since I am not sure what such a selective dump should look like, what information is important at that point, and whether it is the right way to go, I'd like to hear some opinions on that.

--
You have received this notification because you have either subscribed to it, or are involved in it. To change your notification preferences, please click here: http://projects.puppetlabs.com/my/account
--
You received this message because you are subscribed to the Google Groups "Puppet Bugs" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/puppet-bugs?hl=en.
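[Editor's note] The truncation workaround discussed in the report (option 2) can be sketched as a small helper that cuts the inspected request hash down to size before it reaches the 404 body. The method name, argument names, and length limit below are illustrative only, not Puppet's actual handler code:

```ruby
# Sketch of option 2: truncate the inspected request hash so the 404
# message stays small. Names and the limit are hypothetical.
def not_found_message(indirection_name, request_hash, limit = 100)
  dump = request_hash.inspect
  # Cut the dump off (with an ellipsis) once it exceeds the limit,
  # so a pathological request can no longer blow up the response header.
  dump = dump[0, limit] + "..." if dump.length > limit
  "Could not find instances in #{indirection_name} with '#{dump}'"
end
```

A selective dump (option 3) would instead pick out a handful of known-interesting keys from the hash before formatting, rather than truncating blindly.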
