So here's the really weird part. A week later I come back from
vacation and it all works.

Something odd is going on here and I'm going to investigate more thoroughly.

--Paul

On Tue, Jul 14, 2009 at 10:58 PM, Teyo Tyree <[email protected]> wrote:
>
>
> On Sat, Jul 11, 2009 at 1:18 AM, Greg <[email protected]> wrote:
>>
>> Paul,
>>
>> I've seen similar behaviour, but it shows up for me with the list of
>> classes. I have a staging server for testing the roll-out of new puppet
>> configs. Upon getting the new config, puppet seems to use the same
>> server until restarting. I don't have a solution yet, but here's what I
>> know to add to the conversation.
>>
>> I tried using:
>>
>>  service { "puppetd":
>>    ensure => running,
>>    subscribe => File["/etc/puppet/puppet.conf"]
>>  }
>>
>> And that worked... for a while... This has two interesting side effects
>> for me (on Solaris, at least):
>>
>> 1. It would stop things mid-run. As soon as puppet.conf was updated,
>> puppetd would restart. Mostly that is OK, but if you have schedules,
>> sometimes they get triggered without actually doing any work because
>> Puppet is shutting down. I suspect this is because it checks an item,
>> then receives the shutdown signal and doesn't get to finish the job
>> it's doing.
>>
>> 2. *Sometimes* puppet would not shut down correctly. It would get the
>> signal, start to shut down, then hang. If I ever figure out why or how
>> it's doing this I will submit a bug report. This happens for us only
>> occasionally, and usually SMF kicks in and puts it into the maintenance
>> state, at which point it kills it with a -9 and then waits for someone
>> to svcadm clear it.
>>
>> For us, this started happening long after we upgraded from 0.24.7 to
>> 0.24.8... We also run our staging server on a different port from the
>> production Puppet server to make sure that it doesn't accidentally get
>> used.
>>
>> The only thing I can think of is that maybe the server name gets
>> cached somewhere other than the config - and maybe it isn't being
>> cleaned out when the config is re-read... I can understand there
>> being a server connection cached for the run, but once it's finished it
>> should in theory be cleared out...
>>
>> Greg
>>
>> On Jul 11, 9:31 am, Paul Lathrop <[email protected]> wrote:
>> > Dear Puppeteers,
>> >
>> > I'm in desperate need of help. Here's the story:
>> >
>> > When I boot up new machines, they have a default puppet.conf which
>> > causes them to talk to our production puppetmaster at
>> > puppet.digg.internal. Some of these machines are destined for our
>> > development environment, and there is a custom fact 'digg_environment'
>> > that the default config uses to pass out an updated puppet.conf file.
>> > For these development machines, this file points server= to
>> > puppet.dev.digg.internal, which has a node block for the machine that
>> > then has their full configuration.
>> >
>> > This all seemed to work great until recently, and I'm not sure what
>> > changed.
>> >
>> > Now, what happens is that the machine boots with the default
>> > puppet.conf. It talks to the production puppetmaster, and downloads
>> > the correct puppet.conf which points server= to
>> > puppet.dev.digg.internal. In the logs, I see the "Reparsing
>> > /etc/puppet/puppet.conf" message. The report ends up getting sent to
>> > the development puppetmaster (puppet.dev.digg.internal). However, on
>> > subsequent runs, puppetd continues to talk to the production
>> > puppetmaster instead of getting its config from the development
>> > puppetmaster! After a manual restart of the daemon, it works as
>> > expected. However, manual steps are a big bummer!
>> >
>> > The only change I can think of here is that we switched to Debian
>> > Lenny. Puppet version is 0.24.8. Any help would be appreciated!
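>> >
>> > (For reference, the hand-off on the production master looks roughly
>> > like this - the module and template names here are simplified, not
>> > our actual config:)
>> >
>> >  file { "/etc/puppet/puppet.conf":
>> >    ensure  => present,
>> >    # template picks server= based on the digg_environment fact,
>> >    # e.g. puppet.dev.digg.internal for development machines
>> >    content => template("bootstrap/puppet.conf.erb"),
>> >  }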
>> >
>> > Thanks,
>> > Paul
>>
> The bad news:
> We need to track down exactly why the server parameter is getting cached.
> Additionally, puppet should not restart in the middle of a transaction
> (there is a ticket for 0.25 to make this behavior optional, but currently
> it should restart post-transaction). Both of these are bugs and should be
> reported as such.
> The good news:
> Paul, one workaround for your issue is to do something completely different
> at provisioning time. What I do is use a very simple init script to
> bootstrap puppetd. Instead of using puppetd to bootstrap itself, just use
> the puppet executable and a simple bootstrap module in your init script.
> The bootstrap manifest should use the service resource type to
> start/restart puppetd and to disable the bootstrap init script, and a file
> resource to manage puppet.conf. This approach won't address any changes to
> puppet.conf after provisioning, but should address your specific issue at
> provisioning time.
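>
> A sketch of what that bootstrap manifest might look like (module, path,
> and service names are just placeholders):
>
>  # lay down the real puppet.conf before puppetd ever starts
>  file { "/etc/puppet/puppet.conf":
>    ensure => present,
>    source => "puppet:///bootstrap/puppet.conf"
>  }
>
>  # start puppetd against the correct server
>  service { "puppetd":
>    ensure  => running,
>    enable  => true,
>    require => File["/etc/puppet/puppet.conf"]
>  }
>
>  # disable the one-shot bootstrap init script so it never runs again
>  service { "puppet-bootstrap":
>    ensure => stopped,
>    enable => false
>  }
>
> The init script then just runs it once with the standalone executable,
> something like: puppet /etc/puppet/manifests/bootstrap.pp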
> -teyo
>
> --
> Teyo Tyree :: www.reductivelabs.com :: +1.615.275.5066
>



-- 
"My pants growl with the hunger of a thousand bubblebees. And it feels
like a Koala crapped a rainbow in my brain!" -MasterShakezula

