>> Which version of Ruby and Puppet ?
>>
>> 'Config retrieval time' includes the 'caching catalog time' (writing
>> the catalog to disk in YAML format). Your catalog is pretty big, so
>> caching could be very slow.
>> You can check this by adding these lines to
>> lib/puppet/indirector/indirection.rb on the agent side:
>>
>> +      beginning_time = Time.now
>>        Puppet.info "Caching #{self.name} for #{request.key}"
>>        cache.save request(:save, result, *args)
>> +      Puppet.debug "Caching catalog time: #{(Time.now - beginning_time)}"

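The patch above just brackets the cache write with two `Time.now` calls and logs the delta. As a rough sketch of the same pattern, extracted into a standalone helper (`time_block` is a hypothetical name, not actual Puppet code):

```ruby
# Hypothetical helper illustrating the timing pattern from the patch:
# record Time.now before and after a block and report the elapsed time.
def time_block(label)
  beginning_time = Time.now
  result = yield
  elapsed = Time.now - beginning_time
  puts "#{label} time: #{elapsed}"
  result
end

# Usage, mirroring the instrumented cache.save call:
time_block("Caching catalog") do
  sleep 0.1   # stand-in for: cache.save request(:save, result, *args)
end
```

The helper returns the block's result, so it can wrap an existing call without changing the surrounding code.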
Here are my results for a catalog of ~2000 resources, some of which are
tidy resources over big directories:

    [root@aaa ~]# time puppetd --test --noop
    notice: Ignoring --listen on onetime run
    info: Retrieving plugin
    info: Loading facts in mysql_exists
    info: Loading facts in mysql_exists
    info: Caching catalog for aaa
    info: Caching catalog time: 12.668795
    info: Applying configuration version '1337759696'
    notice: Finished catalog run in 65.62 seconds
    
    real        4m41.662s
    user        1m29.677s
    sys         0m13.375s
    [root@aaa ~]# puppetd --version
    2.7.1
    [root@aaa ~]# ruby --version
    ruby 1.8.5 (2006-08-25) [x86_64-linux]
    [root@aaa ~]# lsb_release -a
    LSB Version:    :core-3.1-amd64:core-3.1-ia32:core-3.1-noarch:graphics-3.1-amd64:graphics-3.1-ia32:graphics-3.1-noarch
    Distributor ID: ScientificSL
    Description:    Scientific Linux SL release 5.4 (Boron)
    Release:        5.4
    Codename:       Boron
    [root@aaa ~]#

There is no excessive swapping or I/O while the agent is running.
Compilation of the catalog takes ~100 s, mostly because we have not yet
switched to PuppetDB ;-)

Best Regards, David

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Developers" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/puppet-dev?hl=en.
