Hi,

Thanks for your help.

On 10/01/18 06:36, Matthaus Owens wrote:
> Chris,
> To better help you, it would be great to know a few more things about
> your installation. First question: are you running puppetserver 5.0.0
> or something later in the 5.x series (and is it the same on all
> servers)? Second, what version of the puppet-agent are on those
> servers? puppetserver 5.1.3 included a fix for
> https://tickets.puppetlabs.com/browse/SERVER-1922 which should improve
> performance some.

Hm, interesting, thanks. I'll check out what a 5.0 -> 5.1 upgrade does.

> 
> Hiera 3 + hiera-eyaml may also be contributing to the slowness. Here
> is one ticket (related to SERVER-1922) that indicated moving to hiera
> 5 improved compile times substantially:
> https://tickets.puppetlabs.com/browse/SERVER-1919

Also interesting, but as noted in the last comment on that ticket, a lot
of the structure was changed at the same time, so the speed-up might not
all have come from the hiera 3 -> hiera 5 move.

> To dig into what may be causing the compiles to be slower, I would
> recommend first checking out the client metrics.
> https://puppet.com/docs/puppetserver/5.1/http_client_metrics.html has
> some details, and I would be interested in the client metrics that
> page lists under the /puppet/v3/catalog. They are PuppetDB related
> requests, and as that was also upgraded alongside puppetserver it
> would be good to eliminate PuppetDB as a contributor. PuppetDB
> slowness can show up as slow catalog compiles, which in turn will hold
> jrubies for longer and might explain some of what you are seeing.

The puppetservers are all on the same versions.

We upgraded to:
# /opt/puppetlabs/server/bin/puppetserver -v
puppetserver version: 5.0.0

puppetdb is this; it should have been 5.0 as well, but I stuffed that up:
# /opt/puppetlabs/server/bin/puppetdb -v
puppetdb version: 5.1.3


agents are all:
# /opt/puppetlabs/puppet/bin/puppet --version
5.0.0


The metrics say:

        {
          "route-id": "puppet-v3-file_metadata-/*/",
          "count": 9373,
          "mean": 10217,
          "aggregate": 95763941
        },
        {
          "route-id": "puppet-v3-catalog-/*/",
          "count": 828,
          "mean": 94773,
          "aggregate": 78472044
        },
        {
          "route-id": "puppet-v3-node-/*/",
          "count": 831,
          "mean": 62709,
          "aggregate": 52111179
        },
        {
          "route-id": "puppet-v3-file_metadatas-/*/",
          "count": 4714,
          "mean": 9288,
          "aggregate": 43783632
        },
        {
          "route-id": "puppet-v3-report-/*/",
          "count": 780,
          "mean": 3433,
          "aggregate": 2677740
        },



      "http-client-metrics": [
        {
          "count": 821,
          "mean": 48,
          "aggregate": 39408,
          "metric-name": "puppetlabs.localhost.http-client.experimental.with-metric-id.puppetdb.command.replace_catalog.full-response",
          "metric-id": [
            "puppetdb",
            "command",
            "replace_catalog"
          ]
        },
        {
          "count": 832,
          "mean": 25,
          "aggregate": 20800,
          "metric-name": "puppetlabs.localhost.http-client.experimental.with-metric-id.puppetdb.command.replace_facts.full-response",
          "metric-id": [
            "puppetdb",
            "command",
            "replace_facts"
          ]
        },
        {
          "count": 780,
          "mean": 19,
          "aggregate": 14820,
          "metric-name": "puppetlabs.localhost.http-client.experimental.with-metric-id.puppetdb.command.store_report.full-response",
          "metric-id": [
            "puppetdb",
            "command",
            "store_report"
          ]
        },
        {
          "count": 215,
          "mean": 43,
          "aggregate": 9245,
          "metric-name": "puppetlabs.localhost.http-client.experimental.with-metric-id.puppetdb.facts.find.full-response",
          "metric-id": [
            "puppetdb",
            "facts",
            "find"
          ]
        }
      ]


So I think that's showing the hand-off to puppetdb is quick when it's
storing changes (those means are only tens of milliseconds).
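(Sanity-checking the route metrics above: the mean is just aggregate/count, and assuming those values are milliseconds, as the puppetserver metrics docs describe, the catalog route works out to roughly 95 seconds per compile. A quick sketch using the numbers pasted above:)

```python
# Sanity-check the puppetserver route metrics pasted above:
# mean == aggregate // count, and (assuming milliseconds) convert to seconds.
routes = {
    "puppet-v3-catalog-/*/": {"count": 828, "aggregate": 78472044, "mean": 94773},
    "puppet-v3-node-/*/":    {"count": 831, "aggregate": 52111179, "mean": 62709},
    "puppet-v3-report-/*/":  {"count": 780, "aggregate": 2677740,  "mean": 3433},
}

for route, m in routes.items():
    computed_mean = m["aggregate"] // m["count"]
    assert computed_mean == m["mean"], (route, computed_mean)
    print(f"{route}: ~{m['mean'] / 1000:.1f} s per request")
# → puppet-v3-catalog-/*/: ~94.8 s per request
```

So the catalog compile itself dominates, not the PuppetDB hand-off.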

The puppetdb logs are telling me that 'replace catalog' is taking 2-3
seconds and 'replace facts' is taking 10-20 seconds (the previous
puppetdb wasn't logging the time taken, so I can't compare).

I tried changing puppetdb logging to debug, but it doesn't tell me what
it's doing with those 'replace' commands (I don't think so, anyway; I
might've missed it). I haven't found a way to manually process one of
those queued command files; do you know if there is a way to do that?
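(The closest thing I've found is that the queue files are just the command payloads, and PuppetDB's documented commands endpoint, /pdb/cmd/v1, accepts a POSTed JSON envelope of the form {"command": ..., "version": ..., "payload": ...}. A sketch of building that envelope — the command name is real, but the version number and the certname/payload fields here are my assumptions; the right wire-format version depends on the PuppetDB release, so check the commands API docs:)

```python
import json

def wrap_command(command, version, payload):
    """Build the JSON envelope that PuppetDB's /pdb/cmd/v1 endpoint expects.

    `command` is e.g. "replace facts" or "replace catalog"; `version` is the
    wire-format version for your PuppetDB release (assumed here, check the
    commands API docs for the correct number).
    """
    return {"command": command, "version": version, "payload": payload}

# Hypothetical example: re-submit a facts payload pulled from a queue file.
facts_payload = {"certname": "agent1.example.com",
                 "environment": "production",
                 "producer_timestamp": "2018-01-10T06:36:00Z",
                 "values": {}}
envelope = wrap_command("replace facts", 5, facts_payload)
body = json.dumps(envelope)
# POST `body` to http://localhost:8080/pdb/cmd/v1 with
# Content-Type: application/json (e.g. curl -X POST -d @file).
```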

I've set up postgres logging to flag queries over 200ms (on both the
primary and the replica) and I get very little (a couple of queries every
now and then), so I don't think it's the database.
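(For reference, that's just the standard slow-query threshold in postgresql.conf:)

```
# postgresql.conf -- log any statement that runs longer than 200 ms
log_min_duration_statement = 200
```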


Cheers,
Chris.
