Anyway, caching the catalog (i.e. serializing it as YAML to disk) has been
known to be very slow for years. It usually makes the agent look like it's
hanging, especially on huge catalogs; I have catalogs with up to 10k
resources.
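The cost is easy to reproduce outside Puppet itself. A minimal Ruby sketch, using synthetic resource-like data rather than Puppet's actual catalog classes, just to show the shape of the problem:

```ruby
require 'yaml'
require 'benchmark'

# Build a synthetic "catalog" of 10,000 resource-like hashes
# (illustrative only -- not Puppet's real catalog format).
catalog = {
  'resources' => Array.new(10_000) do |i|
    {
      'type'       => 'File',
      'title'      => "/etc/example/file#{i}",
      'parameters' => { 'ensure' => 'present', 'mode' => '0644' }
    }
  end
}

# Time a full YAML dump of the whole structure.
elapsed = Benchmark.realtime { YAML.dump(catalog) }
puts format('serialized %d resources in %.2fs',
            catalog['resources'].size, elapsed)
```

The absolute numbers depend on the Ruby and YAML library versions, but the cost grows with the size of the whole catalog, regardless of how much of it will actually be applied.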

It would be great to improve that, from our point of view, but I do wonder:
at what point does this become a killer problem for everyone?

If I push a simple change and want to apply it immediately (hence manually), and Puppet seems to hang for two minutes serializing the catalog to disk, that is not a killer problem, but it is very annoying. I usually use tags to skip most of the catalog when pushing immediate changes, but since the whole catalog still has to be serialized, tags only speed things up while applying the catalog.

This means that if the shortened run (using tags) takes ~4 minutes and 2 minutes of that seem to be spent serializing the catalog, it slows you down quite a bit. Puppet doesn't have to be super fast, but once you start thinking twice or thrice about whether you're really ready to run it with the new changes, we're past the point where we could call it reasonably fast.
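The asymmetry is structural: tag filtering only happens at apply time, after the full catalog has already been serialized for the cache. A rough Ruby sketch of that flow (the names and data are illustrative, not Puppet internals):

```ruby
require 'yaml'

# Illustrative resources, each carrying tags
# (not Puppet's real resource classes).
catalog = Array.new(1000) do |i|
  { 'title' => "res#{i}", 'tags' => [i.even? ? 'web' : 'db'] }
end

# Step 1: the *whole* catalog is serialized for the cache...
cached = YAML.dump(catalog)

# Step 2: ...but only the tagged subset is actually applied.
to_apply = catalog.select { |r| r['tags'].include?('web') }

puts "cached #{catalog.size} resources, applying #{to_apply.size}"
```

So tags shrink step 2, but step 1 still pays for every resource in the catalog, which is why the serialization time dominates a shortened run.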

Not to forget the memory consumption during serialization, which we mentioned in #2892: it can spike quite high and affect other running services. And since the only thing serialization wins me (AFAIR) is a cached catalog, which I don't really use anyway (I run things from cron without using the cached catalog), it's quite a high price to pay.

Having a Puppet release focused on stability and performance would really be
appreciated. I think there is some room for improvement in various places.

I don't think we will ever have a release that is exclusively focused
on performance.

The discovery of the horrible performance drop in 2.7, though, has
led us to see that we need to focus on it in a more formal way.

You can expect that performance, stability, and correctness will drive
the roadmap for the platform team much more than "shiny new features"
will over the coming months.

These are vital things to deliver, and my team is absolutely committed to them.

...and I am genuinely sorry that we have not communicated about this
to everyone effectively.  This should have been obvious outside our
walls, and wasn't.

AFAIR, a number of points have been raised over the past few years where Puppet could be improved regarding performance and resource usage, but none of them were really followed up on.

However, in my opinion each of the past few releases has become significantly slower, without any real investigation into why this happened or any answer as to whether we're willing to pay that price.

Having a focus on performance, and evaluating changes for their performance impact (or improvement), would already be an improvement for me. If we knew with each new release whether Puppet got slower or faster, and why, that would be a good first step! :)

Thanks a lot!

~pete

--
You received this message because you are subscribed to the Google Groups "Puppet Developers" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/puppet-dev?hl=en.