On Mar 20, 2010, at 6:43 AM, Peter Meier wrote:
Here is my attempt at fixing this issue and taking into account Luke's
ideas. I did only some minimal testing, but it looks like it works fine
(and is now ultra fast for the case Peter is concerned about).
Do you expect cases where it shouldn't be ultra fast now?
The patch is still against 0.25.x, and can be located in my github
repository in the branch tickets/0.25.x/3396. This branch also contains
the checksum => none patch of #2929.
Please review,
debug: Time for triggering 3 events to edges in 0.00 seconds
debug: Time for triggering 23044 events to edges in 13.70 seconds
debug: Time for triggering 0 events to edges in 0.00 seconds
debug: Time for triggering 0 events to edges in 0.00 seconds
debug: Time for triggering 0 events to edges in 0.00 seconds
debug: Time for triggering 0 events to edges in 0.00 seconds
debug: Finishing transaction 23456251255100 with 23044 changes
real 4m9.660s
user 3m3.727s
sys 0m25.846s
Great Work!
It still hung for about 30 seconds after reporting that the transaction
had finished, and kept burning CPU, but that is negligible compared to
the time it wasted before, and it may not even be related to this issue.
We're actually getting to the point where recursive file management is
reasonable to do on large file sets. Not quite, but we're a heckuva
lot closer. Wish we'd known earlier what a barrier this (relatively
simple, in the end) problem was to efficiency.
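For anyone following along, here is a minimal sketch of the kind of
recursive file resource this thread is about; the path and source module
below are hypothetical, and checksum => none assumes the behaviour added
by the #2929 patch:

    # Manage a large directory tree recursively, skipping per-file
    # checksumming (path and module name are made up for illustration).
    file { '/srv/large-tree':
      ensure   => directory,
      source   => 'puppet:///modules/example/large-tree',
      recurse  => true,
      checksum => none,
    }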
--
Today at work an ethernet switch decided to take the 'N' out of NVRAM
-- Richard Letts
---------------------------------------------------------------------
Luke Kanies -|- http://reductivelabs.com -|- +1(615)594-8199