https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=36721

--- Comment #9 from David Cook <[email protected]> ---
I had another crack at this...

I watched a "starman worker" start up using a hardcoded delay and strace, and I
took note of every non-Koha Perl module that gets loaded at startup; there are
about 90.

__Memory savings__
After adding the 90 modules to Koha::Preload, I did manage to get a significant
drop (about 4-5GB) in RAM used across a large number of instances, even when
each instance has a small number of workers.
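The saving comes from copy-on-write: pages loaded in the starman master before forking stay shared until a worker writes to them. A minimal sketch for checking this on Linux (4.14+ for smaps_rollup; the "starman worker" pattern matches the process titles mentioned above and may need adjusting):

```shell
# Report how much of a process's RSS is still shared with its parent.
mem_report() {
    # Shared_* lines are pages still copy-on-write shared after fork;
    # Private_Dirty is memory the worker has written to since forking.
    grep -E '^(Rss|Shared_Clean|Shared_Dirty|Private_Dirty):' "/proc/$1/smaps_rollup"
}

for pid in $(pgrep -f 'starman worker'); do
    echo "worker $pid:"
    mem_report "$pid"
done
```

With preloading, the Shared_* numbers should be noticeably higher per worker, which is where the 4-5GB across many instances comes from.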

__Startup time__
The more interesting thing to note was that "koha-plack --restart --quiet
$(koha-list --enabled --plack)" took 4 minutes instead of 1 minute.

So preloading a lot of modules actually made the overall restart slower! I
think that's because koha-plack only moves on to the next instance once that
instance's starman master has launched, and the master is slower to launch when
it has a lot of modules to load first.

__Startup load__
While the instances with preloaded modules were slower to start, the load was
much, much lower: about 2 with 2 workers, and it stayed very stable.
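To reproduce the load observation, sampling the 1-minute load average a few times during a restart run is enough; the interval and sample count here are arbitrary:

```shell
# Sample the 1-minute load average (first field of /proc/loadavg)
# a few times while the restart is in progress.
for i in 1 2 3; do
    cut -d' ' -f1 /proc/loadavg
    sleep 1
done
```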

__Reload time/load__
Without preloading, you can reload a large number of instances in about 30
seconds, but it puts a huge load on the system.

With preloading, reloading seems to take the same amount of time and put the
same load on the system, at least in terms of the time reported by the
"koha-plack" tool. Anecdotally, the actual reload seems faster overall, which
would make sense.

--

So overall... there are pros and cons. Preloading modules means lower memory
use and higher stability, but it does mean slower starts/restarts.

-- 
_______________________________________________
Koha-bugs mailing list
[email protected]
https://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-bugs
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/