On Fri, Sep 3, 2021 at 7:10 PM Kevin Lyda <ke...@lyda.ie> wrote:

> [sent a second time, now to the list, sorry]
>
> On Fri, Sep 3, 2021 at 3:53 PM Christian Schneider
> <cschn...@cschneid.com> wrote:
> > How can you say "it never was a problem" if we never had to live
> > without the stat cache?
> > Can you back up that claim with numbers? Some of us run high-volume
> > websites where an increase in system load could be a problem.
>
> Using this bash script:
>
> #!/bin/bash
> echo "Without cache"
> time ./sapi/cli/php -d enable_stat_cache=False "$@"
> echo "With cache"
> time ./sapi/cli/php "$@"
>
> To run this php script:
>
> <?php
> $iterations = 1000000;
> function all_the_stats($filename) {
>     @lstat($filename);
>     @stat($filename);
> }
> while ($iterations--) {
>     all_the_stats(__FILE__);
> }
>
> I see this output:
>
> Without cache
>
> real 0m7.326s
> user 0m5.877s
> sys 0m1.448s
> With cache
>
> real 0m5.010s
> user 0m5.009s
> sys 0m0.000s
>
> So that's roughly 2.3 seconds slower to do 2 million uncached stat calls
> vs cached ones with a 100% cache hit rate (minus the first stat/lstat calls).
>
> Technically, yes, it's slower, but I'd suggest that making 2 million stat
> calls to a single file is a bad idea. And remember, the cache holds *one*
> file. If you stat a second file it's a cache miss.
>

These numbers look pretty good to me. It would be great if someone on
Windows and macOS could repeat this experiment, so we have an idea of how
other platforms fare in this worst-case scenario.
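
For what it's worth, the ~2.3s delta over 2,000,000 calls works out to a
bit over 1µs of extra overhead per uncached stat call.

The single-file limitation Kevin points out above is also easy to
demonstrate with a small variation of his script. Something along these
lines (just a sketch; it reuses __DIR__ as the second stat target so both
paths are guaranteed to exist) alternates between two paths, so every call
should be a cache miss and the cached and uncached timings should come out
nearly identical:

<?php
// Alternate stat() between two different paths so the single-entry
// stat cache never produces a hit.
$iterations = 1000000;
$file = __FILE__;  // this script
$dir  = __DIR__;   // its directory, used as a second stat target
while ($iterations--) {
    @stat($file);
    @stat($dir);
}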

Regards,
Nikita
