Hello,

Following on from a chat with some very helpful contributors on GitHub (https://github.com/php/php-src/issues/16484), I'd like to ask for feedback on a change and potentially put it forward as an RFC.
Change Description:

The `opcache.file_cache` folder must currently be writable. I propose a read-only mode (potentially enabled via `opcache.file_cache_read_only=1`) that allows the folder to be used for loading files even when it isn't writable. Enabling `opcache.file_cache_read_only=1` will fail unless `opcache.validate_timestamps=0` and `opcache.revalidate_freq=0` are also set, signalling that the cached code is intended to be read-only. An example PR to achieve this is here: https://github.com/php/php-src/pull/16551 (I'd love some feedback on it, please!).

Why?:

When building containers (e.g. Docker) that contain both the PHP runtime and the application code (which isn't going to change once built into the container), a lot of CPU cycles are wasted re-warming the opcache every time an instance of the container starts, even though the code always remains the same. On a large, distributed container platform like Lambda, App Runner, App Engine or Cloud Run, there are significant performance gains and cost savings to be realised by pre-warming the opcache at container build time (i.e. in the CI/CD pipeline) and then loading it from disk at container startup. It is also fairly common on these platforms for the entire file system to be read-only (e.g. Kubernetes `readOnlyRootFilesystem`) as a security-hardening measure, which makes using the existing `opcache.file_cache` impossible.

Usage:

The intended usage for this change is to build the opcache file cache via the CLI in the CI/CD workflow, with something like this:

`@php -dopcache.enable_cli=true -dopcache.file_cache=$(pwd)/opcache -dopcache.file_cache_only=true prewarm.php`

```php
<?php

/**
 * Composer autoloader...
 */
$filesToLoad = require __DIR__ . '/vendor/composer/autoload_files.php';

// Prevent Composer from autoloading these files again...
foreach ($filesToLoad as $fileIdentifier => $file) {
    $GLOBALS['__composer_autoload_files'][$fileIdentifier] = true;
}

require __DIR__ . '/vendor/autoload.php';

$finder = (new Symfony\Component\Finder\Finder())
    ->files()
    ->name('/\.php$/')
    ->ignoreDotFiles(false)
    ->ignoreVCSIgnored(false)
    ->exclude([
        'vendor/composer',
    ])
    ->notPath([
        'fuel/core/bootstrap.php',
        'fuel/core/vendor/htmlawed/htmlawed.php',
    ])
    ->in(__DIR__);

foreach ($finder as $file) {
    $filepath = $file->getRealPath();

    echo "Compiling file " . $file->getRelativePathname() . " ... " . (
        opcache_is_script_cached($filepath)
            ? 'EXISTS'
            : (opcache_compile_file($filepath) ? 'OK' : 'FAIL')
    ) . "\n";
}
```

Then the `opcache` folder is included in the Docker container build, along with the application code and the following values in `php.ini`:

```
; Tune opcache
opcache.revalidate_freq=0
...other values...
; this below should be true on production
opcache.enable_file_override=true
; this below should be false on production
opcache.validate_timestamps=false

; Enable pre-warmed opcache
opcache.file_cache=/workspace/opcache
opcache.file_cache_read_only=true
opcache.file_cache_consistency_checks=false
```

Considerations:

Part of the opcache file path on disk is the `zend_system_id`, which, from my testing, only stays the same across the exact same build of PHP. As a result, if your service is restarted to install a PHP update, the opcache files are no longer valid anyway; that isn't really a problem for Docker containers, though, since they stay static until the whole image is rebuilt.
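To make that concrete, this is roughly what the pre-warmed folder looks like on disk in my setup (illustrative only, based on observed behaviour rather than a documented guarantee; the application paths are made up for the example): the `zend_system_id` is the first path component under `opcache.file_cache`, with each script's real path mirrored beneath it.

```
/workspace/opcache/
└── <zend_system_id>/
    └── workspace/
        └── app/
            ├── public/index.php.bin
            └── vendor/autoload.php.bin
```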
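One mitigation I can imagine (not part of the PR, just a sketch of how a user could guard against a build mismatch) is to record the PHP build that produced the cache at prewarm time and warn at container startup if it differs. `PHP_VERSION` plus the loaded extension list is only a rough proxy for `zend_system_id`, which as far as I know isn't exposed to userland, but it catches the common case of the runtime image drifting from the build image:

```php
<?php

// At the end of prewarm.php (CI/CD, same PHP image as production),
// write a small manifest next to the cached files:
file_put_contents(__DIR__ . '/opcache/build-manifest.json', json_encode([
    'php_version'  => PHP_VERSION,
    'zend_version' => zend_version(),
    'extensions'   => get_loaded_extensions(),
], JSON_PRETTY_PRINT));

// At container startup (e.g. an init/healthcheck script),
// compare the manifest against the running build:
$manifest = json_decode(file_get_contents(__DIR__ . '/opcache/build-manifest.json'), true);

if ($manifest['php_version'] !== PHP_VERSION) {
    // A different PHP build means a different zend_system_id, so the pre-warmed
    // files are silently ignored and every script gets recompiled at runtime.
    error_log(sprintf(
        'Pre-warmed opcache was built with PHP %s but the runtime is PHP %s',
        $manifest['php_version'],
        PHP_VERSION
    ));
}
```

In my case I'd probably fail the container health check on a mismatch, but logging is enough to illustrate the idea.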
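For reference, the benchmarks below use a curl write-out file. I haven't pasted `curl-format.txt` here, but assuming the standard `curl -w` timing variables, something along these lines produces the output shown:

```
time_namelookup: %{time_namelookup}s\n
time_connect: %{time_connect}s\n
time_appconnect: %{time_appconnect}s\n
time_pretransfer: %{time_pretransfer}s\n
time_redirect: %{time_redirect}s\n
time_starttransfer: %{time_starttransfer}s\n
----------\n
time_total: %{time_total}s\n
```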
Benchmarks:

First request without the opcache pre-warmed:

```
$ curl -w "@curl-format.txt" -o /dev/null -s http://localhost:32768/_ah/warmup
time_namelookup: 0.000032s
time_connect: 0.000160s
time_appconnect: 0.000000s
time_pretransfer: 0.000193s
time_redirect: 0.000000s
time_starttransfer: 0.221376s
----------
time_total: 0.221478s
```

First request with the opcache pre-warmed:

```
$ curl -w "@curl-format.txt" -o /dev/null -s http://localhost:32768/_ah/warmup
time_namelookup: 0.000029s
time_connect: 0.000164s
time_appconnect: 0.000000s
time_pretransfer: 0.000198s
time_redirect: 0.000000s
time_starttransfer: 0.053059s
----------
time_total: 0.053171s
```

The CPU on the machine I'm testing from is a lot more powerful than the CPU allocation for the containers in production (production is lots of small instances scaling horizontally), so I expect the improvement to be even more dramatic there. I'm struggling to benchmark that properly, though, as prod/pre-prod for serverless all has a read-only root FS.

**What does everyone think? Feedback welcome, please.**