ID: 64502
User updated by: normadize at gmail dot com
Reported by: normadize at gmail dot com
Summary: BIG Request: files or memcached based storage
(please read before dismissing)
Status: Analyzed
Type: Feature/Change Request
Package: opcache
PHP Version: 5.5.0beta1
Block user comment: N
Private report: N
New Comment:
"You posted this in two places, so I will add my answer here as well:"
I didn't know which has more visibility. Apologies if this is a hassle. I'll
reply here too, once again, but do let me know which one I should stick to if
required.
"this would be exactly the same if you had a file-based storage mechanism"
Indeed, and I mentioned that as well. However, the OS would cache those files
after the first hit, so from that point on you're effectively reading from RAM
... granted, only until that portion of the OS file cache is evicted. So on
average, this should still bring a visible advantage.
If, however, you always read from disk, i.e. the OS does not cache the files,
then yes, the gains are modest. But it's unlikely the OS won't cache them at
all; you'd get at least a few file-cache (i.e. RAM) hits before the entries
are evicted, especially on busy websites.
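Just to make the effect concrete, here is a rough little script (made-up temp
file, nothing to do with opcache internals) showing how much cheaper a second
read of the same file is once the page cache is warm; for a truly cold first
read, drop the caches beforehand:

<?php
// Rough illustration only: hypothetical temp file, no relation to opcache.
// For a genuinely cold first read, run as root first:
//   echo 3 > /proc/sys/vm/drop_caches
$path = sys_get_temp_dir() . '/pagecache-demo.bin';
file_put_contents($path, str_repeat('x', 8 * 1024 * 1024)); // ~8 MB payload

$t = microtime(true);
file_get_contents($path);
printf("first read:  %.5f s\n", microtime(true) - $t);

$t = microtime(true);
file_get_contents($path);
printf("second read: %.5f s\n", microtime(true) - $t);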
p.s. I no longer have the evidence, but I did mention I saw big improvements a
long time ago when eAccelerator also had a file-based storage engine.
Previous Comments:
------------------------------------------------------------------------
[2013-03-24 15:20:18] [email protected]
You posted this in two places, so I will add my answer here as well:
You are making a lot of assumptions here based on no evidence. The way opcode
caches work, it's not like they create a single op_array of an entire
application and write that out. Each included file is turned into an op_array,
so when you say an app has to load hundreds of files, this would be exactly
the same if you had a file-based storage mechanism. The OS file cache would
work exactly the same for php script files vs. op_array files, so no
difference there. When I actually tested this with APC I saw less than a 5%
speedup reading op_arrays from disk vs. simply re-compiling from the script.
This is likely because we actually have to read more stuff in the compiled
op_array case, since it isn't just op_arrays that get stored. We also store
function and object lists alongside, so the savings you get from not having to
recompile are eaten by the extra disk reads you need.
And yes, a memcache backend would suffer from the same fate assuming it is a
standard distributed memcache setup. A local memcache instance with no network
traffic might work, but I can't see that being available to many SuPHP users.
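Purely as an illustration of the per-file model described above (and
explicitly not how APC or opcache actually store things), a hypothetical
userland sketch of one-cache-entry-per-script could look like this, with each
entry keyed by path and mtime and extra data serialized alongside:

<?php
// Hypothetical userland sketch only -- not APC/opcache internals. Each file
// gets its own cache entry (mirroring "one op_array per included file"), and
// the entry holds extra data alongside, so it can be bigger than the source.
function cache_entry_path($script)
{
    return sys_get_temp_dir() . '/opdemo-' . md5($script . filemtime($script));
}

function load_script_entry($script)
{
    $entry = cache_entry_path($script);
    if (is_file($entry)) {
        return unserialize(file_get_contents($entry)); // warm hit: one read per file
    }
    $data = array(
        'compiled' => file_get_contents($script),      // stand-in for the op_array
        'symbols'  => array('functions', 'classes'),   // stand-in for the extra tables
    );
    file_put_contents($entry, serialize($data));
    return $data;
}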
------------------------------------------------------------------------
[2013-03-24 15:00:51] normadize at gmail dot com
Description:
------------
Now that Zend Optimizer Plus will make it to 5.5, I think it's time to
resurface this discussion. PLEASE do read it before dismissing it.
Times have changed. There are a lot of SuPHP (and similar) installations out
there that suffer from horrible performance ... and, as we know, all current
opcode caches fail there because nothing survives between requests. SuPHP and
the like now account for a really non-negligible share of PHP installations.
All those installations would greatly benefit from a storage engine that
survives between requests. Both users and hosting providers would be extremely
grateful!
There are so many slow and badly written but still very popular scripts out
there, cough WordPress cough, especially when they have all sorts of other
popular and slow plugins executed as part of a single request. This tends to
be the norm now with websites based on WordPress, Drupal, etc ... and a ton of
them are hosted in SuPHP setups.
I know the cons of file-based storage but let's reconsider it for a moment.
MEMCACHED.
This would be a nice solution, but most probably (especially in SuPHP
environments) hosting providers won't allow it or won't provide it at all,
since it would be a memory hog as clients fill up the server's RAM with cached
PHP opcodes. It would still be great to have as a storage engine, though.
FILE-BASED.
Everybody says cache hits would be slow. And yes, they would be slower than
SHM. However:
- It would still be considerably faster than loading and executing an entire
chain of long scripts, e.g. WordPress + tons of plugins (all those php files
have to be read anyway without an opcode cache).
- The OS would cache the opcode files transparently in its file cache -- and
hosting providers will not disable that, as they don't want their disks
thrashed -- so this would actually be pretty much as fast as SHM-based
solutions as long as the files are in the OS's file cache. On average, it
would still be considerably faster than running without any opcode cache at
all ... probably on par with a memcached-based cache, which incurs network
latency.
- The OS would automatically drop the least recently accessed files from its
file cache when memory fills up, so garbage collection would be pretty simple.
This solution would be great since the server's memory won't be hogged (as
opposed to memcached or SHM-based solutions), and it would not require any
dependencies for it to run (as opposed to memcached).
Users and providers alike would enjoy a faster service and fewer resources
hogged. It would be a great compromise for those numerous SuPHP-and-friends
setups.
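Just to make the idea concrete, here is a tiny userland sketch (all names made
up, and obviously not an opcode cache) of the access pattern I'm describing:
per-user cache files on disk, warm hits served out of the OS page cache, no
shared memory and no daemon required:

<?php
// Userland sketch only, with made-up names -- not an opcode cache. The point
// is the pattern: per-user cache files on disk, warm hits usually served from
// the OS page cache, and nothing beyond the filesystem is required.
function cached($dir, $key, $ttl, $producer)
{
    $file = $dir . '/' . md5($key);
    if (is_file($file) && filemtime($file) > time() - $ttl) {
        return unserialize(file_get_contents($file)); // warm hit: usually straight from RAM
    }
    if (!is_dir($dir)) {
        mkdir($dir, 0700, true);
    }
    $value = $producer();                             // the expensive work we want to skip
    file_put_contents($file, serialize($value), LOCK_EX);
    return $value;
}

// Hypothetical usage: pretend the closure is the work an opcode cache avoids.
$report = cached(sys_get_temp_dir() . '/php-cache-' . getmyuid(), 'report', 300, function () {
    usleep(200000); // stand-in for recompiling a long chain of scripts
    return array('generated' => date('c'));
});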
I know that tons of people would employ such a solution if it existed!
Hoping you'll consider this.
Cheers.
p.s. I remember when eAccelerator had file-based storage and it was making a
great deal of difference for such big and slow scripts. Now there is no PHP
opcode cache (that I know of) which works with SuPHP.
------------------------------------------------------------------------