Edit report at https://bugs.php.net/bug.php?id=63413&edit=1
ID: 63413
Comment by: ben at indietorrent dot org
Reported by: ben at indietorrent dot org
Summary: Intermittent warning and fatal error on require()
statement
Status: Open
Type: Bug
Package: Scripting Engine problem
Operating System: Windows 7 x64
PHP Version: 5.4.8
Block user comment: N
Private report: N
New Comment:
@giunta dot gaetano at gmail dot com:
I can't thank you enough; your insightful comments reveal the "missing link":
NTFS junction points. I didn't think they were relevant until reading your
comments.
In my comment dated [2012-11-16 16:29 UTC], I stated that I set up a virtual
machine with the same WAMP-stack components that are installed on the "problem"
machines, but that I was unable to reproduce the problem. Well, I forgot to
configure the site-root as an NTFS junction point in those tests.
As soon as I changed the site-root from a regular directory to an NTFS junction
point, I was able to reproduce the problem almost immediately.
As stated in my previous comments, concurrency seems to be a factor in how
likely the failures are to occur. Unfortunately, this may point to a
race condition.
It seems safe to say that include() and require() do not perform reliably
across NTFS junction points.
Whether or not the same root cause is behind the NFS problems on Linux remains
to be seen.
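Given that the failure appears to be a transient sharing violation rather than a genuinely missing file, one conceivable stopgap is a retry wrapper around the inclusion. The sketch below is purely illustrative (the function name, attempt count, and back-off interval are all made up, not part of this report) and assumes the file becomes readable again after a short delay:

```php
<?php
// Hypothetical workaround sketch: retry an inclusion a few times before
// giving up, on the assumption that the open fails with a transient
// sharing violation rather than because the file is actually missing.
function require_with_retry($file, $attempts = 3, $backoffMicroseconds = 10000) {
    for ($i = 0; $i < $attempts; $i++) {
        clearstatcache(true, $file);   // drop PHP's cached stat result for this path
        if (is_readable($file)) {
            return require $file;      // returns the included file's return value
        }
        usleep($backoffMicroseconds);  // back off briefly, then retry
    }
    // Fall through to a plain require so the usual warning/fatal error surfaces.
    return require $file;
}
```

This does not address the underlying race, of course; it only papers over it for the rare iterations where the open fails.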
Previous Comments:
------------------------------------------------------------------------
[2013-02-14 10:11:41] giunta dot gaetano at gmail dot com
By the way, I did some testing on my rig: Win7 64-bit, Apache 2.4.3/VC10 from
Apache Lounge, PHP 5.3.20/VC9.
I used the test scripts provided above, with "ab" hitting them 100 times in a
row at concurrency levels ranging from 1 to 64.
With no NTFS junction in use => no sign of errors whatsoever.
With an NTFS junction in use => one or two PHP errors do happen across the
whole test (127k requests).
NB: just accessing the main file over the junction is fine. The problems happen
when the require() call is for a file inside the junctioned directory.
------------------------------------------------------------------------
[2013-02-13 17:34:19] giunta dot gaetano at gmail dot com
A behaviour which has been puzzling me and that might (or might not) be
related: we also have some failing code which assumes that filemtime() should
not return zero for an existing file. This generally happens on Linux servers
at customer sites (PHP under Apache's prefork MPM), at times of high load, for
NFS-mounted files.
------------------------------------------------------------------------
[2012-11-16 16:29:11] ben at indietorrent dot org
Additional testing indicates that this problem is likely related to a specific
piece of software that has been installed on the affected machines, and not PHP
or the manner in which it is integrated with Apache.
I tested the steps-to-reproduce with the exact same project/code-base on a LAMP
stack (Ubuntu 12.04 + Apache 2.2.22 + MySQL 5.5.24 + PHP 5.3.10) and cannot
reproduce the issue, no matter how hard I hammer the server with requests.
As mentioned previously, I am unable to reproduce this issue with a comparable
stack on Mac OS 10.8, either.
These facts pointed to a Windows-specific cause, perhaps related to Apache's
"winnt" MPM, so I set up a VM with a pristine Windows 7 x86 installation. I
installed the same stack components as are installed on the computers on which
this issue occurs. Yet, after several hours of hammering the server with
constant page-requests, not a single error has been registered in PHP's error
log.
If at any point I'm able to determine which software causes this issue, I will
post my findings here.
------------------------------------------------------------------------
[2012-11-05 15:45:02] ben at indietorrent dot org
Another update.
I began to suspect that this is a thread-safety issue, so I downloaded the
latest non-thread-safe version of PHP and configured Apache to serve PHP files
via Fast-CGI (mod_fcgid).
To my surprise, this problem still occurs, and it seems to be worse with
Fast-CGI than with Mod-PHP.
Also, I tried to reproduce the problem on a Mac with Mac OS 10.8 and a fairly
modern MAMP installation that runs PHP via Mod-PHP. No matter how hard I
hammered on Apache, these spurious require() failures did not occur.
------------------------------------------------------------------------
[2012-11-01 21:07:42] ben at indietorrent dot org
It occurred to me to examine Apache's internal error log for entries that
correspond to those in PHP's error log when this occurs. I noticed what may be
an important clue.
Sample entries in PHP's error log:
-----------------------
[01-Nov-2012 13:29:41 America/New_York] PHP Warning:
require(../include/global.inc.php): failed to open stream: No such file or
directory in C:\Users\Ben\Documents\Projects\project
Testing\project-svn\trunk\public\auxiliary\css.php on line 5
[01-Nov-2012 13:29:41 America/New_York] PHP Fatal error: require(): Failed
opening required '../include/global.inc.php' (include_path='.;C:\Program
Files\php\includes;C:\Program Files\php\PEAR\pear') in
C:\Users\Ben\Documents\Projects\project
Testing\project-svn\trunk\public\auxiliary\css.php on line 5
[01-Nov-2012 13:29:43 America/New_York] PHP Warning:
require(C:/Users/Ben/Documents/Apache/project/protected/constants.inc.php):
failed to open stream: No such file or directory in
C:\Users\Ben\Documents\Projects\project
Testing\project-svn\trunk\public\include\global.inc.php on line 18
[01-Nov-2012 13:29:43 America/New_York] PHP Fatal error: require(): Failed
opening required
'C:/Users/Ben/Documents/Apache/project/protected/constants.inc.php'
(include_path='.;C:\Program Files\php\includes;C:\Program Files\php\PEAR\pear')
in C:\Users\Ben\Documents\Projects\project
Testing\project-svn\trunk\public\include\global.inc.php on line 18
-----------------------
Roughly-corresponding entries in Apache's error log:
-----------------------
[Thu Nov 01 13:29:21.235844 2012] [core:error] [pid 6780:tid 1136] (OS 32)The
process cannot access the file because it is being used by another process. :
[client 10.0.1.57:65162] AH00127: Cannot map GET
/project/public/js/jquery.accordion.js HTTP/1.1 to file, referer:
https://ben-pc/project/public/company/history/
[Thu Nov 01 13:29:41.141441 2012] [core:error] [pid 6780:tid 1128] (OS 32)The
process cannot access the file because it is being used by another process. :
[client 10.0.1.57:65204] AH00127: Cannot map GET
/project/public/common/js/libs/modernizr-2.5.3.min.js HTTP/1.1 to file,
referer: https://ben-pc/project/public/contact/service-inquiry/
-----------------------
While I don't see any direct correlation between the files cited in PHP's error
log and those cited in Apache's error log, the message "(OS 32)The process
cannot access the file because it is being used by another process" seems very
relevant to the observed behavior in PHP.
This observation inspired me to create a very simple test script which, much to
my surprise, enables me to reproduce this issue at will. I was able to
reproduce the issue in Windows' Safe Mode, too, which seems to rule out virus
scanners, filter drivers, and other similar causes.
The key to reproducing this issue is making as many requests per unit of time
as possible. In other words, holding down the browser's refresh shortcut (e.g.,
Ctrl+R or F5) for some period of time eventually causes require() failures to
be logged.
Here's the script:
<?php
$file = 'C:/Users/Ben/Documents/Apache/require-test-include.php';
$counter = 0;
while ($counter < 500) {
    require($file);
    flush();
    $counter++;
}
?>
The contents of "require-test-include.php" are probably largely irrelevant, but
I'm using the following:
<?php
echo '.';
?>
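To quantify the failure rate rather than abort on the first failure, the same loop can be written with include() and the error-suppression operator, so that a transient open failure is counted instead of raising a fatal error. This is only a sketch; the function name is made up, and any path you pass it would be the repro path from this report:

```php
<?php
// Sketch: hammer an inclusion in a loop and count transient open failures
// instead of letting require() abort the request with a fatal error.
function count_include_failures($file, $iterations = 500) {
    $failures = 0;
    for ($i = 0; $i < $iterations; $i++) {
        // @include returns false when the stream cannot be opened; on
        // success it returns 1 (or the included file's return value).
        if ((@include $file) === false) {
            $failures++;
        }
    }
    return $failures;
}

// Usage (path from the report above):
//   count_include_failures('C:/Users/Ben/Documents/Apache/require-test-include.php');
```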
A sample excerpt from PHP's error log contains entries such as the following:
-----------------------
[01-Nov-2012 16:01:53 America/New_York] PHP Warning:
require(C:/Users/Ben/Documents/Apache/require-test-include.php): failed to open
stream: No such file or directory in
C:\Users\Ben\Documents\Apache\require-test.php on line 8
[01-Nov-2012 16:01:53 America/New_York] PHP Fatal error: require(): Failed
opening required 'C:/Users/Ben/Documents/Apache/require-test-include.php'
(include_path='.;C:\Program Files\php\includes;C:\Program Files\php\PEAR\pear')
in C:\Users\Ben\Documents\Apache\require-test.php on line 8
-----------------------
Apache's error log fills up with entries such as the following (I've included
six entries to demonstrate the timestamp variances):
-----------------------
[Thu Nov 01 15:59:02.327506 2012] [core:error] [pid 4420:tid 1040] (OS 32)The
process cannot access the file because it is being used by another process. :
[client 10.0.1.57:62710] AH00127: Cannot map GET /require-test.php HTTP/1.1 to
file
[Thu Nov 01 15:59:02.389509 2012] [core:error] [pid 4420:tid 1040] (OS 32)The
process cannot access the file because it is being used by another process. :
[client 10.0.1.57:62710] AH00127: Cannot map GET /require-test.php HTTP/1.1 to
file
[Thu Nov 01 15:59:02.954339 2012] [core:error] [pid 4420:tid 748] (OS 32)The
process cannot access the file because it is being used by another process. :
[client 10.0.1.57:62740] AH00127: Cannot map GET /require-test.php HTTP/1.1 to
file
[Thu Nov 01 15:59:03.093347 2012] [core:error] [pid 4420:tid 748] (OS 32)The
process cannot access the file because it is being used by another process. :
[client 10.0.1.57:62747] AH00127: Cannot map GET /require-test.php HTTP/1.1 to
file
[Thu Nov 01 15:59:03.491370 2012] [core:error] [pid 4420:tid 948] (OS 32)The
process cannot access the file because it is being used by another process. :
[client 10.0.1.57:62768] AH00127: Cannot map GET /require-test.php HTTP/1.1 to
file
[Thu Nov 01 15:59:03.758385 2012] [core:error] [pid 4420:tid 1040] (OS 32)The
process cannot access the file because it is being used by another process. :
[client 10.0.1.57:62776] AH00127: Cannot map GET /require-test.php HTTP/1.1 to
file
-----------------------
It is worth noting that requesting a "normal" page from one of my PHP
applications, then switching to this test script immediately thereafter and
hammering it with F5, dramatically increases the number of failed require()
calls. Conversely, if I hammer only the above script (and don't browse to
other application URLs on localhost), the frequency of the require() failures
drops to only a few per several thousand requests.
As a final point of note, when the file that is passed to require() references
an NTFS junction point (created with "mklink.exe /J"), require() performance is
degraded noticeably (expected, perhaps), which makes this issue easier to
reproduce. This observation may be at the root of
https://bugs.php.net/bug.php?id=39484 ("intermittent include failure under
symlinked virtual hosts").
How can one feel confident in deploying a PHP application that is guaranteed to
be under a heavy load when this type of problem exists? Even if "this is not a
bug", per se, the fact that this scenario is not handled more gracefully
(either in PHP or Apache) is disconcerting, to say the least.
------------------------------------------------------------------------
The remainder of the comments for this report are too long. To view
the rest of the comments, please view the bug report online at
https://bugs.php.net/bug.php?id=63413