php-general Digest 16 Aug 2010 17:21:20 -0000 Issue 6896


Topics (messages 307486 through 307489):

erratic results from ftp_rawlist
307486 by: clancy_1.cybec.com.au

Re: login to protected directory by php
307487 by: Ashley Sheridan

Including files on NFS mount slow with APC enabled
307488 by: Mark Hunting
307489 by: Jonathan Tapicer

Administrivia:

To subscribe to the digest, e-mail:
php-general-digest-subscr...@lists.php.net

To unsubscribe from the digest, e-mail:
php-general-digest-unsubscr...@lists.php.net

To post to the list, e-mail:
php-gene...@lists.php.net


--
---BeginMessage---
I had a procedure for listing the files in a remote directory recursively using FTP,
using the code below. I thought it was working, but when I tried to use it yesterday
I found it listed every second directory and returned false for the others.

I then tried replacing line A with line B (shown at the end of the procedure). The
procedure then usually worked, but the number of times I had to try the ftp_rawlist
was most often 2, varied randomly from 1 to 5, and on one occasion it failed even
after five tries.

I get the feeling that for some reason the system takes some time to recover after
one invocation of ftp_rawlist before it is ready to run again, but I can find no
indication of why this should be so, or what I can do about it.

Can anyone cast any light on this? My remote website is running under PHP 5.1.6.

(I have now investigated the recursive option on this command, and realise that I
should use it instead of calling the procedure recursively, although this will
require a significant amount of work to implement.)

Clancy

/*  ---------------------------------------------------------------
    1.0 List the contents of remote directory $source_dir recursively
    ---------------------------------------------------------------*/
function list_recsv ($ftp_id, $source_dir, $recsv)
{
//	echo '<h5>Rndx_'.__LINE__.': Source_dir = '.$source_dir.'</h5>';
	set_time_limit(5);
	$files = ftp_rawlist ($ftp_id, $source_dir);		// Line A
	$i = 0; $j = 0; $n = count ($files); $sub_dirs = array();
	echo '<h5>&nbsp;</h5><h5>TB_11_'.__LINE__.': Get_recsv: '.$source_dir.' has '.$n.' files, Try = '.$try.'</h5>';
	while ($i < $n)
	{
		$aa = decrypt ($files[$i]);
		// (Code to list files removed)
		if (substr($aa[0], 0, 1) == 'd')
		{
			$sub_dirs[$j++] = $aa[8];
		}
		++$i;
	}
	if ($recsv)
	{
		$i = 0;
		while ($i < $j)
		{
			$file = $sub_dirs[$i++];
			$new_source = $source_dir.$file;
			list_recsv ($ftp_id, $new_source, $recsv);
		}
	}
}

Line B:	$try = 1;
	while ((($files = ftp_rawlist ($ftp_id, $source_dir)) === false) && ($try < 5)) { ++$try; }
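For what it's worth, the Line B retry can be factored into a generic helper that also waits between attempts, which matches the "needs time to recover" theory. This is only an illustrative sketch, not Clancy's code: `retry_op` and its parameters are made-up names, and it needs PHP 5.3+ for closures.

```php
<?php
// Generic retry helper (illustrative sketch). $op is any callable that
// returns false on failure -- here it would wrap ftp_rawlist().
function retry_op($op, $max_tries = 5, $delay_s = 1)
{
    for ($try = 1; $try <= $max_tries; $try++) {
        $result = $op();
        if ($result !== false) {
            return $result;          // success
        }
        if ($try < $max_tries) {
            sleep($delay_s);         // give the server time to recover
        }
    }
    return false;                    // all attempts failed
}
```

Usage would be along the lines of `$files = retry_op(function () use ($ftp_id, $source_dir) { return ftp_rawlist($ftp_id, $source_dir); });`. Separately, the recursive option mentioned above is ftp_rawlist's third parameter, i.e. `ftp_rawlist($ftp_id, $source_dir, true)`, which returns the whole tree in one call.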
---End Message---
---BeginMessage---
On Mon, 2010-08-16 at 09:27 +0530, kranthi wrote:

> i would configure apache to let the php interpreter handle all kinds of
> extensions (http://httpd.apache.org/docs/2.0/mod/mod_mime.html#addhandler)
>
> even then you'll have to go through all the steps pointed out by Ash;
> the only advantage of this method is a more user-friendly URL.


That would be very slow and prone to failure. What happens when Apache
comes across a binary file that contains <?php inside? It seems
unlikely, but I've found that the more unlikely something is, the more
often it occurs at just the wrong moment! For example, a document that
talks about PHP, or an image that randomly contains those characters as
part of the bitmap data?

Also, the idea of tying an ID into the DB does still allow you to use
friendly URLs, but is the ability to guess filenames really something
you want in a security system? It would be more prone to brute force
attacking I think.
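The "tying an ID into the DB" idea above can be sketched as a lookup function plus a gatekeeper script. This is only an illustrative sketch, not code from the thread: the `protected_files` table, its columns, and the storage directory are all hypothetical.

```php
<?php
// Map an opaque numeric ID to a file path server-side, so real filenames
// are never guessable from the URL. Table/column names are hypothetical.
function protected_file_lookup(PDO $pdo, $id)
{
    $stmt = $pdo->prepare('SELECT path, mime FROM protected_files WHERE id = ?');
    $stmt->execute(array((int) $id));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    return $row === false ? null : $row;
}

// The gatekeeper script (after a session/login check) would then do, roughly:
//   $row = protected_file_lookup($pdo, $_GET['id']);
//   if ($row === null) { header('HTTP/1.1 404 Not Found'); exit; }
//   header('Content-Type: '.$row['mime']);
//   readfile('/var/protected/'.$row['path']);   // directory outside the web root
```

Because the files live outside the document root and IDs carry no filename information, there is nothing to brute-force except the ID space itself.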

Thanks,
Ash
http://www.ashleysheridan.co.uk


---End Message---

[PHP] Including files on NFS mount slow with APC enabled

2010-08-16 Thread Mark Hunting
I am struggling with the performance of some websites that use a lot of
includes (using include_once). The files are on an NFS mount (NFSv4), and
I use APC to speed things up. APC has enough memory, and I can see that all
included files are in the APC cache. apc.include_once_override is
enabled. This is on an Ubuntu Linux 10.04 server.

What surprises me is that using strace I see apache open()ing all
included files. I would think this is not necessary as APC has these
files in its cache. Opening a file takes 1-3 ms, the websites include
100-200 files, so this costs about half a second for each request. I
tried a lot to prevent this but my options are exhausted. Is it normal
that all these files are open()ed each time, and if so how can I speed
up these includes?

Part of the trace is below, look especially at the first line where 2ms
are lost while this file is in the APC cache:

open("/[removed]/library/Zend/Application.php", O_RDONLY) = 1440 <0.002035>
fstat(1440, {st_mode=S_IFREG|0755, st_size=11365, ...}) = 0 <0.000137>
fstat(1440, {st_mode=S_IFREG|0755, st_size=11365, ...}) = 0 <0.000124>
fstat(1440, {st_mode=S_IFREG|0755, st_size=11365, ...}) = 0 <0.000133>
mmap(NULL, 11365, PROT_READ, MAP_SHARED, 1440, 0) = 0x7faf3f068000 <0.000395>
stat("/[removed]/library/Zend/Application.php", {st_mode=S_IFREG|0755, st_size=11365, ...}) = 0 <0.000219>
munmap(0x7faf3f068000, 11365)           = 0 <0.000151>
close(1440)                             = 0 <0.000845>

Thanks,
Mark

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Including files on NFS mount slow with APC enabled

2010-08-16 Thread Jonathan Tapicer
Hi,

By default, APC does a stat call to check whether every included file has
been modified. You can disable this by setting apc.stat to 0
(http://www.php.net/manual/en/apc.configuration.php#ini.apc.stat); try
that and see if it improves performance. Of course, you will then have
to clear the APC opcode cache manually every time you modify a PHP file,
since APC won't detect the change.
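In php.ini terms this advice amounts to the following (illustrative values; apc.include_once_override is the setting Mark already has enabled):

```ini
; php.ini, APC section -- illustrative sketch
apc.stat = 0                   ; skip the per-include modification check
apc.include_once_override = 1  ; cheaper include_once/require_once handling
```

With apc.stat = 0, the cache must be cleared (or the web server restarted) after deploying changed PHP files.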

Regards,

Jonathan




Re: [PHP] Including files on NFS mount slow with APC enabled

2010-08-16 Thread Mark Hunting
Thanks for your answer. I have tested this option before, and it indeed
disables the stat() operation. However, it doesn't disable the open()
operation, which is about 10x slower than the stat() call (see my
example trace).
