Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-08 Thread Graham Cossey
On Fri, Jan 8, 2010 at 3:48 AM, Robert Cummings rob...@interjinn.com wrote:


 They almost always make your shit run faster.

I love your final statement Robert!
A reply of good grammar and vocabulary summarised most succinctly.


-- 
Graham

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-08 Thread Andrew Ballard
On Thu, Jan 7, 2010 at 10:48 PM, Robert Cummings rob...@interjinn.com wrote:
 ...
 They almost always make your shit run faster.

You know they make medicine for that?   ;-)

Andrew

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-08 Thread Robert Cummings

Graham Cossey wrote:

On Fri, Jan 8, 2010 at 3:48 AM, Robert Cummings rob...@interjinn.com wrote:


They almost always make your shit run faster.


I love your final statement Robert!
A reply of good grammar and vocabulary summarised most succinctly.


:)


--
http://www.interjinn.com
Application and Templating Framework for PHP

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-08 Thread Robert Cummings

clanc...@cybec.com.au wrote:

On Thu, 07 Jan 2010 22:48:59 -0500, rob...@interjinn.com (Robert Cummings) 
wrote:


clanc...@cybec.com.au wrote:

Thank you all for your comments. I did not know about bytecode caches. They're an interesting concept, but if I am interpreting the paper http://itst.net/654-php-on-fire-three-opcode-caches-compared correctly they only double the average speed of operation, which is rather less than I would have anticipated.

I strongly advise that you take the time to try a bytecode cache. Within linux environments I am partial to eaccelerator. In IIS environments I now use WinCache from Microsoft. From my own observations with a multitude of different types of PHP web applications I find that the speed gain is closer to 5 times faster on average.


Five times faster is certainly more attractive than twice as fast. But under what circumstances is this achieved? Unfortunately these days it is difficult to find any solid information on how things actually work, but my impression is that caches only work for pages which are frequently accessed. If this is correct, and (as I suspect) somebody looks at my website once an hour, the page will not be in the cache, so it won't help. Also one of the more popular parts of this website is my photo album, and for this much of the access time will be the download time of the photos. Furthermore as each visitor will look at a different set of photos, even with heavy access it is unlikely that any given photo would be in a cache.


A particular cache of bytecode is usually pushed out of memory when the 
configured maximum amount of memory for the bytecode cache is about to 
be exceeded. Additionally, the particular cache that gets eliminated is 
usually the oldest or least used cache. Given this, and your purported 
usage patterns, your pages will most likely remain in the cache until 
such time as you update the code or restart the webserver.



Despite these comments the access times for my websites seem to be pretty good -- certainly a lot better than many commercial websites -- but have a look at http://www.corybas.com/, and see what you think. (I am in the process of updating this, and know that the technical notes are not currently working, but there is plenty there to show you what I'm trying to do.)


I'm not disputing your good enough statistics. I'm merely asserting that 
a bytecode cache will resolve your concerns about file access times when 
your code is strewn across many compartmentalized files. In addition, I 
am advising that it is good practice to always install a bytecode cache. 
One of the first things I do when setting up a new system is to ensure I 
put an accelerator in place. Once it's in place, no matter how many 
pages or sub sites I put up, the accelerator is already in place and 
providing benefits.


Cheers,
Rob.
--
http://www.interjinn.com
Application and Templating Framework for PHP

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-08 Thread J Ravi Menon
Hi,

A note on bytecode caching and include/include_once performance. A while ago, when we were profiling our code, we noticed that file includes take a noticeable percentage of overall overhead (enough for us to look into it more deeply). We are using the APC cache on a standard LAMP platform (Linux 2.6 series, Apache 2.2.x and the PHP 5 series).

Our includes were using 'relative' paths (e.g. include_once '../common/somefile.inc' or include_once 'lib/somefuncs.inc'), and within the APC cache logic such relative paths are resolved to absolute paths via realpath() calls. This can be fairly file-system intensive (lots of syscalls like stat() and readlink() to resolve symlinks, etc.). APC uses the absolute path as the key into the opcode cache.

This gets worse if PHP has to find your files via the 'include_path' setting (and most of your library or common code is not in the first path component or so).

So from the APC cache's perspective, it is most efficient if your include paths are all absolute (the realpath() logic is skipped) -- e.g.:

include_once $BASE_DIR . '/common/somefile.inc';
include_once $BASE_DIR . '/lib/somefuncs.inc';

and so on, where '$BASE_DIR' could be set via an Apache SetEnv directive (and read back from $_SERVER['BASE_DIR']), or even hardcoded all over the place.
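A minimal sketch of that setup. The SetEnv value and the directory names here are illustrative assumptions, not the poster's actual configuration:

```php
<?php
// Apache config (assumed): SetEnv BASE_DIR /var/www/myapp
// Fall back to this script's directory if the variable isn't set.
$BASE_DIR = isset($_SERVER['BASE_DIR'])
    ? $_SERVER['BASE_DIR']
    : dirname(__FILE__);

// Absolute paths let APC key the opcode cache directly,
// skipping the realpath() resolution needed for relative paths.
// (These .inc files are assumed to exist under $BASE_DIR.)
include_once $BASE_DIR . '/common/somefile.inc';
include_once $BASE_DIR . '/lib/somefuncs.inc';
```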

There were other issues with include vs include_once and the APC cache, but I don't recall why there was a performance difference (with include alone, even with relative paths, performance was better, but managing the dependencies is too cumbersome).

Not sure how other bytecode caches handle relative paths, but I suspect they have to do something similar.

From a pure code-readability point of view, and for more automated dependency management (as close to compiled languages as possible), I do favor the include_once/require_once strategy with absolute paths. But it is not unheard of, to squeeze out maximal performance, to use one giant single 'include'. Sometimes this is done on production systems: a parser goes through and generates this big include file, ensures it is placed at the beginning of the main 'controller.php' (MVC model), and strips off all the other includes.
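A toy sketch of that generation step. The file names and layout are invented for illustration; a real generator would also respect definition order and dependencies:

```shell
#!/bin/sh
# Build one 'giant include' from many small PHP include files,
# keeping a single opening <?php tag at the top.
set -e
mkdir -p src build
printf '<?php\nfunction helper_a() { return 1; }\n' > src/a.inc
printf '<?php\nfunction helper_b() { return 2; }\n' > src/b.inc

echo '<?php' > build/all.inc
for f in src/*.inc; do
    # drop each file's own first-line <?php tag before appending
    tail -n +2 "$f" >> build/all.inc
done
grep -c '^function' build/all.inc   # prints 2
```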

Hope this helps in making your decision.

Ravi


On Fri, Jan 8, 2010 at 8:59 AM, Robert Cummings rob...@interjinn.com wrote:
 clanc...@cybec.com.au wrote:
 [...]

Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-08 Thread J Ravi Menon
Sorry, I forgot to mention that we used APC with apc.stat turned off, which gives a little more performance gain, but it does mean flushing the cache on every code push (which is trivial).

Ravi


On Fri, Jan 8, 2010 at 11:30 AM, J Ravi Menon jravime...@gmail.com wrote:
 Hi,

 A note on bytecode caching and include/include_once performance.
 [...]

Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-08 Thread Phpster



On Jan 8, 2010, at 10:44 AM, Andrew Ballard aball...@gmail.com wrote:

On Thu, Jan 7, 2010 at 10:48 PM, Robert Cummings  
rob...@interjinn.com wrote:

...
They almost always make your shit run faster.


You know they make medicine for that?   ;-)

Andrew

--



Tacos?

Bastien

Sent from my iPod 


--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-07 Thread Daniel Egeberg
On Thu, Jan 7, 2010 at 04:11, Daevid Vincent dae...@daevid.com wrote:
 I think it's a case by case basis. Generally File I/O is expensive, but
 then again, as you say, having everything in a couple files is also
 sub-optimal for organizing and keeping things modular.

That is easily sorted out using automated build tools like Phing, Ant
or GNU Make.

Either way, I wouldn't worry about the extra I/O unless you've got *a
lot* of traffic.

-- 
Daniel Egeberg

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-07 Thread clancy_1
On Wed, 6 Jan 2010 19:11:07 -0800, dae...@daevid.com (Daevid Vincent) wrote:

 

 [...]

Thank you all for your comments. I did not know about bytecode caches. They're an interesting concept, but if I am interpreting the paper http://itst.net/654-php-on-fire-three-opcode-caches-compared correctly they only double the average speed of operation, which is rather less than I would have anticipated.

As I would have to understand yet another system to implement them, and I suspect I'd have to do a significant amount of rearranging, I don't think I will worry about them unless my webpages unexpectedly become extremely popular.

Al's suggestion that my code is probably infinitesimal compared with PHP suggests that I shouldn't be worrying about memory requirements. On the other hand I agree with David that the advantages of using relatively small, easy-to-understand modules probably outweigh the costs of loading a larger number of files.


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-07 Thread clancy_1
On Wed, 06 Jan 2010 23:20:26 -0500, kolb0...@umn.edu (Daniel Kolbo) wrote:

Daevid Vincent wrote:
  
 
 [...]
 
 

I had a similar issue but with classes (not functions).
I opted to keep my classes short and succinct, rather than shoving all
the method functionality into one all-purpose class.
Instead of blindly loading all the little classes on each http request,
I used (and was recommended on this list to use) __autoload().  The
script would only load my classes if the individual request needed it.
This helped to avoid the memory bloat.  I've heard magic functions like
__autoload are a bit slower, but the code is so much cleaner b/c of it.

Also, an opcode cache as suggested previously would facilitate the rapid
include of many small files.

Unfortunately, php does not offer an __autoload() type function to
autoload functions.

If you are able to encapsulate your functions' functionality into classes, you may be able to use the above solution of combining an opcode cache with __autoload() for a bunch of small classes.

Although PHP doesn't offer an __autoload()-type function to autoload functions, it does provide function_exists(), and this can readily be used to achieve the same end:

if (!function_exists('feedback_handler')) { include('Feedback_handler.php'); }
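A sketch putting the two techniques side by side. The class name, file names, and directory convention are hypothetical, not from the thread:

```php
<?php
// Lazy class loading (PHP 5.0-era style; spl_autoload_register()
// is the more flexible option from PHP 5.1 onward).
function __autoload($className)
{
    // Hypothetical convention: one class per file under classes/.
    include dirname(__FILE__) . '/classes/' . $className . '.php';
}

// Lazy *function* loading via a function_exists() guard, since PHP
// has no autoloader for plain functions.
if (!function_exists('feedback_handler')) {
    include dirname(__FILE__) . '/Feedback_handler.php';
}

// Usage, assuming those files exist:
//   $handler = new FeedbackHandler();  // triggers __autoload('FeedbackHandler')
//   feedback_handler($handler);
```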

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-07 Thread Robert Cummings

clanc...@cybec.com.au wrote:

Thank you all for your comments. I did not know about bytecode caches. They're an interesting concept, but if I am interpreting the paper http://itst.net/654-php-on-fire-three-opcode-caches-compared correctly they only double the average speed of operation, which is rather less than I would have anticipated.


I strongly advise that you take the time to try a bytecode cache. Within 
linux environments I am partial to eaccelerator. In IIS environments I 
now use WinCache from Microsoft. From my own observations with a 
multitude of different types of PHP web applications I find that the 
speed gain is closer to 5 times faster on average.



As I would have to understand yet another system to implement them, and I suspect I'd have to do a significant amount of rearranging, I don't think I will worry about them unless my webpages unexpectedly become extremely popular.


That's your prerogative, but you started this thread with a question about file access times. By your latest argument (above) you may as well ignore it, since when and if the issue becomes salient then you can worry about it. However, I think that's disingenuous at best, since your pages will appear slower on average and you're just wasting CPU resources.



Al's suggestion that my code is probably infinitesimal compared with PHP suggests that I shouldn't be worrying about memory requirements. On the other hand I agree with David that the advantages of using relatively small, easy-to-understand modules probably outweigh the costs of loading a larger number of files.


You've missed several points. Bytecode caches allow you to skip the trip 
to the filesystem. They allow you to skip the parse and compile stage of 
PHP. They almost always make your shit run faster.


Cheers,
Rob.
--
http://www.interjinn.com
Application and Templating Framework for PHP

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-07 Thread clancy_1
On Thu, 07 Jan 2010 22:48:59 -0500, rob...@interjinn.com (Robert Cummings) 
wrote:

clanc...@cybec.com.au wrote:
 Thank you all for your comments. I did not know about bytecode caches. They're an interesting concept, but if I am interpreting the paper http://itst.net/654-php-on-fire-three-opcode-caches-compared correctly they only double the average speed of operation, which is rather less than I would have anticipated.

I strongly advise that you take the time to try a bytecode cache. Within 
linux environments I am partial to eaccelerator. In IIS environments I 
now use WinCache from Microsoft. From my own observations with a 
multitude of different types of PHP web applications I find that the 
speed gain is closer to 5 times faster on average.

Five times faster is certainly more attractive than twice as fast. But under what circumstances is this achieved? Unfortunately these days it is difficult to find any solid information on how things actually work, but my impression is that caches only work for pages which are frequently accessed. If this is correct, and (as I suspect) somebody looks at my website once an hour, the page will not be in the cache, so it won't help. Also one of the more popular parts of this website is my photo album, and for this much of the access time will be the download time of the photos. Furthermore as each visitor will look at a different set of photos, even with heavy access it is unlikely that any given photo would be in a cache.

Despite these comments the access times for my websites seem to be pretty good -- certainly a lot better than many commercial websites -- but have a look at http://www.corybas.com/, and see what you think. (I am in the process of updating this, and know that the technical notes are not currently working, but there is plenty there to show you what I'm trying to do.)

 As I would have to understand yet another system to implement them, and I suspect I'd have to do a significant amount of rearranging, I don't think I will worry about them unless my webpages unexpectedly become extremely popular.

That's your prerogative, but you started this thread with a question about file access times. By your latest argument (above) you may as well ignore it, since when and if the issue becomes salient then you can worry about it. However, I think that's disingenuous at best, since your pages will appear slower on average and you're just wasting CPU resources.

Unfortunately I am cursed with an insatiable curiosity, and spend far too much time thinking about philosophical questions like this. But I have also been programming almost forever, and I have learned that it would now take me five times as long as (I hope) it would take you to implement something new like bytecode caching.

 Al's suggestion that my code is probably infinitesimal compared with PHP suggests that I shouldn't be worrying about memory requirements. On the other hand I agree with David that the advantages of using relatively small easy to understand modules probably outweigh the costs of loading a larger number of files.

You've missed several points. Bytecode caches allow you to skip the trip 
to the filesystem. They allow you to skip the parse and compile stage of 
PHP. They almost always make your shit run faster.

Perhaps. But my visitors appear to be happy now, and my hosts aren't complaining that I'm overloading their system.

Clancy

PS. It does sadden me that there don't seem to be many people here who are interested in the philosophy of programming, as against the quick and dirty fix.


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



[PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-06 Thread Al



On 1/6/2010 7:18 PM, clanc...@cybec.com.au wrote:

I have a flexible program, which can do many different things according to the type of data it is fed.  Ideally the flexibility is achieved by calling different functions, though when the functionality is ill-defined I sometimes just include blocks of code.

Ideally, from the point of program maintenance, each module should not be too long -- preferably just a page or so. This doesn't raise problems in a compiled language, but in an interpreted language like PHP the programmer must decide whether to lump a whole lot of functions into a single large include file, or to include lots of little files as the particular functions are needed.

The first case can lead to memory bloat, as there are likely to be a lot of unused functions in memory on any given pass, whereas the second case may require lots of little files to be loaded.

Are there likely to be significant performance costs for either approach, and what are your feelings about the relative virtues of the two approaches?


It is highly unlikely you are going to create any significant memory bloat. Your code will likely be infinitesimal compared to PHP's own memory requirement.

I suggest three files: one with your configuration settings, so they are all in one place and easy to find and change; another with your functions; and a third containing the code for handling the internet interface. Obviously, the interface file controls everything by calling various functions as needed.


Al...

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



RE: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-06 Thread Daevid Vincent
 

 -Original Message-
 From: Al [mailto:n...@ridersite.org] 
 Sent: Wednesday, January 06, 2010 5:09 PM
 To: php-general@lists.php.net
 Subject: [PHP] Re: PHP programming strategy; lots of little 
 include files, or a few big ones?
 
 
 
 On 1/6/2010 7:18 PM, clanc...@cybec.com.au wrote:
  [...]

I think it's a case-by-case basis. Generally, file I/O is expensive, but
then again, as you say, having everything in a couple of files is also
sub-optimal for organizing and keeping things modular.

I suggest you go with smaller files that are organized into logical
'chunks'. For example, functions that are used frequently are grouped into
a common.inc.php rather than by topic (such as file/date/xml/forms/etc.),
and then use topical includes for the rest.

More importantly, I suggest you get a good caching system like memcached
or any of the others out there. Then you can pre-compile and load these
files and the whole point becomes close to moot.
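As a sketch of that layout (every file and function name below is hypothetical, not from this thread), the hot functions live in a common include that every request loads, while topical files load only on the code paths that need them:

```php
<?php
// Hypothetical sketch of the 'common plus topical' include layout.
// None of these files or functions come from the thread itself.

require_once 'common.inc.php';        // frequently used functions, always loaded

if (isset($_POST['upload'])) {
    require_once 'lib/file.inc.php';  // file helpers, only for upload requests
    handle_upload($_FILES);
}

if (isset($_GET['feed'])) {
    require_once 'lib/xml.inc.php';   // XML helpers, only for the feed page
    echo render_feed();
}
```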

ÐÆ5ÏÐ 
http://daevid.com

Some people, when confronted with a problem, think 'I know, I'll use
XML.'
Now they have two problems. 





Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-06 Thread Daniel Kolbo

I had a similar issue but with classes (not functions).
I opted to keep my classes short and succinct, rather than shoving all
the method functionality into one all-purpose class.
Instead of blindly loading all the little classes on each http request,
I used (and was recommended on this list to use) __autoload().  The
script would only load my classes if the individual request needed it.
This helped to avoid the memory bloat.  I've heard magic functions like
__autoload are a bit slower, but the code is so much cleaner because of it.

Also, an opcode cache as suggested previously would facilitate the rapid
include of many small files.

Unfortunately, PHP does not offer an __autoload()-style function for
autoloading plain functions.

If you are able to encapsulate your functions' functionality into classes,
you may be able to use the above solution of an opcode cache together with
__autoload() for a bunch of small classes.
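A minimal version of that lazy-loading pattern (the classes/ directory layout and the Cart class name are assumptions for illustration, not from Daniel's mail) might look like:

```php
<?php
// Minimal sketch of lazy class loading, one small class per file.
// The classes/ directory and the Cart class are illustrative assumptions.

function my_autoload($class)
{
    $file = 'classes/' . $class . '.php';
    if (is_file($file)) {
        require $file;
    }
}

// spl_autoload_register() (PHP >= 5.1.2) is generally preferred over
// defining a single global __autoload(), since several autoloaders
// can then coexist.
spl_autoload_register('my_autoload');

// No class files have been read yet; the first use of a class name
// triggers my_autoload(), so classes a request never touches are
// never parsed or held in memory.
var_dump(class_exists('Cart')); // true only if classes/Cart.php defines it
```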

hth,
dK




Re: [PHP] Re: PHP programming strategy; lots of little include files, or a few big ones?

2010-01-06 Thread Robert Cummings



Daniel Kolbo wrote:

Also, an opcode cache as suggested previously would facilitate the rapid
include of many small files.


They'll most likely already be loaded and compiled in memory. The 
filesystem probably won't even get hit.


Cheers,
Rob.
--
http://www.interjinn.com
Application and Templating Framework for PHP
