> > i can point you to PHP gurus if you need them, though.
>
> It would be good for me to gather more info about that option
> now. I wasn't aware that PHP had taken the step to work as a
> server module. Last time I used it I don't think it worked that
> way (1996). If you have any good links handy, send em my way,
> plz, otherwise I'll just see what the SE's bring me.


one of the guys on my team wrote the book (literally.. on
advance from the publisher and everything) on hooking PHP into
Apache.   his name is Charles Fisher, and with a little digging
you can probably find some of the articles he's done online.
i'll let him know you're looking for resources and pass along
whatever he sends back.



> > specifically why do you need database caching and precompilation
> > of templates?   obviously, they're both related to performance
> > speed, but what's their value in absolute terms?
>
> How do I define the absolute terms? For the database caching, CF
> keeps connections to the database open so many users can share
> the same connection. I think they call it connection pooling,
> but I'm not sure. End result, better performance and less
> overhead. As far as precompilation of templates, I don't know
> what the absolute benefit is, other than the claimed performance
> gain, which comes as a combination of not having to keep parsing
> the same templates and not having to do as much file IO.


that's about what i expected.. my asking of the question was
Socratic (aka: being a provoking jerk).  ;-)   the point to
carry away from the whole deal is that the features are supposed
to pay for themselves in performance.   the difference is that
performance is the realm of measurement, and features are the
realm of marketing.   i've seen one too many pieces of
electronic gear with a "mode standby indicator" (otherwise known
as an LED that tells you the machine is turned *off*, of all
things) to believe that any meaningful information can be
extracted from user documentation.
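for what it's worth, the connection-pooling idea quoted above is
easy to sketch.   this is a toy in Python, not CF's actual
mechanism -- the `slow_connect` stand-in and the pool size are
made up for illustration:

```python
import queue
import time

class ConnectionPool:
    """Keep a fixed set of database connections open and hand them
    out to requests, instead of opening a fresh one per request."""

    def __init__(self, connect, size=5):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())   # pay the setup cost once, up front

    def acquire(self):
        return self._pool.get()         # blocks if every connection is busy

    def release(self, conn):
        self._pool.put(conn)            # hand it back for the next request

# stand-in for a real DB driver's connect(); opening a socket and
# authenticating typically costs far more than running one query
def slow_connect():
    time.sleep(0.05)
    return object()

pool = ConnectionPool(slow_connect, size=2)
conn = pool.acquire()
# ... run queries on conn ...
pool.release(conn)
```

the win is exactly what the quote says: the expensive part
(opening and authenticating the connection) happens once at
startup instead of once per page hit.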



> > would an
> > Apache/PHP server that can deliver the same number of pages in
> > the same amount of time at equal or lower processing load be
> > adequate?
>
> "Is that the case", is the question. Where do I get the answers?


the more pertinent question is, "does solution X have the power
to handle what i want to do?"   close on its heels would be "if
i go with solution X and it doesn't pan out, am i SOL?"   my
personal opinion is that unix/Apache gives you better odds in
both areas than NT/IIS.   it's also less expensive to
test/reject unix and move to NT than it is to go the other
direction.

when it comes to server performance, the only rule which applies
across the board is "your mileage may vary".   benchmarks and
reviews are nice, but unless you duplicate everything in the
test conditions faithfully, you may see widely different
results.   if you want hard numbers about what will work with
your hardware and load profile, you need to build and run your
own tests.
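"build and run your own tests" doesn't have to mean anything
fancy.   a rough harness like this (the dummy workload stands in
for whatever actually fetches a representative page from your
server) gives you the two numbers you care about:

```python
import time

def benchmark(fetch, n=100):
    """Time n calls to fetch() and report throughput and latency.
    fetch stands in for a real request, e.g. an HTTP GET of a
    representative page on your own hardware."""
    start = time.perf_counter()
    for _ in range(n):
        fetch()
    elapsed = time.perf_counter() - start
    return {"requests": n,
            "seconds": elapsed,
            "req_per_sec": n / elapsed,
            "avg_ms": 1000 * elapsed / n}

# dummy workload standing in for a real page fetch
result = benchmark(lambda: sum(range(1000)), n=50)
print(f"{result['req_per_sec']:.0f} req/s, {result['avg_ms']:.2f} ms avg")
```

run it against your own box, with your own load profile, and you
have numbers that actually apply to you -- which is more than any
published benchmark can claim.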

the good news is that the hard numbers are almost completely
irrelevant unless they show a truly massive difference between
two alternatives.   benchmarks are strictly a "let's try it and
see what happens" tool.   they're great as a diagnostic for
maintenance purposes, but that's about it.   the ability to
predict future performance in a new environment based on
previous benchmarks has an accuracy rating somewhere between
flipping a coin and reading tea leaves.


in broad terms, almost any hardware/OS/software configuration
you're likely to find these days will be adequate for low to
high-moderate loads.   the technology is so ridiculously
overpowered relative to bandwidth that it's hard to screw up a
machine badly enough that it won't provide adequate minimal
performance.

this is a religious issue for me, actually.   i know enough
about server performance that i start getting vehement when i
hear most people talking about it.   the simple fact of the
matter is that in a webserver, the motherboard is a fancy
interface between the network card and the hard drive.

operating strictly on processing power, a 25MHz 386 can choke a
T1.   today's low-end consumer PC has roughly the same crunching
power as the first Cray supercomputer, so forget the digital
side of the equation.   there's so much overkill that any more is
meaningless.
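the T1 claim is just arithmetic.   back-of-envelope, with an
assumed 10KB average page size (pick your own number):

```python
# how much work does it take to saturate a T1?
T1_BITS_PER_SEC = 1_544_000          # T1 line rate
bytes_per_sec = T1_BITS_PER_SEC / 8  # ~193 KB/s, ignoring protocol overhead

avg_page_bytes = 10 * 1024           # assumed average static page size
pages_per_sec = bytes_per_sec / avg_page_bytes
print(f"{bytes_per_sec/1024:.0f} KB/s -> ~{pages_per_sec:.0f} pages/s fills the pipe")
```

call it twenty-odd pages a second.   even a 25MHz CPU has
hundreds of thousands of cycles to burn per page at that rate;
the pipe fills long before the processor breaks a sweat.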

the primary bottleneck in server performance is drive latency.
it doesn't matter how many megahertz you have on the board when
your drive seek time is measured in milliseconds.   it also
doesn't matter how many simultaneous server threads you can run
if they all have to stand in line waiting for the drive.

if you want to kick the performance of a webserver through the
roof, make sure your files stay in RAM.   it cuts out about
99.999% of the access overhead.   the best solution is to put
everything in a RAMdisk, but that's not always feasible.   the
next best option is to increase the size and block size of your
file buffers.   with a little tuning, you can get one or two
orders of magnitude improvement just with that.   bottom line, a
webserver with more than one CPU is like a fish with a pair of
Air Jordans.   if you're going to buy anything, buy RAM and know
how to use it.
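the "keep files in RAM" point reduces to a cache in front of the
disk.   real servers get this from the OS buffer cache (that's
what the buffer-size tuning buys you), but a toy version shows
the shape of it:

```python
# toy in-memory file cache: repeat hits never touch the disk.
# the OS buffer cache does this for real; tuning buffer size and
# count just makes it do it better.
_cache = {}

def read_file(path):
    try:
        return _cache[path]           # RAM hit: no disk access at all
    except KeyError:
        with open(path, "rb") as f:
            data = f.read()           # disk miss: pay the latency once
        _cache[path] = data
        return data
```

first hit pays the seek; every hit after that is a dictionary
lookup.   that's the whole trick, whether it's a RAMdisk, bigger
file buffers, or a cache like this one.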

at the drive level itself, if you're using anything less than
SCSI2, you shouldn't be talking about performance.   granted a
well-tuned IDE/EDO system can prod some serious buttock, but
that's not off-the-shelf technology these days.   for really
phenomenal load levels, or seriously large numbers of files
which all see roughly equal demand, you can make good use of a
RAID array.   if you're not working under those conditions,
though, you can probably save the money.


when it comes to the OS, the big question is, "how much
redundant, unnecessary crap does it carry, and how much of it
can you turn off?"   aside from the ability to tune things like
the filesystem block size and file buffers, the next largest
waste of server performance is the latency due to unwanted
processes chewing up CPU time.

that's one of the areas where NT scores worst, in my book.
aside from the fact that there's no capacity to tune the
low-level system parameters, there's a bunch of gunk in the
middleware that can't be eliminated.   the OS is welded shut, so
if Microsoft wants it to support a wide range of different uses,
they have to build in hooks for all of them.   for any given
use, the OS is maintaining a load of unnecessary services that
are pure waste.   that overhead is the price of flexibility, but
the price is unnecessary if you have a well-defined purpose for
the machine.

obviously, unix is at the opposite end of the scale.   you can
decide exactly what should and should not be included in the
kernel, and define system parameters as you choose.   you have
complete control over the boot process, so once you know what
you're doing, it's easy to create a system with exactly the
capacities and services you want.







mike stone  <[EMAIL PROTECTED]>   'net geek..
been there, done that,  have network, will travel.



____________________________________________________________________
--------------------------------------------------------------------
 Join The NEW Web Consultants Association FORUMS and CHAT:
   Register Today at: http://just4u.com/forums/
Web Consultants Web Site : http://just4u.com/webconsultants
---------------------------------------------------------------------
