2009/4/7 Bastien Koert <phps...@gmail.com>:
> On Tue, Apr 7, 2009 at 1:10 PM, Michael A. Peters <mpet...@mac.com> wrote:
>> PJ wrote:
>>> Bob McConnell wrote:
>>>> From: PJ
>>>>> First, let me thank you all for responding and offering suggestions. I
>>>>> appreciate it and I am learning things.
>>>>> However, it looks like my message is not getting across:
>>>>> The problem is not to retrieve only the authors whose last names begin
>>>>> with A:
>>>> Actually, it appears you simply don't like the accurate answers you have
>>>> been given.
>>> First, let me say that I am pretty fresh to all this.
>>> Second, I believe I can get the authors with several more queries, as I
>>> have done for the listings of all the books and by categories, but I am
>>> trying to limit the queries, thinking that will speed up the data
>>> retrieval.
>> A friend of mine who manages many large-scale websites with massive
>> databases says that isn't always the case, especially if you don't have a
>> dedicated SQL server with very fast disks and lots of memory.
>> He's found that in many situations it is faster to do several SQL queries
>> and let PHP sort it out than to use a bunch of joins, subselects, etc. in
>> order to reduce the number of SQL queries.
>> It has to do with how SQL works on the filesystem and the I/O that can
>> result from a more complex query; what is fastest varies with your setup.
>> I think he said subselects are the worst because the server has to create
>> a temporary table for the subselect, and that can really slow the query
>> down, but I might be mistaken about that.
>> Thus unless he has a problem application that is way too slow on hardware
>> he can't upgrade, he opts for what is easier code to read and maintain.
>> Sometimes that's faster than hard to read queries anyway.
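The "several queries, let PHP sort it out" approach can be sketched like this. A minimal example, assuming you've already fetched the rows of two simple queries into associative arrays (the table/column names `books`, `book_author`, etc. in the comments are made up for illustration):

```php
<?php
// Two cheap queries instead of one JOIN, e.g.:
//   SELECT id, title FROM books
//   SELECT book_id, last_name FROM book_author
// Here the result sets are hard-coded for the sake of the sketch.
$books = array(
    array('id' => 1, 'title' => 'PHP Basics'),
    array('id' => 2, 'title' => 'SQL in a Nutshell'),
);
$authors = array(
    array('book_id' => 1, 'last_name' => 'Smith'),
    array('book_id' => 1, 'last_name' => 'Jones'),
    array('book_id' => 2, 'last_name' => 'Adams'),
);

// Index the author rows by book_id ...
$byBook = array();
foreach ($authors as $a) {
    $byBook[$a['book_id']][] = $a['last_name'];
}
// ... then attach them to each book in a single pass.
foreach ($books as &$b) {
    $b['authors'] = isset($byBook[$b['id']]) ? $byBook[$b['id']] : array();
}
unset($b);
```

The join happens in PHP in O(n) over the two result sets, and each query stays simple enough for the database to satisfy from an index.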
> Simple queries will almost always be faster than a large join. Join query
> response times can be affected by the order of the join if the primary
> table is not the largest one, among other factors.
Well .. this really depends. If we are speaking of large-scale sites
with a lot of traffic, you must recognize that you just move the
payload to another server; in this case, the webserver.
Take Apache as an example: a process takes up as much RAM as the
"biggest" page it serves, so if a few pageviews require the
application to parse a lot of data, all Apache processes will use a
large amount of RAM (I've seen systems with 120 MB per process!).
High traffic requires a lot of processes, and when your webserver
starts to swap because the RAM is filled up, you're in deep shit, and
you'd wish you had a normalized DB layout and efficient queries.
Setting up a DB slave might be better than buying a lot of webservers. ;)
Also, the DB server is built for what you're trying to accomplish; the
webserver running PHP and serving pages .. well, maybe that server would
spend its CPU time in another way if it had the chance. Whatever.
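The back-of-the-envelope math for the prefork scenario above looks roughly like this (the directives are real Apache 2.2 prefork ones; the numbers are made-up examples): ~4 GB of RAM divided by ~120 MB per process leaves headroom for only about 30 workers before the box starts to swap.

```apache
# httpd.conf sketch, prefork MPM -- illustrative numbers only
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           30    # ~4 GB / ~120 MB per process
    MaxRequestsPerChild 500    # recycle bloated processes
</IfModule>
```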
> Another thing to consider here is that the data is relatively static.
> Perhaps you could build an XML representation on the first display of the
> book and pay the piper once in building the data. From there, build an XML
> snippet and store it in the database, perhaps in the main books table. Then
> on future calls to display that book's data, request the XML and use it to
> display all the data. That way you get the search functionality you are
> looking for and a one-time hit to build that data into one common format.
> That would make future reads of that data much quicker.
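Bastien's build-once-cache-in-a-column idea might be sketched like this with SimpleXML. The book data and field names are made up for illustration; in practice `$xml` would be written to, say, a TEXT column of the main books table and simply SELECTed on later page views:

```php
<?php
// Build the XML snippet once, on the first display of the book.
$book = array(
    'id'      => 42,
    'title'   => 'PHP Basics',
    'authors' => array('Smith', 'Jones'),
);

$node = new SimpleXMLElement('<book/>');
$node->addAttribute('id', $book['id']);
$node->addChild('title', $book['title']);
$list = $node->addChild('authors');
foreach ($book['authors'] as $name) {
    $list->addChild('author', $name);
}
$xml = $node->asXML();
// Cache $xml in the books row; later requests render from it
// instead of re-running the joins.
```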
> Cat, the other other white meat
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php