I'm struggling to find real numbers or guidelines on this, so I'm hoping to borrow from community experience instead of authoring my own tests. The scenario is theoretical: I have a huge web site with about 1 million distinct "pages" (URLs) and hundreds of thousands of page serves per day.
On each page serve I need to match the URL to a value and do something with it. Assume a highly capable server (e.g. 128 GB of RAM or more, two quad-core Xeons, and standard 15k RPM drives), with both the database and the application server running on the same box and sharing the same hardware. How would the following two approaches compare at this scale, in terms of performance and efficiency (theoretically, unless you have personal experience at this scale, in which case: CALL ME)?

1. Cache a single struct in the application scope whose keys are the URLs and whose values are the values I need, then read it on each request with application.struct[ url ] (see the sketch below).

2. Look up the URL in a database table on each request.

Thanks a million.
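For concreteness, here's a minimal sketch of what I mean by option 1, with option 2 as the lazy-load fallback on a cache miss. The table name urlMap, the urlValue column, the myDSN datasource, and the application.urlCache struct (initialized as an empty struct in onApplicationStart) are placeholder names for illustration only:

<cffunction name="getUrlValue" access="public" returntype="string" output="false">
    <cfargument name="theUrl" type="string" required="true">
    <cfset var q = "">

    <!--- Fast path: the URL is already in the application-scope cache --->
    <cfif structKeyExists( application.urlCache, arguments.theUrl )>
        <cfreturn application.urlCache[ arguments.theUrl ]>
    </cfif>

    <!--- Slow path: a single indexed lookup against the mapping table --->
    <cfquery name="q" datasource="myDSN">
        SELECT urlValue FROM urlMap
        WHERE url = <cfqueryparam value="#arguments.theUrl#" cfsqltype="cf_sql_varchar">
    </cfquery>

    <cfif q.recordCount>
        <!--- Cache the hit so later requests skip the database entirely;
              at high concurrency you may want a named cflock around this write --->
        <cfset application.urlCache[ arguments.theUrl ] = q.urlValue>
        <cfreturn q.urlValue>
    </cfif>

    <cfreturn "">
</cffunction>

Back-of-envelope: a CFML struct is essentially a Java hash map, so the cached path is an in-memory O(1) probe, while the query path pays a driver round trip plus an index lookup on every request. The cost of the struct is RAM, roughly a few hundred megabytes for a million short keys and values, which a 128 GB box absorbs easily.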
