Jean, great idea. That's exactly the strategy I was planning on.
Side benefit: if particular URLs are only accessed once every X hours/days,
they cost nothing until they're actually requested (new content included).
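For the record, here's roughly what I'm picturing -- just a sketch, where the
"urls" table, its "content_id" column, and the "myDSN" datasource are
placeholders for whatever your actual schema and DSN are, and cgi.path_info
stands in for however you key your URLs:

<!--- Lazy-populate the application.url struct: only URLs that actually get
      requested ever hit the database or take up memory --->
<cfif NOT structKeyExists(application, "url")>
    <cfset application.url = structNew()>
</cfif>

<cfif structKeyExists(application.url, cgi.path_info)>
    <!--- Cache hit: we've seen this URL before, no DB trip needed --->
    <cfset contentId = application.url[cgi.path_info]>
<cfelse>
    <!--- Cache miss: look it up once, then remember the result --->
    <cfquery name="qUrl" datasource="myDSN">
        SELECT content_id
        FROM   urls
        WHERE  url = <cfqueryparam value="#cgi.path_info#"
                                   cfsqltype="cf_sql_varchar">
    </cfquery>
    <cfif qUrl.recordCount>
        <cfset application.url[cgi.path_info] = qUrl.content_id>
        <cfset contentId = qUrl.content_id>
    </cfif>
</cfif>

(In production I'd also wrap the writes to application.url in a named cflock,
but I left that out to keep the sketch readable.)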



On Thu, Mar 12, 2009 at 5:13 PM, Jean Moniatte <[email protected]> wrote:

> Hello,
>
> I would not dismiss up front the option of a database lookup on each
> request. 1 million records is not that much, and with proper indexing you
> will get good performance. Loading it all into memory seems like overkill
> and will probably take some time when the application starts.
>
> Or maybe start with an empty application.url struct and populate it from
> the database as pages are requested. If the URL is found in the struct, use
> that; if not, look up the database and populate the application.url struct
> with your findings.
>
> Hope it helps.
>
> Thanks,
> Jean
>
> --
> Jean Moniatte
> UGAL
> http://www.ugal.com/
> [email protected]
> --
>
>
>
> On Thu, Mar 12, 2009 at 3:41 PM, denstar <[email protected]> wrote:
>
>>
>> On Thu, Mar 12, 2009 at 3:52 PM, David McGuigan wrote:
>> ...
>> > Assuming I have a highly-capable server ( ex: 128GB of RAM or more and
>> > two quad core Xeons with standard 15k RPM drives ) and that both the
>> > database and the application servers are running on the same box and
>> > sharing the same hardware...
>>
>> Ouch.  If you need to have redundancy, one server isn't going to cut
>> it, especially if you've got appserver/db/webserver(?) running on the
>> same box.
>>
>> Servers are pretty cheap these days ($4,500-$5,500), so IMHO I'd go
>> for a "tiered" approach, meaning, split out your app server(s), your
>> db server(s), and your webserver(s).
>>
>> Virtual Machines can do some of this for you, which means you don't
>> need as many physical boxes, but you still want to have at least a
>> couple physical boxes, in case of hardware problems (ideally you'd
>> want the physical servers in different locations, even, in the Perfect
>> World-- or get some super-host type deal, who does all this crap for
>> you;]).
>>
>> JBoss has some pretty tits caching stuff built right in, much of it
>> geared for High Availability, so you might want to look into that, and
>> a lot of people are buzzing about "cloud" computing (Amazon, etc.),
>> which might be a slick option, depending on your context.  Twitter has
>> been pretty open about how they've scaled things and about trying to use
>> the cloud (latency was an issue, IIRC)... looking at how others have
>> approached the problem can only help.
>>
>> A lot of it depends on your context tho (what kind of content you
>> mostly serve, the amount of control you need, amount of cache
>> updating, as it were, etc.), so... well.  Eh.
>>
>> I'm no expert, so take all this with a grain of salt-- if there is
>> one thing I know, it's that there are many means to the same end.
>>
>> Well, that, and JMeter ROCKS!  ;)
>>
>> --
>> (He's got the monkeys, let's see the monkeys)
>> - Aladdin (Prince Ali song)
>>
>>
>>
>
>

