That Marc Esher is something else, isn't he? :)

On Thu, Mar 12, 2009 at 7:20 PM, David McGuigan <[email protected]> wrote:

> Excellent advice, Marc. Thanks for doing my prototyping for me ;).
>
>
> On Thu, Mar 12, 2009 at 5:16 PM, Marc Esher <[email protected]> wrote:
>
>>
>> a million-key struct doesn't sound like that big a deal to me,
>> especially with 128 gigs of RAM. regardless, you could write the code
>> simply enough that it doesn't matter which approach you choose:
>> hide it all behind a cache, and let the cache settings control the
>> behavior.
>>
>> here's a back of the napkin approach:
>>
>> function getURL(ID){
>>    if (ID does not exist in urlstruct){
>>          fetch URL from DB
>>          addToURLStruct(ID, URL)
>>    }
>>    return readFromURLStruct(ID);
>> }
>>
>> function addToURLStruct(ID, URL){
>>     urlstruct[ID] = structnew();
>>     urlstruct[ID].url = URL;
>>     urlstruct[ID].count = 0;
>>     urlstruct[ID].lastaccessed = now();
>>     reap();
>> }
>>
>> function readFromURLStruct(ID){
>>    urlstruct[ID].count = urlstruct[ID].count + 1;
>>    urlstruct[ID].lastaccessed = now();
>>    return urlstruct[ID].url;
>> }
>>
>> function reap(){
>>     based on a policy param passed in when the cache was created, just
>> delete any keys that are either old (least recently used, LRU) or
>> infrequently used (LFU) once the cache is over its cap
>> }
>>
>>
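>> fleshing that napkin out into something closer to real cfscript --
>> here's an untested sketch of what a urlcache.cfc could look like.
>> fetchURLFromDB() is just a placeholder for whatever lookup query you've
>> got, and there's no locking here (you'd want cflock or similar before
>> trusting it under real load):
>>
>> function init(){
>>     variables.urlstruct = structNew();
>>     variables.policy = "LRU";      // default; overridable via setPolicy()
>>     variables.cachecap = 1000000;  // default; overridable via setCacheCap()
>>     return this;
>> }
>>
>> function getURL(ID){
>>     if (NOT structKeyExists(variables.urlstruct, arguments.ID)){
>>         // cache miss: hit the db once, then cache the result
>>         addToURLStruct(arguments.ID, fetchURLFromDB(arguments.ID));
>>     }
>>     return readFromURLStruct(arguments.ID);
>> }
>>
>> function addToURLStruct(ID, URL){
>>     var entry = structNew();
>>     entry.url = arguments.URL;
>>     entry.count = 0;
>>     entry.lastaccessed = now();
>>     variables.urlstruct[arguments.ID] = entry;
>>     reap();
>> }
>>
>> function readFromURLStruct(ID){
>>     variables.urlstruct[arguments.ID].count =
>>         variables.urlstruct[arguments.ID].count + 1;
>>     variables.urlstruct[arguments.ID].lastaccessed = now();
>>     return variables.urlstruct[arguments.ID].url;
>> }
>>
>> function reap(){
>>     // evict entries one at a time until we're back under the cap.
>>     // a linear scan per eviction is dumb, but fine for a sketch.
>>     var keys = "";
>>     var victim = "";
>>     var i = 0;
>>     while (structCount(variables.urlstruct) GT variables.cachecap){
>>         keys = structKeyArray(variables.urlstruct);
>>         victim = keys[1];
>>         for (i = 2; i LTE arrayLen(keys); i = i + 1){
>>             if (variables.policy EQ "LRU" AND
>>                 variables.urlstruct[keys[i]].lastaccessed LT
>>                 variables.urlstruct[victim].lastaccessed){
>>                 victim = keys[i];  // least recently used so far
>>             } else if (variables.policy EQ "LFU" AND
>>                 variables.urlstruct[keys[i]].count LT
>>                 variables.urlstruct[victim].count){
>>                 victim = keys[i];  // least frequently used so far
>>             }
>>         }
>>         structDelete(variables.urlstruct, victim);
>>     }
>> }
>>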
>> so this way, you create this cache component initially with a cap of a
>> million or whatever, and you use JMeter to whip up a quick load test.
>> See how it goes. if it starts bringing down the server, back the cap
>> down to half a million and rerun your tests.
>>
>> do that till you get the performance you need under
>> way-heavier-than-expected load.
>>
>>
>> your "client" code that uses this cache will only ever do something
>> like this:
>>
>> Application.cfc, in onApplicationStart():
>>
>> application.urlcache = createObject("component", "urlcache")
>>     .init().setPolicy("LRU").setCacheCap(1000000);
>>
>> then wherever you need to use it:
>>
>> myurl = application.urlcache.getURL(someID);
>>
>> now, let's say you put this into production and it starts behaving
>> very badly.  simple.... you build in the ability to reset the cache
>> cap in an admin screen or something, and that screen just calls
>> application.urlcache.setCacheCap(1000); and then setCacheCap() would
>> be responsible for kicking out any struct keys that would've been
>> reaped out based on the LRU or LFU policy (whichever you choose).
>>
>> So you can keep your server humming by changing this on the fly.
>> should this happen (which it won't, b/c you're likely to have caught it
>> when you did your load tests with JMeter), you'll just have a number of
>> users hitting the database when they get the URL, which was one of your
>> options anyway. So this gives you the ability to have it both ways.
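>>
>> for completeness, the chained setters from the snippet above could look
>> like this (untested) -- the whole trick is returning "this" so the
>> calls chain, and having setCacheCap() reap immediately:
>>
>> function setPolicy(policy){
>>     variables.policy = arguments.policy;  // "LRU" or "LFU"
>>     return this;
>> }
>>
>> function setCacheCap(cachecap){
>>     variables.cachecap = arguments.cachecap;
>>     // reap right away so lowering the cap on a live server kicks
>>     // out the overflow immediately instead of on the next add
>>     reap();
>>     return this;
>> }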
>>
>> the other thing is that if you hide all this behind a simple cache
>> component, it'd be reasonably trivial to swap the internal
>> implementation out for memcached, and your calling code would be none
>> the wiser.
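>>
>> for instance, with a java memcached client jar on coldfusion's
>> classpath (spymemcached is one; this is untested and the names are
>> just for illustration), addToURLStruct() could delegate like so and
>> let memcached handle eviction for you:
>>
>> function initMemcached(){
>>     var addrUtil = createObject("java", "net.spy.memcached.AddrUtil");
>>     variables.memcached = createObject("java",
>>         "net.spy.memcached.MemcachedClient")
>>         .init(addrUtil.getAddresses("localhost:11211"));
>> }
>>
>> function addToURLStruct(ID, URL){
>>     // set(key, expirySeconds, value) -- 0 means no expiry; memcached
>>     // does its own LRU eviction when it runs out of room
>>     variables.memcached.set(arguments.ID, javaCast("int", 0),
>>         arguments.URL);
>> }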
>>
>>
>> you could also opt not to use a plain struct but instead use a java
>> soft reference cache (look in the ColdBox codebase for examples), in
>> which case the cache would in effect become memory sensitive.
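>>
>> the soft reference flavor is only a few lines -- wrap each value in a
>> SoftReference and treat a cleared reference as a cache miss (untested
>> sketch; addSoftRef()/getSoftRef() are just illustrative names):
>>
>> function addSoftRef(ID, URL){
>>     // the JVM is free to clear a SoftReference under memory pressure
>>     variables.urlstruct[arguments.ID] = createObject("java",
>>         "java.lang.ref.SoftReference").init(arguments.URL);
>> }
>>
>> function getSoftRef(ID){
>>     var val = 0;
>>     if (NOT structKeyExists(variables.urlstruct, arguments.ID)){
>>         addSoftRef(arguments.ID, fetchURLFromDB(arguments.ID));
>>     }
>>     val = variables.urlstruct[arguments.ID].get();
>>     // get() returns null once the referent is collected; coldfusion
>>     // sees a java null as an undefined variable, so treat that as a
>>     // miss too and refetch
>>     if (NOT isDefined("val")){
>>         val = fetchURLFromDB(arguments.ID);  // placeholder lookup
>>         addSoftRef(arguments.ID, val);
>>     }
>>     return val;
>> }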
>>
>> Something to think about, at any rate. Good luck!
>>
>> marc
>>
>>
>> On Thu, Mar 12, 2009 at 6:41 PM, denstar <[email protected]> wrote:
>> >
>> > On Thu, Mar 12, 2009 at 3:52 PM, David McGuigan wrote:
>> > ...
>> >> Assuming I have a highly-capable server (ex: 128GB of RAM or more
>> >> and two quad core Xeons with standard 15k RPM drives) and that both
>> >> the database and the application servers are running on the same box
>> >> and sharing the same hardware...
>> >
>> > Ouch.  If you need to have redundancy, one server isn't going to cut
>> > it, especially if you've got appserver/db/webserver(?) running on the
>> > same box.
>> >
>> > Servers are pretty cheap these days ($4,500-$5,500), so IMHO I'd go
>> > for a "tiered" approach, meaning, split out your app server(s), your
>> > db server(s), and your webserver(s).
>> >
>> > Virtual Machines can do some of this for you, which means you don't
>> > need as many physical boxes, but you still want to have at least a
>> > couple physical boxes, in case of hardware problems (ideally you'd
>> > want the physical servers in different locations, even, in the Perfect
>> > World-- or get some super-host type deal, who does all this crap for
>> > you;]).
>> >
>> > JBoss has some pretty tits caching stuff built right in, much of it
>> > geared for High Availability, so you might want to look into that, and
>> > a lot of people are buzzing about "cloud" computing (Amazon, etc.),
>> > which might be a slick option, depending on your context.  Twitter
>> > has been pretty open about how they've scaled things, and about
>> > trying to use the cloud (latency was an issue, IIRC)... looking at
>> > how others have approached the problem can only help.
>> >
>> > A lot of it depends on your context tho (what kind of content you
>> > mostly serve, the amount of control you need, amount of cache
>> > updating, as it were, etc.), so... well.  Eh.
>> >
>> > I'm no expert, so take all this with a grain of salt-- if there is
>> > one thing I know, it's that there are many means to the ends.
>> >
>> > Well, that, and JMeter ROCKS!  ;)
>> >
>> > --
>> > (He's got the monkeys, let's see the monkeys)
>> > - Aladdin (Prince Ali song)
>> >
>> >
>>
>>
>>
>
>


-- 
“Come to the edge, he said. They said: We are afraid. Come to the edge, he
said. They came. He pushed them and they flew.”

- Guillaume Apollinaire
