> On Nov 15, 2014, at 11:22 AM, Lerner, Steve <[email protected]> wrote:
> 
> Leif,
>  
> Thanks for the response. What we are going for here is gazillions of tiny 
> images- 76KB average size.
>  
> We’ll try tweaking average object size… what we’d love to do is just have ATS 
> read from disk only and have minimal to zero RAM at all… with no swapping of 
> course :)

Don’t set it too high, or you will run out of what we call directory entries
(think: inodes). When that happens, things can either go bad, or you simply
won’t be able to use your disks’ full capacity :).
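
Back-of-the-envelope, assuming roughly one directory entry per stored object
and (if memory serves) about 10 bytes of RAM per entry:

    entries  ~= total_cache_size / average_object_size_setting
    dir RAM  ~= entries * ~10 bytes

So a setting larger than your real average means you run out of entries before
you run out of disk, while a setting smaller than reality just burns RAM on
entries you will never fill.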

I’d start by setting it to 32KB and see if that can hold your dataset. That
would reduce your memory consumption (and startup time) by a factor of 4. That
works out to roughly 27GB of RAM for the indices.
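
For reference, the knob I mean is the average object size setting in
records.config. A minimal sketch, assuming the setting is named the way I
remember it (double-check against your version’s docs):

    CONFIG proxy.config.cache.min_average_object_size INT 32768

If I remember right, changing this re-sizes the cache directory, so plan for
the cache to come back cold after the restart.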

Also remember to leave headroom for connections. Active connections consume
more memory than inactive ones. This is an area where we do poorly on
configuration management as well; hopefully we’ll introduce better connection
pooling and resource recycling going forward.
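
If you want a hard ceiling while you size the box, the connection throttle in
records.config is the blunt instrument for it (again, setting name from memory,
so verify against your release):

    CONFIG proxy.config.net.connections_throttle INT 30000

That caps the total number of open connections (client and origin side
combined, as I recall), which in turn bounds the per-connection memory.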


>  
> Old school CDN style- our object library is so massive that this would work 
> for us- and as we all know it’s better to serve from disk closer to the user 
> than to go over the network back to origin.

Cool. Also remember that even going to origin (cache miss) over a CDN can be
much faster for the end-user, because you retain long-lived, high-capacity
connections with large negotiated window sizes between your CDN edge nodes and
your origin nodes. One thing to consider is turning off slow-start restart on
your CDN nodes.
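
On Linux that is a sysctl. A minimal sketch, assuming a reasonably modern
kernel on the edges:

    # keep the congestion window on long-lived edge-to-origin connections
    # instead of collapsing it back after the connection goes idle
    net.ipv4.tcp_slow_start_after_idle = 0

Drop that into /etc/sysctl.conf (or a file under /etc/sysctl.d/) and apply it
with sysctl -p to make it stick.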

Exciting to have you guys on board!

— Leif
