Leif,

Thanks for the response. What we are going for here is gazillions of tiny images - 76KB average size.
We'll try tweaking average object size… what we'd love to do is just have ATS read from disk only and use minimal to zero RAM… with no swapping, of course ☺ Old-school CDN style - our object library is so massive that this would work for us, and as we all know it's better to serve from disk close to the user than to go over the network back to origin.

-Steve

Steve Lerner | Sr. Member of Technical Staff, Network Engineering | M 212 495 9212 | [email protected] | Skype: steve.lerner

From: Leif Hedstrom [mailto:[email protected]]
Sent: Saturday, November 15, 2014 12:10 PM
To: [email protected]
Subject: Re: proxy.config.cache.ram_cache.size query from eBay

On Nov 13, 2014, at 4:40 PM, Lerner, Steve <[email protected]> wrote:

Hi gang,

Phil Sorber referred me to this list. We are setting up clusters of Apache Traffic Server to beef up the front end of our image services, which are… large in terms of volume… to say the least. We hope to be big users of ATS and a strong reference customer, so any help is appreciated!

Our first test cluster consists of 23 machines: Ubuntu 12.04, 2x Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz, 128G RAM, 95T disk.

That is a lot of disk :)

With default settings, you would consume roughly 110GB of RAM just for the indices. The calculation is (95*10^12 / 8000) * 10. Take comfort that with Squid you would use roughly 10x as much (128 bytes per index entry). But you have three options:

1) Increase the records.config setting for average object size. That is the 8000 number above. Doing so means you can store fewer objects in the cache.
2) Buy more RAM.
3) Reduce disk capacity on each box.

I thought we had a wiki entry on this subject?

Cheers,

-- Leif

Here is our query: We are setting records.config as:

CONFIG proxy.config.cache.ram_cache.size INT 64G

But we find that Traffic Server ignores this limit and grows at the default rate of 1MB RAM per 1GB of disk. Example of a current process:

traffic_line -r proxy.config.cache.ram_cache.size

returns 68,719,476,736, which is 64GB - correct! But looking at the process:

86050 nobody 20 0 108g 102g 4912 S 54 81.3 1523:33 /ebay/local/trafficserver/bin/traffic_server -M --httpport 80:fd=7

So basically we've set the process to only consume 64GB, but it's consuming 108GB… Does anyone have any ideas on why this happens, or a way to fix it? We want constrained RAM but tons of disk - we'd much rather have the cache serve from disk than start swapping RAM.

Thanks in advance,
Steve

Steve Lerner | Sr. Member of Technical Staff, Network Engineering | M 212 495 9212 | [email protected] | Skype: steve.lerner
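For anyone following along, here is a rough sketch (in Python, just for illustration) of the arithmetic Leif outlines above: directory RAM ≈ (disk bytes / average object size) * bytes per directory entry, assuming the ~10 bytes per entry figure from his reply. It also shows what raising the average-object-size setting to the real ~76KB average would do to the estimate - these numbers are approximations, not measured values.

# Back-of-the-envelope estimate of ATS cache-directory (index) RAM,
# per the formula in Leif's reply: (disk_bytes / avg_object_size) * bytes_per_entry.

DISK_BYTES = 95 * 10**12      # 95T of cache disk per box (from this thread)
BYTES_PER_DIR_ENTRY = 10      # approximate per-entry cost quoted by Leif

def index_ram_gib(avg_object_size_bytes):
    """Approximate directory RAM in GiB for a given average object size."""
    entries = DISK_BYTES / avg_object_size_bytes
    return entries * BYTES_PER_DIR_ENTRY / 2**30

print(index_ram_gib(8000))    # ~110 GiB with the default 8000-byte setting
print(index_ram_gib(76000))   # ~12 GiB if the setting matches the real ~76KB average

If I recall correctly, the records.config knob Leif is referring to is proxy.config.cache.min_average_object_size (default 8000), so the change would look something like:

CONFIG proxy.config.cache.min_average_object_size INT 76000

Please double-check the exact setting name against your ATS version's records.config documentation before relying on it.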
