Leif,

For this eval config:

I am referring to Full Clustering 
https://docs.trafficserver.apache.org/en/latest/admin/cluster-howto.en.html

We have two of these, 11 machines each.

AND we are using load balancing to ‘stripe’ URLs across the 22 machines, so 
each one only gets a fixed, named ‘range’ of URLs, e.g. A-B goes on machine 1, 
C-D on machine 2, etc…

The clustering should prevent duplicate objects despite the load 
balancing.

Make sense?
Thoughts?

-Steve


Steve Lerner | Sr. Member of Technical Staff, Network Engineering | M 212 495 
9212 | [email protected] | Skype: steve.lerner

From: Leif Hedstrom [mailto:[email protected]]
Sent: Friday, November 21, 2014 11:46 AM
To: [email protected]; Lerner, Steve
Subject: Re: proxy.config.cache.ram_cache.size query from eBay


On Nov 21, 2014, at 9:31 AM, Lerner, Steve <[email protected]> wrote:

Thanks for the tips - right now with average object size set to 32K and RAM set 
to 64G it’s running very nicely… with the 1% of traffic we are interested in (we 
have to stay in test until after the retail season) we get a 5% cache hit rate.
We’ll try ramping up traffic in January after the retail season, and then things 
will get interesting. I’ll report in with our stats as we ramp up. We have 2x 
clusters at 11 machines each, with URLs striped across all 22 by URL hashing.


Btw, Phil pointed out to me that I might have been confused about what you 
define as clustering. ATS has its own “clustering” feature (named just that), 
which does CARP-like proxy-to-proxy sharing of the caches. What it lets you do 
is increase your effective storage size by caching each URL on only one box. 
This is the feature that is also not particularly well understood or 
supported :-).

Is that what you mean when you say you have a “cluster” of ATS? Or just a 
cluster in general as in 11 boxes behind some sort of load balancing mechanism? 
The latter is something that is well understood and supported.

Cheers,

— Leif



-Steve


From: Phil Sorber [mailto:[email protected]]
Sent: Thursday, November 20, 2014 9:11 PM
To: [email protected]<mailto:[email protected]>
Subject: Re: proxy.config.cache.ram_cache.size query from eBay

On Sat Nov 15 2014 at 2:23:13 PM Lerner, Steve <[email protected]> wrote:

Leif,

Thanks for the response. What we are going for here is gazillions of tiny 
images - 76 KB average size.

We’ll try tweaking average object size… what we’d love to do is just have ATS 
read from disk only and use minimal to zero RAM… with no swapping of 
course ☺

If you want, you can set that RAM cache size setting to 0 to disable it. I 
think you will see a noticeable slowdown, though, unless your RAM cache has a 
0% CHR.

You will still have memory usage from the directory and other objects, however.


Old school CDN style - our object library is so massive that this would work for 
us - and as we all know it’s better to serve from disk close to the user than to 
go over the network back to origin.

-Steve


From: Leif Hedstrom [mailto:[email protected]]
Sent: Saturday, November 15, 2014 12:10 PM

To: [email protected]<mailto:[email protected]>
Subject: Re: proxy.config.cache.ram_cache.size query from eBay


On Nov 13, 2014, at 4:40 PM, Lerner, Steve <[email protected]> wrote:

Hi gang- Phil Sorber referred me to this list.

We are setting up clusters of Apache Traffic Server to beef up the front end of 
our image services, which are… large in terms of volume… to say the least.
We hope to be big users of ATS and a strong reference customer - so any 
help is appreciated!
Our first test cluster consists of 23 machines: Ubuntu 12.04, 2x Intel 
Xeon E5-2670 v2 @ 2.50GHz, 128 GB RAM, 95 TB disk.


That is a lot of disk :) With default settings, you would consume nearly 120GB 
of RAM just for the indices. The calculation is

   (95*10^12 / 8000) * 10
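
Spelling that arithmetic out (a quick sketch; the 10 is an assumed ~10 bytes 
per directory entry, and 8000 is the default average object size):

```python
# Rough sketch of the directory RAM estimate above (assumes ~10 bytes
# per directory entry and the 8000-byte default average object size).
disk_bytes = 95 * 10**12            # 95 TB of cache disk
avg_object_size = 8000              # records.config default
bytes_per_entry = 10                # approximate ATS directory entry size

entries = disk_bytes // avg_object_size
directory_ram_gb = entries * bytes_per_entry / 10**9
print(directory_ram_gb)             # ~119 GB, close to the rough figure above
```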


Take comfort that with Squid you would use over 10x as much (128 bytes per 
index entry). But you have three options:

1) increase the records.config setting for average object size. That is the 
8000 number above. Doing so means you can store fewer objects in the cache.

2) buy more RAM

3) reduce disk capacity on each box
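
For option 1, the records.config knob is proxy.config.cache.min_average_object_size 
(the 8000 above is its default). As a hypothetical sketch, raising it to 32 KB 
would cut the directory RAM to roughly a quarter:

```
CONFIG proxy.config.cache.min_average_object_size INT 32768
```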

I thought we had a wiki entry on this subject?

Cheers,

-- Leif

Here is our query:

We are setting records.config as: CONFIG proxy.config.cache.ram_cache.size INT 
64G

But we find that trafficserver ignores this limit and grows at the default rate 
of 1MB RAM / 1GB disk.
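
As a quick sketch of what that growth rate implies at this disk size (assuming 
the 1 MB per 1 GB figure holds linearly):

```python
# What the observed ~1 MB RAM per 1 GB disk growth rate implies
# for a 95 TB cache disk (a back-of-the-envelope sketch).
disk_gb = 95_000                  # 95 TB expressed in GB
ram_mb_per_disk_gb = 1            # observed growth rate
extra_ram_gb = disk_gb * ram_mb_per_disk_gb / 1000
print(extra_ram_gb)               # 95.0 GB on top of the RAM cache setting
```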

Example of a current process:

traffic_line -r proxy.config.cache.ram_cache.size returns 68,719,476,736, 
which is exactly 64GB - correct!

But looking at the process:

86050 nobody    20   0  108g 102g 4912 S   54 81.3   1523:33 
/ebay/local/trafficserver/bin/traffic_server -M --httpport 80:fd=7

So basically we’ve set the process to only consume 64GB but it’s consuming 108GB…

Does anyone have any ideas on why this happens or a way to fix it?
We want constrained RAM but tons of disk - we’d much rather have the 
cache serve from disk than start swapping RAM.

Thanks in advance,

Steve

