On this thread, what tools can expert users recommend for testing trafficserver
capacity (e.g. requests/sec)? Is httperf a good choice?

Tri

On Fri, Jun 10, 2011 at 11:03 AM, John Plevyak <[email protected]> wrote:

>
> There is also a question of RAM hits vs. non-RAM hits.  RAM hits incur no
> seeks.  Miss writes are aggregated, so misses are constrained by disk write
> bandwidth.  Non-RAM hits require seeks (approx. 1 seek/MB), and that is
> what typically constrains performance for those operations.
>
> Unless you have mostly RAM hits, a large number of disks, or very little
> CPU, you will probably be disk- or network-constrained.
>
> I use a synthetic server with new URLs for misses and select from a
> "hotset" for hits, which is sized either to fit in RAM or not, depending
> on the type of test.
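The hotset approach John describes can be sketched as follows. This is a minimal illustration, not code from the thread; the URL patterns and parameter names are made up:

```python
import itertools
import random

def make_request_stream(hotset_size, hit_fraction, n, seed=0):
    """Yield n URLs: with probability hit_fraction, pick from a fixed
    hotset (candidate cache hits); otherwise emit a brand-new URL,
    which is a guaranteed cache miss."""
    rng = random.Random(seed)
    hotset = [f"/hot/{i}" for i in range(hotset_size)]
    fresh = (f"/miss/{i}" for i in itertools.count())  # never repeats
    for _ in range(n):
        if rng.random() < hit_fraction:
            yield rng.choice(hotset)
        else:
            yield next(fresh)
```

Per John's point, you would size `hotset_size` so the hotset either fits in RAM (to exercise the RAM-hit path) or does not (to exercise the disk-hit path).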
>
> More sophisticated techniques often use a Zipf distribution, although there
> is some controversy over how well that models actual traffic.   You could
> also use logs to build a synthetic request stream which better models your
> traffic, but then network delay issues and peculiarities (dropped packets,
> MTU, etc.)  could be modeled as well and you are down the rabbit hole.
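A Zipf-like popularity curve, as mentioned above, can be sampled with the standard library alone; this is a hedged sketch (the rank-based weighting and URL names are illustrative, not from the thread):

```python
import random

def zipf_sampler(urls, s=1.0, seed=0):
    """Return a function that samples URLs with probability
    proportional to 1/rank**s, a Zipf-like distribution.
    The first URL in the list is treated as the most popular."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** s) for rank in range(1, len(urls) + 1)]
    def sample():
        return rng.choices(urls, weights=weights, k=1)[0]
    return sample
```

Larger `s` concentrates traffic on the head of the list; `s` near 0 flattens the distribution toward uniform, which is one axis of the modeling controversy John mentions.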
>
>
> cheers,
> john
>
>
> On Fri, Jun 10, 2011 at 10:17 AM, sridhar basam <[email protected]> wrote:
>
>>
>> On Fri, Jun 10, 2011 at 9:54 AM, Mike Partridge <[email protected]> wrote:
>>
>>> Is there an easy method to artificially vary the cache hit/miss ratio
>>> that people would recommend?  I am currently just generating more random
>>> content than can be cached by ATS.
>>> This is what I was in the process of doing, but was curious if there was
>>> a better method others may have used.  I am trying to do this to
>>> benchmark ATS at different cache hit/miss ratios.
>>
>>
>> Hit/miss rates are determined by cache size and the fraction of incoming
>> requests that are cacheable. Using a combination of the two, you should
>> be able to vary the cache hit/miss rate.
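Sridhar's two knobs combine multiplicatively to a first approximation. A back-of-envelope sketch (the steady-state assumption here is mine, not from the thread: every cacheable request to an object resident in cache hits, and everything else misses):

```python
def expected_hit_rate(cacheable_fraction, resident_fraction):
    """Rough steady-state hit rate, assuming cacheable requests to
    resident objects always hit and all other requests miss.

    cacheable_fraction: share of incoming requests that are cacheable
    resident_fraction:  share of cacheable requests whose object fits
                        in the cache (driven by cache size vs. working set)
    """
    return cacheable_fraction * resident_fraction
```

So with 80% cacheable traffic and a cache holding half the working set, you would expect roughly a 40% hit rate; shrinking the cache or adding uncacheable responses lowers it, which is exactly the lever being suggested.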
>>
>>  Sridhar
>>
>
>
