Here are some numbers from our production servers, from a couple of different 
groups. We are running a heavily modified version of 7.1.2.

One of our production servers for our CDN:
proxy.process.cache.read.success 6037258
proxy.process.cache.read.failure 13845799

Here is another, from a different group:
proxy.process.cache.read.success 5575072
proxy.process.cache.read.failure 26784750
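
For comparison, those counters work out to failure rates far above the 3% I
mention below; a quick check with awk, assuming rate = failure / (success +
failure):

```shell
# First server: failure / (success + failure)
awk 'BEGIN { printf "%.1f%%\n", 13845799 / (6037258 + 13845799) * 100 }'
# prints 69.6%

# Second server
awk 'BEGIN { printf "%.1f%%\n", 26784750 / (5575072 + 26784750) * 100 }'
# prints 82.8%
```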

I talked to another company that is also running 8.0.7, and they were seeing 
only about a 3% failure rate on their cache. I created an issue for this: 
https://github.com/apache/trafficserver/issues/6713
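
If anyone else wants to compare numbers: on our boxes the output of 
`traffic_ctl metric match` looks like the success/failure pairs above, so a 
small awk over that dump gives the rate directly (sketch below; the 
/tmp/metrics.txt sample is just Eddi's volume_0 counters, quoted further 
down, pasted into a file):

```shell
# Sample metrics dump in the "name value" form traffic_ctl emits on our hosts
cat > /tmp/metrics.txt <<'EOF'
proxy.process.cache.volume_0.read.success 5566
proxy.process.cache.volume_0.read.failure 22084
EOF

# failure rate = failure / (success + failure)
awk '/read.success/ { s = $2 }
     /read.failure/ { f = $2 }
     END { printf "read failure rate: %.1f%%\n", f / (s + f) * 100 }' /tmp/metrics.txt
```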

-Bryan

> On Apr 26, 2020, at 4:32 PM, edd! <[email protected]> wrote:
> 
> Hi,
> 
> I'm new to ATS; I compiled 8.0.7 from source yesterday on CentOS 7:
> # source /opt/rh/devtoolset-7/enable
> # ./configure --enable-experimental-plugins
> # make && make install
> I'm testing it as a transparent forward proxy serving ~500 users with HTTP 
> caching enabled. I first tried raw cache storage and then a volume file, but 
> in both cases I got many read failures and a few write failures on a 100 GB 
> SSD partition.
> 
> "proxy.process.cache.volume_0.bytes_used": "3323351040",
> "proxy.process.cache.volume_0.bytes_total": "106167836672",
> "proxy.process.cache.volume_0.ram_cache.total_bytes": "12884901888",
> "proxy.process.cache.volume_0.ram_cache.bytes_used": "6062080",
> "proxy.process.cache.volume_0.ram_cache.hits": "4916",
> "proxy.process.cache.volume_0.ram_cache.misses": "1411",
> "proxy.process.cache.volume_0.pread_count": "0",
> "proxy.process.cache.volume_0.percent_full": "3",
> "proxy.process.cache.volume_0.lookup.active": "0",
> "proxy.process.cache.volume_0.lookup.success": "0",
> "proxy.process.cache.volume_0.lookup.failure": "0",
> "proxy.process.cache.volume_0.read.active": "1",
> "proxy.process.cache.volume_0.read.success": "5566",
> "proxy.process.cache.volume_0.read.failure": "22084",
> "proxy.process.cache.volume_0.write.active": "8",
> "proxy.process.cache.volume_0.write.success": "5918",
> "proxy.process.cache.volume_0.write.failure": "568",
> "proxy.process.cache.volume_0.write.backlog.failure": "272",
> "proxy.process.cache.volume_0.update.active": "1",
> "proxy.process.cache.volume_0.update.success": "306",
> "proxy.process.cache.volume_0.update.failure": "4",
> "proxy.process.cache.volume_0.remove.active": "0",
> "proxy.process.cache.volume_0.remove.success": "0",
> "proxy.process.cache.volume_0.remove.failure": "0",
> "proxy.process.cache.volume_0.evacuate.active": "0",
> "proxy.process.cache.volume_0.evacuate.success": "0",
> "proxy.process.cache.volume_0.evacuate.failure": "0",
> "proxy.process.cache.volume_0.scan.active": "0",
> "proxy.process.cache.volume_0.scan.success": "0",
> "proxy.process.cache.volume_0.scan.failure": "0",
> "proxy.process.cache.volume_0.direntries.total": "13255088",
> "proxy.process.cache.volume_0.direntries.used": "9230",
> "proxy.process.cache.volume_0.directory_collision": "0",
> "proxy.process.cache.volume_0.frags_per_doc.1": "5372",
> "proxy.process.cache.volume_0.frags_per_doc.2": "0",
> "proxy.process.cache.volume_0.frags_per_doc.3+": "625",
> "proxy.process.cache.volume_0.read_busy.success": "4",
> "proxy.process.cache.volume_0.read_busy.failure": "351",
> 
> Disk read/write tests:
> hdparm -t /dev/sdb1
> /dev/sdb1:
>  Timing buffered disk reads: 196 MB in  3.24 seconds =  60.54 MB/sec
> 
> hdparm -T /dev/sdb1
> /dev/sdb1:
>  Timing cached reads:   11662 MB in  1.99 seconds = 5863.27 MB/sec
> 
> dd if=/dev/zero of=/cache/test1.img bs=1G count=1 oflag=dsync   
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB) copied, 67.6976 s, 15.9 MB/s
> 
> dd if=/dev/zero of=/cache/test2.img bs=512 count=1000 oflag=dsync   
> 1000+0 records in
> 1000+0 records out
> 512000 bytes (512 kB) copied, 0.374173 s, 1.4 MB/s
> 
> Please help,
> 
> Thank you,
> Eddi
