baricsz opened a new issue, #8998:
URL: https://github.com/apache/trafficserver/issues/8998

   I'm trying to set up an ATS reverse proxy cache in front of our services. I noticed that ATS writes a huge amount of data to disk even after requesting a resource that results in a `TCP_REFRESH_HIT` cache state. It looks like it rewrites the whole cache storage.
   
   Environment
   * Ubuntu 22.04-based Docker image
   * ATS 9.1.2
   * 100 GB cache storage file (the same thing happens when using a raw disk); the storage setup is sketched below
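   
   A minimal sketch of that setup – the path below is a placeholder, not the exact value from my `storage.config`:
   ```
   # storage.config – a single 100 GB cache file on a regular filesystem
   /var/cache/trafficserver 100G
   ```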
   
   Attached is the access log; only the health-check endpoint was requested before/after `index.html`:
   ```
   id=firewall time="2022-08-03 10:27:03" fw=8e51c4744445 pri=6 proto=http 
duration=0.000 sent=198 rcvd=106 src=127.0.0.1 dst=0 dstname=localhost user=- 
op=GET arg="_ats_hc" result=200 ref="-" agent="curl/7.81.0" cache=TCP_MISS
   id=firewall time="2022-08-03 10:27:33" fw=8e51c4744445 pri=6 proto=http 
duration=0.000 sent=198 rcvd=106 src=127.0.0.1 dst=0 dstname=localhost user=- 
op=GET arg="_ats_hc" result=200 ref="-" agent="curl/7.81.0" cache=TCP_MISS
   id=firewall time="2022-08-03 10:27:48" fw=8e51c4744445 pri=6 proto=http 
duration=0.080 sent=2261 rcvd=225 src=172.22.18.3 dst=[x.x.x.x] 
dstname=[origin] user=- op=GET arg="index.html" result=200 ref="-" 
agent="curl/7.68.0" cache=TCP_REFRESH_HIT
   id=firewall time="2022-08-03 10:28:03" fw=8e51c4744445 pri=6 proto=http 
duration=0.000 sent=198 rcvd=106 src=127.0.0.1 dst=0 dstname=localhost user=- 
op=GET arg="_ats_hc" result=200 ref="-" agent="curl/7.81.0" cache=TCP_MISS
   id=firewall time="2022-08-03 10:28:33" fw=8e51c4744445 pri=6 proto=http 
duration=0.000 sent=198 rcvd=106 src=127.0.0.1 dst=0 dstname=localhost user=- 
op=GET arg="_ats_hc" result=200 ref="-" agent="curl/7.81.0" cache=TCP_MISS
   ```
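   
   For reference, the requests were made roughly like this (the `_ats_hc` path comes straight from the access log; host and port are placeholders for my proxy endpoint):
   ```
   # health-check endpoint polled by the monitoring loop
   curl -s http://localhost:8080/_ats_hc
   # the only cached resource
   curl -s http://localhost:8080/index.html
   ```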
   
   The cache contained only `index.html`, yet it seems to consume an extra 8192 bytes of cache storage after each request (there is no other alternate in the cache).
   
   ```JSON
   "proxy.process.cache.read_per_sec": "0.000000",
   "proxy.process.cache.write_per_sec": "2.000400",
   "proxy.process.cache.KB_read_per_sec": "0.000000",
   "proxy.process.cache.KB_write_per_sec": "4096.819336",
   "proxy.process.cache.bytes_used": "16384",
   "proxy.process.cache.bytes_total": "107240243200",
   "proxy.process.cache.ram_cache.total_bytes": "133906432",
   "proxy.process.cache.ram_cache.bytes_used": "8192",
   "proxy.process.cache.ram_cache.hits": "1",
   "proxy.process.cache.ram_cache.misses": "1",
   
   "proxy.process.cache.volume_0.bytes_used": "16384",
   "proxy.process.cache.volume_0.bytes_total": "107240243200",
   "proxy.process.cache.volume_0.ram_cache.total_bytes": "133906432",
   "proxy.process.cache.volume_0.ram_cache.bytes_used": "8192",
   "proxy.process.cache.volume_0.ram_cache.hits": "1",
   "proxy.process.cache.volume_0.ram_cache.misses": "1",
   ```
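   
   For anyone reproducing this, the same counters can be read with `traffic_ctl` (the command below is just an example and assumes a default install):
   ```
   # list all cache-related metrics
   traffic_ctl metric match 'proxy.process.cache'
   ```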
   
   Disk writes start ~30-60 seconds later:
   ![2022-08-03 12_34_11-Node Exporter Full - Grafana](https://user-images.githubusercontent.com/18755466/182784760-fe9e5e49-e54f-4c92-aa57-ab07522685f8.png)
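   
   (The graph above is from node_exporter/Grafana; the same write burst is also visible locally, e.g. with:)
   ```
   # extended per-device I/O stats, refreshed every 5 seconds
   iostat -x 5
   ```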
   
   Is this intended, or is something wrong with the environment / configuration?
   
   Thanks
   

