mlibbey commented on issue #8984:
URL: https://github.com/apache/trafficserver/issues/8984#issuecomment-1198363397

   My understanding is that it would be very painful. As I understand it ([here is a good doc about the cache](https://docs.trafficserver.apache.org/developer-guide/cache-architecture/architecture.en.html)), objects are not stored in contiguous blocks. They are interleaved with whatever other objects were being written at the same time, and the directory (index) is what records their locations. Once the index entry is gone, the locations are gone (much like an `rm`). So I'd think you'd need to walk the disk: if you still know the bytes of the original object (and of any alternates), it's probably a straightforward block-by-block search. If you only know the original hash, it's a lot worse -- you'd need to gather all the fragments and start concatenating and testing, and since you don't know which fragment is the start, the number of combinations grows quickly. The problem gets worse if the disk is still in use, because the cache keeps changing underneath you.

  So it'd probably be better to encrypt the disk and cycle the key on every purge (making the entire disk complete gibberish at once), or (much slower) do the equivalent of running `shred` against the disk, or go the hardware route: exclusively use a RAM disk and just reboot the machine instead of purging.
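  To make the "walk the disk" case concrete, here is a minimal sketch of the easy variant, where you still know some bytes of the original object. It just scans a raw device in overlapping windows and reports offsets where a known fragment appears; the device path, fragment file, and 4 MiB window size are all assumptions for illustration, nothing ATS-specific, and reading a raw block device will generally require root.

  ```python
  #!/usr/bin/env python3
  """Sketch: scan a raw cache volume for a known fragment of the original
  object. Illustrates the block-by-block search idea only; it knows nothing
  about the ATS cache layout."""

  import sys

  CHUNK = 4 * 1024 * 1024  # read the device in 4 MiB windows (assumed size)


  def scan(device_path: str, needle: bytes):
      """Yield byte offsets in the device where `needle` occurs."""
      overlap = max(len(needle) - 1, 0)
      with open(device_path, "rb") as dev:
          buf = b""
          base = 0  # device offset of buf[0]
          while True:
              chunk = dev.read(CHUNK)
              if not chunk:
                  break
              buf += chunk
              start = 0
              while True:
                  hit = buf.find(needle, start)
                  if hit == -1:
                      break
                  yield base + hit
                  start = hit + 1
              # keep only a tail shorter than the needle, so a match split
              # across two reads is still found without double-reporting
              keep = buf[-overlap:] if overlap else b""
              base += len(buf) - len(keep)
              buf = keep


  if __name__ == "__main__":
      # Usage (hypothetical): scan_cache.py /dev/sdb1 known_fragment.bin
      device, fragment_file = sys.argv[1], sys.argv[2]
      with open(fragment_file, "rb") as f:
          needle = f.read()
      for off in scan(device, needle):
          print(f"possible fragment at byte offset {off}")
  ```

  The hash-only case doesn't reduce to a scan like this: you'd have to reassemble candidate fragment orderings and hash each one, which is where the combinatorial blow-up above comes from.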

