rob05c edited a comment on pull request #6017:
URL: https://github.com/apache/trafficcontrol/pull/6017#issuecomment-879348855


   > With any kind of timed cache, isn't there a chance that a cache will end 
up using stale data to generate its config and clear its update status?
   
   In theory. In practice, with a 1s cache, it's extremely unlikely. The 
operator would have to make a change, then queue updates, and then `t3c` would 
have to get the update status, determine that it needs to update, and start 
fetching data, all in under a second.
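   The stale-read window the question asks about can be seen in a minimal timed cache. This is a generic sketch in Python, not Traffic Ops' actual cache code; the class and its 1-second TTL are illustrative only:

   ```python
   import time

   class TimedCache:
       """Minimal TTL cache: serves the cached value until it is older
       than ttl_seconds, then refetches. Reads inside the TTL window can
       return data that is up to ttl_seconds stale."""

       def __init__(self, ttl_seconds=1.0):
           self.ttl = ttl_seconds
           self.value = None
           self.fetched_at = None

       def get(self, fetch):
           now = time.monotonic()
           if self.fetched_at is None or now - self.fetched_at >= self.ttl:
               self.value = fetch()
               self.fetched_at = now
           return self.value
   ```

   A second read within the TTL returns the old value even if the backing data has changed in the meantime, which is exactly the window in which a config run could see stale update status.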
   
   We could handle that race by adding the cache time to the Update Status 
endpoint, and then adding code to `t3c` to read it and wait that long before 
requesting data. But again, I think the race is extremely unlikely. It's 
basically impossible with a human clicking "queue"; the only way I could see it 
happening would be if an automated script were making the change and queueing, 
and `t3c` were also running frequently enough to statistically land in the same 
second (which isn't possible today, because TO can't handle that many 
requests per second; but this PR may make it possible for people to run `t3c` 
that often).
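   The mitigation described above could be sketched like this, assuming a hypothetical `cacheTime` field (in seconds) added to the Update Status response. The field name and response shape are illustrative, not the real API:

   ```python
   import time

   def wait_out_cache_window(update_status, sleep=time.sleep):
       """Before fetching config data, wait out the server's advertised
       cache TTL (hypothetical 'cacheTime' field, seconds) so that any
       cached copy of the data has expired. Returns the TTL waited."""
       ttl = update_status.get("cacheTime", 0)
       if ttl > 0:
           sleep(ttl)
       return ttl
   ```

   The `sleep` parameter is injectable so the wait can be stubbed out in tests; servers that don't advertise a cache time cause no delay.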
   
   That said, we need to modify the `/update_status` endpoint soon anyway, to 
fix a different and very common race: we queue, start getting data, someone 
else queues while we're fetching and generating config, and then we clear the 
update flag, losing their queue. We need to add the queue time to that 
endpoint, so `t3c` can determine whether another queue happened while config 
was being applied and, if so, leave the update or reval flag set. I wouldn't 
object to adding the cache time to the endpoint at the same time, which should 
take trivially little extra dev time.
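   That queue-time check could look something like this (a hypothetical helper, not actual `t3c` code): clear the flag only if the last queue happened at or before the moment this config run started.

   ```python
   from datetime import datetime, timedelta

   def should_clear_update_flag(queued_at, apply_started_at):
       """Return True only if no new queue happened after this config run
       started. A queue time later than our start time means someone
       re-queued while we were generating config, so the update flag must
       stay set and the new config run must happen."""
       return queued_at <= apply_started_at
   ```

   With the queue time exposed by `/update_status`, `t3c` can record its own start time, apply config, and then make this comparison instead of unconditionally clearing the flag.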


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
