alamb commented on code in PR #542:
URL: https://github.com/apache/arrow-rs-object-store/pull/542#discussion_r2511168910


##########
src/client/token.rs:
##########
@@ -33,15 +35,17 @@ pub(crate) struct TemporaryToken<T> {
 /// [`TemporaryToken`] based on its expiry
 #[derive(Debug)]
 pub(crate) struct TokenCache<T> {
-    cache: Mutex<Option<(TemporaryToken<T>, Instant)>>,
+    cache: ArcSwapOption<CacheEntry<T>>,

Review Comment:
   So in general, unless we have benchmark results that show arc-swap is 
necessary, I am opposed to adding a new dependency.
   
   Did you try an `RwLock` before reaching for a new crate? I always worry 
about adding new crates like [arc-swap](https://crates.io/crates/arc-swap), 
as I don't want to have to deal with a RUSTSEC report if/when it becomes 
abandoned.
   
   I do see the crate has many other users, though.
   
   An `RwLock` would allow multiple concurrent readers, but if you had a lot 
of writers you might still have contention. If you find update contention is 
too much, you could change to `RwLock<Arc<..>>` so that the lock only needs 
to be held long enough to clone an `Arc` (see the sketch below).
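
   To make that concrete, here is a minimal sketch of the shape I have in 
mind. The types and field names are only illustrative (the real 
`TemporaryToken` / `TokenCache` in this file look a bit different); the 
point is that the guard is dropped as soon as the `Arc` is cloned, so the 
lock is never held across any real work:

   ```rust
   use std::sync::{Arc, RwLock};
   use std::time::Instant;

   /// Illustrative stand-in for the crate's token type.
   pub(crate) struct TemporaryToken<T> {
       pub token: T,
       pub expiry: Option<Instant>,
   }

   /// Sketch of the cache: readers and writers both hold the lock only
   /// long enough to clone or swap the `Arc`.
   pub(crate) struct TokenCache<T> {
       cache: RwLock<Option<Arc<(TemporaryToken<T>, Instant)>>>,
   }

   impl<T> TokenCache<T> {
       /// Clone the cached entry out from under a briefly held read lock.
       fn cached(&self) -> Option<Arc<(TemporaryToken<T>, Instant)>> {
           self.cache.read().unwrap().clone()
       }

       /// Publish a fresh entry; the write lock is held only for the swap.
       fn store(&self, token: TemporaryToken<T>, fetched_at: Instant) {
           *self.cache.write().unwrap() = Some(Arc::new((token, fetched_at)));
       }
   }
   ```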
   
   I understand the docs for arc-swap 
(https://docs.rs/arc-swap/latest/arc_swap/) claim:
   
   > Better option would be to have 
[RwLock<Arc<T>>](https://doc.rust-lang.org/std/sync/struct.RwLock.html). Then 
one would lock, clone the 
[Arc](https://doc.rust-lang.org/nightly/alloc/sync/struct.Arc.html) and unlock. 
This suffers from CPU-level contention (on the lock and on the reference count 
of the [Arc](https://doc.rust-lang.org/nightly/alloc/sync/struct.Arc.html)) 
which makes it relatively slow. Depending on the implementation, an update may 
be blocked for arbitrary long time by a steady inflow of readers.
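
   For comparison, the lock-free read path that quote is describing would 
look roughly like the following with `ArcSwapOption` (again with 
illustrative names; `load_full` / `store` are the arc-swap calls, the rest 
is just a sketch):

   ```rust
   use std::sync::Arc;

   use arc_swap::ArcSwapOption;

   /// Sketch of the arc-swap shape: no lock, just an atomic pointer cell.
   struct TokenCache<T> {
       cache: ArcSwapOption<T>,
   }

   impl<T> TokenCache<T> {
       /// `load_full` clones the `Arc` out of the cell without blocking.
       fn cached(&self) -> Option<Arc<T>> {
           self.cache.load_full()
       }

       /// Atomically publish a new entry.
       fn store(&self, entry: T) {
           self.cache.store(Some(Arc::new(entry)));
       }
   }
   ```

   Both versions end up doing a cheap `Arc` clone in the hot path; the 
difference is only whether a lock is taken around it.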
   
   
   I would imagine the overhead of actually using the token (making an HTTP 
request) is pretty huge compared to getting a lock.
   
   
   


