On Thu, 8 May 2025 08:59:59 GMT, Viktor Klang <[email protected]> wrote:
>> We logged several bugs on the LinkedBlockingDeque. This aggregates them into
>> a single bug report and PR.
>>
>> 1. LinkedBlockingDeque does not immediately throw InterruptedException in
>> put/take
>>
>> The LinkedBlockingDeque does not behave consistently with other concurrency
>> components. If we call putFirst(), putLast(), takeFirst(), or takeLast()
>> with a thread that is interrupted, it does not immediately throw an
>> InterruptedException the way ArrayBlockingQueue and LinkedBlockingQueue
>> do, because we call lock() instead of lockInterruptibly(). It will only
>> throw an InterruptedException if the queue is full (on
>> put) or empty (on take). Since interruptions are frequently used as a
>> shutdown mechanism, this might prevent code from ever shutting down.
>>
>> 2. LinkedBlockingDeque.clear() should preserve weakly-consistent iterators
>>
>> LinkedBlockingDeque.clear() should preserve weakly-consistent iterators by
>> linking f.prev and f.next back to f, allowing the iterators to continue from
>> the first or last element respectively. This would be consistent with how
>> the other node-based weakly-consistent queues (LinkedBlockingQueue,
>> LinkedTransferQueue, and ConcurrentLinkedQueue/Deque) work.
>>
>> The LBD already uses self-linking, since that is done by the
>> unlinkFirst() and unlinkLast() methods, and the iterators and spliterator
>> thus already handle self-linked nodes.
>>
>> This can be fixed very easily by linking both f.prev and f.next back to f.
>>
>> 3. LinkedBlockingDeque offer() creates nodes even if capacity has been
>> reached
>>
>> In the JavaDoc of LinkedBlockingDeque, it states: "Linked nodes are
>> dynamically created upon each insertion unless this would bring the deque
>> above capacity." However, in the current implementation, nodes are always
>> created, even if the deque is full. This is because count is non-volatile,
>> and we only check inside the linkFirst/Last() methods whether the queue is
>> full. At this point we have already locked and have created the Node.
>> Instead, the count could be volatile, and we could check before locking.
>>
>> In the current version, calling offer() on a full LinkedBlockingDeque
>> creates unnecessary objects and contention. Similarly for poll() and peek(),
>> we could exit prior to locking by checking the count field.
>>
>> 4. LinkedBlockingDeque allows us to overflow size with addAll()
>>
>> In LinkedBlockingDeque.addAll() we first build up the chain of nodes and
>> then add that chain in bulk to the existing nodes. We count the nodes in
>> "int n" and then whilst hol...
>
> src/java.base/share/classes/java/util/concurrent/LinkedBlockingDeque.java
> line 865:
>
>> 863: long n = 0;
>> 864: for (E e : c) {
>> 865: Objects.requireNonNull(e);
>
> This makes me wonder: Does it make sense to create new nodes if we don't
> track if they will still fit into the capacity?
We could if you like, but that would subtly change the current behaviour. I
tried to make as few changes as possible.
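At the public API level the capacity check is only observable through the return value, not through the node allocation; a minimal standalone sketch of that observable behaviour (the class name is mine):

```java
import java.util.concurrent.LinkedBlockingDeque;

public class OfferFullDemo {
    public static void main(String[] args) {
        // A bounded deque with capacity 2.
        LinkedBlockingDeque<String> deque = new LinkedBlockingDeque<>(2);

        System.out.println(deque.offer("a")); // true
        System.out.println(deque.offer("b")); // true

        // The deque is at capacity: offer() returns false rather than
        // blocking. Internally (per issue 3 above), the current
        // implementation only discovers this after allocating a Node
        // and taking the lock.
        System.out.println(deque.offer("c")); // false
        System.out.println(deque.size());     // 2
    }
}
```

The proposed change only moves the capacity check before the allocation and lock; the return values above stay the same.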
> Out of curiosity, how does `it.remove()` work under these conditions?
If we call it.remove() on the first element, it delegates to unlinkFirst() (if
we are using an ascending iterator) or unlinkLast() (if we are using a
descending iterator). Similarly, if we call it.remove() on the last element, it
will call unlinkLast() or unlinkFirst(). unlinkFirst() will make
f.next = f (thus linking the node back to itself), and unlinkLast() will make
l.prev = l.
If we call it.remove() on a middle element, then we simply set p.next = n and
n.prev = p, without self-linking. Thus, if we have an LBD containing 1,2,3,4,5
and two iterators pointing at 3, and one of them removes it, the other will
continue with 3 (cached), 4, 5; it won't go back to the beginning and see
duplicate elements.
-------------
PR Review Comment: https://git.openjdk.org/jdk/pull/24925#discussion_r2079836006
PR Review Comment: https://git.openjdk.org/jdk/pull/24925#discussion_r2079833797