[ https://issues.apache.org/jira/browse/CASSANDRA-20176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17909156#comment-17909156 ]

Benedict Elliott Smith commented on CASSANDRA-20176:
----------------------------------------------------

I think it would be better to revisit whether this executor adds much value on 
a modern system. However, if this executor remains, the cost of these 
allocations should only be noticeable on a system that is far from saturation 
(as the system approaches saturation, the time spent managing threads in this 
pool should decline, unless perhaps a majority of active threads have moved to 
standard executors), and should in any case be small compared to the overhead 
of actually parking/unparking the threads. 

TL;DR: I don't think it's likely to be a particularly good time investment for 
the project to play very much with this particular data structure versus other 
potential avenues for system improvement.

> Reduce memory allocation in SEP Worker spin wait logic
> ------------------------------------------------------
>
>                 Key: CASSANDRA-20176
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-20176
>             Project: Apache Cassandra
>          Issue Type: Improvement
>          Components: Local/Other
>            Reporter: Dmitry Konstantinov
>            Assignee: Dmitry Konstantinov
>            Priority: Normal
>         Attachments: image-2025-01-01-13-14-02-562.png, 
> image-2025-01-01-13-15-16-767.png
>
>
> There is quite a lot of memory allocation within the spin-wait logic of the 
> SEP executor (org.apache.cassandra.concurrent.SEPWorker#doWaitSpin) for some 
> workloads. For example, it is observed in the write test described in 
> CASSANDRA-20165, where ~8.5% of total allocations come from this logic:
> !image-2025-01-01-13-14-02-562.png|width=570!
> !image-2025-01-01-13-15-16-767.png|width=570!
> The idea of this parking is to avoid unpark signalling costs. The logic 
> selects a random period to park a thread via LockSupport.parkNanos and puts 
> the thread into a ConcurrentSkipListMap keyed by wake-up time, so the map 
> serves as a concurrent priority queue. Once parking finishes, the thread 
> removes itself from the map. When a task needs to be scheduled, we take the 
> spinning thread with the smallest wake-up time from the map, as in the 
> sketch below.
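> A minimal sketch of this pattern (not the actual SEPWorker code; the names 
> and the collision handling here are illustrative assumptions):
> {code:java}
> import java.util.Map;
> import java.util.concurrent.ConcurrentSkipListMap;
> import java.util.concurrent.ThreadLocalRandom;
> import java.util.concurrent.locks.LockSupport;
>
> class SpinWaitSketch
> {
>     // wake-up time (nanos) -> parked thread; the sorted map acts as a
>     // concurrent priority queue ordered by wake-up time
>     final ConcurrentSkipListMap<Long, Thread> spinning = new ConcurrentSkipListMap<>();
>
>     void doWaitSpin(long maxParkNanos)
>     {
>         long sleep = ThreadLocalRandom.current().nextLong(maxParkNanos); // random park period
>         long wakeAt = System.nanoTime() + sleep;
>         // each registration allocates a boxed Long key plus skip-list nodes,
>         // which is where the observed allocation comes from
>         while (spinning.putIfAbsent(wakeAt, Thread.currentThread()) != null)
>             wakeAt++; // resolve key collisions
>         LockSupport.parkNanos(sleep);
>         spinning.remove(wakeAt); // the thread removes itself once parking finishes
>     }
>
>     void maybeWakeOne()
>     {
>         // take the spinning thread with the smallest wake-up time
>         Map.Entry<Long, Thread> e = spinning.pollFirstEntry();
>         if (e != null)
>             LockSupport.unpark(e.getValue());
>     }
> }
> {code}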
> We can try to implement another algorithm for this logic without the memory 
> allocation overhead, for example one based on a Timing Wheel data structure 
> (see the sketch below).
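> A minimal single-level timing-wheel sketch (an assumption about one possible 
> shape, not a proposed design): workers land in fixed array buckets by 
> wake-up tick, and an intrusive link avoids per-park node allocation. A real 
> version would need lock-free bucket access rather than synchronized.
> {code:java}
> final class TimingWheelSketch
> {
>     static final class Worker
>     {
>         final Thread thread = Thread.currentThread();
>         Worker next; // intrusive link: no extra node allocated per park
>     }
>
>     final Worker[] buckets; // one slot per tick, reused modulo the wheel size
>     final long tickNanos;   // wheel resolution
>
>     TimingWheelSketch(int size, long tickNanos)
>     {
>         this.buckets = new Worker[size];
>         this.tickNanos = tickNanos;
>     }
>
>     int bucketFor(long wakeAtNanos)
>     {
>         return (int) ((wakeAtNanos / tickNanos) % buckets.length);
>     }
>
>     synchronized void add(Worker w, long wakeAtNanos)
>     {
>         int i = bucketFor(wakeAtNanos);
>         w.next = buckets[i]; // push onto the bucket's intrusive list
>         buckets[i] = w;
>     }
>
>     synchronized Worker poll(long nowNanos)
>     {
>         int i = bucketFor(nowNanos);
>         Worker w = buckets[i];
>         if (w != null)
>             buckets[i] = w.next; // pop; the caller unparks w.thread
>         return w;
>     }
> }
> {code}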
> Note: it also makes sense to check the granularity of the actual parking time 
> (https://hazelcast.com/blog/locksupport-parknanos-under-the-hood-and-the-curious-case-of-parking/); 
> see the probe below.
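> A quick probe for the actual parkNanos granularity on a given OS/JVM (a 
> rough illustration, not a rigorous benchmark; see the linked article):
> {code:java}
> import java.util.concurrent.locks.LockSupport;
>
> public class ParkGranularity
> {
>     public static void main(String[] args)
>     {
>         for (long requested : new long[] { 1_000L, 10_000L, 100_000L, 1_000_000L })
>         {
>             long start = System.nanoTime();
>             LockSupport.parkNanos(requested);
>             long actual = System.nanoTime() - start;
>             // the actual sleep typically has an OS-dependent floor well above 1 us
>             System.out.printf("requested %,d ns, actual %,d ns%n", requested, actual);
>         }
>     }
> }
> {code}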


