CodingCat commented on code in PR #2462:
URL: https://github.com/apache/celeborn/pull/2462#discussion_r1568135693
##########
client/src/main/scala/org/apache/celeborn/client/ChangePartitionManager.scala:
##########
@@ -79,14 +81,17 @@ class ChangePartitionManager(
       batchHandleChangePartitionExecutors.submit {
         new Runnable {
           override def run(): Unit = {
-            val distinctPartitions = requests.synchronized {
-              // For each partition we only need to handle one request
-              requests.asScala.filter { case (partitionId, _) =>
-                !inBatchPartitions.get(shuffleId).contains(partitionId)
-              }.map { case (partitionId, request) =>
-                inBatchPartitions.get(shuffleId).add(partitionId)
-                request.asScala.toArray.maxBy(_.epoch)
-              }.toArray
+            val distinctPartitions = {
+              requests.asScala.map { case (partitionId, request) =>
+                locks(partitionId % locks.length).synchronized {
+                  if (!inBatchPartitions.get(shuffleId).contains(partitionId)) {
+                    inBatchPartitions.get(shuffleId).add(partitionId)
+                    Some(request.asScala.toArray.maxBy(_.epoch))
+                  } else {
+                    None
+                  }
+                }
+              }.filter(_.isDefined).map(_.get).toArray
Review Comment:
Based on our observations in our production system, it won't bring more
contention...

If we use `requests.synchronized`, all celeborn-dispatcher threads plus all
celeborn-client-life-cycle-manager-change-partition-executor threads compete
for the same lock object, even though they are likely working on different
partitions; see the screenshots in the GitHub comment.

After this change, with a huge Spark application shuffling 300 TB of data, I
no longer see such intensive lock contention.
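For readers following along, here is a minimal sketch of the lock-striping pattern this change relies on; the stripe count and the helper name below are illustrative assumptions, not the PR's actual code:

```scala
// A minimal sketch of lock striping, assuming a fixed array of monitor
// objects; `numStripes` is a hypothetical constant, not the PR's actual
// configuration.
object LockStripingSketch {
  private val numStripes = 16
  private val locks: Array[AnyRef] = Array.fill(numStripes)(new Object)

  // Threads working on different partitions usually hash to different
  // stripes, so they rarely block on the same monitor, whereas a single
  // `requests.synchronized` serializes every caller on one object.
  def withPartitionLock[T](partitionId: Int)(body: => T): T =
    locks(partitionId % locks.length).synchronized(body)
}
```

The trade-off is that two partitions can still collide on the same stripe, but with a reasonable stripe count the probability of contention drops sharply compared to one global monitor.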
##########
client/src/main/scala/org/apache/celeborn/client/ChangePartitionManager.scala:
##########
@@ -151,7 +156,7 @@ class ChangePartitionManager(
         oldPartition,
         cause)
-    requests.synchronized {
+    locks(partitionId % locks.length).synchronized {
       if (requests.containsKey(partitionId)) {
         requests.get(partitionId).add(changePartition)
         logTrace(s"[handleRequestPartitionLocation] For $shuffleId, request for same partition" +
Review Comment:
If I understand the suggested code correctly, it essentially creates one set in
`requests` per partition and keeps adding requests to it. I thought the same
while iterating on this PR, but it turns out we cannot do that...

Basically, that is not what the original code was doing: the original code
always puts a *new* set containing a single request into the hash map, i.e.
lines 178 - 179.
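To make the distinction concrete, here is a hedged paraphrase of the behavior being described; the `Request` type and `addRequest` helper are stand-ins for illustration, not the PR's exact code:

```scala
import java.util.{HashSet => JHashSet, Set => JSet}
import java.util.concurrent.ConcurrentHashMap

// A paraphrase of the described behavior, not the PR's exact code;
// `Request` stands in for Celeborn's ChangePartitionRequest.
object RequestMapSketch {
  final case class Request(partitionId: Int, epoch: Int)

  val requests = new ConcurrentHashMap[Int, JSet[Request]]()

  def addRequest(partitionId: Int, req: Request): Unit = {
    if (requests.containsKey(partitionId)) {
      // A set already exists for this partition: append to it.
      requests.get(partitionId).add(req)
    } else {
      // The point above: the original code always puts a brand-new set
      // holding a single request, rather than reusing a pre-created set
      // per partition.
      val set = new JHashSet[Request]()
      set.add(req)
      requests.put(partitionId, set)
    }
  }
}
```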
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]