Peter Bacsko created YUNIKORN-2632:
--------------------------------------

             Summary: Data race in IncAllocatedResource
                 Key: YUNIKORN-2632
                 URL: https://issues.apache.org/jira/browse/YUNIKORN-2632
             Project: Apache YuniKorn
          Issue Type: Bug
          Components: core - scheduler
            Reporter: Peter Bacsko
            Assignee: Peter Bacsko


After YUNIKORN-2548, an unlocked read of {{Queue.allocatedResource}} was accidentally introduced in {{IncAllocatedResource}}:

{noformat}
WARNING: DATA RACE
Read at 0x00c000578a00 by goroutine 52:
  github.com/apache/yunikorn-core/pkg/scheduler/objects.(*Queue).IncAllocatedResource()
      /home/bacskop/go/pkg/mod/github.com/apache/[email protected]/pkg/scheduler/objects/queue.go:1032 +0x6b
  github.com/apache/yunikorn-core/pkg/scheduler/objects.(*Application).tryNode()
      /home/bacskop/go/pkg/mod/github.com/apache/[email protected]/pkg/scheduler/objects/application.go:1495 +0x184
  github.com/apache/yunikorn-core/pkg/scheduler/objects.(*Application).tryNodes.func1()
      /home/bacskop/go/pkg/mod/github.com/apache/[email protected]/pkg/scheduler/objects/application.go:1402 +0x144
  github.com/apache/yunikorn-core/pkg/scheduler/objects.(*treeIterator).ForEachNode.func1()
      /home/bacskop/go/pkg/mod/github.com/apache/[email protected]/pkg/scheduler/objects/node_iterator.go:42 +0x95
  github.com/google/btree.(*node[go.shape.interface { Less(github.com/google/btree.Item) bool }]).iterate()
      /home/bacskop/go/pkg/mod/github.com/google/[email protected]/btree_generic.go:522 +0x6f1
  github.com/google/btree.(*node[go.shape.interface { Less(github.com/google/btree.Item) bool }]).iterate()
      /home/bacskop/go/pkg/mod/github.com/google/[email protected]/btree_generic.go:510 +0x448
  github.com/google/btree.(*node[go.shape.interface { Less(github.com/google/btree.Item) bool }]).iterate()
      /home/bacskop/go/pkg/mod/github.com/google/[email protected]/btree_generic.go:510 +0x448
  github.com/google/btree.(*node[go.shape.interface { Less(github.com/google/btree.Item) bool }]).iterate()
      /home/bacskop/go/pkg/mod/github.com/google/[email protected]/btree_generic.go:510 +0x448
  github.com/google/btree.(*BTreeG[go.shape.interface { Less(github.com/google/btree.Item) bool }]).Ascend()
      /home/bacskop/go/pkg/mod/github.com/google/[email protected]/btree_generic.go:779 +0x108
  github.com/google/btree.(*BTree).Ascend()
      /home/bacskop/go/pkg/mod/github.com/google/[email protected]/btree_generic.go:1029 +0x108
  github.com/apache/yunikorn-core/pkg/scheduler/objects.(*treeIterator).ForEachNode()
...
Previous write at 0x00c000578a00 by goroutine 49:
  github.com/apache/yunikorn-core/pkg/scheduler/objects.(*Queue).DecAllocatedResource()
      /home/bacskop/go/pkg/mod/github.com/apache/[email protected]/pkg/scheduler/objects/queue.go:1101 +0x212
  github.com/apache/yunikorn-core/pkg/scheduler.(*PartitionContext).removeAllocation()
      /home/bacskop/go/pkg/mod/github.com/apache/[email protected]/pkg/scheduler/partition.go:1357 +0x17b4
  github.com/apache/yunikorn-core/pkg/scheduler.(*ClusterContext).processAllocationReleases()
      /home/bacskop/go/pkg/mod/github.com/apache/[email protected]/pkg/scheduler/context.go:870 +0xba
  github.com/apache/yunikorn-core/pkg/scheduler.(*ClusterContext).handleRMUpdateAllocationEvent()
      /home/bacskop/go/pkg/mod/github.com/apache/[email protected]/pkg/scheduler/context.go:750 +0x1e4
  github.com/apache/yunikorn-core/pkg/scheduler.(*Scheduler).handleRMEvent()
      /home/bacskop/go/pkg/mod/github.com/apache/[email protected]/pkg/scheduler/scheduler.go:133 +0x28d
  github.com/apache/yunikorn-core/pkg/scheduler.(*Scheduler).StartService.gowrap1()
      /home/bacskop/go/pkg/mod/github.com/apache/[email protected]/pkg/scheduler/scheduler.go:60 +0x33
{noformat}
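The race is between the scheduling goroutine reading {{allocatedResource}} in {{IncAllocatedResource}} and the release goroutine writing it in {{DecAllocatedResource}}. The usual fix pattern is to take the queue's lock around both paths. A minimal sketch (not the actual YuniKorn code; {{Resources}} is a hypothetical stand-in for {{resources.Resource}}, and the field names are simplified):

{noformat}
package main

import (
	"fmt"
	"sync"
)

// Resources stands in for yunikorn's resources.Resource type;
// a single counter keeps the sketch self-contained.
type Resources struct {
	memory int64
}

// Queue mirrors the relevant shape of objects.Queue: the
// allocatedResource field must only be touched under the lock.
type Queue struct {
	sync.RWMutex
	allocatedResource Resources
}

// IncAllocatedResource takes the write lock before touching
// allocatedResource, so the race detector sees no unlocked read.
func (sq *Queue) IncAllocatedResource(alloc Resources) {
	sq.Lock()
	defer sq.Unlock()
	sq.allocatedResource.memory += alloc.memory
}

// DecAllocatedResource takes the same lock, so concurrent inc/dec
// from the allocate and release goroutines are serialized.
func (sq *Queue) DecAllocatedResource(alloc Resources) {
	sq.Lock()
	defer sq.Unlock()
	sq.allocatedResource.memory -= alloc.memory
}

func main() {
	q := &Queue{}
	var wg sync.WaitGroup
	// Concurrent inc (tryNode path) and dec (removeAllocation path),
	// as in the trace above; with the lock held, `go run -race` is clean.
	for i := 0; i < 100; i++ {
		wg.Add(2)
		go func() { defer wg.Done(); q.IncAllocatedResource(Resources{memory: 2}) }()
		go func() { defer wg.Done(); q.DecAllocatedResource(Resources{memory: 1}) }()
	}
	wg.Wait()
	fmt.Println(q.allocatedResource.memory) // 100 iterations of (+2, -1)
}
{noformat}

Running the sketch under {{go run -race}} with the lock calls removed reproduces a DATA RACE report of the same shape as the one above.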



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
