Github user revans2 commented on a diff in the pull request:

    https://github.com/apache/storm/pull/2419#discussion_r151699527
  
    --- Diff: storm-server/src/test/java/org/apache/storm/scheduler/resource/strategies/scheduling/TestGenericResourceAwareStrategy.java ---
    @@ -129,23 +130,34 @@ public void testGenericResourceAwareStrategySharedMemory() {
             double totalExpectedWorkerOffHeap = (totalNumberOfTasks * memoryOffHeap) + sharedOffHeapWorker;
             
             SchedulerAssignment assignment = cluster.getAssignmentById(topo.getId());
    -        assertEquals(1, assignment.getSlots().size());
    -        WorkerSlot ws = assignment.getSlots().iterator().next();
    -        String nodeId = ws.getNodeId();
    -        assertEquals(1, assignment.getNodeIdToTotalSharedOffHeapMemory().size());
    -        assertEquals(sharedOffHeapNode, assignment.getNodeIdToTotalSharedOffHeapMemory().get(nodeId), 0.01);
    -        assertEquals(1, assignment.getScheduledResources().size());
    -        WorkerResources resources = assignment.getScheduledResources().get(ws);
    -        assertEquals(totalExpectedCPU, resources.get_cpu(), 0.01);
    -        assertEquals(totalExpectedOnHeap, resources.get_mem_on_heap(), 0.01);
    -        assertEquals(totalExpectedWorkerOffHeap, resources.get_mem_off_heap(), 0.01);
    -        assertEquals(sharedOnHeap, resources.get_shared_mem_on_heap(), 0.01);
    -        assertEquals(sharedOffHeapWorker, resources.get_shared_mem_off_heap(), 0.01);
    +        Set<WorkerSlot> slots = assignment.getSlots();
    +        Map<String, Double> nodeToTotalShared = assignment.getNodeIdToTotalSharedOffHeapMemory();
    +        LOG.info("NODE TO SHARED OFF HEAP {}", nodeToTotalShared);
    +        Map<WorkerSlot, WorkerResources> scheduledResources = assignment.getScheduledResources();
    +        assertEquals(2, slots.size());
    --- End diff --
    
    I'll fix the comment.  The test was wrong because the GPU resources were not being recorded properly.
    
    Each supervisor has a single GPU.  Each spout needs a GPU, but there are 2 spouts, so the topology cannot fit on a single node, and hence cannot fit in a single slot.  I can either up the number of GPUs per node to 2 and leave the rest of the test alone, or I can update the comment.
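    To make the constraint concrete, here is a minimal sketch of the setup described above.  This is not the test's actual code; the resource name "gpu.count", the use of TestWordSpout, and the addResource/SUPERVISOR_RESOURCES_MAP APIs are assumptions based on the generic-resource work in this PR.
    
        import java.util.HashMap;
        import java.util.Map;
        import org.apache.storm.Config;
        import org.apache.storm.testing.TestWordSpout;
        import org.apache.storm.topology.TopologyBuilder;
    
        // Two spout executors, each declaring that it needs one GPU.
        // ("gpu.count" is an illustrative resource name, not necessarily
        // the one the test uses.)
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", new TestWordSpout(), 2)
            .addResource("gpu.count", 1.0);
    
        // Each supervisor advertises exactly one GPU.  With two executors
        // that each need a GPU and only one GPU per node, the scheduler
        // must spread the topology across 2 nodes, hence the assignment
        // ends up with 2 slots rather than 1.
        Map<String, Double> supervisorResources = new HashMap<>();
        supervisorResources.put("gpu.count", 1.0);
        Config conf = new Config();
        conf.put(Config.SUPERVISOR_RESOURCES_MAP, supervisorResources);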

