On 6/27/19 1:35 AM, Nick Desaulniers wrote:
On Wed, Jun 26, 2019 at 8:53 AM kbuild test robot <l...@intel.com> wrote:
CC: kbuild-...@01.org
TO: Huang Ying <ying.hu...@intel.com>

tree:   yhuang/autonuma-r0.1
A tree that's not a URL, and TO: ...@intel.com, is a giveaway that this
is an internal tree.  Rong, does the 0day bot email LKML for internal
trees?

Hi Nick,

The internal tree that you see is allowed to be exposed publicly; you can forward the mail to the addresses below if necessary.

 CC: kbuild-...@01.org
 TO: Huang Ying <ying.hu...@intel.com>

Best Regards,
Rong Chen



head:   66ba79a988fd1e7a3385c7edfed0a29f6e53ca76
commit: 66ba79a988fd1e7a3385c7edfed0a29f6e53ca76 [11/11] autonuma recency fix
config: arm64-defconfig (attached as .config)
compiler: clang version 9.0.0 (git://gitmirror/llvm_project fee855b5bc1abe1f3f89e977ce4c81cf9bdbc2e4)
reproduce:
         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
         chmod +x ~/bin/make.cross
         git checkout 66ba79a988fd1e7a3385c7edfed0a29f6e53ca76
         # save the attached .config to linux build tree
         make.cross ARCH=arm64

If you fix the issue, kindly add the following tag:
Reported-by: kbuild test robot <l...@intel.com>

All error/warnings (new ones prefixed by >>):

>> kernel/sched/fair.c:1421:41: warning: signed shift result (0x280000000) requires 35 bits to represent, but 'int' only has 32 bits [-Wshift-overflow]
      end = mm->numa_scan_starts[i] + (2560 << 22);
                                       ~~~~ ^  ~~
>> kernel/sched/fair.c:1426:8: error: expected expression
       if (long(now - tj) > 250)
           ^
    1 warning and 1 error generated.
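
Both diagnostics point at plain C issues. A minimal sketch of what the
corrected lines could look like (illustrative only, not the author's patch;
it assumes the scan-window arithmetic is meant to be done in unsigned long):

    /*
     * -Wshift-overflow: 2560 << 22 (0x280000000) does not fit in a 32-bit
     * int, so the shift must not be done in int.  Promoting the macro to
     * unsigned long first makes the result fit (unsigned long is 64-bit
     * on arm64).
     */
    end = mm->numa_scan_starts[i] + ((unsigned long)MAX_SCAN_WINDOW << 22);

    /*
     * "expected expression": long(now - tj) is a C++ function-style cast,
     * which is not valid C.  Use a C-style cast instead.
     */
    if ((long)(now - tj) > HZ)
            return false;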

vim +1426 kernel/sched/fair.c

   1405
   1406  bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
   1407                                  int src_nid, int dst_cpu, unsigned long addr)
   1408  {
   1409          struct numa_group *ng = p->numa_group;
   1410          int dst_nid = cpu_to_node(dst_cpu);
   1411          int last_cpupid, this_cpupid;
   1412          struct mm_struct *mm = p->mm;
   1413
   1414          if (sysctl_numa_balancing_mode == NUMA_BALANCING_HMEM) {
   1415                  int i, i_start;
   1416                  unsigned long start, end;
   1417                  unsigned long now = jiffies, tj;
   1418
   1419                  i = i_start = mm->numa_scan_idx;
   1420                  smp_rmb();
> 1421                  end = mm->numa_scan_starts[i] + (MAX_SCAN_WINDOW << 22);
   1422                  do {
   1423                          tj = mm->numa_scan_jiffies[i];
   1424                          if (!tj)
   1425                                  return false;
> 1426                          if (long(now - tj) > HZ)
   1427                                  return false;
   1428                          start = mm->numa_scan_starts[i];
   1429                          /* Scan past the end of the address space */
   1430                          if (end < start)
   1431                                  end = TASK_SIZE;
   1432                          if (addr >= start && addr < end)
   1433                                  return true;
   1434                          if (i == 0)
   1435                                  i = NUMA_SCAN_NR_HIST - 1;
   1436                          else
   1437                                  i--;
   1438                          end = start;
   1439                  } while (i != i_start);
   1440                  return false;
   1441          }
   1442
   1443          this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
   1444          last_cpupid = page_cpupid_xchg_last(page, this_cpupid);
   1445
   1446          /*
   1447           * Allow first faults or private faults to migrate immediately early in
   1448           * the lifetime of a task. The magic number 4 is based on waiting for
   1449           * two full passes of the "multi-stage node selection" test that is
   1450           * executed below.
   1451           */
   1452          if ((p->numa_preferred_nid == NUMA_NO_NODE || p->numa_scan_seq <= 4) &&
   1453              (cpupid_pid_unset(last_cpupid) || cpupid_match_pid(p, last_cpupid)))
   1454                  return true;
   1455
   1456          /*
   1457           * Multi-stage node selection is used in conjunction with a periodic
   1458           * migration fault to build a temporal task<->page relation. By using
   1459           * a two-stage filter we remove short/unlikely relations.
   1460           *
   1461           * Using P(p) ~ n_p / n_t as per frequentist probability, we can equate
   1462           * a task's usage of a particular page (n_p) per total usage of this
   1463           * page (n_t) (in a given time-span) to a probability.
   1464           *
   1465           * Our periodic faults will sample this probability and getting the
   1466           * same result twice in a row, given these samples are fully
   1467           * independent, is then given by P(n)^2, provided our sample period
   1468           * is sufficiently short compared to the usage pattern.
   1469           *
   1470           * This quadric squishes small probabilities, making it less likely we
   1471           * act on an unlikely task<->page relation.
   1472           */
   1473          if (!cpupid_pid_unset(last_cpupid) &&
   1474              cpupid_to_nid(last_cpupid) != dst_nid) {
   1475                  count_vm_event(NUMA_SHARED);
   1476                  return false;
   1477          }
   1478
   1479          /* Always allow migrate on private faults */
   1480          if (cpupid_match_pid(p, last_cpupid))
   1481                  return true;
   1482
   1483          /* A shared fault, but p->numa_group has not been set up yet. */
   1484          if (!ng)
   1485                  return true;
   1486
   1487          /*
   1488           * Destination node is much more heavily used than the source
   1489           * node? Allow migration.
   1490           */
   1491          if (group_faults_cpu(ng, dst_nid) > group_faults_cpu(ng, src_nid) *
   1492                                          ACTIVE_NODE_FRACTION)
   1493                  return true;
   1494
   1495          /*
   1496           * Distribute memory according to CPU & memory use on each node,
   1497           * with 3/4 hysteresis to avoid unnecessary memory migrations:
   1498           *
   1499           * faults_cpu(dst)   3   faults_cpu(src)
   1500           * --------------- * - > ---------------
   1501           * faults_mem(dst)   4   faults_mem(src)
   1502           */
   1503          return group_faults_cpu(ng, dst_nid) * group_faults(p, src_nid) * 3 >
   1504                 group_faults_cpu(ng, src_nid) * group_faults(p, dst_nid) * 4;
   1505  }
   1506
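
As a quick gloss on the "multi-stage node selection" comment above: with
P(p) = n_p / n_t being this task's share of the page's faults, passing the
two-stage filter requires the same result on two consecutive (assumed
independent) samples, i.e. probability P(p)^2. For example (numbers
illustrative only):

    P(p) = 0.9  ->  P(p)^2 = 0.81   strong task<->page relation, usually kept
    P(p) = 0.3  ->  P(p)^2 = 0.09   weak relation, usually filtered out

which is the "squishing" of small probabilities the comment refers to.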

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

