I agree that it is wise to intersect cq_cpumask with the online CPU
mask in cq_cpulist_set(), and if cpumask_weight(cq_cpumask &
online_cpumask) == 0, to ignore cq_cpumask and stick with the default
behavior.
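
To make the suggestion concrete, here is a minimal sketch of what such
a set handler could look like. The cq_cpumask/cq_cpulist_set() names
are taken from the proposed patch; the parsing, warning and error
handling are only my assumption of how it could be wired up, not the
patch itself:

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/moduleparam.h>

/* Mask the driver would actually apply; empty means default spreading. */
static struct cpumask cq_cpumask;

static int cq_cpulist_set(const char *val, const struct kernel_param *kp)
{
	cpumask_var_t requested;
	int ret;

	if (!zalloc_cpumask_var(&requested, GFP_KERNEL))
		return -ENOMEM;

	ret = cpulist_parse(val, requested);
	if (ret)
		goto out;

	/*
	 * Intersect with the CPUs that are online right now.  cpumask_and()
	 * returns false when the result is empty; in that case ignore the
	 * parameter and fall back to the default queue/IRQ spreading.
	 */
	if (!cpumask_and(&cq_cpumask, requested, cpu_online_mask)) {
		pr_warn("cq_cpulist: no requested CPU is online, keeping default affinity\n");
		cpumask_clear(&cq_cpumask);
	}

out:
	free_cpumask_var(requested);
	return ret;
}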

I don't think CPU offlining is problematic here. If a CPU is offlined,
its IRQ (and queue) will be served by another online CPU. Yes, that
breaks the IRQ affinity and makes the cq_cpumask setting pointless, but
that is the system administrator's problem. The parameter is only
useful for fine tuning, and some resource planning in advance is
required anyway. I don't think this side effect is enough to justify an
"unsafe" tag.

Title:
  [realtime app] not possible to redirect drivers/nvme IRQs from
  realtime cpuset

Status in linux package in Ubuntu:
  Confirmed

Bug description:
  We're running a realtime application on Ubuntu 16.04 with linux-image
  4.15 and found it impossible to get rid of the jitter introduced by
  Intel NVMe IRQs. I'm providing a patch here which solved the issue
  for us.

  The realtime application is bound to isolated CPUs (one thread per
  CPU, nohz_full= on the kernel cmdline, all IRQs moved to housekeeping
  CPUs). The application doesn't make any Linux kernel syscalls except
  during the startup phase, so we don't expect any interruptions of the
  application from the kernel or hardware.
