[PATCH v2] sched: allow resubmits to queue_balance_callback()

2021-03-22 Thread Barret Rhoden
This commit changes the callback list such that whenever an item is on the list, its head->next is not NULL. The last element (first inserted) will point to itself. This allows us to detect and ignore any attempt to reenqueue a callback_head. Signed-off-by: Barret Rhoden --- sorry about the ol
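A minimal userspace sketch of the invariant described above (not the scheduler patch itself): a queued node always has a non-NULL ->next because the tail points to itself rather than NULL, so a resubmit of an already-queued node is detectable and ignored.

	#include <stddef.h>

	struct callback_head {
		struct callback_head *next;
		void (*func)(struct callback_head *head);
	};

	static void enqueue_once(struct callback_head **list,
				 struct callback_head *head)
	{
		if (head->next)			/* already queued: ignore resubmit */
			return;
		head->next = *list ? *list : head;	/* tail points to itself */
		*list = head;
	}

	static struct callback_head *dequeue(struct callback_head **list)
	{
		struct callback_head *head = *list;

		if (!head)
			return NULL;
		/* a self-pointing ->next marks the end of the list */
		*list = (head->next == head) ? NULL : head->next;
		head->next = NULL;		/* eligible to be queued again */
		return head;
	}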

sched: allow resubmits to queue_balance_callback()

2021-03-18 Thread Barret Rhoden
This commit changes the callback list such that whenever an item is on the list, its head->next is not NULL. The last element (first inserted) will point to itself. This allows us to detect and ignore any attempt to reenqueue a callback_head. Signed-off-by: Barret Rhoden --- i might b

[PATCH] init: fix error check in clean_path()

2020-09-04 Thread Barret Rhoden
init_stat() returns 0 on success, same as vfs_lstat(). When it replaced vfs_lstat(), the '!' was dropped. Fixes: 716308a5331b ("init: add an init_stat helper") Signed-off-by: Barret Rhoden --- Andy: this was broken in virtme. "/init: source: not found" i
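A userspace illustration of the bug class (demo_stat() is a hypothetical stand-in for init_stat()/vfs_lstat(), both of which return 0 on success and a negative errno on failure):

	#include <stdio.h>

	/* hypothetical stand-in: 0 on success, negative errno on failure */
	static int demo_stat(const char *path)
	{
		return path[0] == '/' ? 0 : -2;	/* pretend absolute paths exist */
	}

	int main(void)
	{
		/* Buggy form: "if (demo_stat(p))" is true on *failure*, so the
		 * success path never runs; restoring the '!' fixes it. */
		if (!demo_stat("/init"))
			printf("stat succeeded; safe to inspect the file\n");
		return 0;
	}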

[RFC PATCH] libbpf: Support setting map max_entries at runtime

2020-08-31 Thread Barret Rhoden
to other runtime-dependent values, such as the maximum number of threads (/proc/sys/kernel/threads-max). Signed-off-by: Barret Rhoden ---
 tools/lib/bpf/bpf_helpers.h |  4
 tools/lib/bpf/libbpf.c      | 40 ++---
 tools/lib/bpf/libbpf.h      |  4
 3 files changed
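The RFC predates it, but current libbpf exposes the same capability via bpf_map__set_max_entries(): resize a map between open and load, once runtime values are known. A minimal sketch (the object file "prog.bpf.o" and map name "my_map" are hypothetical; the NULL-on-error convention assumes libbpf >= 1.0):

	#include <bpf/libbpf.h>
	#include <stdio.h>

	int main(void)
	{
		struct bpf_object *obj;
		struct bpf_map *map;
		int nr_cpus = libbpf_num_possible_cpus();

		obj = bpf_object__open("prog.bpf.o");
		if (!obj)
			return 1;

		map = bpf_object__find_map_by_name(obj, "my_map");
		if (!map)
			return 1;

		/* must happen before load: max_entries is fixed at map creation */
		if (bpf_map__set_max_entries(map, nr_cpus * 64))
			return 1;

		if (bpf_object__load(obj))
			return 1;

		fprintf(stderr, "map sized for %d cpus\n", nr_cpus);
		bpf_object__close(obj);
		return 0;
	}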

Re: [RFC v2] perf/core: Fixes hung issue on perf stat command during cpu hotplug

2020-08-26 Thread Barret Rhoden
The comment for the function says it returns @func's return val or -ESRCH. You could also add -ENXIO to that. Thanks for the fix. Reviewed-by: Barret Rhoden

	+	ret = data.ret;
		if (ret != -EAGAIN)
			break;

[tip: perf/core] perf: Add cond_resched() to task_function_call()

2020-05-01 Thread tip-bot2 for Barret Rhoden
The following commit has been merged into the perf/core branch of tip: Commit-ID: 2ed6edd33a214bca02bd2b45e3fc3038a059436b Gitweb: https://git.kernel.org/tip/2ed6edd33a214bca02bd2b45e3fc3038a059436b Author: Barret Rhoden AuthorDate: Tue, 14 Apr 2020 18:29:20 -04:00
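The merged change (paraphrased from the commit above; surrounding function trimmed) converts task_function_call()'s retry into an explicit loop and adds cond_resched(), so a caller spinning on -EAGAIN yields the CPU the target task may need to make progress:

	for (;;) {
		ret = smp_call_function_single(task_cpu(p), remote_function,
					       &data, 1);
		if (!ret)
			ret = data.ret;	/* @func's return value */

		if (ret != -EAGAIN)
			break;

		cond_resched();	/* let the target task run */
	}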

Re: [PATCH 0/3] KVM: x86/mmu: Use kernel's PG_LEVEL_* enums

2020-04-28 Thread Barret Rhoden
On 4/27/20 8:54 PM, Sean Christopherson wrote: Drop KVM's PT_{PAGE_TABLE,DIRECTORY,PDPE}_LEVEL KVM enums in favor of the kernel's PG_LEVEL_{4K,2M,1G} enums, which have far more user friendly names. thanks for doing this - it fell off my radar. all 3: Reviewed-by: Barret Rhoden

Re: [PATCH 1/3] KVM: x86/mmu: Tweak PSE hugepage handling to avoid 2M vs 4M conundrum

2020-04-28 Thread Barret Rhoden
On 4/27/20 8:54 PM, Sean Christopherson wrote: Change the PSE hugepage handling in walk_addr_generic() to fire on any page level greater than PT_PAGE_TABLE_LEVEL, a.k.a. PG_LEVEL_4K. PSE paging only has two levels, so "== 2" and "> 1" are functionally the seam, i.e. this is a nop. ^ s/seam/sa

Re: [PATCH] modules: fix livelock in add_unformed_module()

2019-05-13 Thread Barret Rhoden
Hi - On 5/13/19 7:23 AM, Prarit Bhargava wrote: [snip] A module is loaded once for each cpu. Does one CPU succeed in loading the module, and the others fail with EEXIST? My follow-up patch changes wait_event_interruptible() to wait_event_interruptible_timeout(), so the CPUs are no longer
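A sketch of what such a follow-up could look like (the shape and the timeout value are assumptions, not the posted patch): wait_event_interruptible_timeout() returns 0 on timeout, so the loader re-checks the blocking module's state instead of sleeping indefinitely on a missed wakeup.

	long ret;

	/* bounded wait: 0 means timeout, re-evaluate state and loop */
	ret = wait_event_interruptible_timeout(module_wq,
					       finished_loading(mod->name),
					       30 * HZ);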

[PATCH] modules: fix livelock in add_unformed_module()

2019-05-10 Thread Barret Rhoden
try again. This commit changes finished_loading() such that we only consider a module 'finished' when it doesn't exist or is LIVE, which are the cases that break out of the wait loop in add_unformed_module(). Fixes: f9a75c1d717f ("modules: Only return -EEXIST for modules that hav
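A paraphrase of the fixed helper, close to what ultimately landed in kernel/module.c (comments trimmed): only "module gone" or MODULE_STATE_LIVE count as finished, exactly the two conditions the wait loop can act on.

	static bool finished_loading(const char *name)
	{
		struct module *mod;
		bool ret;

		sched_annotate_sleep();
		mutex_lock(&module_mutex);
		mod = find_module_all(name, strlen(name), true);
		/* finished == gone (load failed) or fully LIVE */
		ret = !mod || mod->state == MODULE_STATE_LIVE;
		mutex_unlock(&module_mutex);

		return ret;
	}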

Re: [PATCH v3] kernel/module: Reschedule while waiting for modules to finish loading

2019-05-10 Thread Barret Rhoden
Hi - On 5/2/19 1:46 PM, Prarit Bhargava wrote: On 5/2/19 8:41 AM, Prarit Bhargava wrote: On 5/2/19 5:48 AM, Jessica Yu wrote: +++ Prarit Bhargava [01/05/19 17:26 -0400]: On 4/30/19 6:22 PM, Prarit Bhargava wrote: On a s390 z14 LPAR with 2 cpus, loading stalls about 3% of the time while loading t

Re: [PATCH 1/2] x86, numa: always initialize all possible nodes

2019-05-01 Thread Barret Rhoden
Hi - This patch triggered an oops for me (more below). On 2/12/19 4:53 AM, Michal Hocko wrote: [snip] Fix the issue by reworking how x86 initializes the memoryless nodes. The current implementation is hacked into the workflow and it doesn't allow any flexibility. There is init_memory_less_node

[PATCH v2] ext4: fix use-after-free race with debug_want_extra_isize

2019-04-18 Thread Barret Rhoden
Kara Signed-off-by: Barret Rhoden Cc: sta...@vger.kernel.org # 4.14.111 --- - Updated tags Thanks for the review!
 fs/ext4/super.c | 58 +
 1 file changed, 34 insertions(+), 24 deletions(-)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c index

[PATCH] ext4: fix use-after-free race with debug_want_extra_isize

2019-04-15 Thread Barret Rhoden
Signed-off-by: Barret Rhoden Cc: sta...@vger.kernel.org ---
- In the current code, it looks like someone could mount with want_extra_isize with some value > 0 but less than the minimums in the s_es. If that's a bug, I can submit a follow-on patch.
- Similarly, on a failed remount, sbi->s_want_ext

Re: [PATCH] percpu/module resevation: change resevation size iff X86_VSMP is set

2019-03-13 Thread Barret Rhoden
Hi - On 03/01/2019 04:54 PM, Christopher Lameter wrote: On Fri, 1 Mar 2019, Barret Rhoden wrote: I'm not familiar with VSMP - how bad is it to use L1 cache alignment instead of 4K page alignment? Maybe some structures can use the smaller alignment? Or maybe have VSMP require SRCU-

Re: [PATCH] percpu/module resevation: change resevation size iff X86_VSMP is set

2019-03-01 Thread Barret Rhoden
Hi - On 03/01/2019 03:34 PM, Dennis Zhou wrote: Hi Barret, On Fri, Mar 01, 2019 at 01:30:15PM -0500, Barret Rhoden wrote: Hi - On 01/21/2019 06:47 AM, Eial Czerwacki wrote: Your main issue was that you only sent this patch to LKML, but not the maintainers of the file. If you don't,

Re: [PATCH] percpu/module resevation: change resevation size iff X86_VSMP is set

2019-03-01 Thread Barret Rhoden
Hi - On 01/21/2019 06:47 AM, Eial Czerwacki wrote: > Your main issue was that you only sent this patch to LKML, but not the maintainers of the file. If you don't, your patch might get lost. To get the appropriate people and lists, run: scripts/get_maintainer.pl YOUR_PATCH.patch. F

[PATCH v2 0/3] kvm: Use huge pages for DAX-backed files

2018-11-14 Thread Barret Rhoden
/discussion thread: https://lore.kernel.org/lkml/20181029210716.212159-1-b...@google.com/ v1 -> v2: https://lore.kernel.org/lkml/20181109203921.178363-1-b...@google.com/
- Updated Acks/Reviewed-by
- Minor touchups
- Added patch to remove redundant PageReserved() check
- Rebased onto linux-next
Bar

Re: [RFC PATCH] kvm: Use huge pages for DAX-backed files

2018-11-06 Thread Barret Rhoden
On 2018-10-29 at 17:07 Barret Rhoden wrote:
> Another issue is that kvm_mmu_zap_collapsible_spte() also uses
> PageTransCompoundMap() to detect huge pages, but we don't have a way to
> get the HVA easily. Can we just aggressively zap DAX pages there?
Any thoughts about this? Is
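One way to answer "how big is the host mapping?" without PageTransCompoundMap() is to walk the host page tables for the HVA. A rough sketch of that idea, not the posted patch: lookup_address_in_mm() is a helper from later kernels used here as an assumption, and mm locking is elided.

	static int host_mapping_level(struct kvm *kvm, gfn_t gfn)
	{
		unsigned long hva = gfn_to_hva(kvm, gfn);
		unsigned int level;
		pte_t *pte;

		if (kvm_is_error_hva(hva))
			return PG_LEVEL_4K;

		/* assumed helper; real code must hold the mmap lock */
		pte = lookup_address_in_mm(kvm->mm, hva, &level);
		if (!pte || !pte_present(*pte))
			return PG_LEVEL_4K;

		return level;	/* PG_LEVEL_4K / PG_LEVEL_2M / PG_LEVEL_1G */
	}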

Re: [RFC PATCH] kvm: Use huge pages for DAX-backed files

2018-10-30 Thread Barret Rhoden
On 2018-10-29 at 20:10 Dan Williams wrote:
> > > >  static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
> > > >                                          gfn_t *gfnp, kvm_pfn_t *pfnp,
> > > >                                          int *levelp)
> > > > @@ -3168,7 +3237,7 @@ static void tr