This commit changes the callback list such that whenever an item is on
the list, its head->next is not NULL. The last element (first inserted)
will point to itself. This allows us to detect and ignore any attempt
to reenqueue a callback_head.
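A minimal user-space sketch of that invariant (not the kernel code, and
ignoring the atomics the real callback list needs; the names cb_head,
cb_enqueue and cb_dequeue are made up for illustration):

#include <stdbool.h>
#include <stddef.h>

struct cb_head {
	struct cb_head *next;
};

static struct cb_head *list_head;	/* newest-first, single-threaded sketch */

static bool cb_enqueue(struct cb_head *work)
{
	if (work->next)			/* already enqueued: detect and ignore */
		return false;
	/* last element (first inserted) points to itself, so ->next != NULL */
	work->next = list_head ? list_head : work;
	list_head = work;
	return true;
}

static struct cb_head *cb_dequeue(void)
{
	struct cb_head *work = list_head;

	if (!work)
		return NULL;
	list_head = (work->next == work) ? NULL : work->next;
	work->next = NULL;		/* off the list: may be enqueued again */
	return work;
}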
Signed-off-by: Barret Rhoden
---
sorry about the ol
This commit changes the callback list such that whenever an item is on
the list, its head->next is not NULL. The last element (first inserted)
will point to itself. This allows us to detect and ignore any attempt
to reenqueue a callback_head.
Signed-off-by: Barret Rhoden
---
i might b
init_stat() returns 0 on success, same as vfs_lstat(). When it replaced
vfs_lstat(), the '!' was dropped.
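A user-space analogue of the inverted check (lstat(), like vfs_lstat() and
init_stat(), returns 0 on success, so the '!' is what makes it a "file
exists" test):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct stat st;

	if (!lstat("/etc/passwd", &st))		/* correct: 0 means success */
		printf("found, %lld bytes\n", (long long)st.st_size);

	if (lstat("/etc/passwd", &st))		/* '!' dropped: branch inverted */
		printf("wrongly treated as missing\n");

	return 0;
}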
Fixes: 716308a5331b ("init: add an init_stat helper")
Signed-off-by: Barret Rhoden
---
Andy: this was broken in virtme. "/init: source: not found"
i
to other runtime-dependent values, such as the
maximum number of threads (/proc/sys/kernel/threads-max).
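For illustration, reading that value from user space is just a matter of
parsing the procfs file (how the patch itself consumes it is not visible
in this excerpt):

#include <stdio.h>

static long read_threads_max(void)
{
	FILE *f = fopen("/proc/sys/kernel/threads-max", "r");
	long val = -1;

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	printf("threads-max: %ld\n", read_threads_max());
	return 0;
}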
Signed-off-by: Barret Rhoden
---
 tools/lib/bpf/bpf_helpers.h |  4
 tools/lib/bpf/libbpf.c      | 40 ++---
 tools/lib/bpf/libbpf.h      |  4
 3 files changed
The comment for the function says it returns @func's return value or
-ESRCH. You could also add -ENXIO to that.
Thanks for the fix.
Reviewed-by: Barret Rhoden
+	ret = data.ret;
	if (ret != -EAGAIN)
		break;
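For context, those lines sit in a retry loop of roughly this shape (a
sketch reconstructed around the quoted hunk; the helper names and the
cond_resched() are assumptions of the sketch, not the verbatim kernel
code):

	for (;;) {
		ret = smp_call_function_single(cpu, remote_function, &data, 1);
		if (!ret)
			ret = data.ret;		/* propagate @func's return value */

		if (ret != -EAGAIN)
			break;			/* done, or a hard error such as -ESRCH/-ENXIO */

		cond_resched();			/* let the target make progress before retrying */
	}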
The following commit has been merged into the perf/core branch of tip:
Commit-ID:     2ed6edd33a214bca02bd2b45e3fc3038a059436b
Gitweb:        https://git.kernel.org/tip/2ed6edd33a214bca02bd2b45e3fc3038a059436b
Author:        Barret Rhoden
AuthorDate:    Tue, 14 Apr 2020 18:29:20 -04:00
On 4/27/20 8:54 PM, Sean Christopherson wrote:
Drop KVM's PT_{PAGE_TABLE,DIRECTORY,PDPE}_LEVEL KVM enums in favor of the
kernel's PG_LEVEL_{4K,2M,1G} enums, which have far more user friendly
names.
thanks for doing this - it fell off my radar.
all 3:
Reviewed-by: Barret Rhoden
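For readers following along, the rename maps roughly like this (a sketch
built from the names quoted above; treat the exact numbering as an
assumption of the sketch, the real definitions live in the x86/KVM
headers):

enum {
	PG_LEVEL_NONE,		/* 0 */
	PG_LEVEL_4K,		/* was PT_PAGE_TABLE_LEVEL */
	PG_LEVEL_2M,		/* was PT_DIRECTORY_LEVEL  */
	PG_LEVEL_1G,		/* was PT_PDPE_LEVEL       */
};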
On 4/27/20 8:54 PM, Sean Christopherson wrote:
Change the PSE hugepage handling in walk_addr_generic() to fire on any
page level greater than PT_PAGE_TABLE_LEVEL, a.k.a. PG_LEVEL_4K. PSE
paging only has two levels, so "== 2" and "> 1" are functionally the
seam, i.e. this is a nop.
^ s/seam/same
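Illustratively, with only two PSE paging levels the old and new forms of
the check select the same levels (walker->level is an assumption about
the surrounding code in walk_addr_generic()):

	if (walker->level == 2)			/* old form: "== 2" */
		/* huge page */;

	if (walker->level > PG_LEVEL_4K)	/* new form: "> 1", same set under PSE */
		/* huge page */;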
Hi -
On 5/13/19 7:23 AM, Prarit Bhargava wrote:
[snip]
A module is loaded once for each cpu.
Does one CPU succeed in loading the module, and the others fail with EEXIST?
My follow-up patch changes from wait_event_interruptible() to
wait_event_interruptible_timeout() so the CPUs are no longer
ry again.
This commit changes finished_loading() such that we only consider a
module 'finished' when it doesn't exist or is LIVE, which are the cases
that break from the wait-loop in add_unformed_module().
Fixes: f9a75c1d717f ("modules: Only return -EEXIST for modules that have finished loading")
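A hedged sketch of what the predicate ends up looking like (reconstructed
from the description above; the real kernel/module.c code may differ in
detail):

static bool finished_loading(const char *name)
{
	struct module *mod;
	bool ret;

	/* "Finished" now means gone or LIVE -- exactly the cases that let
	 * the waiter in add_unformed_module() break out of its wait-loop. */
	mutex_lock(&module_mutex);
	mod = find_module_all(name, strlen(name), true);
	ret = !mod || mod->state == MODULE_STATE_LIVE;
	mutex_unlock(&module_mutex);

	return ret;
}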
Hi -
On 5/2/19 1:46 PM, Prarit Bhargava wrote:
On 5/2/19 8:41 AM, Prarit Bhargava wrote:
On 5/2/19 5:48 AM, Jessica Yu wrote:
+++ Prarit Bhargava [01/05/19 17:26 -0400]:
On 4/30/19 6:22 PM, Prarit Bhargava wrote:
On a s390 z14 LAR with 2 cpus about stalls about 3% of the time while
loading t
Hi -
This patch triggered an oops for me (more below).
On 2/12/19 4:53 AM, Michal Hocko wrote:
[snip]
Fix the issue by reworking how x86 initializes the memory less nodes.
The current implementation is hacked into the workflow and it doesn't
allow any flexibility. There is init_memory_less_node
Kara
Signed-off-by: Barret Rhoden
Cc: stable@vger.kernel.org # 4.14.111
---
- Updated tags
Thanks for the review!
fs/ext4/super.c | 58 +
1 file changed, 34 insertions(+), 24 deletions(-)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index
Signed-off-by: Barret Rhoden
Cc: stable@vger.kernel.org
---
- In the current code, it looks like someone could mount with want_extra_isize
set to some value > 0 but less than the minimums in the s_es (see the sketch
after these notes). If that's a bug, I can submit a follow-on patch.
- Similarly, on a failed remount, sbi->s_want_ext
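A rough sketch of the guard being suggested (field names follow ext4's
superblock naming, but the exact comparison and error handling are
assumptions, not the patch itself):

	/* Reject a want_extra_isize mount option that is nonzero but smaller
	 * than the minimums recorded in the on-disk superblock. */
	if (sbi->s_want_extra_isize &&
	    (sbi->s_want_extra_isize < le16_to_cpu(es->s_min_extra_isize) ||
	     sbi->s_want_extra_isize < le16_to_cpu(es->s_want_extra_isize)))
		return -EINVAL;		/* or clamp to the on-disk values */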
Hi -
On 03/01/2019 04:54 PM, Christopher Lameter wrote:
On Fri, 1 Mar 2019, Barret Rhoden wrote:
I'm not familiar with VSMP - how bad is it to use L1 cache alignment instead
of 4K page alignment? Maybe some structures can use the smaller alignment?
Or maybe have VSMP require SRCU-
Hi -
On 03/01/2019 03:34 PM, Dennis Zhou wrote:
Hi Barret,
On Fri, Mar 01, 2019 at 01:30:15PM -0500, Barret Rhoden wrote:
Hi -
On 01/21/2019 06:47 AM, Eial Czerwacki wrote:
Your main issue was that you only sent this patch to LKML, but not the
maintainers of the file. If you don't,
Hi -
On 01/21/2019 06:47 AM, Eial Czerwacki wrote:
>
Your main issue was that you only sent this patch to LKML, but not the
maintainers of the file. If you don't, your patch might get lost. To
get the appropriate people and lists, run:
scripts/get_maintainer.pl YOUR_PATCH.patch.
F
/discussion thread:
https://lore.kernel.org/lkml/20181029210716.212159-1-b...@google.com/
v1 -> v2:
https://lore.kernel.org/lkml/20181109203921.178363-1-b...@google.com/
- Updated Acks/Reviewed-by
- Minor touchups
- Added patch to remove redundant PageReserved() check
- Rebased onto linux-next
Bar
On 2018-10-29 at 17:07 Barret Rhoden wrote:
> Another issue is that kvm_mmu_zap_collapsible_spte() also uses
> PageTransCompoundMap() to detect huge pages, but we don't have a way to
> get the HVA easily. Can we just aggressively zap DAX pages there?
Any thoughts about this? Is
On 2018-10-29 at 20:10 Dan Williams wrote:
> > > >  static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
> > > >                                           gfn_t *gfnp, kvm_pfn_t *pfnp,
> > > >                                           int *levelp)
> > > > @@ -3168,7 +3237,7 @@ static void tr