ot studied it.
Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
I do not see any obvious issues in the patch
Acked-by: Michal Hocko
Thank you very much!
Pavel
---
mm/page_all
ge(), the buddy page is initialized again.
So, in order to avoid this issue, we must initialize the buddy page prior
to calling deferred_free_range().
Signed-off-by: Pavel Tatashin
---
mm/page_alloc.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/mm/page_all
This problem is introduced in linux-next:
a4d28b2d6e64 mm: deferred_init_memmap improvements
If it is more appropriate to create a new patch that includes this fix in
the original patch, please let me know.
Pavel Tatashin (1):
mm: buddy page accessed before initialized
mm/page_alloc.c | 10
On 11/02/2017 09:32 AM, Michal Hocko wrote:
On Tue 31-10-17 11:50:02, Pavel Tatashin wrote:
[...]
The problem happens in this path:
page_alloc_init_late
deferred_init_memmap
deferred_init_range
__def_free
deferred_free_range
__free_pages_boot_core(page, order
On 11/02/2017 09:54 AM, Michal Hocko wrote:
On Thu 02-11-17 09:39:58, Pavel Tatashin wrote:
[...]
Hi Michal,
Previously, as in before my project? That is because memory for all struct pages
was always zeroed in memblock, and in __free_one_page() page_is_buddy()
always returned false, thus
Now that memory is not zeroed, page_is_buddy() can return true after kexec
when memory is dirty (unfortunately, memset(1) with CONFIG_DEBUG_VM does not
catch this case), and proceed to incorrectly remove the buddy from the
list.
OK, I thought this was a regression from one of the recent patc
Yes, but as I said, unfortunately memset(1) with CONFIG_DEBUG_VM does not
catch this case. So, when CONFIG_DEBUG_VM is enabled, kexec reboots without
issues.
Can we make the init pattern catch this?
Unfortunately, that is not easy: memset() gives us only one byte to play
with, and if we us
ge(), the buddy page is initialized again.
So, in order to avoid this issue, we must initialize the buddy page prior
to calling deferred_free_range().
Signed-off-by: Pavel Tatashin
---
mm/page_alloc.c | 66 +
1 file changed, 43 insertio
-deferred_init_memmap-improvements-fix.patch
and
mm-deferred_init_memmap-improvements-fix-2.patch
Again, I can send a new full version of
mm-deferred_init_memmap-improvements.patch
If that is better.
Pavel Tatashin (1):
mm: buddy page accessed before initialized
mm/page_alloc.c | 66
arly boot timestamps were available, the engineer who introduced
this bug would have noticed the extra time that is spent early in boot.
Pavel Tatashin (6):
x86/tsc: remove tsc_disabled flag
time: sync read_boot_clock64() with persistent clock
x86/time: read_boot_clock64() implementation
sch
: Pavel Tatashin
---
arch/x86/include/asm/tsc.h | 4 +++
arch/x86/kernel/setup.c | 10 --
arch/x86/kernel/time.c | 1 +
arch/x86/kernel/tsc.c | 81 ++
4 files changed, 94 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm
read_boot_clock64() returns the time when the system started. Now that the
early boot clock is available on x86, it is possible to implement an
x86-specific version of read_boot_clock64() that takes advantage of this new
interface.
Signed-off-by: Pavel Tatashin
---
arch/x86/kernel/time.c | 30
w' is
added to read_boot_clock64() parameters. An arch may decide to use it instead
of accessing the persistent clock again.
Also, change read_boot_clock64() to have an __init prototype since it is
accessed only during boot.
Signed-off-by: Pavel Tatashin
---
arch/arm/kernel/time.c | 2 +-
tsc_disabled is set when notsc is passed as a kernel parameter. The reason we
have notsc is to avoid timing problems on multiprocessor systems. However,
we already have a mechanism to detect and resolve these issues by invoking
the tsc unstable path.
Signed-off-by: Pavel Tatashin
---
arch/x86/kernel
.
Signed-off-by: Pavel Tatashin
---
arch/x86/include/asm/paravirt.h | 2 +-
arch/x86/include/asm/paravirt_types.h | 1 +
arch/x86/kernel/paravirt.c | 1 +
arch/x86/xen/time.c | 7 ---
4 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/x86
Allow sched_clock() to be used before sched_clock_init() and
sched_clock_init_late() are called. This provides us with a way to get
early boot timestamps on machines with unstable clocks.
Signed-off-by: Pavel Tatashin
---
kernel/sched/clock.c | 10 --
1 file changed, 8 insertions(+), 2
> Reviewed-by: Dou Liyang
Thank you!
Pavel
> why we should remove the *extern* keyword?
Hi Dou,
While I am not sure why it was decided to stop using externs in
headers, this is a warning printed by scripts/checkpatch.pl:
CHECK: extern prototypes should be avoided in .h files
To have a clean checkpatch output I removed externs.
Pavel
Hi Dou,
Great comments, my replies below:
>> static inline unsigned long long paravirt_sched_clock(void)
>> {
>> - return PVOP_CALL0(unsigned long long, pv_time_ops.sched_clock);
>> + return PVOP_CALL0(unsigned long long,
>> pv_time_ops.active_sched_clock);
>> }
>>
>
> Should in th
> IMO, using the extern keyword on function prototypes in *.h files
> is superfluous, but it doesn't matter for functionality. *extern*
> is the default keyword.
>
> AFAIK, it's a code style problem. In x86 arch, we prefer to
> keep *extern* explicitly, so let's keep it like before for
> code consis
that has not
yet been initialized. And it is also going to be easier to multithread
later: multi-thread the first loop, wait for it to finish, then
multi-thread the second loop and wait for it to finish.
Pasha
On Fri, Nov 3, 2017 at 5:27 AM, Michal Hocko wrote:
> On Thu 02-11-17 13:02:21, Pavel Tatashi
> Why cannot we do something similar to the optimized struct page
> initialization and write 8B at the time and fill up the size unaligned
> chunk in 1B?
I do not think this is a good idea: memset() on SPARC is slow for
small sizes; this is why we ended up using stores, but that's not the
case on x
" ?
Pasha
On Thu, Nov 2, 2017 at 9:58 PM, Dou Liyang wrote:
> Hi Pavel,
>
>
> At 11/03/2017 01:26 AM, Pavel Tatashin wrote:
>>
>> tsc_disabled is set when notsc is passed as kernel parameter. The reason
>> we
>> have notsc is to avoid timing problems on
Hi Dou,
Thank you for testing it! I will rebase this series of the 'tip' tree
for the next iteration.
Thank you,
Pasha
1. Replace these two patches:
arm64/kasan: add and use kasan_map_populate()
x86/kasan: add and use kasan_map_populate()
With:
x86/mm/kasan: don't use vmemmap_populate() to initialize
shadow
arm64/mm/kasan: don't use vmemmap_populate() to initialize
shadow
Pavel, could you please send the
t use vmemmap_populate() to initialize shadow
arm64/mm/kasan: don't use vmemmap_populate() to initialize shadow
Pavel Tatashin (2):
x86/mm/kasan: don't use vmemmap_populate() to initialize shadow
arm64/mm/kasan: don't use vmemmap_populate() to initialize shadow
arch/arm64/Kconfig
and use it instead of
vmemmap_populate(). Besides, this allows us to take advantage of gigantic
pages and use them to populate the shadow, which should save us some memory
wasted on page tables and reduce TLB pressure.
Signed-off-by: Andrey Ryabinin
Signed-off-by: Pavel Tatashin
---
arch/x86
and use it instead of
vmemmap_populate(). Besides, this allows us to take advantage of gigantic
pages and use them to populate the shadow, which should save us some memory
wasted on page tables and reduce TLB pressure.
Signed-off-by: Will Deacon
Signed-off-by: Pavel Tatashin
---
arch/arm64
kasan_populate_shadow() interface and use it instead of
vmemmap_populate(). Besides, this allows us to take advantage of gigantic
pages and use them to populate the shadow, which should save us some memory
wasted on page tables and reduce TLB pressure.
Signed-off-by: Will Deacon
Signed-off-by: Pavel Tatashin
Corrected "From" fields in these two patches to preserve the original
authorship.
Andrey Ryabinin (1):
x86/mm/kasan: don't use vmemmap_populate() to initialize shadow
Will Deacon (1):
arm64/mm/kasan: don't use vmemmap_populate() to initialize shadow
arch/arm64/Kconfig | 2 +-
arc
kasan_populate_shadow() interface and use it instead of
vmemmap_populate(). Besides, this allows us to take advantage of gigantic
pages and use them to populate the shadow, which should save us some memory
wasted on page tables and reduce TLB pressure.
Signed-off-by: Andrey Ryabinin
Signed-off-by: Pavel
On Wed, Oct 18, 2017 at 6:01 AM, Dou Liyang wrote:
> Hi Pasha,
>
> Sorry to reply you so late.
>
> I have tested the TSC sync on our machine with DR (Dynamic Reconfiguration)
> Linux kernel: Linux-4.14.0-rc5
> NUMA nodes: 4 node.
> Use clock_gettime() to reach nano-second accuracy.
>
> It is OK
Hi Andrey,
I asked Will about it, and he preferred to have this patch added to
the end of my series instead of replacing "arm64/kasan: add and use
kasan_map_populate()".
In addition, Will's patch stops using large pages for kasan memory, and
thus might add some regression, in which case it
As I said, I'm fine either way, I just didn't want to cause extra work
or rebasing:
http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html
Makes sense. I am also fine either way, I can submit a new patch merging
together the two if needed.
Pavel
Thank you Andrey, I will test this patch. Should it go on top of, or
replace, the existing patch in the mm-tree? ARM and x86 should be handled
the same way: either both as follow-ups or both as replacements.
Pavel
inin wrote:
On 10/18/2017 08:08 PM, Pavel Tatashin wrote:
As I said, I'm fine either way, I just didn't want to cause extra work
or rebasing:
http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html
Makes sense. I am also fine either way, I can submit a new patch m
This looks good to me, thank you Andrew.
Pavel
return 0;
Fixes: c25323c07345 ("x86/tsc: Use topology functions")
Signed-off-by: Pavel Tatashin
---
arch/x86/kernel/smpboot.c | 13 -
arch/x86/kernel/tsc.c | 6 ++
2 files changed, 10 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kernel/smpboot.c b
uced:
[0.002233] Calibrating delay loop (skipped), value calculated using timer
frequency.. 6384.27 BogoMIPS (lpj=3192137)
[0.002516] pid_max: default: 32768 minimum: 301
Signed-off-by: Pavel Tatashin
---
arch/x86/kernel/tsc.c | 15 +--
1 file changed, 9 insertions(+), 6 delet
value.
This is why there is no reason to keep notsc, and it can be removed.
However, for compatibility reasons we will keep this parameter and change
its definition to be the same as tsc=unstable.
Signed-off-by: Pavel Tatashin
Reviewed-by: Dou Liyang
---
.../admin-guide/kernel-parameters.txt
deferred
page initialization.
Example 2:
https://patchwork.kernel.org/patch/10021247/
- If early boot timestamps were available, the engineer who introduced
this bug would have noticed the extra time that is spent early in boot.
Pavel Tatashin (7):
x86/tsc: remove tsc_disabled flag
time: sync read_
availability of TSC.
- remove dependency on memblock, and reduce code
- earlier kvm sched_clock()
Signed-off-by: Pavel Tatashin
---
arch/x86/kernel/kvm.c | 1 +
arch/x86/kernel/kvmclock.c | 64 ++
arch/x86/kernel/setup.c | 7 ++---
3 files changed, 12
better and more consistent estimate of the boot
time without the need for an arch-specific implementation.
Signed-off-by: Pavel Tatashin
---
arch/arm/kernel/time.c | 12 +---
arch/s390/kernel/time.c | 11 +--
include/linux/timekeeping.h | 3 +-
kernel/time/timekeeping.c | 61
hot path, we want to make sure that no
regressions are introduced to this function after the machine is booted;
this is why we are using a static branch that is enabled by default but
disabled once we have initialized a permanent clock source.
Signed-off-by: Pavel Tatashin
---
arch/x86/kernel/tsc.c
Allow sched_clock() to be used before sched_clock_init() and
sched_clock_init_late() are called. This provides us with a way to get
early boot timestamps on machines with unstable clocks.
Signed-off-by: Pavel Tatashin
---
kernel/sched/clock.c | 10 --
1 file changed, 8 insertions(+), 2
Hi Peter,
> That said; flipping static keys early isn't hard. We should call
> jump_label_init() early, because we want the entries sorted and the
> key->entries link set. It will also replace the GENERIC_NOP5_ATOMIC
> thing, which means we need to also do arch_init_ideal_nop() early, but
> since
oto error;
To:
ret = __try_online_node (nid, start, false);
if (ret < 0)
goto error;
new_node = ret;
Other than that, the patch looks good to me; it simplifies the code.
So, if the above is addressed:
Reviewed-by: Pavel Tatashin
Thank you,
Pavel
>
/* we online node here. we can't roll back from here. */
And replace all:
> + if (ret)
> + goto register_fail;
With:
BUG_ON(ret);
With the above addressed:
Reviewed-by: Pavel Tatashin
On Fri, Jun 1, 2018 at 8:54 AM wrote:
>
> From: Oscar Salvador
>
> link_mem_sections() and walk_memory_range() share most of the code,
> so we can use walk_memory_range() with a callback to
> register_mem_sect_under_node()
> instead of using link_mem_sections().
Yes, their logic is indeed ident
in case
> the node is online, so we can safely remove that check as well.
>
> Signed-off-by: Oscar Salvador
Reviewed-by: Pavel Tatashin
> ---
> drivers/base/node.c | 5 -
> 1 file changed, 5 deletions(-)
>
> diff --git a/drivers/base/node.c b/drivers/base/node.c
>
> Do we still need add a static_key? after Peter worked out the patch
> to enable ealy jump_label_init?
Hi Feng,
With Pete's patch we will still need at least one static branch, but
as I replied to Pete's email I like the idea of initializing
jump_label_init() early, but in my opinion it should b
> Bah, no, we don't make a mess first and then maybe clean it up.
OK, I will add this path to the series.
>
> Have a look at the below. The patch is a mess, but I have two sick kids
> on hands
Sorry to hear that, I hope your kids will get better soon.
> , please clean up / split where appropria
> Please don't make that a wholesale patch. I surely indicated the steps
> which are required and the steps can be done as separate patches easily,
Hi Thomas,
I will split it into several patches in the next version.
Thank you,
Pavel
read_persistent_clock64()
Signed-off-by: Pavel Tatashin
---
arch/s390/kernel/time.c | 18 ++
1 file changed, 18 insertions(+)
diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
index cf561160ea88..d1f5447d5687 100644
--- a/arch/s390/kernel/time.c
+++ b/arch/s390/kernel/time.c
@@ -221,6
hot path, we want to make sure that no
regressions are introduced to this function; with the current approach
the sched_clock() path is not modified at all.
Signed-off-by: Pavel Tatashin
---
arch/x86/kernel/tsc.c | 40 ++--
1 file changed, 26 insertions(+), 14
Allow sched_clock() to be used before sched_clock_init() and
sched_clock_init_late() are called. This provides us with a way to get
early boot timestamps on machines with unstable clocks.
Signed-off-by: Pavel Tatashin
---
kernel/sched/clock.c | 10 --
1 file changed, 8 insertions(+), 2
availability of TSC.
- remove dependency on memblock, and reduce code
- earlier kvm sched_clock()
Signed-off-by: Pavel Tatashin
---
arch/x86/kernel/kvm.c | 1 +
arch/x86/kernel/kvmclock.c | 64 ++
arch/x86/kernel/setup.c | 7 ++---
3 files changed, 12
ialized during handover from memblock.
Signed-off-by: Pavel Tatashin
---
arch/x86/include/asm/text-patching.h | 1 +
arch/x86/kernel/alternative.c | 10 +-
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/text-patching.h
b/arch/x86/include/asm/t
read_boot_clock64() is deleted, and replaced with
read_persistent_wall_and_boot_offset().
The default implementation of read_persistent_wall_and_boot_offset()
provides a better fallback than the current stubs for read_boot_clock64()
that arm has, so remove the old code.
Signed-off-by: Pavel
read_boot_clock64() was replaced by read_persistent_wall_and_boot_offset()
so remove it.
Signed-off-by: Pavel Tatashin
---
arch/s390/kernel/time.c | 13 -
1 file changed, 13 deletions(-)
diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
index d1f5447d5687..e8766beee5ad
functionality early as well.
Static branching requires patching nop instructions; thus, we need
arch_init_ideal_nops() to be called prior to jump_label_init().
Here we do all the necessary steps to call arch_init_ideal_nops()
after early_cpu_init().
Signed-off-by: Pavel Tatashin
Suggested-by: Peter
value.
This is why there is no reason to keep notsc, and it can be removed.
However, for compatibility reasons we will keep this parameter and change
its definition to be the same as tsc=unstable.
Signed-off-by: Pavel Tatashin
Reviewed-by: Dou Liyang
---
.../admin-guide/kernel-parameters.txt
uced:
[0.002233] Calibrating delay loop (skipped), value calculated using timer
frequency.. 6384.27 BogoMIPS (lpj=3192137)
[0.002516] pid_max: default: 32768 minimum: 301
Signed-off-by: Pavel Tatashin
---
arch/x86/kernel/tsc.c | 15 +--
1 file changed, 9 insertions(+), 6 delet
better and more consistent estimate of the boot
time without the need for an arch-specific implementation.
Signed-off-by: Pavel Tatashin
---
include/linux/timekeeping.h | 3 +-
kernel/time/timekeeping.c | 61 +++--
2 files changed, 34 insertions(+), 30 deletions
work.kernel.org/patch/10021247/
- If early boot timestamps were available, the engineer who introduced
this bug would have noticed the extra time that is spent early in boot.
Pavel Tatashin (7):
x86/tsc: remove tsc_disabled flag
time: sync read_boot_clock64() with persistent clock
Hi Borislav,
>
> Reviewed-by: Borislav Petkov
Thank you.
>
> Also, please take the patch below into your queue and keep it a separate
> patch in case we have to revert it later. It should help in keeping the
> mess manageable and not let it go completely out of control before we've
> done the c
your suggestions,
and will no longer be impersonating my commit comments.
>
>> Signed-off-by: Pavel Tatashin
>> ---
>> arch/x86/kernel/tsc.c | 15 +--
>> 1 file changed, 9 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/kernel/tsc.c b/ar
> So you forgot to answer this question. I did not find a system yet, which
> actually exposes this behaviour on mainline.
>
> Is this an artifact of your early sched clock thing?
>
Yes, it is. Let me explain how it happens.
So, the problem is introduced in patch "sched: early boot clock" by this
On 06/23/2018 12:56 PM, Thomas Gleixner wrote:
> On Thu, 21 Jun 2018, Pavel Tatashin wrote:
>> /*
>> * Scheduler clock - returns current time in nanosec units.
>> */
>> @@ -1354,6 +1364,7 @@ void __init tsc_early_delay_calibrate(void)
>> lpj = tsc_k
Hi Thomas,
Thank you for your feedback. My comments below:
> > As soon as sched_clock() starts outputting non-zero values, we start
> > outputting time without correcting it as is done in
> > sched_clock_local(), where unstable TSC and backward motion are
> > detected. But, since early in boot
Hi Peter,
> It _should_ all work.. but scary, who knows where this early stuff ends
> up being used.
I have tested this patch, and the following patch which moves
jump label init early, and it works as Thomas describes:
on_each_cpu() ends up calling only the current CPU.
Also, you mentioned:
On Mon, Jun 25, 2018 at 4:56 AM Peter Zijlstra wrote:
>
> On Thu, Jun 21, 2018 at 05:25:17PM -0400, Pavel Tatashin wrote:
> > Allow sched_clock() to be used before sched_clock_init() and
> > sched_clock_init_late() are called. This provides us with a way to get
> >
On Mon, Jun 25, 2018 at 3:09 AM Martin Schwidefsky
wrote:
>
> From a s390 standpoint this looks reasonable.
>
> Reviewed-by: Martin Schwidefsky
>
Thank you Martin!
Pavel
> Also, I think the better condition is @early_boot_irqs_disabled, until
> we enable IRQs for the first time, text_poke_early() should be fine. And
> once we enable interrupts, all that other crud should really be working.
Sure, I will use the early_boot_irqs_disabled flag. I think we still want
to h
Hi Oscar,
Below is the updated patch, with a comment about OpenGrok and Michal's Acked-by added.
Thank you,
Pavel
>From cca1b083d78d0ff99cce6dfaf12f6380d76390c7 Mon Sep 17 00:00:00 2001
From: Pavel Tatashin
Date: Thu, 26 Jul 2018 00:01:41 +0200
Subject: [PATCH] mm: access zone->node via zone_
r
> Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
> OK, this looks definitely better. I will have to check that all the
> required state is initialized properly. Considering the above
> explanation I would simply fold the follow up patch into this one. It is
> not so large it would get hard to review and you would make it clear why
> the work is d
On Wed, Jul 25, 2018 at 5:30 PM Andrew Morton wrote:
>
> On Tue, 24 Jul 2018 21:46:25 -0400 Pavel Tatashin
> wrote:
>
> > > > +static inline bool defer_init(int nid, unsigned long pfn, unsigned
> > > > long end_pfn)
> > > > {
> > >
> > OpenGrok was used to find places where zone->node is accessed. A public one
> > is available here: http://src.illumos.org/source/
>
> I assume that tool uses some pattern matching or similar so steps to use
> the tool to get your results would be more helpful. This is basically
> the same thing
On Thu, Jul 26, 2018 at 1:52 PM Michal Hocko wrote:
>
> On Thu 26-07-18 13:18:46, Pavel Tatashin wrote:
> > > > OpenGrok was used to find places where zone->node is accessed. A public
> > > > one
> > > > is available here: http://src.illumos.org/sou
separate function.
Signed-off-by: Pavel Tatashin
Reviewed-by: Oscar Salvador
---
mm/page_alloc.c | 74 +++--
1 file changed, 34 insertions(+), 40 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4946c73e549b..02e4b84038f8 100644
--- a/mm
() and also fix a small
deferred pages bug.
The improvements include reducing the number of ifdefs and making the code
more modular.
The bug is that deferred_init_update() should be called after the mirrored
memory skipping is taken into account.
Pavel Tatashin (3):
mm: make memmap_init a proper function
.
Signed-off-by: Pavel Tatashin
---
mm/page_alloc.c | 45 +
1 file changed, 25 insertions(+), 20 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6796dacd46ac..4946c73e549b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -306,24
memmap_init is sometimes a macro and sometimes a function, depending on
__HAVE_ARCH_MEMMAP_INIT. It is only a function on ia64. Make
memmap_init a weak function instead, and let ia64 redefine it.
Signed-off-by: Pavel Tatashin
Reviewed-by: Andrew Morton
Reviewed-by: Oscar Salvador
---
arch/ia64/include
unsigned long *nr_initialised)
> > +static bool __meminit
> > +defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
>
> Hi Pavel,
>
> maybe I do not understand properly the __init/__meminit macros, but should not
> "defer_init" be __init instead of __meminit?
> I t
his:
for (i = 0; i < MAX_NR_ZONES; i++)
zone_init_internals(&pgdat->node_zones[i], i, nid, 0);
Other than this all good:
Reviewed-by: Pavel Tatashin
Thank you,
Pavel
Hi David,
On Fri, Jul 27, 2018 at 12:55 PM David Hildenbrand wrote:
>
> Right now, struct pages are inititalized when memory is onlined, not
> when it is added (since commit d0dc12e86b31 ("mm/memory_hotplug: optimize
> memory hotplug")).
>
> remove_memory() will call arch_remove_memory(). Here, w
On Mon, Jul 30, 2018 at 8:11 AM David Hildenbrand wrote:
>
> On 30.07.2018 14:05, Michal Hocko wrote:
> > On Mon 30-07-18 13:53:06, David Hildenbrand wrote:
> >> On 30.07.2018 13:30, Michal Hocko wrote:
> >>> On Fri 27-07-18 18:54:54, David Hildenbrand wrote:
> Right now, struct pages are ini
> > - if (cd.actual_read_sched_clock == jiffy_sched_clock_read)
> > + if (cd.actual_read_sched_clock == jiffy_sched_clock_read) {
> > + local_irq_disable();
> > sched_clock_register(jiffy_sched_clock_read, BITS_PER_LONG,
> > HZ);
> > + local_
-[ end trace 08080eb81afa002c ]---
Disable IRQs for the duration of generic_sched_clock_init().
Fixes: 857baa87b642 ("sched/clock: Enable sched clock early")
Signed-off-by: Pavel Tatashin
Reported-by: Guenter Roeck
---
kernel/sched/clock.c | 2 ++
1 file changed, 2 insertions(+)
dif
ion, only call it once.
> Also fix comments:
> -s/authorative/authoritative
> -s/cyc2ns_init/tsc_init
>
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: Peter Zijlstra
> Cc: Pavel Tatashin
> Signed-off-by: Dou Liyang
Hi Dou,
ly being
> used by is_dev_zone().
>
> This patch removes zone_id() and makes is_dev_zone() use zone_idx()
> to check the zone, so we do not have two things with the same
> functionality around.
>
> Signed-off-by: Oscar Salvador
Thank you:
Reviewed-by: Pavel Tatashin
On Mon, Jul 30, 2018 at 3:55 AM Dou Liyang wrote:
>
> kvm_get_preset_lpj() is just called at kvmclock_init(), so mark it
> __init as well.
Reviewed-by: Pavel Tatashin
Thank you,
Pavel
>
> So i guess we agree that the right fix for this is to not touch struct
> pages when removing memory, correct?
Yes in my opinion that would be the correct fix.
Thank you,
Pavel
>
> --
>
> Thanks,
>
> David / dhildenb
>
> + zone_set_nid(nid);
>
> This should be:
>
> zone_set_nid(zone, nid);
>
> I fixed it up in your patch, I hope that is ok.
Yes, thank you. I fixed this when I compile-tested this patch, but must
have forgotten to regenerate the patch before sending it.
Thank you,
Pavel
>
> Thanks
> -
On 07/19/2018 09:40 AM, Michal Hocko wrote:
> On Thu 19-07-18 15:27:37, osalva...@techadventures.net wrote:
>> From: Pavel Tatashin
>>
>> zone->node is configured only when CONFIG_NUMA=y, so it is a good idea to
>> have inline functions to access this field in
On Thu, Jul 19, 2018 at 6:40 AM Peter Zijlstra wrote:
>
> On Tue, Jul 17, 2018 at 10:22:10PM -0400, Pavel Tatashin wrote:
>
> > diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
> > index 0e9dbb2d9aea..7a8a63b940ee 100644
> > --- a/kernel/sched/clock.c
>
On Thu, Jul 19, 2018 at 6:49 AM Peter Zijlstra wrote:
>
> On Tue, Jul 17, 2018 at 10:22:11PM -0400, Pavel Tatashin wrote:
> > sched_clock_running may be read every time sched_clock_cpu() is called.
> > Yet, this variable is updated only twice during boot, and never changes
>
On Thu, Jul 19, 2018 at 10:03 AM Michal Hocko wrote:
>
> On Thu 19-07-18 15:58:59, Oscar Salvador wrote:
> > On Thu, Jul 19, 2018 at 03:46:22PM +0200, Michal Hocko wrote:
> > > On Thu 19-07-18 15:27:40, osalva...@techadventures.net wrote:
> > > > From: Oscar Salvador
> > > >
> > > > We should onl
On 07/19/2018 07:01 AM, Thomas Gleixner wrote:
> On Thu, 19 Jul 2018, Peter Zijlstra wrote:
>> On Tue, Jul 17, 2018 at 10:22:06PM -0400, Pavel Tatashin wrote:
>>> During boot tsc is calibrated twice: once in tsc_early_delay_calibrate(),
>>> and the second time in