On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> This only does 64k linux page support for now. 64k hash linux config
> THP need to differentiate it from hugetlb huge page because with THP
> we need to track hash pte slot information with respect to each subpage.
> This is not needed with hugetlb huge pages.
On 21/04/16 19:59, Michael Ellerman wrote:
> On Thu, 2016-04-21 at 19:53 +1000, Balbir Singh wrote:
>> On 09/04/16 16:13, Aneesh Kumar K.V wrote:
>>> +static inline int pgd_huge(pgd_t pgd)
>>> +{
>>> + /*
>>> +* leaf pte for huge page
>>> + */
R_PUD_INDEX_SIZE;
> + __pgd_index_size = R_PGD_INDEX_SIZE;
> + __pmd_cache_index = R_PMD_INDEX_SIZE;
> + __pte_table_size = R_PTE_TABLE_SIZE;
> + __pmd_table_size = R_PMD_TABLE_SIZE;
> + __pud_table_size = R_PUD_TABLE_SIZE;
> + __pgd_table_size = R_PGD_TABLE_SIZE;
On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/include/asm/book3s/64/hash.h| 8
> arch/powerpc/include/asm/book3s/64/pgtable.h | 20
> arch/powerpc/include/asm/nohash/64/pgtable.h | 7 +++
> arch/powerpc/
On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/include/asm/book3s/64/pgtable.h | 12
> arch/powerpc/include/asm/book3s/64/radix.h | 6 ++
> arch/powerpc/mm/pgtable-radix.c | 20
> 3 files ch
On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/include/asm/mmu_context.h | 25 +
> arch/powerpc/kernel/swsusp.c | 2 +-
> arch/powerpc/mm/mmu_context_nohash.c | 3 ++-
> drivers/cpufreq/pmac32-cpufreq.c
On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/include/asm/mmu_context.h | 4
> arch/powerpc/mm/mmu_context_hash64.c | 42 +++---
> 2 files changed, 38 insertions(+), 8 deletions(-)
>
> diff --git a/arch/
On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> Core kernel doesn't track the page size of the va range that we are
> invalidating. Hence we end up flushing tlb for the entire mm here.
> Later patches will improve this.
>
> We also don't flush page walk cache separately, instead use RIC=2 when
> flushing
On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> We are going to add asm changes in the follow up patches. Add the
> feature bit now so that we can get it all to build. We will enable radix
> in the last patch using cpu table.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
Looks good!
On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> We also use MMU_FTR_RADIX to branch out from code path specific to
> hash.
>
> No functionality change.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/kernel/entry_64.S | 7 +--
> arch/powerpc/kernel/exceptions-64s.S | 28 +
> Signed-off-by: Aneesh Kumar K.V
> ---
Why is this specific to radix?
Balbir Singh.
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
On 23/04/16 18:30, Benjamin Herrenschmidt wrote:
> On Thu, 2016-04-21 at 14:12 +1000, Balbir Singh wrote:
>>> + } while (cpu_to_be64(old_pte) != __cmpxchg_u64((unsigned long *)ptep,
>>> +
On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> pgtable_page_dtor for nohash is now moved to pte_fragment_free_mm()
>
> Signed-off-by: Aneesh Kumar K.V
This needs a better changelog
> ---
> arch/powerpc/include/asm/book3s/64/pgalloc.h | 147 +++
> arch/powerpc/include
On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/include/asm/book3s/64/pgalloc.h | 34
>
> arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ++--
> arch/powerpc/mm/hash_utils_64.c | 7 ++
> arc
> if (pte_looks_normal(pte) && !(cpu_has_feature(CPU_FTR_COHERENT_ICACHE) ||
> cpu_has_feature(CPU_FTR_NOEXECUTE))) {
>
Acked-by: Balbir Singh
On 09/04/16 16:13, Aneesh Kumar K.V wrote:
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/include/asm/book3s/64/hash.h| 14 +++---
> arch/powerpc/include/asm/book3s/64/pgtable.h | 15 ---
> arch/powerpc/include/asm/book3s/64/radix.h | 21 +
>
renames some things.
>
>
> because the value to which it is getting initialized is no more a
> constant.
>
Isn't it still a constant depending on the type of page table? Or is it a
runtime value depending on the chosen page table type?
Balbir Singh
h either mm->page_table_lock held or ptl lock held
>*/
> unsigned long access = 0, trap;
> + if (radix_enabled())
> + return;
>
> /* We only want HPTEs for linux PTEs that have _PAGE_ACCESSED set */
>
On 26/04/16 14:58, Daniel Axtens wrote:
> Sparse doesn't seem to be passing -maltivec around properly, leading
> to lots of errors:
>
> .../include/altivec.h:34:2: error: Use the "-maltivec" flag to enable PowerPC
> AltiVec support
> arch/powerpc/lib/xor_vmx.c:27:16: error: Expected ; at end of
> /*
> * System calls.
> @@ -508,6 +509,14 @@ BEGIN_FTR_SECTION
> ldarx r6,0,r1
> END_FTR_SECTION_IFSET(CPU_FTR_STCX_CHECKS_ADDRESS)
>
> +BEGIN_FTR_SECTION
> +/*
> + * A cp_abort (copy paste abort) here ensures that when context switching, a
> + * copy from one process can't leak into the
Hi Petr
Very very nice documentation, some comments inline
Reviewed-by: Balbir Singh
Balbir
On 26/04/16 01:14, Petr Mladek wrote:
> LivePatch framework deserves some documentation, definitely.
> This is an attempt to provide some basic info. I hope that
> it will be useful for both
e where it's not clear which one
> should be used.
>
Makes sense, but I suspect it's a larger impact with loads of testing required
across platforms. Should this be done incrementally?
Balbir Singh
r_vmx.c:60:9: error: got v1_in
> ...
> arch/powerpc/lib/xor_vmx.c:87:9: error: too many errors
>
> Only include the altivec.h header for non-__CHECKER__ builds.
> For builds with __CHECKER__, make up some stubs instead, as
> suggested by Balbir. (The vector size of 16 is ar
On 27/04/16 09:05, Benjamin Herrenschmidt wrote:
> On Wed, 2016-04-27 at 08:16 +1000, Balbir Singh wrote:
>>
>> On 27/04/16 07:05, Benjamin Herrenschmidt wrote:
>>>
>>> On Tue, 2016-04-26 at 21:54 +0530, Aneesh Kumar K.V wrote:
>>>>
>>>>
that branch.
>
> (*) Balbir, some of your comments were a bit too vague; if you could turn
> them into an actual patch on top of what's currently in git, that'd be
> helpful
>
Hi, Jiri
Thanks! I'll try to do that -- I'll add it on my TODO list.
Balbir Singh
On 28/04/16 16:17, Suraj Jitindar Singh wrote:
> When unregistering a crash_shutdown_handle in the function
> crash_shutdown_unregister() the other handles are shifted down in the
> array to replace the unregistered handle. The for loop assumes that the
> last element in the array is null and use
>
> Thanks for taking a look Balbir, the size of crash_shutdown_handles is
> actually CRASH_HANDLER_MAX+1.
>
Acked-by: Balbir Singh
in asm/types.h and define PPC_ELF_ABI_v2 when ELF ABI
> v2 is detected.
>
> We don't add explicit includes of asm/types.h because it's included
> basically everywhere via compiler.h.
>
> Signed-off-by: Michael Ellerman
> ---
Makes sense
Acked-by: Balbir Singh
On Tue, 12 Jan 2016 12:45:36 +0530
"Aneesh Kumar K.V" wrote:
> Not really needed. But this brings it back to as it was before
>
Could you expand on "not really needed"? Could the changelog describe how
the bits will be used in the follow-on patches?
Balbir
On Tue, 12 Jan 2016 12:45:38 +0530
"Aneesh Kumar K.V" wrote:
> This is needed so that we can support both hash and radix page table
> using single kernel. Radix kernel uses a 4 level table.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/Kconfig | 1 +
> arch
On 12/01/16 18:15, Aneesh Kumar K.V wrote:
> This is needed so that we can support both hash and radix page table
> using single kernel. Radix kernel uses a 4 level table.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/include/as
On Fri, 22 Jan 2016 12:49:05 +0530
Shilpasri G Bhat wrote:
> Create sysfs attributes to export throttle information in
> /sys/devices/system/cpu/cpufreq/chipN. The newly added sysfs files are as
> follows:
>
> 1) /sys/devices/system/cpu/cpufreq/chip0/throttle_frequencies
> This gives the thrott
On Fri, 22 Jan 2016 12:49:02 +0530
Shilpasri G Bhat wrote:
> cpu_to_chip_id() does a DT walk through to find out the chip id by
> taking a contended device tree lock. This adds an unnecessary overhead
> in a hot path. So instead of calling cpu_to_chip_id() everytime cache
> the chip ids for all c
e is enabled by default for all architectures except
> IA-64, whose symbols are too far apart to capture in this manner.
snip
I still don't get the 2GB limitation. Because of the 32-bit address,
does it imply that modules load within -2GB to +2GB of the kernel base
address of the kallsyms a
On Thu, 21 Jan 2016 11:55:44 +1100
Cyril Bur wrote:
> Currently when threads get scheduled off they always giveup the FPU,
> Altivec (VMX) and Vector (VSX) units if they were using them. When they are
> scheduled back on a fault is then taken to enable each facility and load
> registers. As a res
As an iterative step we should
give the numbers some meaning and use proper helpers for it.
I am going to give the patches a spin
Balbir Singh.
On Wed, Jan 27, 2016 at 10:50 AM, Cyril Bur wrote:
> On Mon, 25 Jan 2016 11:04:23 +1100
> Balbir Singh wrote:
>
>> On Thu, 21 Jan 2016 11:55:44 +1100
>> Cyril Bur wrote:
>>
>> > Currently when threads get scheduled off they always giveup the FPU,
>>
On Wed, 27 Jan 2016 13:19:04 +0100
Torsten Duwe wrote:
> Thanks! Make sure you use a compiler that can disable -mprofile-kernel with
> "notrace".
gcc-6? I have gcc-5.2.1
Balbir Singh.
s DT every time
> to find the chip id.
> - Patches [4] to [6] will add a perf trace point
> "power:powernv_throttle" and sysfs throttle counter stats in
> /sys/devices/system/cpu/cpufreq/chipN.
>
Looks good to me. You've got the reviews and acks you need.
Balbir Singh
From: Balbir Singh
I spent some time trying to use kgdb and debugged my inability to
resume from kgdb_handle_breakpoint(). NIP is not incremented
and that leads to a loop in the debugger.
I've tested this lightly on a virtual instance with KDB enabled.
After the patch, I am able to get th
On Mon, 1 Feb 2016 21:39:00 +1100
Andrew Donnellan wrote:
> On 01/02/16 17:03, Balbir Singh wrote:
> > From: Balbir Singh
> >
> > I spent some time trying to use kgdb and debugged my inability to
> > resume from kgdb_handle_breakpoint(). NIP is not incremented
>
On Thu, Feb 4, 2016 at 10:02 PM, Petr Mladek wrote:
> On Thu 2016-02-04 18:31:40, AKASHI Takahiro wrote:
>> Jiri, Torsten
>>
>> Thank you for your explanation.
>>
>> On 02/03/2016 08:24 PM, Torsten Duwe wrote:
>> >On Wed, Feb 03, 2016 at 09:55:11AM +0100, Jiri Kosina wrote:
>> >>On Wed, 3 Feb 2016
On Tue, 2016-02-09 at 21:11 +1100, Michael Ellerman wrote:
> On Mon, 2016-01-02 at 06:03:25 UTC, Balbir Singh wrote:
> > From: Balbir Singh
> >
> > I spent some time trying to use kgdb and debugged my inability to
> > resume from kgdb_handle_breakpoint(). NIP is n
e/Kconfig| 5 +
> scripts/recordmcount.c | 6 +-
> scripts/recordmcount.h | 17 ++-
> 19 files changed, 552 insertions(+), 47 deletions(-)
> create mode 100755 arch/powerpc/gcc-mprofile-kernel-notrace.sh
> create m
On Wed, 2016-02-10 at 17:25 +0100, Torsten Duwe wrote:
snip
> diff --git a/arch/powerpc/gcc-mprofile-kernel-notrace.sh
> b/arch/powerpc/gcc-mprofile-kernel-notrace.sh
> new file mode 100755
> index 000..68d6482
> --- /dev/null
> +++ b/arch/powerpc/gcc-mprofile-kernel-notrace.sh
> @@ -0,0 +1,
On Thu, 2016-02-11 at 09:42 +0100, Torsten Duwe wrote:
> On Thu, Feb 11, 2016 at 06:48:17PM +1100, Balbir Singh wrote:
> > On Wed, 2016-02-10 at 17:25 +0100, Torsten Duwe wrote:
> > > +
> > > +echo "int func() { return 0; }" | \
> > > +$* -S -x
On Thu, 2016-02-11 at 14:09 +0530, Kamalesh Babulal wrote:
> * Balbir Singh [2016-02-11 18:48:17]:
>
> > On Wed, 2016-02-10 at 17:25 +0100, Torsten Duwe wrote:
> >
> > snip
> >
> > > diff --git a/arch/powerpc/gcc-mprofile-kernel-notrace.sh
On Thu, 2016-01-28 at 16:32 +0100, Torsten Duwe wrote:
> From: Petr Mladek
>
> Livepatch works on x86_64 and s390 only when the ftrace call
> is at the very beginning of the function. But PPC is different.
> We need to handle TOC and save LR there before calling the
> global ftrace handler.
>
>
On Fri, 2016-02-12 at 17:45 +0100, Petr Mladek wrote:
> On Sat 2016-02-13 03:13:29, Balbir Singh wrote:
> > On Thu, 2016-01-28 at 16:32 +0100, Torsten Duwe wrote:
> > > From: Petr Mladek
> > >
> > > Livepatch works on x86_64 and s390 only when the ftrace ca
d_t *pmdp)
> {
> - pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, 0);
> + pmd_hugepage_update(vma->vm_mm, address, pmdp, ~0UL, 0);
> + /*
> + * This ensures that generic code that rely on IRQ disabling
> + * to prevent a par
kick_all_cpus_sync();
pmdp_invalidate()->pmd_hugepage_update() can still run in parallel with
find_linux_pte_or_hugepte() and race. Am I missing something?
Balbir Singh
The only limitation today is figuring out the correct offset to patch
(8 or 16), depending on whether the TOC stub is generated by the
compiler or not.
If the sequence is well known, we could potentially scan instructions
or go to the hash that ftrace maintains and search in there with a
On Mon, 2016-02-15 at 16:31 +0530, Aneesh Kumar K.V wrote:
> Balbir Singh writes:
>
> > > Now we can't depend for mm_cpumask, a parallel find_linux_pte_hugepte
> > > can happen outside that. Now i had a variant for kick_all_cpus_sync that
> > > ignor
> +#define MAX_PTRS_PER_PMD ((H_PTRS_PER_PMD > R_PTRS_PER_PMD) ? \
> + H_PTRS_PER_PMD : R_PTRS_PER_PMD)
> +#define MAX_PTRS_PER_PUD ((H_PTRS_PER_PUD > R_PTRS_PER_PUD) ? \
> + H_PTRS_PER_PUD : R_PTRS_PER_PUD)
> +
How about reusing max()?
#define MAX_PTRS_PER_PTE max(H_PTRS_PER_PTE, R_PTRS_PER_PTE)
#define MAX_PTRS_PER_PMD max(H_PTRS_PER_PMD, R_PTRS_PER_PMD)
#define MAX_PTRS_PER_PUD max(H_PTRS_PER_PUD, R_PTRS_PER_PUD)
Balbir Singh.
On 10/12/19 3:47 pm, Daniel Axtens wrote:
> This helps with powerpc support, and should have no effect on
> anything else.
>
> Suggested-by: Christophe Leroy
> Signed-off-by: Daniel Axtens
If you follow the recommendations by Christophe and me, you don't need this
On 10/12/19 3:47 pm, Daniel Axtens wrote:
> KASAN support on powerpc64 is challenging:
>
> - We want to be able to support inline instrumentation so as to be
>able to catch global and stack issues.
>
> - We run some code in real mode after boot, most notably a lot of
>KVM code. We'd
write the locks need to be held? For example can the device_hotplug_lock
be held in read mode while add/remove memory via (mem_hotplug_lock) is held
in write mode?
Balbir Singh.
On Wed, Sep 19, 2018 at 09:35:07AM +0200, David Hildenbrand wrote:
> Am 19.09.18 um 03:22 schrieb Balbir Singh:
> > On Tue, Sep 18, 2018 at 01:48:16PM +0200, David Hildenbrand wrote:
> >> Reading through the code and studying how mem_hotplug_lock is to be used,
> >> I
Cc: Michael Ellerman
> Cc: Rashmica Gupta
> Cc: Balbir Singh
> Cc: Michael Neuling
> Reviewed-by: Pavel Tatashin
> Reviewed-by: Rashmica Gupta
> Signed-off-by: David Hildenbrand
> ---
> arch/powerpc/platforms/powernv/memtrace.c | 4 +++-
> 1 file changed, 3 insert
On Wed, Oct 24, 2018 at 01:12:56PM +0300, Kirill A. Shutemov wrote:
> On Fri, Oct 12, 2018 at 06:31:58PM -0700, Joel Fernandes (Google) wrote:
> > diff --git a/mm/mremap.c b/mm/mremap.c
> > index 9e68a02a52b1..2fd163cff406 100644
> > --- a/mm/mremap.c
> > +++ b/mm/mremap.c
> > @@ -191,6 +191,54 @@
On Wed, Oct 24, 2018 at 07:13:50PM -0700, Joel Fernandes wrote:
> On Wed, Oct 24, 2018 at 10:57:33PM +1100, Balbir Singh wrote:
> [...]
> > > > + pmd_t pmd;
> > > > +
> > > > + new_ptl = pmd_lockptr(mm, new_pmd);
> >
On Sat, Oct 27, 2018 at 12:39:17PM -0700, Joel Fernandes wrote:
> Hi Balbir,
>
> On Sat, Oct 27, 2018 at 09:21:02PM +1100, Balbir Singh wrote:
> > On Wed, Oct 24, 2018 at 07:13:50PM -0700, Joel Fernandes wrote:
> > > On Wed, Oct 24, 2018 at 10:57:33PM +
);
> + vma = find_vma_intersection(mm, addr, end);
> + if (!vma || vma->vm_start > addr || vma->vm_end < end) {
> + ret = H_PARAMETER;
> + goto out;
> + }
> + ret = migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
> + &src_pfn, &dst_pfn, NULL);
> + if (ret < 0)
> + ret = H_PARAMETER;
> +out:
> + up_read(&mm->mmap_sem);
> + return ret;
> +}
> +
> +/*
> + * TODO: Number of secure pages and the page size order would probably come
> + * via DT or via some uvcall. Return 8G for now.
> + */
> +static unsigned long kvmppc_get_secmem_size(void)
> +{
> + return (1UL << 33);
> +}
> +
> +static int kvmppc_hmm_pages_init(void)
> +{
> + unsigned long nr_pfns = kvmppc_hmm->devmem->pfn_last -
> + kvmppc_hmm->devmem->pfn_first;
> +
> + kvmppc_hmm->pfn_bitmap = kcalloc(BITS_TO_LONGS(nr_pfns),
> + sizeof(unsigned long), GFP_KERNEL);
> + if (!kvmppc_hmm->pfn_bitmap)
> + return -ENOMEM;
> +
> + spin_lock_init(&kvmppc_hmm_lock);
> +
> + return 0;
> +}
> +
> +int kvmppc_hmm_init(void)
> +{
> + int ret = 0;
> + unsigned long size = kvmppc_get_secmem_size();
Can you rename secmem to secure_mem?
> +
> + kvmppc_hmm = kzalloc(sizeof(*kvmppc_hmm), GFP_KERNEL);
> + if (!kvmppc_hmm) {
> + ret = -ENOMEM;
> + goto out;
> + }
> +
> + kvmppc_hmm->device = hmm_device_new(NULL);
> + if (IS_ERR(kvmppc_hmm->device)) {
> + ret = PTR_ERR(kvmppc_hmm->device);
> + goto out_free;
> + }
> +
> + kvmppc_hmm->devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
> + &kvmppc_hmm->device->device, size);
IIUC, there is just one HMM device for all the secure memory in the
system?
> + if (IS_ERR(kvmppc_hmm->devmem)) {
> + ret = PTR_ERR(kvmppc_hmm->devmem);
> + goto out_device;
> + }
> + ret = kvmppc_hmm_pages_init();
> + if (ret < 0)
> + goto out_devmem;
> +
> + return ret;
> +
> +out_devmem:
> + hmm_devmem_remove(kvmppc_hmm->devmem);
> +out_device:
> + hmm_device_put(kvmppc_hmm->device);
> +out_free:
> + kfree(kvmppc_hmm);
> + kvmppc_hmm = NULL;
> +out:
> + return ret;
> +}
> +
> +void kvmppc_hmm_free(void)
> +{
> + kfree(kvmppc_hmm->pfn_bitmap);
> + hmm_devmem_remove(kvmppc_hmm->devmem);
> + hmm_device_put(kvmppc_hmm->device);
> + kfree(kvmppc_hmm);
> + kvmppc_hmm = NULL;
> +}
Balbir Singh.
On Mon, Oct 22, 2018 at 10:48:35AM +0530, Bharata B Rao wrote:
> A secure guest will share some of its pages with hypervisor (Eg. virtio
> bounce buffers etc). Support shared pages in HMM driver.
>
> Signed-off-by: Bharata B Rao
> ---
> arch/powerpc/kvm/book3s_hv_hmm.c | 69 +
On Mon, Oct 22, 2018 at 10:48:36AM +0530, Bharata B Rao wrote:
> H_SVM_INIT_START: Initiate securing a VM
> H_SVM_INIT_DONE: Conclude securing a VM
>
> During early guest init, these hcalls will be issued by UV.
> As part of these hcalls, [un]register memslots with UV.
>
> Signed-off-by: Bharata
This should allow better concurrency for massively threaded
Question -- I presume mmap_sem (rw_semaphore implementation tested against)
was qrwlock?
Balbir Singh.
lgaonkar
> Signed-off-by: Santosh Sivaraj
> Cc: sta...@vger.kernel.org # v4.15+
> ---
Acked-by: Balbir Singh
On 12/8/19 7:22 pm, Santosh Sivaraj wrote:
> Certain architecture specific operating modes (e.g., in powerpc machine
> check handler that is unable to access vmalloc memory), the
> search_exception_tables cannot be called because it also searches the
> module exception tables if entry is not fou
Cc: Mahesh Salgaonkar
> Signed-off-by: Santosh Sivaraj
> ---
Isn't this based on https://patchwork.ozlabs.org/patch/895294/? If so, it
should still have my author tag and signed-off-by.
Balbir Singh
> arch/powerpc/include/asm/mce.h | 4 +++-
> arch/powerpc/kernel/mce.c
}
> + }
> +
> + return n;
Do we always return n independent of the check_copy_size and access_ok
return values?
Balbir Singh.
> +}
> +
> extern unsigned long __clear_user(void __user *addr, unsigned long size);
>
> static inline unsigned long clear_user(void __user *addr, unsigned long size)
>
r 1 GPU and attached NPUs for POWER8 */
> - pe->npucomp = kzalloc(sizeof(pe->npucomp), GFP_KERNEL);
> + pe->npucomp = kzalloc(sizeof(*pe->npucomp), GFP_KERNEL);
To avoid these in the future, I wonder if instead of sizeof(pe->npucomp), we
insist on taking the sizeof of the structure:
pe->npucomp = kzalloc(sizeof(struct npucomp), GFP_KERNEL);
Acked-by: Balbir Singh
c0abd628
> c0abd628 (T) schedule+0x48
>
> [ ... etc ... ]
>
>
> save_stack_trace_tsk_reliable
> =========
>
> arch/powerpc/kernel/stacktrace.c :: save_stack_trace_tsk_reliable() does
> take into account the first stackframe, but only to verify that the link
> register is indeed pointing at kernel code address.
>
> Can someone explain what __switch_to is doing with the stack and whether
> in such circumstances, the reliable stack unwinder should be skipping
> the first frame when verifying the frame contents like STACK_FRAME_MARKER,
> etc.
>
> I may be on the wrong path in debugging this, but figuring out this
> sp[0] frame state would be helpful.
>
I would compare the output of xmon across the unreliable stack frames with
the contents of what the stack unwinder has.
I suspect the patch is stuck trying to transition to the enabled state; it'll
be interesting to see if we are really stuck.
Balbir Singh.
On Sat, Jan 12, 2019 at 02:45:41AM -0600, Segher Boessenkool wrote:
> On Sat, Jan 12, 2019 at 12:09:14PM +1100, Balbir Singh wrote:
> > Could you please define interesting frame on top a bit more? Usually
> > the topmost return address is in LR
>
> There is no reliable
> arch-specific implementations consistent.
>
> Signed-off-by: Joe Lawrence
Seems straight forward
Acked-by: Balbir Singh
On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
> From: Nicolai Stange
>
> The ppc64 specific implementation of the reliable stacktracer,
> save_stack_trace_tsk_reliable(), bails out and reports an "unreliable
> trace" whenever it finds an exception frame on the stack. Stack frames
On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh wrote:
>
> On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
> > From: Nicolai Stange
> >
> > The ppc64 specific implementation of the reliable stacktracer,
> > save_stack_trace_tsk_reliable(), bails
On Tue, Feb 5, 2019 at 10:24 PM Michael Ellerman wrote:
>
> Balbir Singh writes:
> > On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh wrote:
> >>
> >> On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
> >> > From: Nicolai Stange
> >>
e looks good to me as well.
>
> Reviewed-by: Alistair Popple
>
I checked the three callers of set_pte_at_notify and the assumption
seems correct
Reviewed-by: Balbir Singh
On Wed, Feb 6, 2019 at 3:44 PM Michael Ellerman wrote:
>
> Balbir Singh writes:
> > On Tue, Feb 5, 2019 at 10:24 PM Michael Ellerman
> > wrote:
> >> Balbir Singh writes:
> >> > On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh
> >> > wrote:
);
> }
>
> extern struct page *pud_page(pud_t pud);
> @@ -951,7 +951,7 @@ static inline int pgd_none(pgd_t pgd)
>
> static inline int pgd_present(pgd_t pgd)
> {
> - return (pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
> + return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
> }
>
Care to put a big FAT warning, so that we don't repeat this again
(as in authors planning on changing these bits).
Balbir Singh.
On Sat, Feb 16, 2019 at 08:22:12AM -0600, Segher Boessenkool wrote:
> Hi all,
>
> On Sat, Feb 16, 2019 at 09:55:11PM +1100, Balbir Singh wrote:
> > On Thu, Feb 14, 2019 at 05:23:39PM +1100, Michael Ellerman wrote:
> > > In v4.20 we changed our pgd/pud_present() to
the kasan core are going to be required
> for hash and radix as well.
>
Thanks for following through with this, could you please share details on
how you've been testing this?
I know qemu supports -cpu e6500, but beyond that what does the machine
look like?
Balbir Singh.
On Sun, Feb 17, 2019 at 07:34:20PM +1100, Michael Ellerman wrote:
> Balbir Singh writes:
> > On Sat, Feb 16, 2019 at 08:22:12AM -0600, Segher Boessenkool wrote:
> >> Hi all,
> >>
> >> On Sat, Feb 16, 2019 at 09:55:11PM +1100, Balbir Singh wrote:
> >&g
On Mon, Feb 18, 2019 at 11:49:18AM +1100, Michael Ellerman wrote:
> Balbir Singh writes:
> > On Sun, Feb 17, 2019 at 07:34:20PM +1100, Michael Ellerman wrote:
> >> Balbir Singh writes:
> >> > On Sat, Feb 16, 2019 at 08:22:12AM -0600, Segher Boessenkool wrote:
>
&drv->states[i];
> + struct cpuidle_state_usage *su = &dev->states_usage[i];
> +
> + if (s->disabled || su->disable)
> + continue;
> +
> + return s->target_residency * tb_ticks_per_usec;
Can we ensure this is not prone to overflow?
Otherwise looks good
Reviewed-by: Balbir Singh
On Fri, Jun 1, 2018 at 2:54 PM, Gautham R Shenoy
wrote:
> Hi Balbir,
>
> Thanks for reviewing the patch!
>
> On Fri, Jun 01, 2018 at 12:51:05AM +1000, Balbir Singh wrote:
>> On Thu, May 31, 2018 at 10:15 PM, Gautham R. Shenoy
>
> [..snip..]
>> >
On 12/06/18 06:20, Mathieu Malaterre wrote:
> Hi Meelis,
>
> On Mon, Jun 11, 2018 at 1:21 PM Meelis Roos wrote:
>> I am seeing this on PowerMac G4 with sungem ethernet driver. 4.17 was
>> OK, 4.17.0-10146-gf0dc7f9c6dd9 is problematic.
> Same here.
>
>> [ 140.518664] eth0: hw csum failure
>> [
sion: Linux version 4.17.0-autotest
>>> >>
>>> >>I am seeing this bug on rc7 as well.
>
> Observing similar traces on linux next kernel: 4.17.0-next-20180608-autotest
>
> Block size [0x400] unaligned hotplug range: start 0x22000, size
> 0x100
size < block_size in this case: why and how? Could you confirm that the block
size is 64MB and you're trying to remove 16MB?
Balbir Singh.
avoids the old timespec type and the HW access.
>
> Signed-off-by: Arnd Bergmann
> ---
Looks good to me!
Acked-by: Balbir Singh
Balbir Singh
On Thu, Jun 21, 2018 at 6:31 PM, Aneesh Kumar K.V
wrote:
>
> We do this only with VMEMMAP config so that our page_to_[nid/section] etc
> are not impacted.
>
> Signed-off-by: Aneesh Kumar K.V
Why 128TB, given that it's sparse_vmemmap_extreme by default, why not
1PB directly?
>
> It's actually super easy to do simple boot tests with qemu, it works fine in
> TCG,
> Michael's wiki page at
> https://github.com/linuxppc/wiki/wiki/Booting-with-Qemu is very helpful.
>
> I did this a lot in development.
>
> My full commandline, fwiw, is:
>
> qemu-system-ppc64 -m 8G -M pseries -cpu power9 -kernel
> ../out-3s-radix/vmlinux -nographic -chardev stdio,id=charserial0,mux=on
> -device spapr-vty,chardev=charserial0,reg=0x3000 -initrd
> ./rootfs-le.cpio.xz -mon chardev=charserial0,mode=readline -nodefaults -smp 4
qemu has been crashing with KASAN enabled, with both inline and out-of-line
options. I am running linux-next + the 4 patches you've posted. In one case I
get a panic, and a hang in the other. I can confirm that when I disable KASAN,
the issue disappears.
Balbir Singh.
>
> Regards,
> Daniel
>
across different configurations?
>> BTW, the current set of patches just hang if I try to make the default
>> mode as out of line
>
> Do you have CONFIG_RELOCATABLE?
>
> I've tested the following process:
>
> # 1) apply patches on a fresh linux-next
> # 2) output dir
> mkdir ../out-3s-kasan
>
> # 3) merge in the relevant config snippets
> cat > kasan.config << EOF
> CONFIG_EXPERT=y
> CONFIG_LD_HEAD_STUB_CATCH=y
>
> CONFIG_RELOCATABLE=y
>
> CONFIG_KASAN=y
> CONFIG_KASAN_GENERIC=y
> CONFIG_KASAN_OUTLINE=y
>
> CONFIG_PHYS_MEM_SIZE_FOR_KASAN=2048
> EOF
>
I think I got CONFIG_PHYS_MEM_SIZE_FOR_KASAN wrong; honestly I don't get why
we need this size. The size is in MB and the default is 0.
Why does the powerpc port of KASAN need the size to be explicitly specified?
Balbir Singh.
RS_PER_*s in the same style as MAX_PTRS_PER_P4D.
> As KASAN is the only user at the moment, just define them in the kasan
> header, and have them default to PTRS_PER_* unless overridden in arch
> code.
>
> Suggested-by: Christophe Leroy
> Suggested-by: Balbir Singh
> Signe
* Nathan Fontenot [2010-10-01 13:35:54]:
> Define a version of memory_block_size_bytes() for powerpc/pseries such that
> a memory block spans an entire lmb.
I hope I am not missing anything obvious, but why not just call it
lmb_size? Why do we need memblock_size?
Is lmb_size == memblock_size af
* Dave Hansen [2010-10-03 11:11:01]:
> On Sun, 2010-10-03 at 13:07 -0500, Robin Holt wrote:
> > On Sun, Oct 03, 2010 at 11:25:00PM +0530, Balbir Singh wrote:
> > > * Nathan Fontenot [2010-10-01 13:35:54]:
> > >
> > > > Define a version of memory_bloc
* Peter Zijlstra [2009-08-28 08:48:05]:
> On Fri, 2009-08-28 at 11:44 +0530, Arun R Bharadwaj wrote:
> > * Peter Zijlstra [2009-08-27 14:53:27]:
> >
> > Hi Peter, Ben,
> >
> > I've put the whole thing in a sort of a block diagram. Hope it
> > explains things more clearly.
* Ankita Garg [2009-09-01 10:33:16]:
> Hello,
>
> Below is a patch to fix a couple of issues with fake numa node creation
> on ppc:
>
> 1) Presently, fake nodes could be created such that real numa node
> boundaries are not respected. So a node could have lmbs that belong to
> different real no
* Ankita Garg [2009-09-01 14:54:07]:
> Hi Balbir,
>
> On Tue, Sep 01, 2009 at 11:27:53AM +0530, Balbir Singh wrote:
> > * Ankita Garg [2009-09-01 10:33:16]:
> >
> > > Hello,
> > >
> > > Below is a patch to fix a couple of issues with fake
* Arun R B [2009-09-01 17:08:40]:
> * Arun R Bharadwaj [2009-09-01 17:07:04]:
>
> Cleanup drivers/cpuidle/cpuidle.c
>
> Cpuidle maintains a pm_idle_old void pointer because, currently in x86
> there is no clean way of registering and unregistering an idle function.
>
> So remove pm_idle_old and
> from offlined cpus, correct sched domains, etc. I can propose a patchset
> for x86_64 to do exactly this if there aren't any objections and I hope
> you'll help do ppc.
Sounds interesting, I'd definitely be interested in seeing your
proposal, but I would think of th