On Thu, May 08, 2014 at 02:40:00PM +0900, Masami Hiramatsu wrote:
(2014/05/08 13:47), Ananth N Mavinakayanahalli wrote:
On Wed, May 07, 2014 at 08:55:51PM +0900, Masami Hiramatsu wrote:
...
+#if defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF == 1)
+/*
+ * On PPC64 ABIv1
During the EEH hotplug event, pcibios_setup_device() will be invoked two
times, and the second time will trigger a warning about re-attachment of the
iommu group.
The two times are:
pci_device_add
...
pcibios_add_device
pcibios_setup_device - 1st time
Unify the low/highmem code path in do_init_bootmem() by using the
lowmem-related variables/parameters even when the low/highmem split
is not needed (64-bit) or not configured. In such cases the lowmem
variables/parameters continue to observe their definition by referring
to memory directly mapped by
Currently bootmem is just a wrapper on top of memblock. This
eliminates the bootmem code and the initialization wrapper code
from the build/kernel image, just as other arch(es) did: x86, arm,
etc.
For now this only covers !NUMA systems.
Signed-off-by: Emil Medve emilian.me...@freescale.com
---
I found that a stack trace couldn't be saved sometimes. After some
investigation, it seems that when function tracing is enabled,
void save_stack_trace(struct stack_trace *trace)
{
	unsigned long sp;

	asm("mr %0,1" : "=r" (sp));

	save_context_stack(trace, sp, current, 1);
}
is
With commit 8c6e50b029, Kirill A. Shutemov introduced
vm_ops->map_pages() for mapping easily accessible pages around the
fault address, in the hope of reducing the number of minor page faults.
This patch creates the infrastructure to modify the FAULT_AROUND_ORDER
value using mm/Kconfig. This will enable architecture
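A Kconfig entry for this kind of tunable might look like the following sketch; the prompt text, default, and help wording here are illustrative, not the actual patch:

```
config FAULT_AROUND_ORDER
	int "Order of pages to map around a page fault"
	default 4
	help
	  Number of pages (as a power of 2) to pre-map around the
	  faulting address on a minor fault. Larger values can reduce
	  the number of minor page faults, at the cost of mapping
	  pages that may never be touched.
```

An architecture could then override the default in its own Kconfig rather than patching common mm code.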
Performance data for different FAULT_AROUND_ORDER values from a 4-socket
Power7 system (128 threads and 128GB memory). perf stat with a repeat count of 5
is used to get the stddev values. Tests ran on a v3.14 kernel (baseline) and
v3.15-rc1 with different fault-around-order values. The %change here is calculated
in
Hello, Vinod.
Thanks for your feedback.
2014-05-02 21:03 GMT+04:00 Vinod Koul vinod.k...@intel.com:
On Wed, Apr 23, 2014 at 05:53:25PM +0400, Alexander Popov wrote:
+static struct dma_async_tx_descriptor *
+mpc_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+
On 05/07/2014 04:31 PM, Shevchenko, Andriy wrote:
On Sun, 2014-05-04 at 18:22 +0800, Hongbo Zhang wrote:
On 05/03/2014 12:46 AM, Vinod Koul wrote:
On Fri, Apr 18, 2014 at 04:17:51PM +0800, hongbo.zh...@freescale.com wrote:
From: Hongbo Zhang hongbo.zh...@freescale.com
This patch adds
On 05/03/2014 12:50 AM, Vinod Koul wrote:
On Fri, Apr 18, 2014 at 04:17:49PM +0800, hongbo.zh...@freescale.com wrote:
This needs review from Dan ...
Dan, could you please have a look at this? thanks.
___
Linuxppc-dev mailing list
On 05/06/2014 04:30 PM, Aneesh Kumar K.V wrote:
Although it's optional, IBM POWER CPUs have always had the DAR value set on
alignment interrupts. So don't try to compute these values.
Signed-off-by: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
The patch body is fine now. The commit message however
On 05/06/2014 05:54 PM, Aneesh Kumar K.V wrote:
Today, when KVM tries to reserve memory for the hash page table, it
allocates from the normal page allocator first. If that fails, it
falls back to CMA's reserved region. One of the side effects of
this is that we could end up exhausting the page
On 05/06/2014 08:01 PM, Aneesh Kumar K.V wrote:
On recent IBM Power CPUs, while the hashed page table is looked up using
the page size from the segmentation hardware (i.e. the SLB), it is
possible to have the HPT entry indicate a larger page size. Thus for
example it is possible to put a 16MB
HID0 IBM bit 19 is the HILE bit on POWER8. Set it to 0 to take
exceptions in big endian and to 1 to take them in little endian.
Signed-off-by: Anton Blanchard an...@samba.org
---
Index: b/arch/powerpc/include/asm/reg.h
===
---
On 05/06/2014 09:06 PM, mihai.cara...@freescale.com wrote:
-Original Message-
From: Alexander Graf [mailto:ag...@suse.de]
Sent: Friday, May 02, 2014 12:55 PM
To: Caraman Mihai Claudiu-B02008
Cc: kvm-...@vger.kernel.org; k...@vger.kernel.org; linuxppc-
d...@lists.ozlabs.org
Subject: Re:
On Apr 18, 2014, at 8:11 AM, Diana Craciun diana.crac...@freescale.com wrote:
From: Diana Craciun diana.crac...@freescale.com
The CoreNet coherency fabric is a fabric-oriented, connectivity
infrastructure that enables the implementation of coherent, multicore
systems. The CCF acts as a
On Thu, 2014-05-08 at 07:00 -0700, Kumar Gala wrote:
On Apr 18, 2014, at 8:11 AM, Diana Craciun diana.crac...@freescale.com
wrote:
From: Diana Craciun diana.crac...@freescale.com
The CoreNet coherency fabric is a fabric-oriented, connectivity
infrastructure that enables the