From: Anshuman Khandual khand...@linux.vnet.ibm.com
This patch defines an enum for the three bolted SLB indexes we use.
Switch the functions that take the indexes as an argument to use the
enum.
Signed-off-by: Anshuman Khandual khand...@linux.vnet.ibm.com
Signed-off-by: Michael Ellerman
On Wed, Aug 12, 2015 at 03:40:56PM +0200, Christophe Leroy wrote:
	/* Insert level 1 index */
	rlwimi	r11, r10, 32 - ((PAGE_SHIFT - 2) << 1), (PAGE_SHIFT - 2) << 1, 29
	lwz	r11, (swapper_pg_dir-PAGE_OFFSET)@l(r11)	/* Get the level 1 entry */
+	mtcr	r11
Maybe
For no reason other than it looks ugly.
Signed-off-by: Michael Ellerman m...@ellerman.id.au
---
arch/powerpc/mm/slb.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 0c7115fd314b..515730e499fe 100644
---
It makes no sense to put the instructions for calculating the lock
value (cpu number + 1) and the clearing of the eq bit of cr1 inside the
lbarx/stbcx loop. And when the lock is acquired by the other thread, the
current lock value has no chance to be equal to the lock value used by
the current cpu. So we can skip
I didn't find anything unusual. But I think we do need to order the
load/store of esel_next when acquiring/releasing the tcd lock. For acquire,
add a data dependency to order the loads of lock and esel_next.
For release, even though there is already an isync here, it doesn't
guarantee any memory access
Since we moved the lock to be the first element of
struct tlb_core_data in commit 82d86de25b9c ("powerpc/e6500: Make TLB
lock recursive"), this macro is not used by any code. Just delete it.
Signed-off-by: Kevin Hao haoke...@gmail.com
---
arch/powerpc/kernel/asm-offsets.c | 1 -
1 file changed, 1
At the moment the 64bit-prefetchable window can be at most 64GB, which is
currently read from the device tree. This means that in shared mode the
maximum supported VF BAR size is 64GB/256 = 256MB, and this size could
exhaust the whole 64bit-prefetchable window. This is a design decision to
set a boundary
The alignment of the IOV BAR on the PowerNV platform is the total size of
the IOV BAR. No matter whether the IOV BAR is extended to
roundup_pow_of_two(total_vfs) or to the max PE number (256), the total
size can be calculated as (vfs_expanded * VF_BAR_size).
This patch simplifies the
Each VF could have at most 6 BARs. When the total BAR size exceeds the
gate, expanding it will also exhaust the M64 window.
This patch limits the boundary by checking the total VF BAR size instead of
the individual BAR.
Signed-off-by: Wei Yang weiy...@linux.vnet.ibm.com
---
In the original design, VFs are grouped to enable a larger number of VFs
in the system when a VF BAR is bigger than 64MB. This design has a flaw:
an error on one VF will interfere with the other VFs in the same group.
This patch series changes this design by using an M64 BAR in Single PE
mode to cover
On PHB_IODA2, we enable SRIOV devices by mapping the IOV BAR with M64
BARs. If a SRIOV device's IOV BAR is not 64bit-prefetchable, it is not
assigned from the 64bit prefetchable window, which means an M64 BAR can't
work on it.
This patch makes this explicit.
Signed-off-by: Wei Yang
In the current implementation, when a VF BAR is bigger than 64MB, 4 M64
BARs in Single PE mode are used to cover the number of VFs required to be
enabled. By doing so, several VFs end up in one VF group, which leads to
interference between VFs in the same group.
This patch changes the design by using one
When the M64 BAR is set to Single PE mode, the PE# assigned to a VF could
be sparse.
This patch restructures the code to allocate sparse PE#s for VFs when the
M64 BAR is set to Single PE mode.
Signed-off-by: Wei Yang weiy...@linux.vnet.ibm.com
---
arch/powerpc/include/asm/pci-bridge.h |2 +-
On Wed, Aug 12, 2015 at 03:42:47PM +0300, Boaz Harrosh wrote:
The support I have suggested and submitted for zone-less sections
(in my add_persistent_memory() patchset)
would work perfectly well and transparently for all such multimedia cases.
(All hacks removed.) In fact I have loaded pmem
On Wed, Aug 12, 2015 at 09:01:02AM -0700, Linus Torvalds wrote:
I'm assuming that anybody who wants to use the page-less
scatter-gather lists always does so on memory that isn't actually
virtually mapped at all, or only does so on sane architectures that
are cache coherent at a physical level,
On Thu, Aug 13, 2015 at 09:37:37AM +1000, Julian Calaby wrote:
I.e. ~90% of this patch set seems to be just mechanically dropping
BUG_ON()s and converting open coded stuff to use accessor functions
(which should be macros or get inlined, right?) - and the remaining
bit is not flushing if we
Most architectures just call into ->dma_supported, but some also return 1
if the method is not present, or 0 if no dma ops are present (although
that should never happen). Consolidate this broader version into
common code.
Also fix h8300, which incorrectly always returned 0, which would have been
Currently there are three valid implementations of dma_mapping_error:
(1) call ->mapping_error
(2) check for a hardcoded error code
(3) always return 0
This patch provides a common implementation that calls ->mapping_error
if present, then checks for DMA_ERROR_CODE if defined or otherwise
On Thu, Aug 13, 2015 at 05:04:08PM +0200, Christoph Hellwig wrote:
diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c
index 1143c4d..260f52a 100644
--- a/arch/arm/common/dmabounce.c
+++ b/arch/arm/common/dmabounce.c
@@ -440,14 +440,6 @@ static void
On Wed, Aug 12, 2015 at 09:05:15AM -0700, Linus Torvalds wrote:
[ Again, I'm responding to one random patch - this pattern was in
other patches too. ]
A question: do we actually expect to mix page-less and pageful SG
entries in the same SG list?
How does that happen?
Both for DAX and
The coherent DMA allocator works the same over all architectures supporting
dma_map operations.
This patch consolidates them and converges the minor differences:
- the debug_dma helpers are now called from all architectures, including
those that were previously missing them
-
Almost everyone implements dma_set_mask the same way, although sometimes
that's hidden in ->set_dma_mask methods.
Move this implementation to common code, including a callout to override
the post-check action, and remove duplicate instances in methods as well.
Unfortunately some architectures
On Thu, Aug 13, 2015 at 05:04:05PM +0200, Christoph Hellwig wrote:
diff --git a/arch/arm/include/asm/dma-mapping.h
b/arch/arm/include/asm/dma-mapping.h
index 2ae3424..ab521d5 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -175,21 +175,6 @@ static
Since 2009 we have a nice asm-generic header implementing lots of DMA API
functions for architectures using struct dma_map_ops, but unfortunately
it's still missing a lot of APIs that all architectures still have to
duplicate.
This series consolidates the remaining functions, although we still
Most architectures do not support non-coherent allocations and either
define dma_{alloc,free}_noncoherent to their coherent versions or stub
them out.
Openrisc uses dma_{alloc,free}_attrs to implement them, and only Mips
implements them directly.
This patch moves the Openrisc version to common
On Thu, Aug 13, 2015 at 04:20:40PM +0100, Russell King - ARM Linux wrote:
-/*
- * Dummy noncoherent implementation. We don't provide a dma_cache_sync
- * function so drivers using this API are highlighted with build warnings.
- */
I'd like a similar comment to remain after this patch
On Thu, Aug 13, 2015 at 04:25:05PM +0100, Russell King - ARM Linux wrote:
On Thu, Aug 13, 2015 at 05:04:08PM +0200, Christoph Hellwig wrote:
diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c
index 1143c4d..260f52a 100644
--- a/arch/arm/common/dmabounce.c
+++
On Thu, 2015-08-13 at 19:51 +0800, Kevin Hao wrote:
It makes no sense to put the instructions for calculating the lock
value (cpu number + 1) and the clearing of the eq bit of cr1 inside the
lbarx/stbcx loop. And when the lock is acquired by the other thread, the
current lock value has no chance to equal
Hi Christoph,
On Fri, Aug 14, 2015 at 12:35 AM, Christoph Hellwig h...@lst.de wrote:
On Thu, Aug 13, 2015 at 09:37:37AM +1000, Julian Calaby wrote:
I.e. ~90% of this patch set seems to be just mechanically dropping
BUG_ON()s and converting open coded stuff to use accessor functions
(which
On Thu, Aug 13, 2015 at 9:51 AM, Ross Zwisler
ross.zwis...@linux.intel.com wrote:
Update the annotation for the kaddr pointer returned by direct_access()
so that it is a __pmem pointer. This is consistent with the PMEM driver
and with how this direct_access() pointer is used in the DAX code.
On Thu, Aug 13, 2015 at 10:11:09PM +0800, Wei Yang wrote:
At the moment the 64bit-prefetchable window can be at most 64GB, which is
currently read from the device tree. This means that in shared mode the
maximum supported VF BAR size is 64GB/256 = 256MB, and this size could
exhaust the whole
On Thu, Aug 13, 2015 at 10:11:10PM +0800, Wei Yang wrote:
Each VF could have at most 6 BARs. When the total BAR size exceeds the
gate, expanding it will also exhaust the M64 window.
This patch limits the boundary by checking the total VF BAR size instead of
the individual BAR.
On Thu, Aug 13, 2015 at 10:11:06PM +0800, Wei Yang wrote:
On PHB_IODA2, we enable SRIOV devices by mapping the IOV BAR with M64
BARs. If a SRIOV device's IOV BAR is not 64bit-prefetchable, it is not
assigned from the 64bit prefetchable window, which means an M64 BAR can't
work on it.
This patch makes this
On Thu, Aug 13, 2015 at 10:11:08PM +0800, Wei Yang wrote:
In the current implementation, when a VF BAR is bigger than 64MB, 4 M64
BARs in Single PE mode are used to cover the number of VFs required to be
enabled. By doing so, several VFs end up in one VF group, which leads to
interference between VFs in
Peter Zijlstra [pet...@infradead.org] wrote:
| On Tue, Aug 11, 2015 at 09:14:00PM -0700, Sukadev Bhattiprolu wrote:
| | +static void __perf_read_group_add(struct perf_event *leader, u64 read_format, u64 *values)
| | {
| | + struct perf_event *sub;
| | + int n = 1; /* skip @nr */
|
| This
On Thu, Aug 13, 2015 at 01:04:28PM -0700, Sukadev Bhattiprolu wrote:
| | +static int perf_read_group(struct perf_event *event,
| | + u64 read_format, char __user *buf)
| | +{
| | +	struct perf_event *leader = event->group_leader, *child;
| | +
Hi,
Here is another instruction trace from a kernel context switch trace.
Quite a lot of register and CR save/restore code.
Regards,
Anton
c02943d8 fsnotify+0x8 mfcr r12
c02943dc fsnotify+0xc std r20,-96(r1)
c02943e0 fsnotify+0x10 std r21,-88(r1)
Update the annotation for the kaddr pointer returned by direct_access()
so that it is a __pmem pointer. This is consistent with the PMEM driver
and with how this direct_access() pointer is used in the DAX code.
Signed-off-by: Ross Zwisler ross.zwis...@linux.intel.com
---
The goal of this series is to enhance the DAX I/O path so that all operations
that store data (I/O writes, zeroing blocks, punching holes, etc.) properly
synchronize the stores to media using the PMEM API. This ensures that the data
DAX is writing is durable on media before the operation
Hi Eduardo,
In a previous mail I asked questions about including header files in the
device tree.
Don't bother, I have already figured out the solution.
Another question is about cpu cooling:
I found out that there is no explicit call to register a cpu cooling
device in the of-thermal style
On Fri, Aug 14, 2015 at 11:04:58AM +1000, Gavin Shan wrote:
On Thu, Aug 13, 2015 at 10:11:07PM +0800, Wei Yang wrote:
The alignment of the IOV BAR on the PowerNV platform is the total size of
the IOV BAR. No matter whether the IOV BAR is extended to
roundup_pow_of_two(total_vfs) or to the number of
On Fri, Aug 14, 2015 at 11:03:00AM +1000, Gavin Shan wrote:
On Thu, Aug 13, 2015 at 10:11:11PM +0800, Wei Yang wrote:
When the M64 BAR is set to Single PE mode, the PE# assigned to a VF could
be sparse.
This patch restructures the code to allocate sparse PE#s for VFs when the
M64 BAR is set to Single PE
On Thu, May 21, 2015 at 01:57:04PM +0530, Gautham R. Shenoy wrote:
In guest_exit_cont we call kvmhv_commence_exit which expects the trap
number as the argument. However r3 doesn't contain the trap number at
this point and as a result we would be calling the function with a
spurious trap
On Thu, 2015-08-06 at 18:54 +0530, Anshuman Khandual wrote:
On 08/04/2015 03:27 PM, Michael Ellerman wrote:
On Mon, 2015-07-13 at 08:16:06 UTC, Anshuman Khandual wrote:
This patch enables facility unavailable exceptions for generic facility,
FPU, ALTIVEC and VSX in /proc/interrupts listing
Acked-by: Ian Munsie imun...@au1.ibm.com
Excerpts from Daniel Axtens's message of 2015-08-13 14:11:20 +1000:
+/* Only warn if we detached while the link was OK.
Only because mpe is sure to pick this up (I personally don't mind) -
block comments should start with /* on a line by itself.
+
On Wed, 2015-08-05 at 14:03 +1000, Anton Blanchard wrote:
Hi,
While looking at traces of kernel workloads, I noticed places where gcc
used a large number of non volatiles. Some of these functions
did very little work, and we spent most of our time saving the
non volatiles to the stack and
The paca display is already more than 24 lines, which can be problematic
if you have an old school 80x24 terminal, or more likely you are on a
virtual terminal which does not scroll for whatever reason.
This adds an optional letter to the dp and dpa xmon commands
(dpp and dppa), which will enable
From: Wang Dongsheng dongsheng.w...@freescale.com
Signed-off-by: Wang Dongsheng dongsheng.w...@freescale.com
---
*V2*
No changes.
diff --git a/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
b/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
index 9e9f7e2..9770d02 100644
---
On Fri, Aug 14, 2015 at 10:52:21AM +1000, Gavin Shan wrote:
On Thu, Aug 13, 2015 at 10:11:08PM +0800, Wei Yang wrote:
In the current implementation, when a VF BAR is bigger than 64MB, 4 M64
BARs in Single PE mode are used to cover the number of VFs required to be
enabled. By doing so, several VFs would
On Wed, Aug 12, 2015 at 09:55:25PM +1000, Michael Ellerman wrote:
The paca display is already more than 24 lines, which can be problematic
if you have an old school 80x24 terminal, or more likely you are on a
virtual terminal which does not scroll for whatever reason.
We'd like to expand the
On Thu, 2015-08-13 at 19:51 +0800, Kevin Hao wrote:
I didn't find anything unusual. But I think we do need to order the
load/store of esel_next when acquiring/releasing the tcd lock. For acquire,
add a data dependency to order the loads of lock and esel_next.
For release, even though there is already an
On Thu, 2015-08-13 at 20:30 -0700, Dan Williams wrote:
On Thu, Aug 13, 2015 at 7:31 AM, Christoph Hellwig h...@lst.de wrote:
On Wed, Aug 12, 2015 at 09:01:02AM -0700, Linus Torvalds wrote:
I'm assuming that anybody who wants to use the page-less
scatter-gather lists always does so on memory
On Thu, Aug 13, 2015 at 10:11:11PM +0800, Wei Yang wrote:
When the M64 BAR is set to Single PE mode, the PE# assigned to a VF could
be sparse.
This patch restructures the code to allocate sparse PE#s for VFs when the
M64 BAR is set to Single PE mode.
Signed-off-by: Wei Yang weiy...@linux.vnet.ibm.com
---
On Thu, Aug 13, 2015 at 10:11:07PM +0800, Wei Yang wrote:
The alignment of the IOV BAR on the PowerNV platform is the total size of
the IOV BAR. No matter whether the IOV BAR is extended to
roundup_pow_of_two(total_vfs) or to the max PE number (256), the total
size can be calculated as
On Wed, 2015-08-12 at 21:06 +0200, Alexander Graf wrote:
On 10.08.15 17:27, Nicholas Krause wrote:
This fixes the wrapper functions kvm_umap_hva_hv and the function
kvm_unmap_hav_range_hv to return the return value of the function
kvm_handle_hva or kvm_handle_hva_range that they are
On Thu, Aug 13, 2015 at 7:31 AM, Christoph Hellwig h...@lst.de wrote:
On Wed, Aug 12, 2015 at 09:01:02AM -0700, Linus Torvalds wrote:
I'm assuming that anybody who wants to use the page-less
scatter-gather lists always does so on memory that isn't actually
virtually mapped at all, or only does
From: James Bottomley james.bottom...@hansenpartnership.com
Date: Thu, 13 Aug 2015 20:59:20 -0700
On Thu, 2015-08-13 at 20:30 -0700, Dan Williams wrote:
On Thu, Aug 13, 2015 at 7:31 AM, Christoph Hellwig h...@lst.de wrote:
On Wed, Aug 12, 2015 at 09:01:02AM -0700, Linus Torvalds wrote:
I'm
Hello Hongtao,
On Fri, Aug 14, 2015 at 03:15:22AM +, Hongtao Jia wrote:
Hi Eduardo,
In previous mail I asked questions about including header files in device
tree.
Don't bother, I have already figured out the solution.
Another questions is about cpu cooling:
I found out that there
From: Wang Dongsheng dongsheng.w...@freescale.com
SCFG provides SoC-specific configuration and status registers for
the chip. Add this for the powerpc platform.
Signed-off-by: Wang Dongsheng dongsheng.w...@freescale.com
---
*V2*
- Remove scfg description in board.txt and create scfg.txt for scfg.
-
On Wed, Aug 05, 2015 at 12:38:31PM +0530, Gautham R. Shenoy wrote:
Section 3.7 of Version 1.2 of the Power8 Processor User's Manual
prescribes that updates to HID0 be preceded by a SYNC instruction and
followed by an ISYNC instruction (Page 91).
Create an inline function name
Acked-by: Ian Munsie imun...@au1.ibm.com
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
Excerpts from Daniel Axtens's message of 2015-08-13 14:11:21 +1000:
Previously the SPA was allocated and freed upon entering and leaving
AFU-directed mode. This causes some issues for error recovery - contexts
hold a pointer inside the SPA, and they may persist after the AFU has
been detached.
Hi Tabi,
-Original Message-
From: Timur Tabi [mailto:ti...@tabi.org]
Sent: Tuesday, March 25, 2014 11:55 PM
To: Wang Dongsheng-B40534
Cc: Wood Scott-B07421; Jin Zhengxiong-R64188; Li Yang-Leo-R58472; linuxppc-
d...@lists.ozlabs.org; linux-fb...@vger.kernel.org
Subject: Re: [PATCH]
Acked-by: Ian Munsie imun...@au1.ibm.com
Acked-by: Ian Munsie imun...@au1.ibm.com
On 08/13/2015 05:40 PM, Christoph Hellwig wrote:
On Wed, Aug 12, 2015 at 03:42:47PM +0300, Boaz Harrosh wrote:
The support I have suggested and submitted for zone-less sections.
(In my add_persistent_memory() patchset)
Would work perfectly well and transparent for all such multimedia cases.