4.11-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ben Hutchings <[email protected]>

commit 8376efd31d3d7c44bd05be337adde023cc531fa1 upstream.

Commit 11e63f6d920d added cache flushing for unaligned writes from an
iovec, covering the first and last cache line of a >= 8 byte write and
the first cache line of a < 8 byte write.  But an unaligned write of
2-7 bytes can still cover two cache lines, so make sure we flush both
in that case.

Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 arch/x86/include/asm/pmem.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -103,7 +103,7 @@ static inline size_t arch_copy_from_iter
 
                if (bytes < 8) {
                        if (!IS_ALIGNED(dest, 4) || (bytes != 4))
-                               arch_wb_cache_pmem(addr, 1);
+                               arch_wb_cache_pmem(addr, bytes);
                } else {
                        if (!IS_ALIGNED(dest, 8)) {
                                dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);

