Author: arekm                        Date: Wed Oct 20 17:33:28 2010 GMT
Module: packages                      Tag: HEAD
---- Log message:
- fix for http://seclists.org/bugtraq/2010/Oct/161 (RDS protocol local privilege escalation, CVE-2010-3904)

---- Files affected:
packages/kernel:
   kernel-small_fixes.patch (1.9 -> 1.10) 

---- Diffs:

================================================================
Index: packages/kernel/kernel-small_fixes.patch
diff -u packages/kernel/kernel-small_fixes.patch:1.9 packages/kernel/kernel-small_fixes.patch:1.10
--- packages/kernel/kernel-small_fixes.patch:1.9        Sun Mar  7 14:30:30 2010
+++ packages/kernel/kernel-small_fixes.patch    Wed Oct 20 19:33:23 2010
@@ -77,3 +77,81 @@
  #include "../../include/linux/license.h"
  
  /* Some toolchains use a `_' prefix for all user symbols. */
+From 799c10559d60f159ab2232203f222f18fa3c4a5f Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <[email protected]>
+Date: Fri, 15 Oct 2010 11:09:28 -0700
+Subject: [PATCH] De-pessimize rds_page_copy_user
+
+Don't try to "optimize" rds_page_copy_user() by using kmap_atomic() and
+the unsafe atomic user mode accessor functions.  It's actually slower
+than the straightforward code on any reasonable modern CPU.
+
+Back when the code was written (although probably not by the time it was
+actually merged), 32-bit x86 may have been the dominant
+architecture.  And there kmap_atomic() can be a lot faster than kmap()
+(unless you have very good locality, in which case the virtual address
+caching by kmap() can overcome all the downsides).
+
+These days, x86-64 may not be more populous, but it's getting there
+(and if you care about performance, it's definitely already there -
+you'd have upgraded your CPUs in the last few years).  And on
+x86-64, the non-kmap_atomic() version is faster, simply because the code
+is simpler and doesn't have the "re-try page fault" case.
+
+People with old hardware are not likely to care about RDS anyway, and
+the optimization for the 32-bit case is simply buggy, since it doesn't
+verify the user addresses properly.
+
+Reported-by: Dan Rosenberg <[email protected]>
+Acked-by: Andrew Morton <[email protected]>
+Cc: [email protected]
+Signed-off-by: Linus Torvalds <[email protected]>
+---
+ net/rds/page.c |   27 +++++++--------------------
+ 1 files changed, 7 insertions(+), 20 deletions(-)
+
+diff --git a/net/rds/page.c b/net/rds/page.c
+index 595a952..1dfbfea 100644
+--- a/net/rds/page.c
++++ b/net/rds/page.c
+@@ -57,30 +57,17 @@ int rds_page_copy_user(struct page *page, unsigned long offset,
+       unsigned long ret;
+       void *addr;
+ 
+-      if (to_user)
++      addr = kmap(page);
++      if (to_user) {
+               rds_stats_add(s_copy_to_user, bytes);
+-      else
++              ret = copy_to_user(ptr, addr + offset, bytes);
++      } else {
+               rds_stats_add(s_copy_from_user, bytes);
+-
+-      addr = kmap_atomic(page, KM_USER0);
+-      if (to_user)
+-              ret = __copy_to_user_inatomic(ptr, addr + offset, bytes);
+-      else
+-              ret = __copy_from_user_inatomic(addr + offset, ptr, bytes);
+-      kunmap_atomic(addr, KM_USER0);
+-
+-      if (ret) {
+-              addr = kmap(page);
+-              if (to_user)
+-                      ret = copy_to_user(ptr, addr + offset, bytes);
+-              else
+-                      ret = copy_from_user(addr + offset, ptr, bytes);
+-              kunmap(page);
+-              if (ret)
+-                      return -EFAULT;
++              ret = copy_from_user(addr + offset, ptr, bytes);
+       }
++      kunmap(page);
+ 
+-      return 0;
++      return ret ? -EFAULT : 0;
+ }
+ EXPORT_SYMBOL_GPL(rds_page_copy_user);
+ 
+-- 
+1.7.3.1
+
================================================================
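Editor's note, for context (not part of the upstream commit): the root
cause behind the advisory above is that the
__copy_{to,from}_user_inatomic() helpers skip the access_ok() range
check that the plain copy_to_user()/copy_from_user() accessors perform
internally, so rds_page_copy_user() could be made to copy to or from
attacker-chosen kernel addresses. A minimal sketch of the distinction,
assuming the 2010-era x86 uaccess API (access_ok() with VERIFY_*
flags); sketch_copy_to_user() is a hypothetical name, not a real
kernel function:

/*
 * Hypothetical illustration only: the plain accessor validates the
 * user pointer range before copying, while the __..._inatomic()
 * variant trusts the caller to have done so -- a check the old
 * rds_page_copy_user() never made.
 */
static unsigned long sketch_copy_to_user(void __user *to,
					 const void *from,
					 unsigned long n)
{
	/* Reject ranges outside the user address space, e.g. kernel
	 * pointers smuggled in through an RDS socket. */
	if (!access_ok(VERIFY_WRITE, to, n))
		return n;	/* nothing copied */

	/* Only now is the unchecked fast path safe to use. */
	return __copy_to_user_inatomic(to, from, n);
}

By switching to the plain accessors, the committed fix gets this
validation for free, and drops the kmap_atomic()/retry dance as well.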

---- CVS-web:
    http://cvs.pld-linux.org/cgi-bin/cvsweb.cgi/packages/kernel/kernel-small_fixes.patch?r1=1.9&r2=1.10&f=u

_______________________________________________
pld-cvs-commit mailing list
[email protected]
http://lists.pld-linux.org/mailman/listinfo/pld-cvs-commit
