On Thu, Feb 03, 2005 at 05:23:57PM -0800, David S. Miller wrote:
> 
> You're absolutely right.  Ok, so we do need to change kfree_skb().
> I believe even with the memory barrier, the atomic_read() optimization
> is still worth it.  atomic ops on sparc64 take a minimum of some
> 40-odd cycles on UltraSPARC-III and later, whereas the memory barrier
> takes only a single cycle most of the time.

OK, here is the patch to do that.  Let's get rid of kfree_skb_fast
while we're at it since it's no longer used.
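
To spell out why the smp_rmb() is needed (my reading of the ordering
requirement; the store to skb->dev below is only an illustration): when
some other CPU drops its reference, its earlier stores to the skb are
ordered by the memory barrier implied by atomic_dec_and_test().  On the
new fast path we do no atomic op at all, so the smp_rmb() is what keeps
the reads done while freeing the skb from being satisfied before we
observed users == 1.  Roughly:

	/* CPU A: finished with the skb, drops its reference. */
	skb->dev = NULL;			/* some store to skb state       */
	atomic_dec_and_test(&skb->users);	/* implies a full memory barrier */

	/* CPU B: sole remaining user, takes the new fast path. */
	if (likely(atomic_read(&skb->users) == 1)) {
		smp_rmb();			/* pairs with the barrier above  */
		__kfree_skb(skb);		/* now sees CPU A's stores       */
	}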

Signed-off-by: Herbert Xu <[EMAIL PROTECTED]>

Thanks,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[EMAIL PROTECTED]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
===== include/linux/skbuff.h 1.59 vs edited =====
--- 1.59/include/linux/skbuff.h 2005-01-11 07:23:55 +11:00
+++ edited/include/linux/skbuff.h       2005-02-04 12:46:15 +11:00
@@ -353,15 +353,11 @@
  */
 static inline void kfree_skb(struct sk_buff *skb)
 {
-       if (atomic_read(&skb->users) == 1 || atomic_dec_and_test(&skb->users))
-               __kfree_skb(skb);
-}
-
-/* Use this if you didn't touch the skb state [for fast switching] */
-static inline void kfree_skb_fast(struct sk_buff *skb)
-{
-       if (atomic_read(&skb->users) == 1 || atomic_dec_and_test(&skb->users))
-               kfree_skbmem(skb);
+       if (likely(atomic_read(&skb->users) == 1))
+               smp_rmb();
+       else if (likely(!atomic_dec_and_test(&skb->users)))
+               return;
+       __kfree_skb(skb);
 }
 
 /**
