On 30/11/2019 05:58, Cong Wang wrote:
On Fri, Nov 29, 2019 at 6:43 AM John Garry <[email protected]> wrote:

On 29/11/2019 00:48, Cong Wang wrote:
The IOVA cache algorithm implemented in the IOMMU code does not
exactly match the original algorithm described in the paper.


which paper?

It's in drivers/iommu/iova.c, from line 769:

  769 /*
  770  * Magazine caches for IOVA ranges.  For an introduction to magazines,
  771  * see the USENIX 2001 paper "Magazines and Vmem: Extending the Slab
  772  * Allocator to Many CPUs and Arbitrary Resources" by Bonwick and Adams.
  773  * For simplicity, we use a static magazine size and don't implement the
  774  * dynamic size tuning described in the paper.
  775  */
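
For context, the magazine itself (lightly paraphrased from the same file) is just a fixed-size batch of cached IOVA page frame numbers plus a fill count:

/* Lightly paraphrased from drivers/iommu/iova.c: a magazine is a
 * fixed-size batch of cached IOVA PFNs; full/empty test the count. */
struct iova_magazine {
	unsigned long size;			/* current fill count */
	unsigned long pfns[IOVA_MAG_SIZE];	/* cached PFNs */
};

static bool iova_magazine_full(struct iova_magazine *mag)
{
	return (mag && mag->size == IOVA_MAG_SIZE);
}

static bool iova_magazine_empty(struct iova_magazine *mag)
{
	return (!mag || mag->size == 0);
}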

In particular, it doesn't need to free the loaded empty magazine
when trying to put it back into the global depot. To make this
work, we have to pre-allocate magazines in the depot and only
recycle them when all of them are full.

Before this patch, rcache->depot[] contains either full or
freed entries; after this patch, it contains either full or
empty (but allocated) entries.
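
To illustrate that invariant, a hypothetical debug check (not part of the patch) could look like this:

/* Hypothetical helper, not in the patch: verify that every depot slot
 * below depot_size holds a full magazine and every slot at or above it
 * holds an allocated but empty one. */
static void iova_depot_check(struct iova_rcache *rcache)
{
	int i;

	assert_spin_locked(&rcache->lock);
	for (i = 0; i < MAX_GLOBAL_MAGS; i++) {
		WARN_ON(!rcache->depot[i]);
		if (i < rcache->depot_size)
			WARN_ON(!iova_magazine_full(rcache->depot[i]));
		else
			WARN_ON(!iova_magazine_empty(rcache->depot[i]));
	}
}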

I *quickly* tested this patch and got a small performance gain.

Thanks for testing! It needs a different workload to see a bigger gain;
in our case, 24 memcache.parallel servers with 120 clients.


So, in fact, I was getting a ~10% throughput boost in my storage test. That seems like more than I would expect/hope for, so I would need to test more.

Cc: Joerg Roedel <[email protected]>
Signed-off-by: Cong Wang <[email protected]>
---
   drivers/iommu/iova.c | 45 +++++++++++++++++++++++++++-----------------
   1 file changed, 28 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 41c605b0058f..cb473ddce4cf 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -862,12 +862,16 @@ static void init_iova_rcaches(struct iova_domain *iovad)
       struct iova_cpu_rcache *cpu_rcache;
       struct iova_rcache *rcache;
       unsigned int cpu;
-     int i;
+     int i, j;

       for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
               rcache = &iovad->rcaches[i];
               spin_lock_init(&rcache->lock);
               rcache->depot_size = 0;
+             for (j = 0; j < MAX_GLOBAL_MAGS; ++j) {
+                     rcache->depot[j] = iova_magazine_alloc(GFP_KERNEL);
+                     WARN_ON(!rcache->depot[j]);
+             }
       rcache->cpu_rcaches = __alloc_percpu(sizeof(*cpu_rcache), cache_line_size());
               if (WARN_ON(!rcache->cpu_rcaches))
                       continue;
@@ -900,24 +904,30 @@ static bool __iova_rcache_insert(struct iova_domain *iovad,

       if (!iova_magazine_full(cpu_rcache->loaded)) {
               can_insert = true;
-     } else if (!iova_magazine_full(cpu_rcache->prev)) {
+     } else if (iova_magazine_empty(cpu_rcache->prev)) {

is this change strictly necessary?

Yes, because that is what is described in the paper. But it should be
functionally the same, since cpu_rcache->prev is always either full or
empty, so iova_magazine_empty(prev) is equivalent to
!iova_magazine_full(prev) here.

That is what I was getting at.

               swap(cpu_rcache->prev, cpu_rcache->loaded);
               can_insert = true;
       } else {
-             struct iova_magazine *new_mag = iova_magazine_alloc(GFP_ATOMIC);

Apart from this change, did anyone ever consider a kmem cache for the magazines? (See the sketch after this hunk.)

+             spin_lock(&rcache->lock);
+             if (rcache->depot_size < MAX_GLOBAL_MAGS) {
+                     swap(rcache->depot[rcache->depot_size], cpu_rcache->prev);
+                     swap(cpu_rcache->prev, cpu_rcache->loaded);
+                     rcache->depot_size++;
+                     can_insert = true;
+             } else {
+                     mag_to_free = cpu_rcache->loaded;
+             }
+             spin_unlock(&rcache->lock);
+
+             if (mag_to_free) {
+                     struct iova_magazine *new_mag = iova_magazine_alloc(GFP_ATOMIC);

-             if (new_mag) {
-                     spin_lock(&rcache->lock);
-                     if (rcache->depot_size < MAX_GLOBAL_MAGS) {
-                             rcache->depot[rcache->depot_size++] =
-                                             cpu_rcache->loaded;
+                     if (new_mag) {
+                             cpu_rcache->loaded = new_mag;
+                             can_insert = true;
                       } else {
-                             mag_to_free = cpu_rcache->loaded;
+                             mag_to_free = NULL;
                       }
-                     spin_unlock(&rcache->lock);
-
-                     cpu_rcache->loaded = new_mag;
-                     can_insert = true;
               }
       }
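
For what it's worth, the kmem cache idea above might look something like this (untested sketch; the cache name and init hook are assumptions, not part of the patch):

/* Untested sketch: back the magazines with a dedicated kmem cache
 * instead of kmalloc. */
static struct kmem_cache *iova_magazine_cache;

static int __init iova_magazine_cache_init(void)
{
	iova_magazine_cache = kmem_cache_create("iova_magazine",
						sizeof(struct iova_magazine),
						0, SLAB_HWCACHE_ALIGN, NULL);
	return iova_magazine_cache ? 0 : -ENOMEM;
}

static struct iova_magazine *iova_magazine_alloc(gfp_t flags)
{
	return kmem_cache_zalloc(iova_magazine_cache, flags);
}

static void iova_magazine_free(struct iova_magazine *mag)
{
	kmem_cache_free(iova_magazine_cache, mag);
}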

@@ -963,14 +973,15 @@ static unsigned long __iova_rcache_get(struct iova_rcache *rcache,

       if (!iova_magazine_empty(cpu_rcache->loaded)) {
               has_pfn = true;
-     } else if (!iova_magazine_empty(cpu_rcache->prev)) {
+     } else if (iova_magazine_full(cpu_rcache->prev)) {
               swap(cpu_rcache->prev, cpu_rcache->loaded);
               has_pfn = true;
       } else {
               spin_lock(&rcache->lock);
               if (rcache->depot_size > 0) {
-                     iova_magazine_free(cpu_rcache->loaded);

Moving this free out from under the lock is good, independent of the rest of the change.

-                     cpu_rcache->loaded = rcache->depot[--rcache->depot_size];
+                     swap(rcache->depot[rcache->depot_size - 1], cpu_rcache->prev);
+                     swap(cpu_rcache->prev, cpu_rcache->loaded);

I wonder if not using swap() at all is neater here; see the untested sketch below.

+                     rcache->depot_size--;
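
Something like this, perhaps (untested sketch of the same depot pop without swap(); it relies on prev and loaded both being empty at this point):

	/* Pop the full magazine off the depot directly and park the two
	 * empty per-CPU magazines, instead of the double swap(). */
	struct iova_magazine *full = rcache->depot[--rcache->depot_size];

	rcache->depot[rcache->depot_size] = cpu_rcache->prev;	/* empty */
	cpu_rcache->prev = cpu_rcache->loaded;			/* empty */
	cpu_rcache->loaded = full;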

I'm not sure how appropriate the name "depot_size" is any longer.

I think it is still okay, because the empty ones don't count. However,
if you have a better name, I am open to suggestions.

Yeah, probably.

thanks,
John
