    Michael> Fix page shift calculation: for all pages except possibly
    Michael> the last one, the byte beyond the buffer end must be page
    Michael> aligned.

Good catch.  But it seems to me that we don't have to worry about the
first page either, because of the trick

        buffer_list[0].size += buffer_list[0].addr & ((1ULL << shift) - 1);
        buffer_list[0].addr &= ~0ull << shift;

later on.  Does the patch below make sense?
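
For what it's worth, here's a throwaway sketch (made-up addr/size/shift
values, not part of the patch) showing why the first buffer's start
offset is harmless: folding it into the size keeps the end of the
covered range the same while making the start aligned to 1 << shift.

	/* Toy sanity check only -- hypothetical values, not driver code. */
	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t addr  = 0x101234;	/* unaligned start (made up) */
		uint64_t size  = 0x2000;
		int      shift = 12;		/* 4K pages */

		uint64_t end = addr + size;

		/* Same adjustment the driver applies to buffer_list[0]. */
		size += addr & ((1ULL << shift) - 1);
		addr &= ~0ull << shift;

		assert((addr & ((1ULL << shift) - 1)) == 0);	/* now aligned   */
		assert(addr + size == end);			/* same end byte */

		printf("addr=0x%llx size=0x%llx\n",
		       (unsigned long long) addr, (unsigned long long) size);
		return 0;
	}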

 - R.

--- infiniband/hw/mthca/mthca_provider.c        (revision 4754)
+++ infiniband/hw/mthca/mthca_provider.c        (working copy)
@@ -778,17 +778,17 @@ static struct ib_mr *mthca_reg_phys_mr(s
        mask = 0;
        total_size = 0;
        for (i = 0; i < num_phys_buf; ++i) {
-               if (i != 0 && buffer_list[i].addr & ~PAGE_MASK)
-                       return ERR_PTR(-EINVAL);
-               if (i != 0 && i != num_phys_buf - 1 &&
-                   (buffer_list[i].size & ~PAGE_MASK))
-                       return ERR_PTR(-EINVAL);
+               if (i != 0)
+                       mask |= buffer_list[i].addr;
+               if (i != 0 && i != num_phys_buf - 1)
+                       mask |= buffer_list[i].size;
 
                total_size += buffer_list[i].size;
-               if (i > 0)
-                       mask |= buffer_list[i].addr;
        }
 
+       if (mask & ~PAGE_MASK)
+               return ERR_PTR(-EINVAL);
+
        /* Find largest page shift we can use to cover buffers */
        for (shift = PAGE_SHIFT; shift < 31; ++shift)
                if (num_phys_buf > 1) {
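
And a tiny sanity check (again purely illustrative, with stand-in
PAGE_MASK and hypothetical values) that the single mask & ~PAGE_MASK
test rejects the same inputs as the per-buffer checks it replaces: the
OR has a bit set below PAGE_SHIFT iff at least one accumulated value
does.

	#include <assert.h>
	#include <stdint.h>

	#define MY_PAGE_SIZE 4096ULL			/* stand-in for PAGE_SIZE */
	#define MY_PAGE_MASK (~(MY_PAGE_SIZE - 1))	/* stand-in for PAGE_MASK */

	int main(void)
	{
		uint64_t vals[] = { 0x10000, 0x4000, 0x8010 };	/* last is unaligned */
		uint64_t mask = 0;
		int any_unaligned = 0;
		unsigned i;

		for (i = 0; i < 3; ++i) {
			mask |= vals[i];
			if (vals[i] & ~MY_PAGE_MASK)
				any_unaligned = 1;
		}

		/* One test of the accumulated mask == testing each value. */
		assert(!!(mask & ~MY_PAGE_MASK) == any_unaligned);
		return 0;
	}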