Module Name:    src
Committed By:   jdolecek
Date:           Wed Jul 28 21:38:50 UTC 2021

Modified Files:
        src/sys/arch/xen/xen: xbdback_xenbus.c

Log Message:
fix an intentional, but ultimately faulty, off-by-one in the maximum number
of segments for I/O - this was supposed to allow MAXPHYS-sized I/O even
with a page offset, but actually ended up letting through I/O of up to
MAXPHYS+PAGE_SIZE
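
For illustration, a minimal user-space sketch of the arithmetic behind the
fix (assuming the common values MAXPHYS = 64 KiB, PAGE_SIZE = 4 KiB,
PAGE_SHIFT = 12; this is not xbdback code, just the derivation of the
segment limit):

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1 << PAGE_SHIFT)	/* 4096, assumed */
#define MAXPHYS		(64 * 1024)		/* 65536, assumed */

int
main(void)
{
	/* old definition: one extra page for a possible mapping offset */
	int old_segs = (MAXPHYS + PAGE_SIZE) >> PAGE_SHIFT;	/* 17 */
	/* new definition after this commit */
	int new_segs = MAXPHYS >> PAGE_SHIFT;			/* 16 */

	/* a frontend could submit old_segs full pages, i.e. more than MAXPHYS */
	printf("old max I/O: %d bytes (MAXPHYS + PAGE_SIZE = %d)\n",
	    old_segs * PAGE_SIZE, MAXPHYS + PAGE_SIZE);
	printf("new max I/O: %d bytes (MAXPHYS = %d)\n",
	    new_segs * PAGE_SIZE, MAXPHYS);
	return 0;
}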

the KASSERT(bcount < MAXPHYS) is kept as-is, since at that point the number
of segments should already have been validated, so it's a kernel bug if the
size is still too big there
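
As a rough sketch of that layering (simplified and hypothetical, not the
actual xbdback code; the request structure and helper names are made up),
the segment count is rejected when the request is validated, and the byte
count is only sanity-checked later:

#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1 << PAGE_SHIFT)	/* assumed */
#define MAXPHYS		(64 * 1024)		/* assumed */
#define MAX_SEGMENTS	(MAXPHYS >> PAGE_SHIFT)

/* hypothetical stand-in for a block I/O request */
struct fake_req {
	int	nr_segments;
	size_t	bcount;		/* total byte count derived from the segments */
};

/* first line of defense: reject requests with too many segments */
static int
req_validate(const struct fake_req *req)
{
	return req->nr_segments > 0 && req->nr_segments <= MAX_SEGMENTS;
}

/* later stage: by now an oversized bcount can only mean a kernel bug */
static void
req_io(const struct fake_req *req)
{
	assert(req->bcount <= MAXPHYS);	/* same spirit as the KASSERT above */
	/* ... issue the I/O ... */
}

int
main(void)
{
	struct fake_req req = { .nr_segments = MAX_SEGMENTS,
	    .bcount = (size_t)MAX_SEGMENTS * PAGE_SIZE };

	if (req_validate(&req))
		req_io(&req);
	return 0;
}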

fixes PR port-xen/56328 by Greg Oster


To generate a diff of this commit:
cvs rdiff -u -r1.97 -r1.98 src/sys/arch/xen/xen/xbdback_xenbus.c

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.

Modified files:

Index: src/sys/arch/xen/xen/xbdback_xenbus.c
diff -u src/sys/arch/xen/xen/xbdback_xenbus.c:1.97 src/sys/arch/xen/xen/xbdback_xenbus.c:1.98
--- src/sys/arch/xen/xen/xbdback_xenbus.c:1.97	Sun Feb 21 20:02:25 2021
+++ src/sys/arch/xen/xen/xbdback_xenbus.c	Wed Jul 28 21:38:50 2021
@@ -1,4 +1,4 @@
-/*      $NetBSD: xbdback_xenbus.c,v 1.97 2021/02/21 20:02:25 jdolecek Exp $      */
+/*      $NetBSD: xbdback_xenbus.c,v 1.98 2021/07/28 21:38:50 jdolecek Exp $      */
 
 /*
  * Copyright (c) 2006 Manuel Bouyer.
@@ -26,7 +26,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: xbdback_xenbus.c,v 1.97 2021/02/21 20:02:25 jdolecek Exp $");
+__KERNEL_RCSID(0, "$NetBSD: xbdback_xenbus.c,v 1.98 2021/07/28 21:38:50 jdolecek Exp $");
 
 #include <sys/buf.h>
 #include <sys/condvar.h>
@@ -71,8 +71,7 @@ __KERNEL_RCSID(0, "$NetBSD: xbdback_xenb
 #define VBD_BSIZE 512
 #define VBD_MAXSECT ((PAGE_SIZE / VBD_BSIZE) - 1)
 
-/* Need to alloc one extra page to account for possible mapping offset */
-#define VBD_VA_SIZE	(MAXPHYS + PAGE_SIZE)
+#define VBD_VA_SIZE			MAXPHYS
 #define VBD_MAX_INDIRECT_SEGMENTS	VBD_VA_SIZE >> PAGE_SHIFT
 
 CTASSERT(XENSHM_MAX_PAGES_PER_REQUEST >= VBD_MAX_INDIRECT_SEGMENTS);
