Author: mjg
Date: Tue Oct 8 14:59:50 2019
New Revision: 353307

URL: https://svnweb.freebsd.org/changeset/base/353307
Log:
  amd64 pmap: allocate pv table entries for gaps in PA

  This matches the state prior to r353149 and fixes crashes with DRM modules.

  Reported and tested by:	cy, garga, Krasznai Andras
  Fixes:	r353149 ("amd64 pmap: implement per-superpage locks")
  Sponsored by:	The FreeBSD Foundation

Modified:
  head/sys/amd64/amd64/pmap.c

Modified: head/sys/amd64/amd64/pmap.c
==============================================================================
--- head/sys/amd64/amd64/pmap.c	Tue Oct 8 14:54:35 2019	(r353306)
+++ head/sys/amd64/amd64/pmap.c	Tue Oct 8 14:59:50 2019	(r353307)
@@ -1864,26 +1864,14 @@ pmap_init_pv_table(void)
 	highest = -1;
 	s = 0;
 	for (i = 0; i < vm_phys_nsegs; i++) {
-		start = vm_phys_segs[i].start / NBPDR;
 		end = vm_phys_segs[i].end / NBPDR;
 		domain = vm_phys_segs[i].domain;
 		if (highest >= end)
 			continue;
-		if (start < highest) {
-			start = highest + 1;
-			pvd = &pv_table[start];
-		} else {
-			/*
-			 * The lowest address may land somewhere in the middle
-			 * of our page. Simplify the code by pretending it is
-			 * at the beginning.
-			 */
-			pvd = pa_to_pmdp(vm_phys_segs[i].start);
-			pvd = (struct pmap_large_md_page *)trunc_page(pvd);
-			start = pvd - pv_table;
-		}
+		start = highest + 1;
+		pvd = &pv_table[start];
 		pages = end - start + 1;
 		s = round_page(pages * sizeof(*pvd));

_______________________________________________
svn-src-head@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/svn-src-head
To unsubscribe, send any mail to "svn-src-head-unsubscr...@freebsd.org"
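For illustration only, the post-commit sizing logic can be sketched as a
userspace C function. This is not the kernel code: the segment values,
the `seg` struct, and the `count_pv_entries` helper are hypothetical, and
only the loop body mirrors the diff above. The point it demonstrates is
that each segment's range now starts right after the highest superpage
index covered so far, so gaps between physical segments also get pv
table entries instead of being skipped.

```c
#include <assert.h>
#include <stddef.h>

/* 2MB superpage size on amd64 (value of NBPDR). */
#define NBPDR (2UL * 1024 * 1024)

/* Hypothetical stand-in for a vm_phys_segs[] entry. */
struct seg {
	unsigned long start;	/* physical start address, in bytes */
	unsigned long end;	/* physical end address, in bytes */
};

/*
 * Count pv table entries the way the loop does after r353307:
 * start at highest + 1, so any hole below the current segment is
 * covered too, matching the state prior to r353149.
 */
static long
count_pv_entries(const struct seg *segs, int nsegs)
{
	long highest = -1, total = 0;
	int i;

	for (i = 0; i < nsegs; i++) {
		long end = segs[i].end / NBPDR;
		long start;

		if (highest >= end)
			continue;
		start = highest + 1;	/* cover the gap, if any */
		total += end - start + 1;
		highest = end;
	}
	return (total);
}
```

With two segments at [0, 4MB) and [16MB, 20MB), the function counts
superpage indices 0 through 10 with no hole: the 12MB gap between the
segments is included in the second segment's range, which is what keeps
pa_to_pmdp() lookups for addresses in such gaps (as seen with DRM
modules) from landing on unallocated table memory.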