On Wed, Feb 13, 2019 at 04:28:00PM +0800, Kairui Song wrote:
> @@ -465,6 +472,12 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
>                               goto out;
>                       }
>                       m = NULL;       /* skip the list anchor */
> +             } else if (m->type == KCORE_NORAM) {
> +                     /* for NORAM area just fill zero */
> +                     if (clear_user(buffer, tsz)) {
> +                             ret = -EFAULT;
> +                             goto out;
> +                     }

I don't think this works reliably. The loop filling the buffer
has this logic at the top:

        while (buflen) {
                /*
                 * If this is the first iteration or the address is not within
                 * the previous entry, search for a matching entry.
                 */
                if (!m || start < m->addr || start >= m->addr + m->size) {
                        list_for_each_entry(m, &kclist_head, list) {
                                if (start >= m->addr &&
                                    start < m->addr + m->size)
                                        break;
                        }
                }

This sets m to the kclist entry that contains the memory being
read. But if we do a large read that starts in valid KCORE_RAM
memory below the GART overlap and extends into the overlap, m
will not be set to the KCORE_NORAM entry. It will keep pointing
to the KCORE_RAM entry and the patch will have no effect.

But this seems already broken for existing cases as well: various
KCORE_* types overlap with KCORE_RAM, don't they?  So maybe
bf991c2231117d50a7645792b514354fc8d19dae ("proc/kcore: optimize
multiple page reads") broke this, and once that is fixed, this
KCORE_NORAM approach will work. Omar?

Regards,

-- 
Jiri Bohac <[email protected]>
SUSE Labs, Prague, Czechia
