Hi Dave,

On 9/17/2019 2:40 PM, Dave Anderson wrote:
> 
> Hi Kazu,
> 
> This seems to be an extremely rare condition, but in any case, your
> patch is the correct thing to do.
> 
> However, I do have two dumpfiles where the change causes the warning message
> to be displayed unnecessarily, which was only meant to be displayed if the
> mem_map cache is *not* virtually-mapped.  But the single check for 
> !(vt->flags & V_MEM_MAP) is not always sufficient, so it should also
> check for !(machdep->flags & VMEMMAP) like so:
> 
> --- a/memory.c        2019-09-17 14:20:24.586069004 -0400
> +++ b/memory.c        2019-09-17 14:36:36.765286602 -0400
> @@ -6348,13 +6348,13 @@ fill_mem_map_cache(ulong pp, ulong ppend
>                  if (cnt > size)
>                          cnt = size;
>  
> -             if (!readmem(addr, KVADDR, bufptr, size,
> +             if (!readmem(addr, KVADDR, bufptr, cnt,
>                      "virtual page struct cache", RETURN_ON_ERROR|QUIET)) {
> -                     BZERO(bufptr, size);
> -                     if (!(vt->flags & V_MEM_MAP) && ((addr+size) < ppend)) 
> +                     BZERO(bufptr, cnt);
> +                     if (!((vt->flags & V_MEM_MAP) || (machdep->flags & VMEMMAP)) && ((addr+cnt) < ppend))
>                               error(WARNING, 
>                                  "mem_map[] from %lx to %lx not accessible\n",
> -                                     addr, addr+size);
> +                                     addr, addr+cnt);
>               }
>  
>               addr += cnt;
> 
> You OK with that?

I'm OK, thanks!

Kazu

> 
> Thanks,
>   Dave
> 
> ----- Original Message -----
>> Hi Kazu,
>>
>> I will be out of the office until Monday September 16th.  I will check
>> out your patch when I get back.
>>
>> Thanks,
>>   Dave
>>
>>
>> ----- Original Message -----
>>>
>>> fill_mem_map_cache() intends to fall back to page-size-or-less reads if it
>>> cannot read the whole cache size, but it seems it doesn't do so correctly,
>>> and shows just zeroes for existing data.
>>>
>>>   crash> kmem -p 1000
>>>         PAGE       PHYSICAL      MAPPING       INDEX CNT FLAGS
>>>   ffffea0000000040     1000                0        0  0 0
>>>   crash> rd ffffea0000000040
>>>   ffffea0000000040:  000fffff00000400                    ........
>>>
>>> I think the size below should be cnt, and I confirmed that the patch works
>>> well with a dumpfile created with the makedumpfile -e option.
>>>
>>> Signed-off-by: Kazuhito Hagio <[email protected]>
>>> ---
>>>  memory.c | 8 ++++----
>>>  1 file changed, 4 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/memory.c b/memory.c
>>> index ac7e73679405..4584673ed0ae 100644
>>> --- a/memory.c
>>> +++ b/memory.c
>>> @@ -6348,13 +6348,13 @@ fill_mem_map_cache(ulong pp, ulong ppend, char *page_cache)
>>>                  if (cnt > size)
>>>                          cnt = size;
>>>  
>>> -                if (!readmem(addr, KVADDR, bufptr, size,
>>> +                if (!readmem(addr, KVADDR, bufptr, cnt,
>>>                      "virtual page struct cache", RETURN_ON_ERROR|QUIET)) {
>>> -                        BZERO(bufptr, size);
>>> -                        if (!(vt->flags & V_MEM_MAP) && ((addr+size) < ppend))
>>> +                        if (!(vt->flags & V_MEM_MAP) && ((addr+cnt) < ppend))
>>>                                  error(WARNING,
>>>                                     "mem_map[] from %lx to %lx not accessible\n",
>>> -                                        addr, addr+size);
>>> +                                        addr, addr+cnt);
>>>                  }
>>>  
>>>                  addr += cnt;
>>> --
>>> 2.18.1
>>>
>>
>> --
>> Crash-utility mailing list
>> [email protected]
>> https://www.redhat.com/mailman/listinfo/crash-utility
>>
> 
