Re: [Crash-utility] [PATCH] raw_data_dump: display only 8/16/32 bits if requested

2020-04-02 Thread Dominique Martinet
Dave Anderson wrote on Thu, Apr 02, 2020:
> Yes, let's do that -- queued for crash-7.2.9:
> 
>   
> https://github.com/crash-utility/crash/commit/8c28b5625505241d80ec5162f58ccc563e5e59f9

Thanks!
I checked both commits; the small wording difference looks good to me.

-- 
Dominique


--
Crash-utility mailing list
Crash-utility@redhat.com
https://www.redhat.com/mailman/listinfo/crash-utility



Re: [Crash-utility] [PATCH] raw_data_dump: display only 8/16/32 bits if requested

2020-04-02 Thread Dave Anderson



- Original Message -
> Previously, calling raw_data_dump() with e.g. len 4 on 64-bit systems
> would dump 8 bytes anyway, making it hard to pick out the value one wants to see.
> 
> For example, with task_struct.rt_priority a uint32.
> before patch:
> crash> struct -r task_struct.rt_priority 8d9b36186180
> 8d9b361861dc:  9741dec00063   c.A.
> 
> after patch:
> crash-patched> struct -r task_struct.rt_priority 8d9b36186180
> 8d9b361861dc:  0063  c...
> ---
> 
> Here's the promised follow-up.
> 
> Two remarks:
>  - I wasn't sure about an explicit DISPLAY_64 flag, but if we're 32bit
> and want to print 8 bytes it is just as likely to be two entities as
> a single one, so it makes more sense to me to leave the default.
>  - I wasn't sure what to do if someone wants to print some odd size,
> e.g. 6 bytes? Should that be DISPLAY_8 anyway?
> I tried on some bitmap and it looks like raw_data_dump is called with 8
> anyway even if the bitmap part is less than 8; I'm not sure this can
> ever be called with weird values, so it's probably best left as is.

Yes, let's do that -- queued for crash-7.2.9:

  
https://github.com/crash-utility/crash/commit/8c28b5625505241d80ec5162f58ccc563e5e59f9

Thanks,
  Dave



> Thanks!
> 
>  memory.c | 19 ++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/memory.c b/memory.c
> index 4f7b6a0..ccc2944 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -2113,6 +2113,7 @@ raw_data_dump(ulong addr, long count, int symbolic)
>   long wordcnt;
>   ulonglong address;
>   int memtype;
> + ulong flags = HEXADECIMAL;
>  
>   switch (sizeof(long))
>   {
> @@ -2132,6 +2133,22 @@ raw_data_dump(ulong addr, long count, int symbolic)
>   break;
>   }
>  
> + switch (count)
> + {
> + case SIZEOF_8BIT:
> + flags |= DISPLAY_8;
> + break;
> + case SIZEOF_16BIT:
> + flags |= DISPLAY_16;
> + break;
> + case SIZEOF_32BIT:
> + flags |= DISPLAY_32;
> + break;
> + default:
> + flags |= DISPLAY_DEFAULT;
> + break;
> + }
> +
>   if (pc->curcmd_flags & MEMTYPE_FILEADDR) {
>   address = pc->curcmd_private;
>   memtype = FILEADDR;
> @@ -2144,7 +2161,7 @@ raw_data_dump(ulong addr, long count, int symbolic)
>   }
>  
>   display_memory(address, wordcnt,
> - HEXADECIMAL|DISPLAY_DEFAULT|(symbolic ? SYMBOLIC : ASCII_ENDLINE),
> + flags|(symbolic ? SYMBOLIC : ASCII_ENDLINE),
>   memtype, NULL);
>  }
>  
> --
> 2.26.0
> 
> 




[Crash-utility] [PATCH] raw_data_dump: display only 8/16/32 bits if requested

2020-04-02 Thread Dominique Martinet
Previously, calling raw_data_dump() with e.g. len 4 on 64-bit systems
would dump 8 bytes anyway, making it hard to pick out the value one wants to see.

For example, with task_struct.rt_priority a uint32.
before patch:
crash> struct -r task_struct.rt_priority 8d9b36186180
8d9b361861dc:  9741dec00063   c.A.

after patch:
crash-patched> struct -r task_struct.rt_priority 8d9b36186180
8d9b361861dc:  0063  c...
---

Here's the promised follow-up.

Two remarks:
 - I wasn't sure about an explicit DISPLAY_64 flag, but if we're 32bit
and want to print 8 bytes it is just as likely to be two entities as
a single one, so it makes more sense to me to leave the default.
 - I wasn't sure what to do if someone wants to print some odd size,
e.g. 6 bytes? Should that be DISPLAY_8 anyway?
I tried on some bitmap and it looks like raw_data_dump is called with 8
anyway even if the bitmap part is less than 8; I'm not sure this can
ever be called with weird values, so it's probably best left as is.

Thanks!

 memory.c | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/memory.c b/memory.c
index 4f7b6a0..ccc2944 100644
--- a/memory.c
+++ b/memory.c
@@ -2113,6 +2113,7 @@ raw_data_dump(ulong addr, long count, int symbolic)
long wordcnt;
ulonglong address;
int memtype;
+   ulong flags = HEXADECIMAL;
 
switch (sizeof(long))
{
@@ -2132,6 +2133,22 @@ raw_data_dump(ulong addr, long count, int symbolic)
break;
}
 
+   switch (count)
+   {
+   case SIZEOF_8BIT:
+   flags |= DISPLAY_8;
+   break;
+   case SIZEOF_16BIT:
+   flags |= DISPLAY_16;
+   break;
+   case SIZEOF_32BIT:
+   flags |= DISPLAY_32;
+   break;
+   default:
+   flags |= DISPLAY_DEFAULT;
+   break;
+   }
+
if (pc->curcmd_flags & MEMTYPE_FILEADDR) {
address = pc->curcmd_private;
memtype = FILEADDR;
@@ -2144,7 +2161,7 @@ raw_data_dump(ulong addr, long count, int symbolic)
}
 
display_memory(address, wordcnt, 
-   HEXADECIMAL|DISPLAY_DEFAULT|(symbolic ? SYMBOLIC : ASCII_ENDLINE),
+   flags|(symbolic ? SYMBOLIC : ASCII_ENDLINE),
memtype, NULL);
 }
 
-- 
2.26.0





Re: [Crash-utility] Re: [External Mail]Re: zram decompress support for gcore/crash-utility

2020-04-02 Thread Dave Anderson


- Original Message -
> Hi, Dave & hatayama
> 
> I made two patches, in crash-utility and gcore, to support zram decompression.
> 1. In crash-utility, I added a patch to readmem to support zram decompression;
> the readmem interface automatically recognizes and decompresses zram data.
> There are some limitations to the zram support: only lzo decompression is
> supported. The kernel supports lzo, lz4, lz4hc, 842, and zstd, but lzo is
> the default.
> 
> With the patch, the "rd" command can also read data mapped to zram:
> [without patch]
> crash> rd 144fc000 2
> rd: invalid user virtual address: 144fc000  type: "64-bit UVADDR"
> [with patch]
> crash> rd 144fc000 2
> 144fc000:  06ecdc6b06ecb280 06f027f906eebebe   k'..

With respect to the crash utility patch:

Apparently you wrote this patch to only support ARM64?  Here's what happens on 
an x86_64:
  
  $ patch -p1 < $bos/0001-support-zram-decompress-in-readmem.patch
  patching file defs.h
  Hunk #5 succeeded at 5304 (offset 2 lines).
  patching file memory.c
  $ make warn
  ... [ cut ] ...
  cc -c -g -DX86_64 -DLZO -DSNAPPY -DGDB_7_6  memory.c -Wall -O2 
-Wstrict-prototypes -Wmissing-prototypes -fstack-protector -Wformat-security 
  In file included from memory.c:19:0:
  memory.c: In function 'zram_object_addr':
  defs.h:5310:27: error: 'PHYS_MASK_SHIFT' undeclared (first use in this 
function)
   #define _PFN_BITS(PHYS_MASK_SHIFT - PAGESHIFT())
 ^
  defs.h:5311:43: note: in expansion of macro '_PFN_BITS'
   #define OBJ_INDEX_BITS   (BITS_PER_LONG - _PFN_BITS - OBJ_TAG_BITS)
 ^
  memory.c:19838:27: note: in expansion of macro 'OBJ_INDEX_BITS'
page = pfn_to_map(obj >> OBJ_INDEX_BITS);
 ^
  defs.h:5310:27: note: each undeclared identifier is reported only once for 
each function it appears in
   #define _PFN_BITS(PHYS_MASK_SHIFT - PAGESHIFT())
 ^
  defs.h:5311:43: note: in expansion of macro '_PFN_BITS'
   #define OBJ_INDEX_BITS   (BITS_PER_LONG - _PFN_BITS - OBJ_TAG_BITS)
 ^
  memory.c:19838:27: note: in expansion of macro 'OBJ_INDEX_BITS'
page = pfn_to_map(obj >> OBJ_INDEX_BITS);
 ^
  memory.c: In function 'try_zram_decompress':
  memory.c:19940:16: error: 'PTE_VALID' undeclared (first use in this function)
if (pte_val & PTE_VALID)
  ^
  memory.c:19932:8: warning: unused variable 'ret' [-Wunused-variable]
ulong ret = 0;
  ^
  make[4]: *** [memory.o] Error 1
  make[3]: *** [gdb] Error 2
  make[2]: *** [rebuild] Error 2
  make[1]: *** [gdb_merge] Error 2
  make: *** [warn] Error 2
  $

So that's a non-starter.  If it can't be made architecture-neutral, then at 
least the other major architectures need to be supported.  At a minimum, all 
architectures must still be able to compile with LZO enabled.

If you can do that, other suggestions I have for the patch are:

 (1) Move all the new offset_table entries to the end of the structure to 
prevent
 the breakage of previously-compiled extension modules that use OFFSET().

 (2) Move the new LZO specific functions to diskdump.c, which is the only C file
 that is set up to deal with LZO being #define'd on the fly with "make lzo".

 (3) Create a dummy try_zram_decompress() function in diskdump.c that just 
 returns 0.  Put it outside of the LZO function block, e.g.:

#ifdef LZO
zram_object_addr(args... )
...
lookup_swap_cache(args...)
...
try_zram_decompress(args...)
...
#else
try_zram_decompress(args...) { return 0; }
#endif

 Alternatively, you could create a try_zram_decompress() macro in defs.h 
the same way.

 (4) Remove the #ifdef/#endif LZO section of readmem().

 (5) PLEASE do not make all the white-space changes in memory.c.  It's annoying 
 to have to review the patch when it's cluttered with changes that are 
 irrelevant to the task at hand.

Thanks,
  Dave






> 
> 2. In gcore, I had to make a small change: the readmem parameter changes from
> PHYSADDR to UVADDR; the rest of the work is done by crash.
> 
> Please help review.
> Thanks
> 
> -Original Message-
> From: Dave Anderson 
> Sent: April 2, 2020 0:29
> To: 赵乾利 
> Cc: d hatayama ; Discussion list for crash utility
> usage, maintenance and development 
> Subject: Re: [External Mail]Re: [Crash-utility] zram decompress support for
> gcore/crash-utility
> 
> 
> 
> - Original Message -
> > Hi, Dave
> > Zram is a virtual device simulated as a block device; it's part of
> > memory/ramdump. Just enable CONFIG_ZRAM; no other settings are needed.
> > You can refer to drivers/block/zram/zram_drv.c: the driver calls
> > zram_meta_alloc to allocate memory from RAM.
> >
> > We want to be able to access these zram pages like normal pages.
> 
> I understand all that.  I'm just curious how makedumpfile will handle/filter
> the physical RAM pages that make up the zram block device.
Re: [Crash-utility] [PATCH v2] struct: Allow -r with a single member-specific output

2020-04-02 Thread Dave Anderson



- Original Message -
> Hi Dave,
> 
> Dave Anderson wrote on Wed, Apr 01, 2020:
> > > I didn't post that v2 back in Feb because I wasn't totally happy with
> > > it; I can't say I now am but might as well get your take on it...
> > 
> > What part of this patch aren't you happy about?
> 
> It's mostly style, really - I don't like that we're calling it twice in
> datatype_info(), because member_to_datatype() doesn't really fill in the
> datatype_member struct; it only fills in dm->member and the offset.
> On a naive read, I would expect member_to_datatype to fill in the whole dm...
> 
> Functionally, I tested it; it's a bit slower than my original version, but
> not enough to be a valid argument here. It's much better than nothing, so
> if you're happy with this, let's go with it :)

Works for me -- queued for crash-7.2.9:
  
  
https://github.com/crash-utility/crash/commit/42fba6524ce01b6cecb4cd2cac8f0a50d79b1420

Thanks,
  Dave


> 
> I will probably want to follow up with a second patch for raw_data_dump
> to add DISPLAY_32/16 flags if the len requested is < word size but it's
> not directly related to this patch...
> 
> 
> Cheers,
> --
> Dominique
> 
> 




[Crash-utility] Re: [External Mail]Re: zram decompress support for gcore/crash-utility

2020-04-02 Thread 赵乾利
Hi, Dave & hatayama

I made two patches, in crash-utility and gcore, to support zram decompression.
1. In crash-utility, I added a patch to readmem to support zram decompression;
the readmem interface automatically recognizes and decompresses zram data.
There are some limitations to the zram support: only lzo decompression is
supported. The kernel supports lzo, lz4, lz4hc, 842, and zstd, but lzo is
the default.

With the patch, the "rd" command can also read data mapped to zram:
[without patch]
crash> rd 144fc000 2
rd: invalid user virtual address: 144fc000  type: "64-bit UVADDR"
[with patch]
crash> rd 144fc000 2
144fc000:  06ecdc6b06ecb280 06f027f906eebebe   k'..

2. In gcore, I had to make a small change: the readmem parameter changes from
PHYSADDR to UVADDR; the rest of the work is done by crash.

Please help review.
Thanks

-Original Message-
From: Dave Anderson 
Sent: April 2, 2020 0:29
To: 赵乾利 
Cc: d hatayama ; Discussion list for crash utility 
usage, maintenance and development 
Subject: Re: [External Mail]Re: [Crash-utility] zram decompress support for 
gcore/crash-utility



- Original Message -
> Hi, Dave
> Zram is a virtual device simulated as a block device; it's part of
> memory/ramdump. Just enable CONFIG_ZRAM; no other settings are needed.
> You can refer to drivers/block/zram/zram_drv.c: the driver calls
> zram_meta_alloc to allocate memory from RAM.
>
> We want to be able to access these zram pages like normal pages.

I understand all that.  I'm just curious how makedumpfile will handle/filter 
the physical RAM pages that make up the zram block device.

Anyway, send a patch and I'll take a look.

Dave


>
> 
> From: Dave Anderson 
> Sent: Wednesday, April 1, 2020 23:24
> To: 赵乾利
> Cc: d hatayama; Discussion list for crash utility usage, maintenance
> and development
> Subject: Re: [External Mail]Re: [Crash-utility] zram decompress
> support for gcore/crash-utility
>
> - Original Message -
> > Hi, Dave
> > zram is the same as other swap devices, but every swapped page is
> > compressed and then saved to another memory address.
> > The process is the same as for a common swap device; a non-swapped address
> > is just a normal user address that the pgd and MMU translate to a
> > physical address.
> >
> > please refer to below information:
> > crash> vm -p
> > PID: 1565   TASK: ffe1fce32d00  CPU: 7   COMMAND: "system_server"
> >MM   PGD  RSSTOTAL_VM
> > ffe264431c00  ffe1f54ad000  528472k  9780384k
> >   VMA   START   END FLAGS FILE
> > ffe0ea401300   12c0   12e0 100073
> > VIRTUAL PHYSICAL
> > ...
> > 144fc000SWAP: /dev/block/zram0  OFFSET: 236750
> > ...
> > 1738e000SWAP: /dev/block/zram0  OFFSET: 73426
> > 1738f000   21aa2c000
> > 1739   1c3308000
> > 17391000SWAP: /dev/block/zram0  OFFSET: 73431
> > 17392000   19c162000
> > 17393000   19c132000
> > 17394000SWAP: /dev/block/zram0  OFFSET: 234576
> > 17395000   19c369000
> > 17396000   20b35c000
> > 17397000   18011e000
> > 17398000SWAP: /dev/block/zram0  OFFSET: 73433
> > 17399000   1dc3d2000
> > 1739a000   1bc59f000
> > 1739b000SWAP: /dev/block/zram0  OFFSET: 73437
> >
> >
> > crash> vtop -c 1565 144fc000
> > VIRTUAL PHYSICAL
> > 144fc000(not mapped)
> >
> > PAGE DIRECTORY: ffe1f54ad000
> >PGD: ffe1f54ad000 => 1f54ab003
> >PMD: ffe1f54ab510 => 1f43b8003
> >PTE: ffe1f43b87e0 => 39cce00
> >
> >   PTE  SWAPOFFSET
> > 39cce00  /dev/block/zram0  236750
> >
> >   VMA   START   END FLAGS FILE
> > ffe148bafe40   144c   1454 100073
> >
> > SWAP: /dev/block/zram0  OFFSET: 236750
>
> Ok, so with respect to user-space virtual addresses, there is nothing
> other than handling zram swap-backed memory.
>
> So what you're proposing is that when reading user-space memory that
> happens to be backed-up on a zram swap device, then the user data
> could alternatively be read from the zram swap device, and presented
> as if it were present in physical memory?
>
> Are the physical RAM pages that make up the contents of a zram device
> collected with a typical filtered compressed kdump?  If not, what
> makedumpfile -d flag is required for them to be captured?
>
> Dave
>
>
> #/**This e-mail and its attachments contain confidential information from
> XIAOMI, which is intended only for the person or entity whose address
> is listed above. Any use of the information contained herein in any
> way (including, but not limited to, total or partial disclosure,
> reproduction, or dissemination) by persons other than the intended
> recipient(s) is prohibited. If you receive this e-mail in error,
> please notify the sender by phone or email immediately and delete
> it!**/#
>