Re: [Crash-utility] [External Mail]RE: zram decompress support for gcore/crash-utility

2020-04-06 Thread 赵乾利
Hi, Hatayama,

Please refer to the following for the exact kernel changes:

commit 34be98f4944f99076f049a6806fc5f5207a755d3
Author: Ard Biesheuvel 
Date:   Thu Jul 20 17:15:45 2017 +0100

arm64: kernel: remove {THREAD,IRQ_STACK}_START_SP

For historical reasons, we leave the top 16 bytes of our task and IRQ
stacks unused, a practice used to ensure that the SP can always be
masked to find the base of the current stack (historically, where
thread_info could be found).

However, this is not necessary, as:

* When an exception is taken from a task stack, we decrement the SP by
  S_FRAME_SIZE and stash the exception registers before we compare the
  SP against the task stack. In such cases, the SP must be at least
  S_FRAME_SIZE below the limit, and can be safely masked to determine
  whether the task stack is in use.

* When transitioning to an IRQ stack, we'll place a dummy frame onto the
  IRQ stack before enabling asynchronous exceptions, or executing code
  we expect to trigger faults. Thus, if an exception is taken from the
  IRQ stack, the SP must be at least 16 bytes below the limit.

* We no longer mask the SP to find the thread_info, which is now found
  via sp_el0. Note that historically, the offset was critical to ensure
  that cpu_switch_to() found the correct stack for new threads that
  hadn't yet executed ret_from_fork().

Given that, this initial offset serves no purpose, and can be removed.
This brings us in-line with other architectures (e.g. x86) which do not
rely on this masking.

Signed-off-by: Ard Biesheuvel 
[Mark: rebase, kill THREAD_START_SP, commit msg additions]
Signed-off-by: Mark Rutland 
Reviewed-by: Will Deacon 
Tested-by: Laura Abbott 
Cc: Catalin Marinas 
Cc: James Morse 
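
For reference, the practical effect of this kernel change on a core dumper: on kernels
that still define THREAD_START_SP, the saved user registers sit 16 bytes lower on the
ARM64 kernel stack than on kernels with this commit, so a gcore-style tool that assumes
the wrong layout reads every register two 8-byte slots off (the "with patch" and
"without patch" register dumps later in this thread are shifted against each other by
exactly two registers). Below is a minimal sketch of that address arithmetic only;
THREAD_SIZE, PT_REGS_SIZE and the has_thread_start_sp flag are illustrative
assumptions, not values or names taken from gcore or the kernel source.

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch: locate the saved user pt_regs at the top of an ARM64 kernel stack.
 * The constants are assumptions for a typical arm64 build, not kernel values.
 */
#define THREAD_SIZE   (16 * 1024)  /* assumed kernel stack size               */
#define PT_REGS_SIZE  272          /* placeholder for sizeof(struct pt_regs)  */

/* Kernels without this commit keep THREAD_START_SP = THREAD_SIZE - 16. */
static uint64_t task_pt_regs_addr(uint64_t stack_base, int has_thread_start_sp)
{
	uint64_t stack_top = stack_base + THREAD_SIZE;

	if (has_thread_start_sp)
		stack_top -= 16;	/* the historical 16-byte reservation */

	return stack_top - PT_REGS_SIZE;
}

int main(void)
{
	uint64_t base = 0xffffffc011220000ULL;	/* made-up stack base */

	printf("old layout: pt_regs at %#llx\n",
	       (unsigned long long)task_pt_regs_addr(base, 1));
	printf("new layout: pt_regs at %#llx\n",
	       (unsigned long long)task_pt_regs_addr(base, 0));
	return 0;
}

Reading the registers with the wrong variant shifts the whole register file by 16
bytes, which is the mismatch the 0002 patch is meant to correct, per the explanation
quoted below.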


From: d.hatay...@fujitsu.com 
Sent: Monday, April 6, 2020 12:32
To: 赵乾利
Cc: crash-utility@redhat.com 
Subject: RE: [External Mail]RE: zram decompress support for gcore/crash-utility

Zhao,

> -Original Message-
> From: 赵乾利 
> Sent: Wednesday, April 1, 2020 10:22 PM
> To: Hatayama, Daisuke/畑山 大輔 
> Cc: crash-utility@redhat.com  
> 
> Subject: Re: [External Mail]RE: zram decompress support for
> gcore/crash-utility
>
> Hi, Hatayama,
>
> I have just ported zram support into crash-utility; with this approach, gcore needs
> to call the zram decompression function (try_zram_decompress) itself.
> Integrating zram decompression into readmem is a good suggestion; I'm working on it.
>
> About 0002-gcore-ARM-ARM64-reserved-8-16-byte-in-the-top-of-sta.patch: it is a
> completely independent patch. Without it, the registers in the coredump will be
> wrong/dislocated, so gdb cannot unwind the complete call stack.

Thanks for your explanation. I will write this in the commit description.

Could you also tell me the exact commit in the Linux kernel that made the
corresponding change?

> You can see below:
> [Without patch]
> (gdb) bt
> #0  android::Mutex::lock (this=) at 
> system/core/libutils/include/utils/Mutex.h:183
> #1  android::Looper::pollInner (this=0x704ad1c590  epoll_event*, int, int)>, timeoutMillis=1291145664)
> at system/core/libutils/Looper.cpp:243
> #2  0xbc5e696a0018 in ?? ()
> Backtrace stopped: previous frame identical to this frame (corrupt stack?)
>
> (gdb) info reg
> x0 0xff801998bff0 -549326372880
> x1 0xffa6d3e83848 -382991845304
> x2 0x34 52
> x3 0x7fdb6d8f90 549142237072
> x4 0x10 16
> x5 0x74a5 29861
> x6 0x0 0
> x7 0x8 8
> x8 0x704e33a000 482348343296
> x9 0xbf815ee 200807918
> x10 0x16 22
> x11 0xebabf645f5e97f31 -1464806473440067791
> x12 0x1 1
> x13 0xc0 192
> x14 0x20daea4ae8 14741160
> x15 0x115 277
> x16 0x1080400222a3e010 1189020679541088272
> x17 0x40 64
> x18 0x704a9fcd70 482288323952
> x19 0x704ad1c590 482291598736
> x20 0x704e01c000 482345074688
> x21 0x704cf551c0 482327482816
> x22 0x704cf55268 482327482984
> x23 0x74a5 29861
> x24 0x74a5 29861
> x25 0x704cf551c0 482327482816
> x26 0x7fff 2147483647
> x27 0x704d0aa020 482328879136
> x28 0x704cfc8840 482327955520
> x29 0x704407b8 1883506616
> x30 0x70435dc8 1883463112
> sp 0x7fdb6d90f0 0x7fdb6d90f0
> pc 0x704a9f80c0 0x704a9f80c0 
> cpsr   0xdb6d8f50 -613576880
> fpsr   0x17 23
> fpcr   0x0 0
>
> [With patch]
> (gdb) bt
> #0  __epoll_pwait () at bionic/libc/arch-arm64/s

Re: [Crash-utility] [External Mail]RE: zram decompress support for gcore/crash-utility

2020-04-05 Thread d.hatay...@fujitsu.com
Zhao,

> -Original Message-
> From: 赵乾利 
> Sent: Wednesday, April 1, 2020 10:22 PM
> To: Hatayama, Daisuke/畑山 大輔 
> Cc: crash-utility@redhat.com  
> 
> Subject: Re: [External Mail]RE: zram decompress support for
> gcore/crash-utility
> 
> Hi, Hatayama,
>
> I have just ported zram support into crash-utility; with this approach, gcore needs
> to call the zram decompression function (try_zram_decompress) itself.
> Integrating zram decompression into readmem is a good suggestion; I'm working on it.
>
> About 0002-gcore-ARM-ARM64-reserved-8-16-byte-in-the-top-of-sta.patch: it is a
> completely independent patch. Without it, the registers in the coredump will be
> wrong/dislocated, so gdb cannot unwind the complete call stack.

Thanks for your explanation. I will write this in the commit description.

Could you also tell me the exact commit in the Linux kernel that made the
corresponding change?

> You can see below:
> [Without patch]
> (gdb) bt
> #0  android::Mutex::lock (this=) at 
> system/core/libutils/include/utils/Mutex.h:183
> #1  android::Looper::pollInner (this=0x704ad1c590  epoll_event*, int, int)>, timeoutMillis=1291145664)
> at system/core/libutils/Looper.cpp:243
> #2  0xbc5e696a0018 in ?? ()
> Backtrace stopped: previous frame identical to this frame (corrupt stack?)
> 
> (gdb) info reg
> x0 0xff801998bff0 -549326372880
> x1 0xffa6d3e83848 -382991845304
> x2 0x34 52
> x3 0x7fdb6d8f90 549142237072
> x4 0x10 16
> x5 0x74a5 29861
> x6 0x0 0
> x7 0x8 8
> x8 0x704e33a000 482348343296
> x9 0xbf815ee 200807918
> x10 0x16 22
> x11 0xebabf645f5e97f31 -1464806473440067791
> x12 0x1 1
> x13 0xc0 192
> x14 0x20daea4ae8 14741160
> x15 0x115 277
> x16 0x1080400222a3e010 1189020679541088272
> x17 0x40 64
> x18 0x704a9fcd70 482288323952
> x19 0x704ad1c590 482291598736
> x20 0x704e01c000 482345074688
> x21 0x704cf551c0 482327482816
> x22 0x704cf55268 482327482984
> x23 0x74a5 29861
> x24 0x74a5 29861
> x25 0x704cf551c0 482327482816
> x26 0x7fff 2147483647
> x27 0x704d0aa020 482328879136
> x28 0x704cfc8840 482327955520
> x29 0x704407b8 1883506616
> x30 0x70435dc8 1883463112
> sp 0x7fdb6d90f0 0x7fdb6d90f0
> pc 0x704a9f80c0 0x704a9f80c0 
> cpsr   0xdb6d8f50 -613576880
> fpsr   0x17 23
> fpcr   0x0 0
> 
> [With patch]
> (gdb) bt
> #0  __epoll_pwait () at bionic/libc/arch-arm64/syscalls/__epoll_pwait.S:9
> #1  0x00704a9f80c0 in android::Looper::pollInner (this=0x704cf551c0, 
> timeoutMillis=29861) at
> system/core/libutils/Looper.cpp:237
> #2  0x00704a9f7f90 in android::Looper::pollOnce (this=0x704cf551c0, 
> timeoutMillis=29861, outFd=0x0,
> outEvents=0x0, outData=0x0) at system/core/libutils/Looper.cpp:205
> #3  0x00704c4530f4 in android::Looper::pollOnce (this=0x34, 
> timeoutMillis=-613576816) at
> system/core/libutils/include/utils/Looper.h:267
> #4  android::NativeMessageQueue::pollOnce (this=, 
> env=0x704cf5db80, pollObj=,
> timeoutMillis=-613576816)
> at frameworks/base/core/jni/android_os_MessageQueue.cpp:110
> #5  android::android_os_MessageQueue_nativePollOnce (env=0x704cf5db80, 
> obj=, ptr= out>, timeoutMillis=-613576816)
> at frameworks/base/core/jni/android_os_MessageQueue.cpp:191
> #6  0x73749590 in ?? ()
> 
> (gdb) info registers
> x0 0x34 52
> x1 0x7fdb6d8f90 549142237072
> x2 0x10 16
> x3 0x74a5 29861
> x4 0x0 0
> x5 0x8 8
> x6 0x704e33a000 482348343296
> x7 0xbf815ee 200807918
> x8 0x16 22
> x9 0xebabf645f5e97f31 -1464806473440067791
> x10 0x1 1
> x11 0xc0 192
> x12 0x20daea4ae8 14741160
> x13 0x115 277
> x14 0x1080400222a3e010 1189020679541088272
> x15 0x40 64
> x16 0x704a9fcd70 482288323952
> x17 0x704ad1c590 482291598736
> x18 0x704e01c000 482345074688
> x19 0x704cf551c0 482327482816
> x20 0x704cf55268 482327482984
> x21 0x74a5 29861
> x22 0x74a5 29861
> x23 0x704cf551c0 482327482816
> x24 0x7fff 2147483647
> x25 0x704d0aa020 482328879136
> x26 0x704cfc8840 482327955520
> x27 0x704407b

Re: [Crash-utility] [External Mail]Re: zram decompress support for gcore/crash-utility

2020-04-01 Thread 赵乾利
Hi, Dave,
Zram is a virtual device that is simulated as a block device; it is part of memory
(and therefore of the ramdump). Just enable CONFIG_ZRAM; no other settings are needed.
You can refer to drivers/block/zram/zram_drv.c: the driver calls zram_meta_alloc() to
allocate its backing memory from RAM.

We want to be able to access these zram-backed pages like normal pages.
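
To make that concrete, here is a rough outline of what a debugger-side reader has to
do for a page that vtop reports as swapped out to zram: look up the zram table entry
for the swap offset, resolve the zsmalloc handle to the stored object, and decompress
it (or copy it verbatim if it was stored uncompressed). Every helper below
(read_kernel, zram_slot_for, zs_object_addr, decompress_page) is a hypothetical
placeholder for a "read this kernel object out of the dump" primitive; this is a
sketch of the shape of the path, not the actual crash/gcore implementation.

#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096

typedef uint64_t kvaddr_t;

struct zram_slot {		/* simplified stand-in for zram_table_entry     */
	kvaddr_t handle;	/* zsmalloc handle for the stored object        */
	uint32_t comp_len;	/* compressed length; PAGE_SIZE if stored as-is */
};

/* Hypothetical dump-access primitives -- names invented for this sketch. */
extern int read_kernel(kvaddr_t addr, void *buf, size_t len);
extern int zram_slot_for(uint64_t swap_offset, struct zram_slot *out);
extern kvaddr_t zs_object_addr(kvaddr_t handle);
extern int decompress_page(const void *src, uint32_t src_len, void *dst);

/* Return 0 on success with one decompressed page written to 'page'. */
int read_zram_page(uint64_t swap_offset, void *page)
{
	struct zram_slot slot;
	unsigned char comp[PAGE_SIZE];
	kvaddr_t obj;

	if (zram_slot_for(swap_offset, &slot))		/* zram->table[offset]     */
		return -1;

	obj = zs_object_addr(slot.handle);		/* resolve zsmalloc handle */

	if (slot.comp_len == PAGE_SIZE)			/* stored uncompressed     */
		return read_kernel(obj, page, PAGE_SIZE);

	if (read_kernel(obj, comp, slot.comp_len))	/* fetch compressed bytes  */
		return -1;

	return decompress_page(comp, slot.comp_len, page);	/* e.g. LZO/LZ4    */
}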


From: Dave Anderson 
Sent: Wednesday, April 1, 2020 23:24
To: 赵乾利
Cc: d hatayama; Discussion list for crash utility usage, maintenance and 
development
Subject: Re: [External Mail]Re: [Crash-utility] zram decompress support for 
gcore/crash-utility

- Original Message -
> Hi, Dave,
> zram is the same as any other swap device, but every swapped-out page is compressed
> and then saved to another memory address.
> The process is the same as for a common swap device; a non-swapped page is just a
> normal user address, and the pgd/MMU translate it to a physical address.
>
> Please refer to the information below:
> crash> vm -p
> PID: 1565   TASK: ffe1fce32d00  CPU: 7   COMMAND: "system_server"
>    MM              PGD           RSS      TOTAL_VM
> ffe264431c00  ffe1f54ad000  528472k  9780384k
>   VMA   START   END FLAGS FILE
> ffe0ea401300   12c0   12e0 100073
> VIRTUAL PHYSICAL
> ...
> 144fc000SWAP: /dev/block/zram0  OFFSET: 236750
> ...
> 1738e000SWAP: /dev/block/zram0  OFFSET: 73426
> 1738f000   21aa2c000
> 1739   1c3308000
> 17391000SWAP: /dev/block/zram0  OFFSET: 73431
> 17392000   19c162000
> 17393000   19c132000
> 17394000SWAP: /dev/block/zram0  OFFSET: 234576
> 17395000   19c369000
> 17396000   20b35c000
> 17397000   18011e000
> 17398000SWAP: /dev/block/zram0  OFFSET: 73433
> 17399000   1dc3d2000
> 1739a000   1bc59f000
> 1739b000SWAP: /dev/block/zram0  OFFSET: 73437
>
>
> crash> vtop -c 1565 144fc000
> VIRTUAL PHYSICAL
> 144fc000(not mapped)
>
> PAGE DIRECTORY: ffe1f54ad000
>PGD: ffe1f54ad000 => 1f54ab003
>PMD: ffe1f54ab510 => 1f43b8003
>PTE: ffe1f43b87e0 => 39cce00
>
>   PTE          SWAP              OFFSET
> 39cce00  /dev/block/zram0  236750
>
>   VMA   START   END FLAGS FILE
> ffe148bafe40   144c   1454 100073
>
> SWAP: /dev/block/zram0  OFFSET: 236750

Ok, so with respect to user-space virtual addresses, there is nothing
other than handling zram swap-backed memory.

So what you're proposing is that when reading user-space memory
that happens to be backed-up on a zram swap device, then the user
data could alternatively be read from the zram swap device, and
presented as if it were present in physical memory?

Are the physical RAM pages that make up the contents of a zram
device collected with a typical filtered compressed kdump?  If not,
what makedumpfile -d flag is required for them to be captured?

Dave




Re: [Crash-utility] [External Mail]Re: zram decompress support for gcore/crash-utility

2020-04-01 Thread Dave Anderson


- Original Message -
> Hi, Dave,
> Zram is a virtual device that is simulated as a block device; it is part of memory
> (and therefore of the ramdump). Just enable CONFIG_ZRAM; no other settings are needed.
> You can refer to drivers/block/zram/zram_drv.c: the driver calls zram_meta_alloc()
> to allocate its backing memory from RAM.
>
> We want to be able to access these zram-backed pages like normal pages.

I understand all that.  I'm just curious how makedumpfile will handle/filter
the physical RAM pages that make up the zram block device.

Anyway, send a patch and I'll take a look.

Dave


> 
> 
> From: Dave Anderson 
> Sent: Wednesday, April 1, 2020 23:24
> To: 赵乾利
> Cc: d hatayama; Discussion list for crash utility usage, maintenance and
> development
> Subject: Re: [External Mail]Re: [Crash-utility] zram decompress support for
> gcore/crash-utility
> 
> - Original Message -
> > Hi, Dave,
> > zram is the same as any other swap device, but every swapped-out page is
> > compressed and then saved to another memory address.
> > The process is the same as for a common swap device; a non-swapped page is just a
> > normal user address, and the pgd/MMU translate it to a physical address.
> >
> > Please refer to the information below:
> > crash> vm -p
> > PID: 1565   TASK: ffe1fce32d00  CPU: 7   COMMAND: "system_server"
> >    MM              PGD           RSS      TOTAL_VM
> > ffe264431c00  ffe1f54ad000  528472k  9780384k
> >   VMA   START   END FLAGS FILE
> > ffe0ea401300   12c0   12e0 100073
> > VIRTUAL PHYSICAL
> > ...
> > 144fc000SWAP: /dev/block/zram0  OFFSET: 236750
> > ...
> > 1738e000SWAP: /dev/block/zram0  OFFSET: 73426
> > 1738f000   21aa2c000
> > 1739   1c3308000
> > 17391000SWAP: /dev/block/zram0  OFFSET: 73431
> > 17392000   19c162000
> > 17393000   19c132000
> > 17394000SWAP: /dev/block/zram0  OFFSET: 234576
> > 17395000   19c369000
> > 17396000   20b35c000
> > 17397000   18011e000
> > 17398000SWAP: /dev/block/zram0  OFFSET: 73433
> > 17399000   1dc3d2000
> > 1739a000   1bc59f000
> > 1739b000SWAP: /dev/block/zram0  OFFSET: 73437
> >
> >
> > crash> vtop -c 1565 144fc000
> > VIRTUAL PHYSICAL
> > 144fc000(not mapped)
> >
> > PAGE DIRECTORY: ffe1f54ad000
> >PGD: ffe1f54ad000 => 1f54ab003
> >PMD: ffe1f54ab510 => 1f43b8003
> >PTE: ffe1f43b87e0 => 39cce00
> >
> >   PTE          SWAP              OFFSET
> > 39cce00  /dev/block/zram0  236750
> >
> >   VMA   START   END FLAGS FILE
> > ffe148bafe40   144c   1454 100073
> >
> > SWAP: /dev/block/zram0  OFFSET: 236750
> 
> Ok, so with respect to user-space virtual addresses, there is nothing
> other than handling zram swap-backed memory.
> 
> So what you're proposing is that when reading user-space memory
> that happens to be backed-up on a zram swap device, then the user
> data could alternatively be read from the zram swap device, and
> presented as if it were present in physical memory?
> 
> Are the physical RAM pages that make up the contents of a zram
> device collected with a typical filtered compressed kdump?  If not,
> what makedumpfile -d flag is required for them to be captured?
> 
> Dave
> 
> 
> 


Re: [Crash-utility] [External Mail]Re: zram decompress support for gcore/crash-utility

2020-04-01 Thread Dave Anderson



- Original Message -
> Hi, Dave,
> zram is the same as any other swap device, but every swapped-out page is compressed
> and then saved to another memory address.
> The process is the same as for a common swap device; a non-swapped page is just a
> normal user address, and the pgd/MMU translate it to a physical address.
>
> Please refer to the information below:
> crash> vm -p
> PID: 1565   TASK: ffe1fce32d00  CPU: 7   COMMAND: "system_server"
>    MM              PGD           RSS      TOTAL_VM
> ffe264431c00  ffe1f54ad000  528472k  9780384k
>   VMA   START   END FLAGS FILE
> ffe0ea401300   12c0   12e0 100073
> VIRTUAL PHYSICAL
> ...
> 144fc000SWAP: /dev/block/zram0  OFFSET: 236750
> ...
> 1738e000SWAP: /dev/block/zram0  OFFSET: 73426
> 1738f000   21aa2c000
> 1739   1c3308000
> 17391000SWAP: /dev/block/zram0  OFFSET: 73431
> 17392000   19c162000
> 17393000   19c132000
> 17394000SWAP: /dev/block/zram0  OFFSET: 234576
> 17395000   19c369000
> 17396000   20b35c000
> 17397000   18011e000
> 17398000SWAP: /dev/block/zram0  OFFSET: 73433
> 17399000   1dc3d2000
> 1739a000   1bc59f000
> 1739b000SWAP: /dev/block/zram0  OFFSET: 73437
> 
> 
> crash> vtop -c 1565 144fc000
> VIRTUAL PHYSICAL
> 144fc000(not mapped)
> 
> PAGE DIRECTORY: ffe1f54ad000
>PGD: ffe1f54ad000 => 1f54ab003
>PMD: ffe1f54ab510 => 1f43b8003
>PTE: ffe1f43b87e0 => 39cce00
> 
>   PTE          SWAP              OFFSET
> 39cce00  /dev/block/zram0  236750
> 
>   VMA   START   END FLAGS FILE
> ffe148bafe40   144c   1454 100073
> 
> SWAP: /dev/block/zram0  OFFSET: 236750

Ok, so with respect to user-space virtual addresses, there is nothing
other than handling zram swap-backed memory.

So what you're proposing is that when reading user-space memory
that happens to be backed-up on a zram swap device, then the user
data could alternatively be read from the zram swap device, and
presented as if it were present in physical memory?

Are the physical RAM pages that make up the contents of a zram
device collected with a typical filtered compressed kdump?  If not,
what makedumpfile -d flag is required for them to be captured?

Dave
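
For reference, a minimal sketch of the fallback Dave describes above, if it lived
behind a readmem-style entry point instead of inside gcore itself. The helpers
(page_is_swapped_out, swap_pte_is_zram, swap_pte_offset, read_phys_page) are
hypothetical names used only to show the control flow, and read_zram_page is the
lookup sketched earlier in this thread; none of this is the actual crash API.

#include <stdint.h>

#define PAGE_SIZE 4096

/* Hypothetical helpers -- invented names, not crash-utility functions. */
extern int page_is_swapped_out(uint64_t uvaddr, uint64_t *swap_pte);	/* page-table walk   */
extern int swap_pte_is_zram(uint64_t swap_pte);				/* swap dev is zram? */
extern uint64_t swap_pte_offset(uint64_t swap_pte);			/* swap slot offset  */
extern int read_phys_page(uint64_t uvaddr, void *buf);			/* normal read path  */
extern int read_zram_page(uint64_t swap_offset, void *buf);		/* sketched earlier  */

/*
 * Read one page of user memory; if the page was swapped out to a zram device,
 * decompress it from the zram data captured in the dump instead of failing.
 */
int read_user_page(uint64_t uvaddr, void *buf)
{
	uint64_t swap_pte;

	if (!page_is_swapped_out(uvaddr, &swap_pte))
		return read_phys_page(uvaddr, buf);	/* page is resident */

	if (swap_pte_is_zram(swap_pte))
		return read_zram_page(swap_pte_offset(swap_pte), buf);

	return -1;	/* swapped out to a device not reachable in the dump */
}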





Re: [Crash-utility] [External Mail]Re: zram decompress support for gcore/crash-utility

2020-04-01 Thread 赵乾利
Hi, Dave,
zram is the same as any other swap device, but every swapped-out page is compressed
and then saved to another memory address.
The process is the same as for a common swap device; a non-swapped page is just a
normal user address, and the pgd/MMU translate it to a physical address.

Please refer to the information below:
crash> vm -p
PID: 1565   TASK: ffe1fce32d00  CPU: 7   COMMAND: "system_server"
   MM              PGD           RSS      TOTAL_VM
ffe264431c00  ffe1f54ad000  528472k  9780384k
  VMA   START   END FLAGS FILE
ffe0ea401300   12c0   12e0 100073
VIRTUAL PHYSICAL
...
144fc000SWAP: /dev/block/zram0  OFFSET: 236750
...
1738e000SWAP: /dev/block/zram0  OFFSET: 73426
1738f000   21aa2c000
1739   1c3308000
17391000SWAP: /dev/block/zram0  OFFSET: 73431
17392000   19c162000
17393000   19c132000
17394000SWAP: /dev/block/zram0  OFFSET: 234576
17395000   19c369000
17396000   20b35c000
17397000   18011e000
17398000SWAP: /dev/block/zram0  OFFSET: 73433
17399000   1dc3d2000
1739a000   1bc59f000
1739b000SWAP: /dev/block/zram0  OFFSET: 73437


crash> vtop -c 1565 144fc000
VIRTUAL PHYSICAL
144fc000(not mapped)

PAGE DIRECTORY: ffe1f54ad000
   PGD: ffe1f54ad000 => 1f54ab003
   PMD: ffe1f54ab510 => 1f43b8003
   PTE: ffe1f43b87e0 => 39cce00

  PTE          SWAP              OFFSET
39cce00  /dev/block/zram0  236750

  VMA   START   END FLAGS FILE
ffe148bafe40   144c   1454 100073

SWAP: /dev/block/zram0  OFFSET: 236750
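
A note on how the numbers in that vtop output fit together: the non-present PTE is a
swap entry, and on arm64 of that era the swap type sits in bits 2-7 and the swap
offset starts at bit 8 (see __SWP_TYPE_SHIFT / __SWP_OFFSET_SHIFT in
arch/arm64/include/asm/pgtable.h), so PTE 39cce00 decodes to type 0, i.e. the first
swap device (/dev/block/zram0), and offset 236750, exactly as crash prints. A small
self-contained check, assuming that bit layout:

#include <stdint.h>
#include <stdio.h>

/* Assumed arm64 swap-PTE layout: type in bits [7:2], offset from bit 8 up. */
#define SWP_TYPE_SHIFT   2
#define SWP_TYPE_BITS    6
#define SWP_OFFSET_SHIFT (SWP_TYPE_SHIFT + SWP_TYPE_BITS)

static unsigned int swp_type(uint64_t pte)
{
	return (pte >> SWP_TYPE_SHIFT) & ((1u << SWP_TYPE_BITS) - 1);
}

static uint64_t swp_offset(uint64_t pte)
{
	return pte >> SWP_OFFSET_SHIFT;
}

int main(void)
{
	uint64_t pte = 0x39cce00;	/* PTE value from the vtop output above */

	/* Prints: type 0, offset 236750 */
	printf("type %u, offset %llu\n",
	       swp_type(pte), (unsigned long long)swp_offset(pte));
	return 0;
}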


From: Dave Anderson 
Sent: Wednesday, April 1, 2020 22:20
To: d hatayama
Cc: 赵乾利; Discussion list for crash utility usage, maintenance and development
Subject: [External Mail]Re: [Crash-utility] zram decompress support for 
gcore/crash-utility

- Original Message -

...

> >
> > As far as the gcore extension module, that is maintained by Daisuke Hatayama,
> > and he makes all decisions w/respect to that codebase.  I've cc'd this
> > response to him.
>
> Thanks Zhao for your patch set.
> Thanks for ccing me, Dave.
>
> I agree that ZRAM support is useful, as your explanation shows. On the other
> hand, it is useful not only for the crash gcore command but also for the crash
> utility itself. I think it would be more natural than the current implementation
> of your patch set to add ZRAM support to the crash utility first and then use it
> from the crash gcore command.
>
> If the ZRAM support were transparent to the readmem() interface, there would be
> no need to change the crash gcore command at all. If not, new code for the ZRAM
> support would have to be added, corresponding to the following stanza in
> 0001-gcore-add-support-zram-swap.patch:
>
> @@ -225,6 +417,18 @@ void gcore_coredump(void)
>  						      strerror(errno));
>  			} else {
>  				pagefaultf("page fault at %lx\n", addr);
> +				if (paddr != 0) {
> +					pte_val = paddr;
> +					if(try_zram_decompress(pte_val, (unsigned char *)buffer) == PAGE_SIZE)
> +					{
> +						error(WARNING, "zram decompress successed\n");
> +						if (fwrite(buffer, PAGE_SIZE, 1, gcore->fp) != 1)
> +							error(FATAL, "%s: write: %s\n", gcore->corename, strerror(errno));
> +						continue;
> +					}
> +
> +				}

I'm not clear on how zram is linked into the user-space mapping.  For user 
space that
has been swapped out to a zram swap device, I presume it's the same as is, but 
it
references the zram swap device.  But for other user-space mappings 
(non-swapped),
what does the "vm -p" display for user space virtual address pages that are 
backed
by zram?  And for that matter, what does "vtop " show?

Thanks,
  Dave



--
Crash-utility mailing list
Crash-utility@redhat.com
https://www.redhat.com/mailman/listinfo/crash-utility