Re: [PATCH net-next v4 0/3] kernel: add support to collect hardware logs in crash recovery kernel

2018-04-18 Thread Dave Young
On 04/18/18 at 06:01pm, Rahul Lakkireddy wrote:
> On Wednesday, April 04/18/18, 2018 at 11:45:46 +0530, Dave Young wrote:
> > Hi Rahul,
> > On 04/17/18 at 01:14pm, Rahul Lakkireddy wrote:
> > > On production servers running a variety of workloads over time, kernel
> > > panic can happen sporadically after days or even months. It is
> > > important to collect as many debug logs as possible to root-cause
> > > and fix a problem that may not be easy to reproduce. A snapshot of the
> > > underlying hardware/firmware state (like register dump, firmware
> > > logs, adapter memory, etc.) at the time of kernel panic will be very
> > > helpful while debugging the culprit device driver.
> > > 
> > > This series of patches adds a new generic framework that enables device
> > > drivers to collect a device specific snapshot of the hardware/firmware
> > > state of the underlying device in the crash recovery kernel. In the crash
> > > recovery kernel, the collected logs are added as elf notes to
> > > /proc/vmcore, which is copied by user space scripts for post-analysis.
> > > 
> > > The sequence of actions done by device drivers to append their device
> > > specific hardware/firmware logs to /proc/vmcore is as follows:
> > > 
> > > 1. During probe (before hardware is initialized), device drivers
> > > register with the vmcore module (via vmcore_add_device_dump()),
> > > supplying a callback function along with the buffer size and log name
> > > needed for firmware/hardware log collection.
> > 
> > I assumed the elf notes info should be prepared during the kexec_[file_]load
> > phase. But I did not read the old comments, so I am not sure whether this
> > has been discussed or not.
> > 
> 
> We must not collect dumps in the crashing kernel. Adding more things to
> the crash dump path risks not collecting the vmcore at all. Eric had
> discussed this in more detail at:
> 
> https://lkml.org/lkml/2018/3/24/319
> 
> We are safe to collect dumps in the second kernel. Each device dump
> will be exported as an elf note in /proc/vmcore.

I understand that we should avoid adding anything to the crash path.  And I
also agree with collecting the device dump in the second kernel.  I just
assumed the device dump would use some persistent memory area to store the
debug info, so that this could be done in 2 steps: first register the
address in the elf header at kexec_load time, then collect the dump in the
2nd kernel.  But it seems the driver does some other logic to collect the
info rather than something that simple, as I had assumed.

> 
> > If we do this in the 2nd kernel, one question is that the driver can be
> > loaded later than vmcore init.
> 
> Yes, drivers will add their device dumps after vmcore init.
> 
> > How do we guarantee the function works if vmcore reading happens before
> > the driver is loaded?
> > 
> > Also it is possible that the kdump initramfs does not contain the driver
> > module.
> > 
> > Am I missing something?
> > 
> 
> Yes, the driver must be in the initramfs if it wants to collect and add a
> device dump to /proc/vmcore in the second kernel.

In the RH/Fedora kdump scripts we only add the things that are required to
bring up the dump target, so that we use as little memory as we can.

For example, if a net driver panicked, and the dump target is the rootfs
which is a scsi disk, then no network related stuff will be added to the
initramfs.

In this case the device dump info will not be collected.
> 
> > > 
> > > 2. The vmcore module allocates a buffer of the requested size. It adds
> > > an elf note and invokes the device driver's registered callback
> > > function.
> > > 
> > > 3. The device driver collects all hardware/firmware logs into the buffer
> > > and returns control back to the vmcore module.
> > > 
> > > The device specific hardware/firmware logs can be seen as elf notes:
> > > 
> > > # readelf -n /proc/vmcore
> > > 
> > > Displaying notes found at file offset 0x1000 with length 0x04003288:
> > >   Owner                    Data size   Description
> > >   VMCOREDD_cxgb4_:02:00.4  0x02000fd8  Unknown note type: (0x0700)
> > >   VMCOREDD_cxgb4_:04:00.4  0x02000fd8  Unknown note type: (0x0700)
> > >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> > >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> > >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> > >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> > >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> > >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> > >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> > >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> > >   VMCOREINFO               0x074f      Unknown note type: (0x)
> > > 
> > > Patch 1 adds an API to the vmcore module to allow drivers to register a
> > > callback to collect the device specific hardware/firmware logs.  The logs
> > > will be added to /proc/vmcore as elf notes.
> > > 
> > > Patch 2 updates the read and mmap logic to append device specific hardware/
> > > firmware logs as elf notes.
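
The registration flow in steps 1-3 of the cover letter above maps onto driver
code roughly as in the minimal sketch below, assuming the vmcoredd_data /
vmcore_add_device_dump() interface proposed by the series; the mydrv_* names
and the buffer size are hypothetical, and error handling is omitted.

#include <linux/crash_dump.h>
#include <linux/string.h>

#define MYDRV_DUMP_SIZE (2 * 1024 * 1024)	/* hypothetical buffer size */

/* Step 3: invoked by the vmcore module with the buffer it allocated; a real
 * driver would copy registers/firmware logs/adapter memory into buf here. */
static int mydrv_vmcoredd_collect(struct vmcoredd_data *data, void *buf)
{
	memset(buf, 0, data->size);	/* placeholder for real collection */
	return 0;
}

static struct vmcoredd_data mydrv_dump = {
	.dump_name = "mydrv_example",
	.size = MYDRV_DUMP_SIZE,
	.vmcoredd_callback = mydrv_vmcoredd_collect,
};

/* Step 1: called from probe, before the hardware is initialised, and only
 * when running in the crash recovery (kdump) kernel.  Step 2 happens inside
 * vmcore_add_device_dump(): the vmcore module allocates the buffer, adds the
 * elf note and calls mydrv_vmcoredd_collect(). */
static int mydrv_register_dump(void)
{
	if (!is_kdump_kernel())
		return 0;
	return vmcore_add_device_dump(&mydrv_dump);
}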

[PATCH v3 3/3] kexec_file: Load kernel at top of system RAM if required

2018-04-18 Thread Baoquan He
For kexec_file loading, if kexec_buf.top_down is 'true', the memory which
is used to load the kernel/initrd/purgatory is supposed to be allocated from
top to bottom. This is what we have been doing all along in the old kexec
loading interface, and that kexec loading is still the default setting in some
distributions. However, the current kexec_file loading interface doesn't
do it like this. The function arch_kexec_walk_mem() it calls does not check
kexec_buf.top_down, but calls walk_system_ram_res() directly to go through
all resources of System RAM from bottom to top, trying to find a memory region
which can contain the specific kexec buffer, and then calls
locate_mem_hole_callback() to allocate memory within that found memory region
from top to bottom. This brings confusion. These two interfaces need to be
unified in behaviour.

Here, add a check of kexec_buf.top_down in arch_kexec_walk_mem(); if it is
'true', call the newly added walk_system_ram_res_rev() to find a memory region
from top to bottom in which to load the kernel.

Signed-off-by: Baoquan He 
Cc: Eric Biederman 
Cc: Vivek Goyal 
Cc: Dave Young 
Cc: Andrew Morton 
Cc: Yinghai Lu 
Cc: kexec@lists.infradead.org
---
 kernel/kexec_file.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 75d8e7cf040e..7a66d9d5a534 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -518,6 +518,8 @@ int __weak arch_kexec_walk_mem(struct kexec_buf *kbuf,
 					   IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY,
 					   crashk_res.start, crashk_res.end,
 					   kbuf, func);
+	else if (kbuf->top_down)
+		return walk_system_ram_res_rev(0, ULONG_MAX, kbuf, func);
 	else
 		return walk_system_ram_res(0, ULONG_MAX, kbuf, func);
 }
-- 
2.13.6
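
For illustration of the ordering the patch above is after: a top-down walk
can be thought of as the existing forward walker run once to record the
System RAM resources, which are then handed to the callback in reverse.
The sketch below is not the series' actual walk_system_ram_res_rev() (that
is added in an earlier patch of the set); the fixed-size, stack-allocated
array is purely an assumption of the sketch.

#include <linux/ioport.h>
#include <linux/kernel.h>

struct ram_walk_ctx {
	/* Sketch only: real code would allocate this, not keep it on stack. */
	struct resource res[64];
	int nr;
};

static int record_ram_res(struct resource *res, void *arg)
{
	struct ram_walk_ctx *ctx = arg;

	if (ctx->nr < ARRAY_SIZE(ctx->res))
		ctx->res[ctx->nr++] = *res;
	return 0;
}

/* Illustrative reverse walk: visit System RAM resources from top to bottom. */
static int sketch_walk_system_ram_res_rev(u64 start, u64 end, void *arg,
					  int (*func)(struct resource *, void *))
{
	struct ram_walk_ctx ctx = { .nr = 0 };
	int i, ret = 0;

	/* First pass: record every System RAM resource, bottom to top. */
	walk_system_ram_res(start, end, &ctx, record_ram_res);

	/* Second pass: hand them to the callback from the top down. */
	for (i = ctx.nr - 1; i >= 0; i--) {
		ret = func(&ctx.res[i], arg);
		if (ret)
			break;
	}
	return ret;
}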




Re: kdump in upstream kexec-tools

2018-04-18 Thread Russell King
On Wed, Apr 18, 2018 at 03:01:08PM +0200, Simon Horman wrote:
> On Tue, Apr 17, 2018 at 10:01:13AM +0100, Russell King wrote:
> > On Tue, Apr 17, 2018 at 10:20:08AM +0530, Bhupesh Sharma wrote:
> > > Hi,
> > > 
> > > I was working on improving documentation/structure of the upstream
> > > kexec-tools and I was wondering what is the purpose of the 'kdump'
> > > directory inside the kexec-tools.
> > > 
> > > This kdump utility can be confused with the 'kdump' utility
> > > present in some distributions (e.g. '/usr/sbin/kdump' in
> > > fedora) because they share the same name, so we should
> > > populate/modify the kdump man page to point this out and
> > > avoid the confusion.
> > > 
> > > Presently here are the contents of this directory:
> > > 
> > > # ls kdump/
> > > kdump.8  kdump.c  Makefile
> > > 
> > > - Of these, the kdump man page (kdump.8) is just a
> > > placeholder, as its own text says: "kdump - This
> > > is just a placeholder until real man page has been written"
> > > 
> > > - Looking at kdump.c :
> > > 
> > > 1. I understand that this code is mainly used to read a crashdump from
> > > memory. One can run it using:
> > > 
> > > # kdump <start_addr>
> > > 
> > > where start_addr is the start address of the core dump
> > > (which can also be passed via the 'elfcorehdr' parameter
> > > given to the crash kernel, representing the
> > > physical address of the start of the ELF header)
> > > 
> > > 2. This tool needs READ_ONLY access to /dev/mem (so we need to set
> > > CONFIG_STRICT_DEVMEM configuration option accordingly).
> > > 
> > > 3. The code thereafter reads (via mmap) and verifies the ELF header.
> > > Subsequently it reads (via mmap) the program header.
> > > 
> > > 4. Then it collects all the notes and writes to STDOUT all the headers
> > > and notes found in the crashdump collected from memory.
> > > 
> > > So, as per my understanding, even in the absence of (more powerful) tools
> > > like crash (or gdb), we can still go ahead and read the crashdump from
> > > memory and display all the headers and notes it contains on the
> > > standard serial out interface using this kdump utility.
> > > 
> > > This is probably a good-to-have feature for systems which have a very
> > > simple/minimal rootfs (and I see that a few arm32 systems seem to use
> > > it as well) or are low on available memory.
> > > 
> > > Now, I wanted to confirm whether the 'kdump' utility for reading a crashdump
> > > collected from memory is still needed (as the last commit dates
> > > back to 2016 and was done for arm32 systems). If yes, I can go ahead
> > > and enhance the kdump man page to include the description given above
> > > - so that it helps users understand how to run the tool.
> > > 
> > > Please share your opinions.
> > 
> > Firstly, please use:
> > 
> >   git://git.armlinux.org.uk/~rmk/kexec-tools.git
> > 
> > for ARM systems - this has some important fixes that aren't in the
> > mainline repository.
> 
> I apologise if this is due to an omission on my part;
> can we work towards getting them into the mainline repository?

The problem on ARM arose because you applied the wrong version of
the patches I sent out.  When I noticed and reported it, there was no
response.

My only option over the intervening six months to provide people
with something that actually works properly on 32-bit ARM has been
to publish my own kexec-tools git tree with the appropriate fixes in.

It now contains a couple more patches than just fixing that up.

If you want to merge from the above URL, be my guest, but as far as
I'm concerned, it's been proven that sending patches for merging is
open to mistakes happening.  That wouldn't be too bad if it hadn't
taken more than six months to get your attention on this issue.

-- 
Russell King
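
For reference, the first step Bhupesh describes kdump.c performing (mapping
/dev/mem at start_addr and validating the ELF header) looks roughly like the
standalone sketch below. This is illustrative only, not the actual kexec-tools
code, and error handling is kept minimal.

#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <start_addr>\n", argv[0]);
		return 1;
	}

	unsigned long long start = strtoull(argv[1], NULL, 0);
	long page = sysconf(_SC_PAGESIZE);
	unsigned long long aligned = start & ~((unsigned long long)page - 1);

	int fd = open("/dev/mem", O_RDONLY);	/* needs read access to /dev/mem */
	if (fd < 0) {
		perror("open /dev/mem");
		return 1;
	}

	/* mmap offsets must be page aligned; map one page and index into it. */
	char *map = mmap(NULL, page, PROT_READ, MAP_SHARED, fd, aligned);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	Elf64_Ehdr *ehdr = (Elf64_Ehdr *)(map + (start - aligned));
	if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0) {
		fprintf(stderr, "no ELF header at %#llx\n", start);
		return 1;
	}

	printf("found ELF header: %u program headers at offset %#llx\n",
	       ehdr->e_phnum, (unsigned long long)ehdr->e_phoff);
	return 0;
}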



Re: [PATCH] arm64: Set -fno-PIC along with -mcmodel=large

2018-04-18 Thread David Michael
On Wed, Apr 18, 2018 at 8:54 AM, Simon Horman  wrote:
> On Thu, Apr 12, 2018 at 04:37:25PM -0700, Geoff Levand wrote:
>> Hi Simon,
>>
>> On 02/02/2018 03:48 PM, Geoff Levand wrote:
>> > Hi,
>> >
>> > On 01/07/2018 08:26 AM, David Michael wrote:
>> >> As seen in GCC's gcc/config/aarch64/aarch64.c, -fPIC with large
>> >> code model is unsupported.  This fixes the "sorry, unimplemented"
>> >> errors when building with compilers defaulting to -fPIC.
>> >> ---
>> >>
>> >> purgatory/arch/arm64/entry.S:1:0: sorry, unimplemented: code model 
>> >> 'large' with -fPIC
>> >>
>> >> This change fixes it.  Can something like this be applied?
>> >
>> > This change seems reasonable considering large model + PIC is unsupported.
>> >
>> > Reviewed by: Geoff Levand 
>>
>> Could you please merge this fix so arm64 builds work OK on Gentoo
>> and Gentoo derivatives?
>
> Sure, but it really ought to have a Signed-off-by line from David.

Sorry about that.  Should I resend the patch, or can you amend it with
this line?

Signed-off-by: David Michael 

Thanks.

David



Re: [PATCH net-next v4 0/3] kernel: add support to collect hardware logs in crash recovery kernel

2018-04-18 Thread Rahul Lakkireddy
On Wednesday, April 04/18/18, 2018 at 19:58:01 +0530, Eric W. Biederman wrote:
> Rahul Lakkireddy  writes:
> 
> > On Wednesday, April 04/18/18, 2018 at 11:45:46 +0530, Dave Young wrote:
> >> Hi Rahul,
> >> On 04/17/18 at 01:14pm, Rahul Lakkireddy wrote:
> >> > On production servers running a variety of workloads over time, kernel
> >> > panic can happen sporadically after days or even months. It is
> >> > important to collect as many debug logs as possible to root-cause
> >> > and fix a problem that may not be easy to reproduce. A snapshot of the
> >> > underlying hardware/firmware state (like register dump, firmware
> >> > logs, adapter memory, etc.) at the time of kernel panic will be very
> >> > helpful while debugging the culprit device driver.
> >> > 
> >> > This series of patches adds a new generic framework that enables device
> >> > drivers to collect a device specific snapshot of the hardware/firmware
> >> > state of the underlying device in the crash recovery kernel. In the crash
> >> > recovery kernel, the collected logs are added as elf notes to
> >> > /proc/vmcore, which is copied by user space scripts for post-analysis.
> >> > 
> >> > The sequence of actions done by device drivers to append their device
> >> > specific hardware/firmware logs to /proc/vmcore is as follows:
> >> > 
> >> > 1. During probe (before hardware is initialized), device drivers
> >> > register with the vmcore module (via vmcore_add_device_dump()),
> >> > supplying a callback function along with the buffer size and log name
> >> > needed for firmware/hardware log collection.
> >> 
> >> I assumed the elf notes info should be prepared during the kexec_[file_]load
> >> phase. But I did not read the old comments, so I am not sure whether this
> >> has been discussed or not.
> >> 
> >
> > We must not collect dumps in the crashing kernel. Adding more things to
> > the crash dump path risks not collecting the vmcore at all. Eric had
> > discussed this in more detail at:
> >
> > https://lkml.org/lkml/2018/3/24/319
> >
> > We are safe to collect dumps in the second kernel. Each device dump
> > will be exported as an elf note in /proc/vmcore.
> 
> It just occurred to me there is one variation that is worth
> considering.
> 
> Is the area you are looking at dumping part of a huge mmio area?
> I think someone said 2GB?
> 
> If that is the case, it could be worth it to simply add the needed
> addresses to the range of memory we need to dump, and simply have an
> elf note saying that is what happened.
> 

We are _not_ dumping an mmio area. However, one part of the dump
collection involves reading 2 GB of on-chip memory via PIO access,
which is then compressed and stored.

Thanks,
Rahul



Re: [PATCH net-next v4 0/3] kernel: add support to collect hardware logs in crash recovery kernel

2018-04-18 Thread Eric W. Biederman
Rahul Lakkireddy  writes:

> On Wednesday, April 04/18/18, 2018 at 11:45:46 +0530, Dave Young wrote:
>> Hi Rahul,
>> On 04/17/18 at 01:14pm, Rahul Lakkireddy wrote:
>> > On production servers running a variety of workloads over time, kernel
>> > panic can happen sporadically after days or even months. It is
>> > important to collect as many debug logs as possible to root-cause
>> > and fix a problem that may not be easy to reproduce. A snapshot of the
>> > underlying hardware/firmware state (like register dump, firmware
>> > logs, adapter memory, etc.) at the time of kernel panic will be very
>> > helpful while debugging the culprit device driver.
>> > 
>> > This series of patches adds a new generic framework that enables device
>> > drivers to collect a device specific snapshot of the hardware/firmware
>> > state of the underlying device in the crash recovery kernel. In the crash
>> > recovery kernel, the collected logs are added as elf notes to
>> > /proc/vmcore, which is copied by user space scripts for post-analysis.
>> > 
>> > The sequence of actions done by device drivers to append their device
>> > specific hardware/firmware logs to /proc/vmcore is as follows:
>> > 
>> > 1. During probe (before hardware is initialized), device drivers
>> > register with the vmcore module (via vmcore_add_device_dump()),
>> > supplying a callback function along with the buffer size and log name
>> > needed for firmware/hardware log collection.
>> 
>> I assumed the elf notes info should be prepared during the kexec_[file_]load
>> phase. But I did not read the old comments, so I am not sure whether this
>> has been discussed or not.
>> 
>
> We must not collect dumps in the crashing kernel. Adding more things to
> the crash dump path risks not collecting the vmcore at all. Eric had
> discussed this in more detail at:
>
> https://lkml.org/lkml/2018/3/24/319
>
> We are safe to collect dumps in the second kernel. Each device dump
> will be exported as an elf note in /proc/vmcore.

It just occurred to me there is one variation that is worth
considering.

Is the area you are looking at dumping part of a huge mmio area?
I think someone said 2GB?

If that is the case, it could be worth it to simply add the needed
addresses to the range of memory we need to dump, and simply have an
elf note saying that is what happened.

>> If we do this in the 2nd kernel, one question is that the driver can be
>> loaded later than vmcore init.
>
> Yes, drivers will add their device dumps after vmcore init.
>
>> How do we guarantee the function works if vmcore reading happens before
>> the driver is loaded?
>> 
>> Also it is possible that the kdump initramfs does not contain the driver
>> module.
>> 
>> Am I missing something?
>> 
>
> Yes, the driver must be in the initramfs if it wants to collect and add a
> device dump to /proc/vmcore in the second kernel.

Eric



Re: kdump in upstream kexec-tools

2018-04-18 Thread Simon Horman
On Tue, Apr 17, 2018 at 10:01:13AM +0100, Russell King wrote:
> On Tue, Apr 17, 2018 at 10:20:08AM +0530, Bhupesh Sharma wrote:
> > Hi,
> > 
> > I was working on improving documentation/structure of the upstream
> > kexec-tools and I was wondering what is the purpose of the 'kdump'
> > directory inside the kexec-tools.
> > 
> > This kdump utility can be confused with the 'kdump' utility
> > present in some distributions (e.g. '/usr/sbin/kdump' in
> > fedora) because they share the same name, so we should
> > populate/modify the kdump man page to point this out and
> > avoid the confusion.
> > 
> > Presently here are the contents of this directory:
> > 
> > # ls kdump/
> > kdump.8  kdump.c  Makefile
> > 
> > - Of these, the kdump man page (kdump.8) is just a
> > placeholder, as its own text says: "kdump - This
> > is just a placeholder until real man page has been written"
> > 
> > - Looking at kdump.c :
> > 
> > 1. I understand that this code is mainly used to read a crashdump from
> > memory. One can run it using:
> > 
> > # kdump <start_addr>
> > 
> > where start_addr is the start address of the core dump
> > (which can also be passed via the 'elfcorehdr' parameter
> > given to the crash kernel, representing the
> > physical address of the start of the ELF header)
> > 
> > 2. This tool needs READ_ONLY access to /dev/mem (so we need to set
> > CONFIG_STRICT_DEVMEM configuration option accordingly).
> > 
> > 3. The code thereafter reads (via mmap) and verifies the ELF header.
> > Subsequently it reads (via mmap) the program header.
> > 
> > 4. Then it collects all the notes and writes to STDOUT all the headers
> > and notes found in the crashdump collected from memory.
> > 
> > So, as per my understanding, even in the absence of (more powerful) tools
> > like crash (or gdb), we can still go ahead and read the crashdump from
> > memory and display all the headers and notes it contains on the
> > standard serial out interface using this kdump utility.
> > 
> > This is probably a good-to-have feature for systems which have a very
> > simple/minimal rootfs (and I see that a few arm32 systems seem to use
> > it as well) or are low on available memory.
> > 
> > Now, I wanted to confirm whether the 'kdump' utility for reading a crashdump
> > collected from memory is still needed (as the last commit dates
> > back to 2016 and was done for arm32 systems). If yes, I can go ahead
> > and enhance the kdump man page to include the description given above
> > - so that it helps users understand how to run the tool.
> > 
> > Please share your opinions.
> 
> Firstly, please use:
> 
>   git://git.armlinux.org.uk/~rmk/kexec-tools.git
> 
> for ARM systems - this has some important fixes that aren't in the
> mainline repository.

I apologise if this is due to an omission on my part;
can we work towards getting them into the mainline repository?



Re: [PATCH] arm64: Set -fno-PIC along with -mcmodel=large

2018-04-18 Thread Simon Horman
On Thu, Apr 12, 2018 at 04:37:25PM -0700, Geoff Levand wrote:
> Hi Simon,
> 
> On 02/02/2018 03:48 PM, Geoff Levand wrote:
> > Hi,
> > 
> > On 01/07/2018 08:26 AM, David Michael wrote:
> >> As seen in GCC's gcc/config/aarch64/aarch64.c, -fPIC with large
> >> code model is unsupported.  This fixes the "sorry, unimplemented"
> >> errors when building with compilers defaulting to -fPIC.
> >> ---
> >>
> >> purgatory/arch/arm64/entry.S:1:0: sorry, unimplemented: code model 'large' 
> >> with -fPIC
> >>
> >> This change fixes it.  Can something like this be applied?
> > 
> > This change seems reasonable considering large model + PIC is unsupported.
> > 
> > Reviewed by: Geoff Levand  
> 
> Could you please merge this fix so arm64 builds work OK on Gentoo
> and Gentoo derivatives?

Sure, but it really ought to have a Signed-off-by line from David.



Re: [PATCH net-next v4 0/3] kernel: add support to collect hardware logs in crash recovery kernel

2018-04-18 Thread Rahul Lakkireddy
On Wednesday, April 04/18/18, 2018 at 11:45:46 +0530, Dave Young wrote:
> Hi Rahul,
> On 04/17/18 at 01:14pm, Rahul Lakkireddy wrote:
> > On production servers running a variety of workloads over time, kernel
> > panic can happen sporadically after days or even months. It is
> > important to collect as many debug logs as possible to root-cause
> > and fix a problem that may not be easy to reproduce. A snapshot of the
> > underlying hardware/firmware state (like register dump, firmware
> > logs, adapter memory, etc.) at the time of kernel panic will be very
> > helpful while debugging the culprit device driver.
> > 
> > This series of patches adds a new generic framework that enables device
> > drivers to collect a device specific snapshot of the hardware/firmware
> > state of the underlying device in the crash recovery kernel. In the crash
> > recovery kernel, the collected logs are added as elf notes to
> > /proc/vmcore, which is copied by user space scripts for post-analysis.
> > 
> > The sequence of actions done by device drivers to append their device
> > specific hardware/firmware logs to /proc/vmcore is as follows:
> > 
> > 1. During probe (before hardware is initialized), device drivers
> > register with the vmcore module (via vmcore_add_device_dump()),
> > supplying a callback function along with the buffer size and log name
> > needed for firmware/hardware log collection.
> 
> I assumed the elf notes info should be prepared during the kexec_[file_]load
> phase. But I did not read the old comments, so I am not sure whether this
> has been discussed or not.
> 

We must not collect dumps in the crashing kernel. Adding more things to
the crash dump path risks not collecting the vmcore at all. Eric had
discussed this in more detail at:

https://lkml.org/lkml/2018/3/24/319

We are safe to collect dumps in the second kernel. Each device dump
will be exported as an elf note in /proc/vmcore.

> If we do this in the 2nd kernel, one question is that the driver can be loaded
> later than vmcore init.

Yes, drivers will add their device dumps after vmcore init.

> How do we guarantee the function works if vmcore reading happens before
> the driver is loaded?
> 
> Also it is possible that the kdump initramfs does not contain the driver
> module.
> 
> Am I missing something?
> 

Yes, the driver must be in the initramfs if it wants to collect and add a
device dump to /proc/vmcore in the second kernel.

> > 
> > 2. The vmcore module allocates a buffer of the requested size. It adds
> > an elf note and invokes the device driver's registered callback
> > function.
> > 
> > 3. The device driver collects all hardware/firmware logs into the buffer
> > and returns control back to the vmcore module.
> > 
> > The device specific hardware/firmware logs can be seen as elf notes:
> > 
> > # readelf -n /proc/vmcore
> > 
> > Displaying notes found at file offset 0x1000 with length 0x04003288:
> >   Owner                    Data size   Description
> >   VMCOREDD_cxgb4_:02:00.4  0x02000fd8  Unknown note type: (0x0700)
> >   VMCOREDD_cxgb4_:04:00.4  0x02000fd8  Unknown note type: (0x0700)
> >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> >   CORE                     0x0150      NT_PRSTATUS (prstatus structure)
> >   VMCOREINFO               0x074f      Unknown note type: (0x)
> > 
> > Patch 1 adds an API to the vmcore module to allow drivers to register a
> > callback to collect the device specific hardware/firmware logs.  The logs
> > will be added to /proc/vmcore as elf notes.
> > 
> > Patch 2 updates the read and mmap logic to append device specific hardware/
> > firmware logs as elf notes.
> > 
> > Patch 3 shows a cxgb4 driver example using the API to collect
> > hardware/firmware logs in the crash recovery kernel, before the hardware is
> > initialized.
> > 
> > Thanks,
> > Rahul
> > 
> > RFC v1: https://lkml.org/lkml/2018/3/2/542
> > RFC v2: https://lkml.org/lkml/2018/3/16/326
> > 
[...]

Thanks,
Rahul



Re: [Query] ARM64 kaslr support - randomness, seeding and kdump

2018-04-18 Thread Mark Rutland
On Sun, Apr 15, 2018 at 01:44:16AM +0530, Bhupesh Sharma wrote:
> 4. Accordingly, I wanted to get opinions on whether arm64 timer count is a good
> entropy source on platforms which indeed support EFI_RNG_PROTOCOL?
> entropy source on platforms which indeed support EFI_RNG_PROTOCOL?

On its own, the timer is not a good entropy source.

If we have the EFI_RNG_PROTOCOL, we can use that directly.

> And whether we should  be looking to extend 'arch_get_random_*' or
> 'random_get_entropy' for arm64, to provide seed/entropy using APIs
> like 'efi_random_get_seed'?

The EFI RNG protocol is only available during boot services, so we can't
call this during the usual operation of the kernel. The seed that the stub
generates and places in the RNG seed table is already thrown into the entropy
pool by efi_config_parse_tables(). Look for LINUX_EFI_RANDOM_SEED_TABLE_GUID.

So any attempts to acquire a random number via the usual APIs will in
part be affected by this entropy, and nothing needs to be done to
arch_get_random_* to use this entropy.

Thanks,
Mark.
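
As a small illustration of Mark's last point (hypothetical consumer code, not
something from this thread): an in-kernel user simply calls the usual
interface and automatically benefits from whatever entropy was credited at
boot, including the EFI-provided seed.

#include <linux/random.h>
#include <linux/types.h>

/* Hypothetical consumer: nothing EFI- or arch-specific is needed here;
 * the seed from the EFI stub has already been mixed into the pool. */
static u64 example_random_u64(void)
{
	u64 val;

	get_random_bytes(&val, sizeof(val));
	return val;
}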
