Hi,
I encountered a kdump load failure on SLES12 SP1 and SLES12 SP2.
The 'systemctl status kdump.service' returned:
tpfe2 /var/log # systemctl status kdump.service
kdump.service - Load kdump kernel on startup
Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled)
Active:
On 11/05/2013 04:39 AM, Petr Tesarik wrote:
On Mon, 04 Nov 2013 16:37:13 -0800,
Jay Lan jay.zen@gmail.com wrote:
Hi,
It seems the makedumpfile.spec was written for fedora. It
probably would work in rhel, but I was not able to build it
in an sles11sp2 build environment...
The spec file
Hi,
It seems the makedumpfile.spec was written for fedora. It
probably would work in rhel, but I was not able to build it
in an sles11sp2 build environment...
The spec file complained it needed elfutils-devel-static.
I tried to install elfutils-devel-static-0.156-5.fc19.x86_64.rpm
to an
My kdump initrd failed in mounting root disk.
The system runs sles11sp2 and
kexec-tools-2.0.0-53.43.10.
I saw the scripts executed in the initrd contain debug lines like this:
[ $debug ] && echo
How do I turn on the debug switch when creating the kdump
initrd?
Thanks,
Jay
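For what it's worth, the pattern quoted above keys off a shell variable; a minimal sketch of how such a debug switch behaves (the variable name and messages here are illustrative, not the actual kexec-tools scripts, which would typically parse /proc/cmdline or a config file):

```shell
# Minimal sketch of the debug-switch pattern quoted above.
# 'debug' would normally be set from the kernel command line or a
# config file; here it is set directly for illustration.
debug=1

# With 'debug' set, the test succeeds and the echo runs;
# unset, the whole line is silently skipped.
[ "$debug" ] && echo "debug: mounting root device"
[ "$debug" ] && echo "debug: saving vmcore"
```

So the question above boils down to finding where (command line, sysconfig file) the initrd-generation script lets you set that variable.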
Hi,
I searched the web on this subject, and found a discussion thread
on this matter from March 2012:
https://lkml.org/lkml/2012/3/13/36
and it seems I hit the same problem on my machine. :-(
We have a system with 4TB memory. When it was shipped, crashkernel
was set to 512M. The
Hi,
I have a 2.6.32-131.6.1.el6 kernel, and kexec-tools-2.0.0-145.el6.x86_64
rpm on a cent6 machine.
When I forced a kdump by 'echo c > /proc/sysrq-trigger',
the kdump kernel panicked during the boot. Surprisingly,
it tried to load lustre modules that should not be part of
the initrd and
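The trigger quoted above is the magic-sysrq 'c' key. Here is a sketch with a dry-run guard added for safety, since writing 'c' really does panic the machine; run it for real only on a box set up to capture a vmcore:

```shell
# Force a kernel crash to exercise the kdump path.
# WARNING: without DRYRUN set, this panics the kernel immediately.
sysrq_crash() {
    if [ -n "${DRYRUN:-}" ]; then
        echo "would write: 1 -> /proc/sys/kernel/sysrq"
        echo "would write: c -> /proc/sysrq-trigger"
    else
        echo 1 > /proc/sys/kernel/sysrq    # ensure sysrq is enabled
        echo c > /proc/sysrq-trigger       # 'c' forces a crash (panic)
    fi
}

DRYRUN=1
sysrq_crash
```

The DRYRUN guard is my addition for illustration; the two writes inside the else branch are the standard trigger.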
On Tue, Sep 27, 2011 at 11:38:28AM -0700, Jay Lan wrote:
Hi all,
I have a system running 2.6.18-238.12.1.el5 kernel.
The kexec version is kexec-tools-1.102pre-126.el5_6.6.
The kernel was booted OK. Then it ran /etc/init.d/kdump
and a new kdump initrd image was created.
A kernel crash was triggered
Hi all,
I have a system running 2.6.18-238.12.1.el5 kernel.
The kexec version is kexec-tools-1.102pre-126.el5_6.6.
The kernel was booted OK. Then it ran /etc/init.d/kdump
and a new kdump initrd image was created.
A kernel crash was triggered. The kdump kernel was booted
alright. However, it
On 09/27/2011 11:38 AM, Jay Lan wrote:
Hi all,
I have a system running 2.6.18-238.12.1.el5 kernel.
The kexec version is kexec-tools-1.102pre-126.el5_6.6.
The kernel was booted OK. Then it ran /etc/init.d/kdump
and a new kdump initrd image was created.
A kernel crash was triggered. The kdump
Kdump with INIT on IPF worked on SGI's IA64 servers when I left SGI last
December.
Is this problem something new to you after .27?
SGI folks should test this patchset to ensure no surprise is introduced to
their servers.
Cheers,
jay
- Original Message -
From: Hidetoshi Seto
Hi,
I found a new config CORE_DUMP_DEFAULT_ELF_HEADERS in
2.6.28-rc2 for 'Write ELF core dumps with partial segments'.
It defaults to 'N'. What is the recommended setting for
this config flag in the kdump world?
Regards,
jay
___
kexec mailing list
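For context on what that flag controls: CORE_DUMP_DEFAULT_ELF_HEADERS seeds bit 4 (0x10, "dump ELF headers") of each new process's /proc/<pid>/coredump_filter, so the default filter becomes 0x33 rather than 0x23. A minimal sketch of checking that bit (the filter value below is a hard-coded example, not read from a live process):

```shell
# CORE_DUMP_DEFAULT_ELF_HEADERS=y seeds bit 4 (0x10, "dump ELF headers")
# of /proc/<pid>/coredump_filter; the default becomes 0x33 instead of
# 0x23. The value here is an illustrative example, not read live.
filter=0x33
if [ $(( filter & 0x10 )) -ne 0 ]; then
    echo "core dumps will include ELF headers"
else
    echo "core dumps will omit ELF headers"
fi
```

On a real system you would substitute `filter=0x$(cat /proc/self/coredump_filter)`-style input for the hard-coded value.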
Booting up a 2.6.28-rc2 kdump kernel, I observed these
messages on the console:
Warning: Core image elf header not found
Kdump: vmcore not initialized
It was a default configuration with a new config flag
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS set to 'y'.
The machine worked fine with 2.6.27.
Simon Horman wrote:
On Tue, Oct 21, 2008 at 09:52:27AM -0700, Jay Lan wrote:
Simon Horman wrote:
Hi Simon,
Just got back from vacation. Sorry for late response.
This bug was discovered by Jay Lan and he also proposed this fix; however,
there is some discussion about what, if any, related
Simon Horman wrote:
Hi Simon,
Just got back from vacation. Sorry for late response.
This bug was discovered by Jay Lan and he also proposed this fix; however,
there is some discussion about what, if any, related changes should be made at
the same time.
The bug comes about because the break
Luck, Tony wrote:
Does this make kexec/kdump happier? Bare minimum testing so far
(builds and boots on tiger ... didn't try kexec yet).
Hi Tony,
Yep, the 2.6.27-rc7 kdump kernel built with this patch worked fine!
Actually you probably can predict the results by doing 'readelf -l
vmlinux'. If
Jay Lan wrote:
Bernhard Walle wrote:
* Luck, Tony [EMAIL PROTECTED] [2008-08-29]:
your commit
commit 10617bbe84628eb18ab5f723d3ba35005adde143
Author: Tony Luck [EMAIL PROTECTED]
Date: Tue Aug 12 10:34:20 2008 -0700
[IA64] Ensure cpu0 can access per-cpu variables
Simon Horman wrote:
On Fri, Sep 19, 2008 at 07:17:05PM -0700, Jay Lan wrote:
This patch combines consecutive PT_LOAD segments into one.
The end address of the last PT_LOAD segment, calculated by
adding p_memsz to p_paddr rounded up to ELF_PAGE_SIZE,
will be the end address
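The end-address arithmetic described above can be sketched as follows (the addresses are made-up examples, and a 64K ELF_PAGE_SIZE is assumed, as would be typical for ia64):

```shell
# end = round_up(p_paddr + p_memsz, ELF_PAGE_SIZE)
# 64K ELF page size assumed; addresses are made-up examples.
ELF_PAGE_SIZE=65536
p_paddr=$(( 0x4000000 ))
p_memsz=$(( 0x123456 ))
end=$(( ( p_paddr + p_memsz + ELF_PAGE_SIZE - 1 ) / ELF_PAGE_SIZE * ELF_PAGE_SIZE ))
printf 'end address: 0x%x\n' "$end"
```

With these example values, 0x4000000 + 0x123456 = 0x4123456, which rounds up to 0x4130000.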
Ken'ichi Ohmichi wrote:
Hi Jay,
Hi Ken'ichi San,
The latest patch worked on my 2p A350 IA64 as well as on my 128p 256G
memory A4700 machines! And it still took less than 2 minutes to
complete makedumpfile on the freshly booted A4700 (compared to 6
minutes doing 'cp --sparse=always' :) It would
Simon Horman wrote:
On Wed, Sep 17, 2008 at 02:07:15PM -0700, Jay Lan wrote:
Hi,
My root disk was populated with sles10sp2, but the kernel was
2.6.27-rc5 and /sbin/kexec was built from 2.0.0 version.
Many times when the kdump kernel failed early I found after reboot that
/sbin/kexec became
Neil Horman wrote:
On Tue, Sep 23, 2008 at 09:41:50AM -0700, Jay Lan wrote:
Simon Horman wrote:
On Wed, Sep 17, 2008 at 02:07:15PM -0700, Jay Lan wrote:
Hi,
My root disk was populated with sles10sp2, but the kernel was
2.6.27-rc5 and /sbin/kexec was built from 2.0.0 version.
Many times
Ken'ichi Ohmichi wrote:
Hi Hedi, Jay,
Hedi Berriche wrote:
In addition to what other folks have mentioned about giving the latest crash
version a try, I'd like to point out that makedumpfile did spit a couple of
warnings while creating the vmcore
| Can't distinguish the pgtable.
| The
Dave Anderson wrote:
Try using at least -d4 and redirect the output to a file. It's much
more verbose than the above, but it shows every readmem() made from
the dumpfile:
# crash -d4 vmlinux vmcore.cp > /tmp/debug.cp
q
# crash -d4 vmlinux vmcore.makedumpfile > /tmp/debug.makedumpfile
q
it tries to save the vmcore to a disk. A normal
cached access may cause MCAs.
This patch would label memory with attribute of EFI_MEMORY_UC only as
Uncached RAM so that kexec would know not to include it in the vmcore.
I will submit a separate kexec-tools patch to the kexec list.
Signed-off-by: Jay Lan
broke kdump on our Altix 350. I get following early crash in kdump
kernel
Sorry about that. I'll try to reproduce it here.
I had some discussion about that with Jay Lan that he could not
reproduce that on his machine. We thought it was different config, but
now I can verify that the problem
After getting around a few kdump kernel panics/hangs, I finally was
able to complete a kdump vmcore with 2.6.27-rc5. The system under
testing was an IA64 with 128 cpu and 256G memory A4700 system.
The /proc/vmcore is:
a4700rac:/boot # ll /proc/vmcore
-r-------- 1 root root 263006257684 2008-09-10
Bernhard Walle wrote:
* Jay Lan [EMAIL PROTECTED] [2008-09-08]:
Any input helping me speed up debugging is appreciated.
I would start with comparing the ELF program headers of /proc/vmcore
which you get with readelf -l /proc/vmcore in kdump environment and
the /proc/iomem which kexec
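A sketch of the /proc/iomem side of that comparison, with a sample of the file inlined so the parsing is visible (the sample ranges are invented; on a real system you would read /proc/iomem itself and compare the System RAM ranges against the PT_LOAD entries from readelf -l /proc/vmcore):

```shell
# Sample /proc/iomem content inlined for illustration; a real run
# would read /proc/iomem. Extract the System RAM ranges, which are
# what kexec turns into PT_LOAD segments for /proc/vmcore.
iomem_sample='00000000-0009ffff : System RAM
000a0000-000fffff : reserved
00100000-7fffffff : System RAM'

printf '%s\n' "$iomem_sample" | awk -F' : ' '/System RAM/ { print $1 }'
```

Each printed range should have a matching PT_LOAD segment in the vmcore's program headers; a missing or shifted range points at the kexec-side setup.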
When trying to do 'cp /proc/vmcore ...', the kdump kernel MCA'ed.
KDB showed me this backtrace: (it is really nice to have kdb working
with kdump :))
Entering kdb (current=0xe0303257, pid 3519) on processor 0 due
to KDB_ENTER()
[0]kdb bt
Stack traceback for pid 3519
0xe0303257
Ken'ichi Ohmichi wrote:
Hi Bernhard, Jay,
Bernhard Walle wrote:
Hi Ken'ichi Ohmichi,
* Jay Lan [2008-08-27 18:43]:
Thanks for your patch!
I am wondering if the discontigmem kernel has a legitimate bug,
we probably should report it?
I tested your patch on a machine that used to fail
Bernhard Walle wrote:
Hi,
* Simon Horman [2008-08-29 15:19]:
On Thu, Aug 28, 2008 at 05:14:04PM -0700, Jay Lan wrote:
Specifically, an ia64.
I added printf() to purgatory-ia64.c, compiled, and executed
the kexec command from a shell window (i.e., not from a script), but
I still did not see
Ken'ichi Ohmichi wrote:
Hi Bernhard,
Bernhard Walle wrote:
* Ken'ichi Ohmichi [2008-08-05 21:07]:
BTW, I'd like to know some conditions of this problem.
So please let me know the makedumpfile commandline which you run.
Ex. # makedumpfile -d 31 -x vmlinux /proc/vmcore dumpfile
#
I have an IA64 system with 250G memory. I reserved 1024M memory for the
kdump kernel. It worked fine... up to 2.6.23.
Starting with 2.6.24-rc1, booting a kdump kernel on the machine has
failed with OOM. I tried 1280M, but still failed. I threw in 2048M and
then it worked. When OOM happened, it
Repost to include linux-ia64...
I have an IA64 system with 250G memory. I reserved 1024M memory for the
kdump kernel. It worked fine... up to 2.6.23.
Starting with 2.6.24-rc1, booting a kdump kernel on the machine has
failed with OOM. I tried 1280M, but still failed. I threw in 2048M and
then it
Neil Horman wrote:
On Fri, Jul 18, 2008 at 02:43:11AM +0200, Bernhard Walle wrote:
* Neil Horman [2008-08-14 08:18]:
That being said, Bernhard, I suppose it would be worthwhile
standardizing some configuration settings, seeing as
despite our implementation differences, we seem to
Ken'ichi Ohmichi wrote:
Hi,
Jay Lan wrote:
Ken'ichi Ohmichi wrote:
Hi Jay,
Ken'ichi Ohmichi wrote:
I created the attached patch that makedumpfile does not scan
memory gap when creating 1st-bitmap. Could you please try it ?
This patch is for makedumpfile-1.2.6.
I found a bug
Ken'ichi Ohmichi wrote:
Hi Jay,
Ken'ichi Ohmichi wrote:
I created the attached patch that makedumpfile does not scan
memory gap when creating 1st-bitmap. Could you please try it ?
This patch is for makedumpfile-1.2.6.
I found a bug in the patch I sent before, and I fixed it in the
Vivek Goyal wrote:
On Wed, Jul 16, 2008 at 11:25:44AM -0400, Neil Horman wrote:
On Wed, Jul 16, 2008 at 11:12:40AM -0400, Vivek Goyal wrote:
On Tue, Jul 15, 2008 at 06:07:40PM -0700, Jay Lan wrote:
Are there known problems if you boot up the kdump kernel with
multiple CPUs?
I had run into one
Neil Horman wrote:
On Wed, Jul 16, 2008 at 12:23:43PM -0400, Vivek Goyal wrote:
On Wed, Jul 16, 2008 at 11:25:44AM -0400, Neil Horman wrote:
On Wed, Jul 16, 2008 at 11:12:40AM -0400, Vivek Goyal wrote:
On Tue, Jul 15, 2008 at 06:07:40PM -0700, Jay Lan wrote:
Are there known problems if you
Are there known problems if you boot up the kdump kernel with
multiple CPUs?
It takes an unacceptably long time to run makedumpfile when
saving a dump on a huge-memory system. In my testing it
took 16hr25min to run create_dump_bitmap() on a 1TB system.
PFNs are processed sequentially on a single CPU. We
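A sketch of the kind of pfn-range partitioning the message argues for; this is not makedumpfile code, just an illustration of splitting the range across workers (all numbers are invented):

```shell
# Illustrative partitioning of a pfn range across workers, the kind
# of parallelism the message argues create_dump_bitmap() lacks.
max_pfn=1000000
ncpus=4
chunk=$(( (max_pfn + ncpus - 1) / ncpus ))   # ceil(max_pfn / ncpus)
i=0
while [ "$i" -lt "$ncpus" ]; do
    start=$(( i * chunk ))
    end=$(( start + chunk ))
    [ "$end" -gt "$max_pfn" ] && end=$max_pfn  # clamp the last chunk
    echo "worker $i: pfns $start..$end"
    i=$(( i + 1 ))
done
```

Each worker would then test its own slice of pfns independently, since whether a page is free/zero/cache is a per-page question.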
Bernhard Walle wrote:
* Ken'ichi Ohmichi [2008-07-07 11:50]:
Ken'ichi Ohmichi wrote:
Hi Bernhard,
Thank you for your patch.
I like this idea :-)
I am busy now, and I will consider the patch well the next week.
Thank you for the patch, and sorry for my late response.
I added the progress
How do I force a dump on a hung system when
1) it does not respond to the serial console, and
2) there is no hardware button to trigger an NMI?
Am I out of luck?
Thanks,
- jay
Vivek Goyal wrote:
On Tue, Aug 21, 2007 at 06:18:31AM -0700, Jay Lan wrote:
[..]
Now users will be able to view all the die_chain users through sysfs and
be able to modify the order in which these should run by modifying their
priority. Hence all the RAS tools can co-exist.
This is my image