Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Jan Beulich
>>> On 24.11.15 at 14:46,  wrote:
> On Tue, 2015-11-24 at 10:35 +, Andrew Cooper wrote:
>> On 24/11/15 10:17, Petr Tesarik wrote:
>> > On Tue, 24 Nov 2015 10:09:01 +
>> > David Vrabel  wrote:
>> > 
>> > > On 24/11/15 09:55, Malcolm Crossley wrote:
>> > > > On 24/11/15 08:59, Jan Beulich wrote:
>> > > > > > > > On 24.11.15 at 07:55,  wrote:
>> > > > > > What about:
>> > > > > > 
>> > > > > > 4) Instead of relying on the kernel maintained p2m list for m2p
>> > > > > >conversion use the hypervisor maintained m2p list which
>> > > > > > should be
>> > > > > >available in the dump as well. This is the way the alive
>> > > > > > kernel is
>> > > > > >working, so mimic it during crash dump analysis.
>> > > > > I fully agree; I have to admit that looking at the p2m when doing
>> > > > > page
>> > > > > table walks for a PV Dom0 (having all machine addresses in page
>> > > > > table
>> > > > > entries) seems kind of backwards. (But I say this knowing nothing
>> > > > > about the tool.)
>> > > > > 
>> > > > I don't think we can reliably use the m2p for PV domains because
>> > > > PV domains don't always issue a m2p update hypercall when they
>> > > > change
>> > > > their p2m mapping.
>> > > This only applies to foreign pages which won't be very interesting to
>> > > a
>> > > crash tool.
>> > True. I think the main reason crash hasn't done this is that it cannot
>> > find the hypervisor maintained m2p list. It should be sufficient to add
>> > some more fields to XEN_VMCOREINFO, so that crash can locate the
>> > mapping in the dump.
>> 
>> The M2P lives at an ABI-specified location in all virtual address spaces
>> for PV guests.
>> 
>> Either 0xF580 or 0x8000 depending on bitness.
> 
> In theory it can actually be dynamic. XENMEM_machphys_mapping is the way to
> get at it (for both bitnesses).
> 
> For 64-bit guests I think that is mostly an "in theory" thing, and in
> practice it never has actually been dynamic.
> 
> For the 32-bit guest case I don't recall whether it is just a 32on32 vs
> 32on64 thing, whether something (either guest or toolstack) gets to pick
> more dynamically, or whether it is a dom0 vs domU thing.

It's only for 32-on-64 where this range can change (and there it's the
64-bit address that crash would care about anyway).

Jan

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Ian Campbell
On Tue, 2015-11-24 at 10:35 +, Andrew Cooper wrote:
> On 24/11/15 10:17, Petr Tesarik wrote:
> > On Tue, 24 Nov 2015 10:09:01 +
> > David Vrabel  wrote:
> > 
> > > On 24/11/15 09:55, Malcolm Crossley wrote:
> > > > On 24/11/15 08:59, Jan Beulich wrote:
> > > > > > > > On 24.11.15 at 07:55,  wrote:
> > > > > > What about:
> > > > > > 
> > > > > > 4) Instead of relying on the kernel maintained p2m list for m2p
> > > > > >    conversion use the hypervisor maintained m2p list which
> > > > > > should be
> > > > > >    available in the dump as well. This is the way the alive
> > > > > > kernel is
> > > > > >    working, so mimic it during crash dump analysis.
> > > > > I fully agree; I have to admit that looking at the p2m when doing
> > > > > page
> > > > > table walks for a PV Dom0 (having all machine addresses in page
> > > > > table
> > > > > entries) seems kind of backwards. (But I say this knowing nothing
> > > > > about the tool.)
> > > > > 
> > > > I don't think we can reliably use the m2p for PV domains because
> > > > PV domains don't always issue a m2p update hypercall when they
> > > > change
> > > > their p2m mapping.
> > > This only applies to foreign pages which won't be very interesting to
> > > a
> > > crash tool.
> > True. I think the main reason crash hasn't done this is that it cannot
> > find the hypervisor maintained m2p list. It should be sufficient to add
> > some more fields to XEN_VMCOREINFO, so that crash can locate the
> > mapping in the dump.
> 
> The M2P lives at an ABI-specified location in all virtual address spaces
> for PV guests.
> 
> Either 0xF580 or 0x8000 depending on bitness.

In theory it can actually be dynamic. XENMEM_machphys_mapping is the way to
get at it (for both bitnesses).

For 64-bit guests I think that is mostly an "in theory" thing, and in
practice it never has actually been dynamic.

For the 32-bit guest case I don't recall whether it is just a 32on32 vs
32on64 thing, whether something (either guest or toolstack) gets to pick
more dynamically, or whether it is a dom0 vs domU thing.

Ian.


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Andrew Cooper
On 24/11/15 13:41, Andrew Cooper wrote:
> On 24/11/15 13:39, Jan Beulich wrote:
> On 24.11.15 at 13:57,  wrote:
>>> On Tue, 24 Nov 2015 10:35:03 +
>>> Andrew Cooper  wrote:
>>>
 On 24/11/15 10:17, Petr Tesarik wrote:
> On Tue, 24 Nov 2015 10:09:01 +
> David Vrabel  wrote:
>
>> On 24/11/15 09:55, Malcolm Crossley wrote:
>>> On 24/11/15 08:59, Jan Beulich wrote:
>>> On 24.11.15 at 07:55,  wrote:
> What about:
>
> 4) Instead of relying on the kernel maintained p2m list for m2p
>conversion use the hypervisor maintained m2p list which should be
>available in the dump as well. This is the way the alive kernel is
>working, so mimic it during crash dump analysis.
 I fully agree; I have to admit that looking at the p2m when doing page
 table walks for a PV Dom0 (having all machine addresses in page table
 entries) seems kind of backwards. (But I say this knowing nothing
 about the tool.)

>>> I don't think we can reliably use the m2p for PV domains because
>>> PV domains don't always issue a m2p update hypercall when they change
>>> their p2m mapping.
>> This only applies to foreign pages which won't be very interesting to a
>> crash tool.
> True. I think the main reason crash hasn't done this is that it cannot
> find the hypervisor maintained m2p list. It should be sufficient to add
> some more fields to XEN_VMCOREINFO, so that crash can locate the
> mapping in the dump.
 The M2P lives at an ABI-specified location in all virtual address spaces
 for PV guests.

 Either 0xF580 or 0x8000 depending on bitness.
>>> Hm, this is nice, but kind of a chicken-and-egg problem. A system dump
>>> contains a snapshot of the machine's RAM. But the addresses you
>>> mentioned are virtual addresses. How do I translate them to physical
>>> addresses without an m2p mapping? I need at least the value of CR3 for
>>> that domain, and most likely a way to determine if it is a PV domain.
>> This ought to also be present in Xen's master page table
>> (idle_pg_table[]), and I suppose we can take for granted a symbol
>> table being available.
> The idle_pg_table is already present in the VMCORE notes.

Ah, except it is aliased to the name pgd_l4.

~Andrew


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Andrew Cooper
On 24/11/15 13:39, Jan Beulich wrote:
 On 24.11.15 at 13:57,  wrote:
>> On Tue, 24 Nov 2015 10:35:03 +
>> Andrew Cooper  wrote:
>>
>>> On 24/11/15 10:17, Petr Tesarik wrote:
 On Tue, 24 Nov 2015 10:09:01 +
 David Vrabel  wrote:

> On 24/11/15 09:55, Malcolm Crossley wrote:
>> On 24/11/15 08:59, Jan Beulich wrote:
>> On 24.11.15 at 07:55,  wrote:
 What about:

 4) Instead of relying on the kernel maintained p2m list for m2p
conversion use the hypervisor maintained m2p list which should be
available in the dump as well. This is the way the alive kernel is
working, so mimic it during crash dump analysis.
>>> I fully agree; I have to admit that looking at the p2m when doing page
>>> table walks for a PV Dom0 (having all machine addresses in page table
>>> entries) seems kind of backwards. (But I say this knowing nothing
>>> about the tool.)
>>>
>> I don't think we can reliably use the m2p for PV domains because
>> PV domains don't always issue a m2p update hypercall when they change
>> their p2m mapping.
> This only applies to foreign pages which won't be very interesting to a
> crash tool.
 True. I think the main reason crash hasn't done this is that it cannot
 find the hypervisor maintained m2p list. It should be sufficient to add
 some more fields to XEN_VMCOREINFO, so that crash can locate the
 mapping in the dump.
>>> The M2P lives at an ABI-specified location in all virtual address spaces
>>> for PV guests.
>>>
>>> Either 0xF580 or 0x8000 depending on bitness.
>> Hm, this is nice, but kind of a chicken-and-egg problem. A system dump
>> contains a snapshot of the machine's RAM. But the addresses you
>> mentioned are virtual addresses. How do I translate them to physical
>> addresses without an m2p mapping? I need at least the value of CR3 for
>> that domain, and most likely a way to determine if it is a PV domain.
> This ought to also be present in Xen's master page table
> (idle_pg_table[]), and I suppose we can take for granted a symbol
> table being available.

The idle_pg_table is already present in the VMCORE notes.

~Andrew


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Jan Beulich
>>> On 24.11.15 at 13:57,  wrote:
> On Tue, 24 Nov 2015 10:35:03 +
> Andrew Cooper  wrote:
> 
>> On 24/11/15 10:17, Petr Tesarik wrote:
>> > On Tue, 24 Nov 2015 10:09:01 +
>> > David Vrabel  wrote:
>> >
>> >> On 24/11/15 09:55, Malcolm Crossley wrote:
>> >>> On 24/11/15 08:59, Jan Beulich wrote:
>> >>> On 24.11.15 at 07:55,  wrote:
>> > What about:
>> >
>> > 4) Instead of relying on the kernel maintained p2m list for m2p
>> >conversion use the hypervisor maintained m2p list which should be
>> >available in the dump as well. This is the way the alive kernel is
>> >working, so mimic it during crash dump analysis.
>>  I fully agree; I have to admit that looking at the p2m when doing page
>>  table walks for a PV Dom0 (having all machine addresses in page table
>>  entries) seems kind of backwards. (But I say this knowing nothing
>>  about the tool.)
>> 
>> >>> I don't think we can reliably use the m2p for PV domains because
>> >>> PV domains don't always issue a m2p update hypercall when they change
>> >>> their p2m mapping.
>> >> This only applies to foreign pages which won't be very interesting to a
>> >> crash tool.
>> > True. I think the main reason crash hasn't done this is that it cannot
>> > find the hypervisor maintained m2p list. It should be sufficient to add
>> > some more fields to XEN_VMCOREINFO, so that crash can locate the
>> > mapping in the dump.
>> 
>> The M2P lives at an ABI-specified location in all virtual address spaces
>> for PV guests.
>> 
>> Either 0xF580 or 0x8000 depending on bitness.
> 
> Hm, this is nice, but kind of a chicken-and-egg problem. A system dump
> contains a snapshot of the machine's RAM. But the addresses you
> mentioned are virtual addresses. How do I translate them to physical
> addresses without an m2p mapping? I need at least the value of CR3 for
> that domain, and most likely a way to determine if it is a PV domain.

This ought to also be present in Xen's master page table
(idle_pg_table[]), and I suppose we can take for granted a symbol
table being available.

Jan


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Petr Tesarik
On Tue, 24 Nov 2015 10:35:03 +
Andrew Cooper  wrote:

> On 24/11/15 10:17, Petr Tesarik wrote:
> > On Tue, 24 Nov 2015 10:09:01 +
> > David Vrabel  wrote:
> >
> >> On 24/11/15 09:55, Malcolm Crossley wrote:
> >>> On 24/11/15 08:59, Jan Beulich wrote:
> >>> On 24.11.15 at 07:55,  wrote:
> > What about:
> >
> > 4) Instead of relying on the kernel maintained p2m list for m2p
> >conversion use the hypervisor maintained m2p list which should be
> >available in the dump as well. This is the way the alive kernel is
> >working, so mimic it during crash dump analysis.
>  I fully agree; I have to admit that looking at the p2m when doing page
>  table walks for a PV Dom0 (having all machine addresses in page table
>  entries) seems kind of backwards. (But I say this knowing nothing
>  about the tool.)
> 
> >>> I don't think we can reliably use the m2p for PV domains because
> >>> PV domains don't always issue a m2p update hypercall when they change
> >>> their p2m mapping.
> >> This only applies to foreign pages which won't be very interesting to a
> >> crash tool.
> > True. I think the main reason crash hasn't done this is that it cannot
> > find the hypervisor maintained m2p list. It should be sufficient to add
> > some more fields to XEN_VMCOREINFO, so that crash can locate the
> > mapping in the dump.
> 
> The M2P lives at an ABI-specified location in all virtual address spaces
> for PV guests.
> 
> Either 0xF580 or 0x8000 depending on bitness.

Hm, this is nice, but kind of a chicken-and-egg problem. A system dump
contains a snapshot of the machine's RAM. But the addresses you
mentioned are virtual addresses. How do I translate them to physical
addresses without an m2p mapping? I need at least the value of CR3 for
that domain, and most likely a way to determine if it is a PV domain.

Petr T


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Andrew Cooper
On 24/11/15 10:17, Petr Tesarik wrote:
> On Tue, 24 Nov 2015 10:09:01 +
> David Vrabel  wrote:
>
>> On 24/11/15 09:55, Malcolm Crossley wrote:
>>> On 24/11/15 08:59, Jan Beulich wrote:
>>> On 24.11.15 at 07:55,  wrote:
> What about:
>
> 4) Instead of relying on the kernel maintained p2m list for m2p
>conversion use the hypervisor maintained m2p list which should be
>available in the dump as well. This is the way the alive kernel is
>working, so mimic it during crash dump analysis.
 I fully agree; I have to admit that looking at the p2m when doing page
 table walks for a PV Dom0 (having all machine addresses in page table
 entries) seems kind of backwards. (But I say this knowing nothing
 about the tool.)

>>> I don't think we can reliably use the m2p for PV domains because
>>> PV domains don't always issue a m2p update hypercall when they change
>>> their p2m mapping.
>> This only applies to foreign pages which won't be very interesting to a
>> crash tool.
> True. I think the main reason crash hasn't done this is that it cannot
> find the hypervisor maintained m2p list. It should be sufficient to add
> some more fields to XEN_VMCOREINFO, so that crash can locate the
> mapping in the dump.

The M2P lives at an ABI-specified location in all virtual address spaces
for PV guests.

Either 0xF580 or 0x8000 depending on bitness.

This will be the more compatible approach, as it won't depend on a newer
hypervisor with modified notes.

~Andrew


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Petr Tesarik
On Tue, 24 Nov 2015 10:09:01 +
David Vrabel  wrote:

> On 24/11/15 09:55, Malcolm Crossley wrote:
> > On 24/11/15 08:59, Jan Beulich wrote:
> > On 24.11.15 at 07:55,  wrote:
> >>> What about:
> >>>
> >>> 4) Instead of relying on the kernel maintained p2m list for m2p
> >>>conversion use the hypervisor maintained m2p list which should be
> >>>available in the dump as well. This is the way the alive kernel is
> >>>working, so mimic it during crash dump analysis.
> >>
> >> I fully agree; I have to admit that looking at the p2m when doing page
> >> table walks for a PV Dom0 (having all machine addresses in page table
> >> entries) seems kind of backwards. (But I say this knowing nothing
> >> about the tool.)
> >>
> > I don't think we can reliably use the m2p for PV domains because
> > PV domains don't always issue a m2p update hypercall when they change
> > their p2m mapping.
> 
> This only applies to foreign pages which won't be very interesting to a
> crash tool.

True. I think the main reason crash hasn't done this is that it cannot
find the hypervisor maintained m2p list. It should be sufficient to add
some more fields to XEN_VMCOREINFO, so that crash can locate the
mapping in the dump.

Petr T


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread David Vrabel
On 24/11/15 09:55, Malcolm Crossley wrote:
> On 24/11/15 08:59, Jan Beulich wrote:
> On 24.11.15 at 07:55,  wrote:
>>> What about:
>>>
>>> 4) Instead of relying on the kernel maintained p2m list for m2p
>>>conversion use the hypervisor maintained m2p list which should be
>>>available in the dump as well. This is the way the alive kernel is
>>>working, so mimic it during crash dump analysis.
>>
>> I fully agree; I have to admit that looking at the p2m when doing page
>> table walks for a PV Dom0 (having all machine addresses in page table
>> entries) seems kind of backwards. (But I say this knowing nothing
>> about the tool.)
>>
> I don't think we can reliably use the m2p for PV domains because
> PV domains don't always issue a m2p update hypercall when they change
> their p2m mapping.

This only applies to foreign pages which won't be very interesting to a
crash tool.

David


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Malcolm Crossley
On 24/11/15 08:59, Jan Beulich wrote:
 On 24.11.15 at 07:55,  wrote:
>> What about:
>>
>> 4) Instead of relying on the kernel maintained p2m list for m2p
>>conversion use the hypervisor maintained m2p list which should be
>>available in the dump as well. This is the way the alive kernel is
>>working, so mimic it during crash dump analysis.
> 
> I fully agree; I have to admit that looking at the p2m when doing page
> table walks for a PV Dom0 (having all machine addresses in page table
> entries) seems kind of backwards. (But I say this knowing nothing
> about the tool.)
> 
I don't think we can reliably use the m2p for PV domains because
PV domains don't always issue a m2p update hypercall when they change
their p2m mapping.

Malcolm

> ___
> Xen-devel mailing list
> xen-de...@lists.xen.org
> http://lists.xen.org/xen-devel
> 



Re: crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Jan Beulich
>>> On 24.11.15 at 07:55,  wrote:
> What about:
> 
> 4) Instead of relying on the kernel maintained p2m list for m2p
>conversion use the hypervisor maintained m2p list which should be
>available in the dump as well. This is the way the alive kernel is
>working, so mimic it during crash dump analysis.

I fully agree; I have to admit that looking at the p2m when doing page
table walks for a PV Dom0 (having all machine addresses in page table
entries) seems kind of backwards. (But I say this knowing nothing
about the tool.)

Jan



Re: crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Jan Beulich
>>> On 24.11.15 at 07:55,  wrote:
> What about:
> 
> 4) Instead of relying on the kernel maintained p2m list for m2p
>conversion use the hypervisor maintained m2p list which should be
>available in the dump as well. This is the way the alive kernel is
>working, so mimic it during crash dump analysis.

I fully agree; I have to admit that looking at the p2m when doing page
table walks for a PV Dom0 (having all machine addresses in page table
entries) seems kind of backwards. (But I say this knowing nothing
about the tool.)

Jan

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Andrew Cooper
On 24/11/15 10:17, Petr Tesarik wrote:
> On Tue, 24 Nov 2015 10:09:01 +
> David Vrabel  wrote:
>
>> On 24/11/15 09:55, Malcolm Crossley wrote:
>>> On 24/11/15 08:59, Jan Beulich wrote:
>>> On 24.11.15 at 07:55,  wrote:
> What about:
>
> 4) Instead of relying on the kernel maintained p2m list for m2p
>conversion use the hypervisor maintained m2p list which should be
>available in the dump as well. This is the way the alive kernel is
>working, so mimic it during crash dump analysis.
 I fully agree; I have to admit that looking at the p2m when doing page
 table walks for a PV Dom0 (having all machine addresses in page table
 entries) seems kind of backwards. (But I say this knowing nothing
 about the tool.)

>>> I don't think we can reliably use the m2p for PV domains because
>>> PV domains don't always issue a m2p update hypercall when they change
>>> their p2m mapping.
>> This only applies to foreign pages which won't be very interesting to a
>> crash tool.
> True. I think the main reason crash hasn't done this is that it cannot
> find the hypervisor maintained m2p list. It should be sufficient to add
> some more fields to XEN_VMCOREINFO, so that crash can locate the
> mapping in the dump.

The M2P lives at an ABI-specified location in all virtual address spaces
for PV guests.

Either 0xF580 or 0x8000 depending on bitness.

This will be more compatible to use, as it won't depend on a newer
hypervisor with modified notes.

~Andrew
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Petr Tesarik
On Tue, 24 Nov 2015 10:09:01 +
David Vrabel  wrote:

> On 24/11/15 09:55, Malcolm Crossley wrote:
> > On 24/11/15 08:59, Jan Beulich wrote:
> > On 24.11.15 at 07:55,  wrote:
> >>> What about:
> >>>
> >>> 4) Instead of relying on the kernel maintained p2m list for m2p
> >>>conversion use the hypervisor maintained m2p list which should be
> >>>available in the dump as well. This is the way the alive kernel is
> >>>working, so mimic it during crash dump analysis.
> >>
> >> I fully agree; I have to admit that looking at the p2m when doing page
> >> table walks for a PV Dom0 (having all machine addresses in page table
> >> entries) seems kind of backwards. (But I say this knowing nothing
> >> about the tool.)
> >>
> > I don't think we can reliably use the m2p for PV domains because
> > PV domains don't always issue a m2p update hypercall when they change
> > their p2m mapping.
> 
> This only applies to foreign pages which won't be very interesting to a
> crash tool.

True. I think the main reason crash hasn't done this is that it cannot
find the hypervisor maintained m2p list. It should be sufficient to add
some more fields to XEN_VMCOREINFO, so that crash can locate the
mapping in the dump.

Petr T
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Malcolm Crossley
On 24/11/15 08:59, Jan Beulich wrote:
 On 24.11.15 at 07:55,  wrote:
>> What about:
>>
>> 4) Instead of relying on the kernel maintained p2m list for m2p
>>conversion use the hypervisor maintained m2p list which should be
>>available in the dump as well. This is the way the alive kernel is
>>working, so mimic it during crash dump analysis.
> 
> I fully agree; I have to admit that looking at the p2m when doing page
> table walks for a PV Dom0 (having all machine addresses in page table
> entries) seems kind of backwards. (But I say this knowing nothing
> about the tool.)
> 
I don't think we can reliably use the m2p for PV domains because
PV domains don't always issue a m2p update hypercall when they change
their p2m mapping.

Malcolm

> ___
> Xen-devel mailing list
> xen-de...@lists.xen.org
> http://lists.xen.org/xen-devel
> 

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread David Vrabel
On 24/11/15 09:55, Malcolm Crossley wrote:
> On 24/11/15 08:59, Jan Beulich wrote:
> On 24.11.15 at 07:55,  wrote:
>>> What about:
>>>
>>> 4) Instead of relying on the kernel maintained p2m list for m2p
>>>conversion use the hypervisor maintained m2p list which should be
>>>available in the dump as well. This is the way the alive kernel is
>>>working, so mimic it during crash dump analysis.
>>
>> I fully agree; I have to admit that looking at the p2m when doing page
>> table walks for a PV Dom0 (having all machine addresses in page table
>> entries) seems kind of backwards. (But I say this knowing nothing
>> about the tool.)
>>
> I don't think we can reliably use the m2p for PV domains because
> PV domains don't always issue a m2p update hypercall when they change
> their p2m mapping.

This only applies to foreign pages which won't be very interesting to a
crash tool.

David
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Andrew Cooper
On 24/11/15 13:41, Andrew Cooper wrote:
> On 24/11/15 13:39, Jan Beulich wrote:
> On 24.11.15 at 13:57,  wrote:
>>> V Tue, 24 Nov 2015 10:35:03 +
>>> Andrew Cooper  napsáno:
>>>
 On 24/11/15 10:17, Petr Tesarik wrote:
> On Tue, 24 Nov 2015 10:09:01 +
> David Vrabel  wrote:
>
>> On 24/11/15 09:55, Malcolm Crossley wrote:
>>> On 24/11/15 08:59, Jan Beulich wrote:
>>> On 24.11.15 at 07:55,  wrote:
> What about:
>
> 4) Instead of relying on the kernel maintained p2m list for m2p
>conversion use the hypervisor maintained m2p list which should be
>available in the dump as well. This is the way the alive kernel is
>working, so mimic it during crash dump analysis.
 I fully agree; I have to admit that looking at the p2m when doing page
 table walks for a PV Dom0 (having all machine addresses in page table
 entries) seems kind of backwards. (But I say this knowing nothing
 about the tool.)

>>> I don't think we can reliably use the m2p for PV domains because
>>> PV domains don't always issue a m2p update hypercall when they change
>>> their p2m mapping.
>> This only applies to foreign pages which won't be very interesting to a
>> crash tool.
> True. I think the main reason crash hasn't done this is that it cannot
> find the hypervisor maintained m2p list. It should be sufficient to add
> some more fields to XEN_VMCOREINFO, so that crash can locate the
> mapping in the dump.
 The M2P lives at an ABI-specified location in all virtual address spaces
 for PV guests.

 Either 0xF580 or 0x8000 depending on bitness.
>>> Hm, this is nice, but kind of chicken-and-egg problem. A system dump
>>> contains a snapshot of the machine's RAM. But the addresses you
>>> mentioned are virtual addresses. How do I translate them to physical
>>> addresses without an m2p mapping? I need at least the value of CR3 for
>>> that domain, and most likely a way to determine if it is a PV domain.
>> This ought to also be present in Xen's master page table
>> (idle_pg_table[]), and I suppose we can take for granted a symbol
>> table being available.
> The idle_pg_table is already present in the VMCORE notes.

Ah, except it is aliased to the name pgd_l4.

~Andrew
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Ian Campbell
On Tue, 2015-11-24 at 10:35 +, Andrew Cooper wrote:
> On 24/11/15 10:17, Petr Tesarik wrote:
> > On Tue, 24 Nov 2015 10:09:01 +
> > David Vrabel  wrote:
> > 
> > > On 24/11/15 09:55, Malcolm Crossley wrote:
> > > > On 24/11/15 08:59, Jan Beulich wrote:
> > > > > > > > On 24.11.15 at 07:55,  wrote:
> > > > > > What about:
> > > > > > 
> > > > > > 4) Instead of relying on the kernel maintained p2m list for m2p
> > > > > >    conversion use the hypervisor maintained m2p list which
> > > > > > should be
> > > > > >    available in the dump as well. This is the way the alive
> > > > > > kernel is
> > > > > >    working, so mimic it during crash dump analysis.
> > > > > I fully agree; I have to admit that looking at the p2m when doing
> > > > > page
> > > > > table walks for a PV Dom0 (having all machine addresses in page
> > > > > table
> > > > > entries) seems kind of backwards. (But I say this knowing nothing
> > > > > about the tool.)
> > > > > 
> > > > I don't think we can reliably use the m2p for PV domains because
> > > > PV domains don't always issue a m2p update hypercall when they
> > > > change
> > > > their p2m mapping.
> > > This only applies to foreign pages which won't be very interesting to
> > > a
> > > crash tool.
> > True. I think the main reason crash hasn't done this is that it cannot
> > find the hypervisor maintained m2p list. It should be sufficient to add
> > some more fields to XEN_VMCOREINFO, so that crash can locate the
> > mapping in the dump.
> 
> The M2P lives at an ABI-specified location in all virtual address spaces
> for PV guests.
> 
> Either 0xF5800000 or 0xFFFF800000000000 depending on bitness.

In theory it can actually be dynamic. XENMEM_machphys_mapping is the way to
get at it (for both bitnesses).

For 64-bit guests I think that is mostly an "in theory" thing, and it has
never actually been dynamic in practice.

For a 32-bit guest case I don't recall if it is just a 32on32 vs 32on64
thing, or if something (either guest or toolstack) gets to pick more
dynamically or even if it is a dom0 vs domU thing.

Ian.


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Jan Beulich
>>> On 24.11.15 at 14:46,  wrote:
> On Tue, 2015-11-24 at 10:35 +, Andrew Cooper wrote:
>> On 24/11/15 10:17, Petr Tesarik wrote:
>> > On Tue, 24 Nov 2015 10:09:01 +
>> > David Vrabel  wrote:
>> > 
>> > > On 24/11/15 09:55, Malcolm Crossley wrote:
>> > > > On 24/11/15 08:59, Jan Beulich wrote:
>> > > > > > > > On 24.11.15 at 07:55,  wrote:
>> > > > > > What about:
>> > > > > > 
>> > > > > > 4) Instead of relying on the kernel maintained p2m list for m2p
>> > > > > >conversion use the hypervisor maintained m2p list which
>> > > > > > should be
>> > > > > >available in the dump as well. This is the way the alive
>> > > > > > kernel is
>> > > > > >working, so mimic it during crash dump analysis.
>> > > > > I fully agree; I have to admit that looking at the p2m when doing
>> > > > > page
>> > > > > table walks for a PV Dom0 (having all machine addresses in page
>> > > > > table
>> > > > > entries) seems kind of backwards. (But I say this knowing nothing
>> > > > > about the tool.)
>> > > > > 
>> > > > I don't think we can reliably use the m2p for PV domains because
>> > > > PV domains don't always issue a m2p update hypercall when they
>> > > > change
>> > > > their p2m mapping.
>> > > This only applies to foreign pages which won't be very interesting to
>> > > a
>> > > crash tool.
>> > True. I think the main reason crash hasn't done this is that it cannot
>> > find the hypervisor maintained m2p list. It should be sufficient to add
>> > some more fields to XEN_VMCOREINFO, so that crash can locate the
>> > mapping in the dump.
>> 
>> The M2P lives at an ABI-specified location in all virtual address spaces
>> for PV guests.
>> 
>> Either 0xF5800000 or 0xFFFF800000000000 depending on bitness.
> 
> In theory it can actually be dynamic. XENMEM_machphys_mapping is the way to
> get at it (for both bitnesses).
> 
> For 64-bit guests I think that is mostly an "in theory" thing, and it has
> never actually been dynamic in practice.
> 
> For a 32-bit guest case I don't recall if it is just a 32on32 vs 32on64
> thing, or if something (either guest or toolstack) gets to pick more
> dynamically or even if it is a dom0 vs domU thing.

It's only for 32-on-64 where this range can change (and there it's the
64-bit address that crash would care about anyway).

Jan



Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Andrew Cooper
On 24/11/15 13:39, Jan Beulich wrote:
 On 24.11.15 at 13:57,  wrote:
>> On Tue, 24 Nov 2015 10:35:03 +
>> Andrew Cooper  wrote:
>>
>>> On 24/11/15 10:17, Petr Tesarik wrote:
 On Tue, 24 Nov 2015 10:09:01 +
 David Vrabel  wrote:

> On 24/11/15 09:55, Malcolm Crossley wrote:
>> On 24/11/15 08:59, Jan Beulich wrote:
>> On 24.11.15 at 07:55,  wrote:
 What about:

 4) Instead of relying on the kernel maintained p2m list for m2p
conversion use the hypervisor maintained m2p list which should be
available in the dump as well. This is the way the alive kernel is
working, so mimic it during crash dump analysis.
>>> I fully agree; I have to admit that looking at the p2m when doing page
>>> table walks for a PV Dom0 (having all machine addresses in page table
>>> entries) seems kind of backwards. (But I say this knowing nothing
>>> about the tool.)
>>>
>> I don't think we can reliably use the m2p for PV domains because
>> PV domains don't always issue a m2p update hypercall when they change
>> their p2m mapping.
> This only applies to foreign pages which won't be very interesting to a
> crash tool.
 True. I think the main reason crash hasn't done this is that it cannot
 find the hypervisor maintained m2p list. It should be sufficient to add
 some more fields to XEN_VMCOREINFO, so that crash can locate the
 mapping in the dump.
>>> The M2P lives at an ABI-specified location in all virtual address spaces
>>> for PV guests.
>>>
>>> Either 0xF5800000 or 0xFFFF800000000000 depending on bitness.
>> Hm, this is nice, but kind of chicken-and-egg problem. A system dump
>> contains a snapshot of the machine's RAM. But the addresses you
>> mentioned are virtual addresses. How do I translate them to physical
>> addresses without an m2p mapping? I need at least the value of CR3 for
>> that domain, and most likely a way to determine if it is a PV domain.
> This ought to also be present in Xen's master page table
> (idle_pg_table[]), and I suppose we can take for granted a symbol
> table being available.

The idle_pg_table is already present in the VMCORE notes.

~Andrew


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Petr Tesarik
On Tue, 24 Nov 2015 10:35:03 +
Andrew Cooper  wrote:

> On 24/11/15 10:17, Petr Tesarik wrote:
> > On Tue, 24 Nov 2015 10:09:01 +
> > David Vrabel  wrote:
> >
> >> On 24/11/15 09:55, Malcolm Crossley wrote:
> >>> On 24/11/15 08:59, Jan Beulich wrote:
> >>> On 24.11.15 at 07:55,  wrote:
> > What about:
> >
> > 4) Instead of relying on the kernel maintained p2m list for m2p
> >conversion use the hypervisor maintained m2p list which should be
> >available in the dump as well. This is the way the alive kernel is
> >working, so mimic it during crash dump analysis.
>  I fully agree; I have to admit that looking at the p2m when doing page
>  table walks for a PV Dom0 (having all machine addresses in page table
>  entries) seems kind of backwards. (But I say this knowing nothing
>  about the tool.)
> 
> >>> I don't think we can reliably use the m2p for PV domains because
> >>> PV domains don't always issue a m2p update hypercall when they change
> >>> their p2m mapping.
> >> This only applies to foreign pages which won't be very interesting to a
> >> crash tool.
> > True. I think the main reason crash hasn't done this is that it cannot
> > find the hypervisor maintained m2p list. It should be sufficient to add
> > some more fields to XEN_VMCOREINFO, so that crash can locate the
> > mapping in the dump.
> 
> The M2P lives at an ABI-specified location in all virtual address spaces
> for PV guests.
> 
> Either 0xF5800000 or 0xFFFF800000000000 depending on bitness.

Hm, this is nice, but kind of chicken-and-egg problem. A system dump
contains a snapshot of the machine's RAM. But the addresses you
mentioned are virtual addresses. How do I translate them to physical
addresses without an m2p mapping? I need at least the value of CR3 for
that domain, and most likely a way to determine if it is a PV domain.

Petr T


Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-24 Thread Jan Beulich
>>> On 24.11.15 at 13:57,  wrote:
> On Tue, 24 Nov 2015 10:35:03 +
> Andrew Cooper  wrote:
> 
>> On 24/11/15 10:17, Petr Tesarik wrote:
>> > On Tue, 24 Nov 2015 10:09:01 +
>> > David Vrabel  wrote:
>> >
>> >> On 24/11/15 09:55, Malcolm Crossley wrote:
>> >>> On 24/11/15 08:59, Jan Beulich wrote:
>> >>> On 24.11.15 at 07:55,  wrote:
>> > What about:
>> >
>> > 4) Instead of relying on the kernel maintained p2m list for m2p
>> >conversion use the hypervisor maintained m2p list which should be
>> >available in the dump as well. This is the way the alive kernel is
>> >working, so mimic it during crash dump analysis.
>>  I fully agree; I have to admit that looking at the p2m when doing page
>>  table walks for a PV Dom0 (having all machine addresses in page table
>>  entries) seems kind of backwards. (But I say this knowing nothing
>>  about the tool.)
>> 
>> >>> I don't think we can reliably use the m2p for PV domains because
>> >>> PV domains don't always issue a m2p update hypercall when they change
>> >>> their p2m mapping.
>> >> This only applies to foreign pages which won't be very interesting to a
>> >> crash tool.
>> > True. I think the main reason crash hasn't done this is that it cannot
>> > find the hypervisor maintained m2p list. It should be sufficient to add
>> > some more fields to XEN_VMCOREINFO, so that crash can locate the
>> > mapping in the dump.
>> 
>> The M2P lives at an ABI-specified location in all virtual address spaces
>> for PV guests.
>> 
>> Either 0xF5800000 or 0xFFFF800000000000 depending on bitness.
> 
> Hm, this is nice, but kind of chicken-and-egg problem. A system dump
> contains a snapshot of the machine's RAM. But the addresses you
> mentioned are virtual addresses. How do I translate them to physical
> addresses without an m2p mapping? I need at least the value of CR3 for
> that domain, and most likely a way to determine if it is a PV domain.

This ought to also be present in Xen's master page table
(idle_pg_table[]), and I suppose we can take for granted a symbol
table being available.

Jan


Re: crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-23 Thread Juergen Gross
On 23/11/15 21:18, Daniel Kiper wrote:
> Hi all,
> 
> Some time ago Linux kernel commit 054954eb051f35e74b75a566a96fe756015352c8
> (xen: switch to linear virtual mapped sparse p2m list) introduced linear
> virtual mapped sparse p2m list. It fixed some issues; however, it also broke
> the crash tool. I tried to fix this issue but the problem turned out to be
> more difficult than I expected.
> 
> Let's focus on "crash vmcore vmlinux". vmcore file was generated from dom0.
> "crash vmcore xen-syms" works without any issue.
> 
> At first sight the problem looks simple: just add a function which reads the
> p2m list from vmcore and voila. I have done that. Then another issue arose.
> 
> Please take a look at following backtrace:
> 
> #24426 0x0048b0f6 in readmem (addr=18446683600570023936, memtype=1, 
> buffer=0x3c0a060, size=4096,
> type=0x900b2f "xen_p2m_addr page", error_handle=2) at memory.c:2157
> #24427 0x0050f746 in __xen_pvops_m2p_vma (machine=5323599872, 
> mfn=1299707) at kernel.c:9050
> #24428 0x0050edb7 in __xen_m2p (machine=5323599872, mfn=1299707) at 
> kernel.c:8867
> #24429 0x0050e948 in xen_m2p (machine=5323599872) at kernel.c:8796
> #24430 0x00528fca in x86_64_kvtop_xen_wpt (tc=0x0, 
> kvaddr=18446683600570023936, paddr=0x7fff51c7c100, verbose=0)
> at x86_64.c:1997
> #24431 0x00528890 in x86_64_kvtop (tc=0x0, 
> kvaddr=18446683600570023936, paddr=0x7fff51c7c100, verbose=0)
> at x86_64.c:1887
> #24432 0x0048d708 in kvtop (tc=0x0, kvaddr=18446683600570023936, 
> paddr=0x7fff51c7c100, verbose=0) at memory.c:2900
> #24433 0x0048b0f6 in readmem (addr=18446683600570023936, memtype=1, 
> buffer=0x3c0a060, size=4096,
> type=0x900b2f "xen_p2m_addr page", error_handle=2) at memory.c:2157
> #24434 0x0050f746 in __xen_pvops_m2p_vma (machine=5323599872, 
> mfn=1299707) at kernel.c:9050
> #24435 0x0050edb7 in __xen_m2p (machine=5323599872, mfn=1299707) at 
> kernel.c:8867
> #24436 0x0050e948 in xen_m2p (machine=5323599872) at kernel.c:8796
> #24437 0x00528fca in x86_64_kvtop_xen_wpt (tc=0x0, 
> kvaddr=18446683600570023936, paddr=0x7fff51c7ca60, verbose=0)
> at x86_64.c:1997
> #24438 0x00528890 in x86_64_kvtop (tc=0x0, 
> kvaddr=18446683600570023936, paddr=0x7fff51c7ca60, verbose=0)
> at x86_64.c:1887
> #24439 0x0048d708 in kvtop (tc=0x0, kvaddr=18446683600570023936, 
> paddr=0x7fff51c7ca60, verbose=0) at memory.c:2900
> #24440 0x0048b0f6 in readmem (addr=18446683600570023936, memtype=1, 
> buffer=0x3c0a060, size=4096,
> type=0x900b2f "xen_p2m_addr page", error_handle=2) at memory.c:2157
> #24441 0x0050f746 in __xen_pvops_m2p_vma (machine=6364917760, 
> mfn=1553935) at kernel.c:9050
> #24442 0x0050edb7 in __xen_m2p (machine=6364917760, mfn=1553935) at 
> kernel.c:8867
> #24443 0x0050e948 in xen_m2p (machine=6364917760) at kernel.c:8796
> #24444 0x00528fca in x86_64_kvtop_xen_wpt (tc=0x0, 
> kvaddr=18446744072099176512, paddr=0x7fff51c7d3c0, verbose=0)
> at x86_64.c:1997
> #24445 0x00528890 in x86_64_kvtop (tc=0x0, 
> kvaddr=18446744072099176512, paddr=0x7fff51c7d3c0, verbose=0)
> at x86_64.c:1887
> #24446 0x0048d708 in kvtop (tc=0x0, kvaddr=18446744072099176512, 
> paddr=0x7fff51c7d3c0, verbose=0) at memory.c:2900
> #24447 0x0048b0f6 in readmem (addr=18446744072099176512, memtype=1, 
> buffer=0xfbb500, size=768,
> type=0x8fd772 "module struct", error_handle=6) at memory.c:2157
> #24448 0x004fb0ab in module_init () at kernel.c:3355
> 
> As you can see, module_init() calls readmem(), which attempts to read a
> virtual address which lies outside of the kernel text mapping
> (0xffffffff80000000 - 0xffffffffa0000000). In this case
> addr=18446744072099176512 == 0xffffffffa003a040, which is in the module
> mapping space. readmem() needs a physical address, so it calls kvtop(), and
> kvtop() calls x86_64_kvtop(). x86_64_kvtop() cannot derive the physical
> address with simple arithmetic, as it can for the kernel text mapping, so it
> calls x86_64_kvtop_xen_wpt() to calculate it by traversing the page tables.
> x86_64_kvtop_xen_wpt() needs to do some m2p translation, so it calls
> xen_m2p(), which calls __xen_m2p(), which finally calls
> __xen_pvops_m2p_vma() (my function which tries to read the linear virtual
> mapped sparse p2m list). Then __xen_pvops_m2p_vma() calls readmem(), which
> tries to read addr=18446683600570023936 == 0xffffc90000000000, which is the
> VMA used for the m2p list. Once again the physical address must be
> calculated by traversing the page tables. However, this requires access to
> the m2p list, which leads to another readmem() call. From here on we are in
> a loop. After thousands of repetitions crash dies due to stack overflow.
> Not nice... :-(((
> 
> Do we have any viable fix for this issue? I considered a few but I have not 
> found 

crash tool - problem with new Xen linear virtual mapped sparse p2m list

2015-11-23 Thread Daniel Kiper
Hi all,

Some time ago Linux kernel commit 054954eb051f35e74b75a566a96fe756015352c8
(xen: switch to linear virtual mapped sparse p2m list) introduced linear
virtual mapped sparse p2m list. It fixed some issues; however, it also broke
the crash tool. I tried to fix this issue but the problem turned out to be
more difficult than I expected.

Let's focus on "crash vmcore vmlinux". vmcore file was generated from dom0.
"crash vmcore xen-syms" works without any issue.

At first sight the problem looks simple: just add a function which reads the
p2m list from vmcore and voila. I have done that. Then another issue arose.

Please take a look at following backtrace:

#24426 0x0048b0f6 in readmem (addr=18446683600570023936, memtype=1, 
buffer=0x3c0a060, size=4096,
type=0x900b2f "xen_p2m_addr page", error_handle=2) at memory.c:2157
#24427 0x0050f746 in __xen_pvops_m2p_vma (machine=5323599872, 
mfn=1299707) at kernel.c:9050
#24428 0x0050edb7 in __xen_m2p (machine=5323599872, mfn=1299707) at 
kernel.c:8867
#24429 0x0050e948 in xen_m2p (machine=5323599872) at kernel.c:8796
#24430 0x00528fca in x86_64_kvtop_xen_wpt (tc=0x0, 
kvaddr=18446683600570023936, paddr=0x7fff51c7c100, verbose=0)
at x86_64.c:1997
#24431 0x00528890 in x86_64_kvtop (tc=0x0, kvaddr=18446683600570023936, 
paddr=0x7fff51c7c100, verbose=0)
at x86_64.c:1887
#24432 0x0048d708 in kvtop (tc=0x0, kvaddr=18446683600570023936, 
paddr=0x7fff51c7c100, verbose=0) at memory.c:2900
#24433 0x0048b0f6 in readmem (addr=18446683600570023936, memtype=1, 
buffer=0x3c0a060, size=4096,
type=0x900b2f "xen_p2m_addr page", error_handle=2) at memory.c:2157
#24434 0x0050f746 in __xen_pvops_m2p_vma (machine=5323599872, 
mfn=1299707) at kernel.c:9050
#24435 0x0050edb7 in __xen_m2p (machine=5323599872, mfn=1299707) at 
kernel.c:8867
#24436 0x0050e948 in xen_m2p (machine=5323599872) at kernel.c:8796
#24437 0x00528fca in x86_64_kvtop_xen_wpt (tc=0x0, 
kvaddr=18446683600570023936, paddr=0x7fff51c7ca60, verbose=0)
at x86_64.c:1997
#24438 0x00528890 in x86_64_kvtop (tc=0x0, kvaddr=18446683600570023936, 
paddr=0x7fff51c7ca60, verbose=0)
at x86_64.c:1887
#24439 0x0048d708 in kvtop (tc=0x0, kvaddr=18446683600570023936, 
paddr=0x7fff51c7ca60, verbose=0) at memory.c:2900
#24440 0x0048b0f6 in readmem (addr=18446683600570023936, memtype=1, 
buffer=0x3c0a060, size=4096,
type=0x900b2f "xen_p2m_addr page", error_handle=2) at memory.c:2157
#24441 0x0050f746 in __xen_pvops_m2p_vma (machine=6364917760, 
mfn=1553935) at kernel.c:9050
#24442 0x0050edb7 in __xen_m2p (machine=6364917760, mfn=1553935) at 
kernel.c:8867
#24443 0x0050e948 in xen_m2p (machine=6364917760) at kernel.c:8796
#24444 0x00528fca in x86_64_kvtop_xen_wpt (tc=0x0, 
kvaddr=18446744072099176512, paddr=0x7fff51c7d3c0, verbose=0)
at x86_64.c:1997
#24445 0x00528890 in x86_64_kvtop (tc=0x0, kvaddr=18446744072099176512, 
paddr=0x7fff51c7d3c0, verbose=0)
at x86_64.c:1887
#24446 0x0048d708 in kvtop (tc=0x0, kvaddr=18446744072099176512, 
paddr=0x7fff51c7d3c0, verbose=0) at memory.c:2900
#24447 0x0048b0f6 in readmem (addr=18446744072099176512, memtype=1, 
buffer=0xfbb500, size=768,
type=0x8fd772 "module struct", error_handle=6) at memory.c:2157
#24448 0x004fb0ab in module_init () at kernel.c:3355

As you can see, module_init() calls readmem(), which attempts to read a
virtual address which lies outside of the kernel text mapping
(0xffffffff80000000 - 0xffffffffa0000000). In this case
addr=18446744072099176512 == 0xffffffffa003a040, which is in the module
mapping space. readmem() needs a physical address, so it calls kvtop(), and
kvtop() calls x86_64_kvtop(). x86_64_kvtop() cannot derive the physical
address with simple arithmetic, as it can for the kernel text mapping, so it
calls x86_64_kvtop_xen_wpt() to calculate it by traversing the page tables.
x86_64_kvtop_xen_wpt() needs to do some m2p translation, so it calls
xen_m2p(), which calls __xen_m2p(), which finally calls __xen_pvops_m2p_vma()
(my function which tries to read the linear virtual mapped sparse p2m list).
Then __xen_pvops_m2p_vma() calls readmem(), which tries to read
addr=18446683600570023936 == 0xffffc90000000000, which is the VMA used for
the m2p list. Once again the physical address must be calculated by
traversing the page tables. However, this requires access to the m2p list,
which leads to another readmem() call. From here on we are in a loop. After
thousands of repetitions crash dies due to stack overflow. Not nice... :-(((

Do we have any viable fix for this issue? I considered a few but I have not
found a perfect one.

1) In theory we can use the p2m tree to solve that problem because it is
   currently available in parallel with the VMA mapped p2m. However, this is
   a temporary solution and it will be phased out sooner or later. We need long term 
