Re: [PATCH] iommu/vt-d: Return the correct dma mask when we are bypassing the IOMMU

2019-10-17 Thread Arvind Sankar
On Thu, Oct 17, 2019 at 09:08:47AM +0200, Christoph Hellwig wrote:
> On Wed, Oct 16, 2019 at 03:15:52PM -0400, Arvind Sankar wrote:
> > > > Reported-by: Arvind Sankar 
> > > > Tested-by: Arvind Sankar 
> > > > Originally-by: Christoph Hellwig 
> > > > Signed-off-by: Christoph Hellwig 
> > > > Fixed-by: Arvind Sankar 
> > > > Signed-off-by: Arvind Sankar 
> > > 
> > > This patch looks good to me.
> > > 
> > > Reviewed-by: Lu Baolu 
> > > 
> > 
> > Hi Christoph, will you be taking this through your dma-mapping branch?
> 
> Given this is a patch to intel-iommu I expect Joerg to pick it up.
> But if he is fine with that I can also queue it up instead.

David Woodhouse is listed as maintainer for intel-iommu. Cc'ing him as
well.


Re: [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment

2019-12-21 Thread Arvind Sankar
On Sat, Dec 21, 2019 at 03:03:53PM +, Tom Murphy wrote:
> In the intel iommu driver devices which only support 32bit DMA can't be
> direct mapped. The implementation of this is weird. Currently we assign
> it a direct mapped domain and then remove the domain later and replace
> it with a domain of type IOMMU_DOMAIN_IDENTITY. We should just assign it
> a domain of type IOMMU_DOMAIN_IDENTITY from the begging rather than
> needlessly swapping domains.
> 
> Signed-off-by: Tom Murphy 
> ---
>  drivers/iommu/intel-iommu.c | 88 +
>  1 file changed, 31 insertions(+), 57 deletions(-)
> 
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 0c8d81f56a30..c1ea66467918 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -5640,7 +5609,12 @@ static int intel_iommu_add_device(struct device *dev)
>   domain = iommu_get_domain_for_dev(dev);
>   dmar_domain = to_dmar_domain(domain);
>   if (domain->type == IOMMU_DOMAIN_DMA) {
> - if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY) {
> + /*
> +  * We check dma_mask >= dma_get_required_mask(dev) because
> +  * 32 bit DMA falls back to non-identity mapping.
> +  */
> + if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY &&
> + dma_mask >= dma_get_required_mask(dev)) {
>   ret = iommu_request_dm_for_dev(dev);
>   if (ret) {
>   dmar_remove_one_dev_info(dev);
> -- 
> 2.20.1
> 

Should this be dma_direct_get_required_mask? dma_get_required_mask may
return DMA_BIT_MASK(32) -- it calls back into intel_get_required_mask,
and I'm not sure what iommu_no_mapping(dev) will return at this point.
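
To be concrete, the check I have in mind would be something like this
(untested sketch; it assumes dma_direct_get_required_mask() is usable
from intel-iommu at this point):

	if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY &&
	    dma_mask >= dma_direct_get_required_mask(dev)) {

which avoids the round trip through the IOMMU driver's own
required-mask callback.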


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-16 Thread Arvind Sankar
On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> 
> I am discussing with Ross the other option. We can create
> .rodata.mle_header section and put it at fixed offset as
> kernel_info is. So, we would have, e.g.:
> 
> arch/x86/boot/compressed/vmlinux.lds.S:
> .rodata.kernel_info KERNEL_INFO_OFFSET : {
> *(.rodata.kernel_info)
> }
> ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
>
> .rodata.mle_header MLE_HEADER_OFFSET : {
> *(.rodata.mle_header)
> }
> ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
>
> arch/x86/boot/compressed/sl_stub.S:
> #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
>
> .section ".rodata.mle_header", "a"
>
> SYM_DATA_START(mle_header)
> .long   0x9082ac5a              /* UUID0 */
> .long   0x74a7476f              /* UUID1 */
> .long   0xa2555c0f              /* UUID2 */
> .long   0x42b651cb              /* UUID3 */
> .long   0x0034                  /* MLE header size */
> .long   0x00020002              /* MLE version 2.2 */
> .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
> .long   0x                      /* First valid page of MLE */
> .long   0x                      /* Offset within binary of first byte of MLE */
> .long   0x                      /* Offset within binary of last byte + 1 of MLE */
> .long   0x0223                  /* Bit vector of MLE-supported capabilities */
> .long   0x                      /* Starting linear address of command line (unused) */
> .long   0x                      /* Ending linear address of command line (unused) */
> SYM_DATA_END(mle_header)
> 
> Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
> Anyway, is it acceptable?
> 
> There is also another problem. We have to put the size of the Linux
> kernel image into mle_header. Currently it is done by the bootloader,
> but I think that is not the bootloader's role. The kernel image should
> provide all data describing its properties and not rely on the
> bootloader to do that. Ross and I investigated various options but we
> did not find a good/simple way to do that. Could you suggest how we
> should do that or at least where we should take a look to get some
> ideas?
> 
> Daniel

What exactly is the size you need here? Is it just the size of the
protected-mode image, i.e. startup_32 to _edata? Or is it the size of
the whole bzImage file, or something else? I guess the same question
applies to "first valid page of MLE" and "first byte of MLE", and the
linear entry point -- are those all relative to startup_32 or do they
need to be relative to the start of the bzImage, i.e. you have to add
the size of the real-mode boot stub?

If you need to include the size of the bzImage file, that's not known
when the files in boot/compressed are built. It's only known after the
real-mode stub is linked. arch/x86/boot/tools/build.c fills in various
details in the setup header and creates the bzImage file, but it does
not currently modify anything in the protected-mode portion of the
compressed kernel (i.e. arch/x86/boot/compressed/vmlinux, which then
gets converted to binary format as arch/x86/boot/vmlinux.bin), so it
would need to be extended if you need to modify the MLE header to
include the bzImage size or anything depending on the size of the
real-mode stub.
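
If it does turn out to need the final bzImage size, the build.c side
could be fairly small -- a rough sketch, not against any actual tree
(the helper name and mle_size_offset are made up; the field's location
would follow from the fixed MLE_HEADER_OFFSET plus the setup size):

	/*
	 * Sketch: patch the final bzImage size into an MLE header field.
	 * buf points at the in-memory image, mle_size_offset is the file
	 * offset of the field, sz is the total bzImage size in bytes.
	 */
	static void update_mle_size(unsigned char *buf,
				    unsigned long mle_size_offset,
				    unsigned long sz)
	{
		buf[mle_size_offset + 0] = sz;
		buf[mle_size_offset + 1] = sz >> 8;
		buf[mle_size_offset + 2] = sz >> 16;
		buf[mle_size_offset + 3] = sz >> 24;
	}

build.c already computes the sizes it writes into the setup header, so
the value itself would be at hand there.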


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-19 Thread Arvind Sankar
On Mon, Oct 19, 2020 at 04:51:53PM +0200, Daniel Kiper wrote:
> On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
> > On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> > >
> > > [...]
> > >
> > > Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
> > > Anyway, is it acceptable?
> 
> What do you think about my MLE_HEADER_OFFSET and related stuff proposal?
> 

I'm wondering if it would be easier to just allow relocations in these
special "header" sections. I need to check how easy/hard it is to do
that without triggering linker warnings.

Thanks.


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-19 Thread Arvind Sankar
On Mon, Oct 19, 2020 at 10:38:08AM -0400, Ross Philipson wrote:
> On 10/16/20 4:51 PM, Arvind Sankar wrote:
> > On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> >>
> >> [...]
> >>
> >> Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
> >> Anyway, is it acceptable?
> >>
> >> There is also another problem. We have to put the size of the Linux
> >> kernel image into mle_header. Currently it is done by the bootloader,
> >> but I think that is not the bootloader's role. The kernel image should
> >> provide all data describing its properties and not rely on the
> >> bootloader to do that. Ross and I investigated various options but we
> >> did not find a good/simple way to do that. Could you suggest how we
> >> should do that or at least where we should take a look to get some
> >> ideas?
> >>
> >> Daniel
> > 
> > What exactly is the size you need here? Is it just the size of the
> > protected-mode image, i.e. startup_32 to _edata? Or is it the size of
> 
> It is the size of the protected mode image. Though how to reference
> those symbols to get the size might add more relocation issues.
> 

Ok, then I think mleh_rva(_edata) should get you that -- I assume you
don't want to include the uninitialized data in the size? The kernel
will access memory beyond _edata (up to the init_size in the setup
header), but that's not part of the image itself.
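
IOW the header field could presumably just be filled in as (sketch,
reusing Daniel's mleh_rva() macro from the proposal above):

	.long	mleh_rva(_edata)	/* Offset within binary of last byte + 1 of MLE */

since mleh_rva() of a symbol is exactly that symbol's offset from the
start of the protected-mode image.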


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-29 Thread Arvind Sankar
On Tue, Sep 29, 2020 at 10:03:47AM -0400, Ross Philipson wrote:
> On 9/25/20 3:18 PM, Arvind Sankar wrote:
> > You will also need to avoid initializing data with symbol addresses.
> > 
> > .long mle_header
> > .long sl_stub_entry
> > .long sl_gdt
> > 
...
> > 
> > The other two are more messy, unfortunately there is no easy way to tell
> > the linker what we want here. The other entry point addresses (for the
> > EFI stub) are populated in a post-processing step after the compressed
> > kernel has been linked, we could teach it to also update kernel_info.
> > 
> > Without that, for kernel_info, you could change it to store the offset
> > of the MLE header from kernel_info, instead of from the start of the
> > image.

Actually, kernel_info is currently placed inside .rodata, which is quite
a ways into the compressed kernel, so an offset from kernel_info would
end up having to be negative, which would be ugly. I'll see if I can
come up with some way to avoid this.

> > 
> > For the MLE header, it could be moved to .head.text in head_64.S, and
> > initialized with
> > .long rva(sl_stub)
> > This will also let it be placed at a fixed offset from startup_32, so
> > that kernel_info can just be populated with a constant.
> 
> Thank you for the detailed reply. I am going to start digging into this now.
> 
> Ross
> 
> > 
> 


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-25 Thread Arvind Sankar
On Fri, Sep 25, 2020 at 10:56:43AM -0400, Ross Philipson wrote:
> On 9/24/20 1:38 PM, Arvind Sankar wrote:
> > On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
> > 
> >> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> >> index 97d37f0..42043bf 100644
> >> --- a/arch/x86/boot/compressed/head_64.S
> >> +++ b/arch/x86/boot/compressed/head_64.S
> >> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
> >>  SYM_FUNC_END(efi32_stub_entry)
> >>  #endif
> >>  
> >> +#ifdef CONFIG_SECURE_LAUNCH
> >> +SYM_FUNC_START(sl_stub_entry)
> >> +  /*
> >> +   * On entry, %ebx has the entry abs offset to sl_stub_entry. To
> >> +   * find the beginning of where we are loaded, sub off from the
> >> +   * beginning.
> >> +   */
> > 
> > This requirement should be added to the documentation. Is it necessary
> > or can this stub just figure out the address the same way as the other
> > 32-bit entry points, using the scratch space in bootparams as a little
> > stack?
> 
> It is based on the state of the BSP when TXT vectors to the measured
> launch environment. It is documented in the TXT spec and the SDMs.
> 

I think it would be useful to add to the x86 boot documentation how
exactly this new entry point is called, even if it's just adding a link
to some section of those specs. The doc should also say that an
mle_header_offset of 0 means the kernel isn't secure launch enabled.

> > 
> > For the 32-bit assembler code that's being added, tip/master now has
> > changes that prevent the compressed kernel from having any runtime
> > relocations.  You'll need to revise some of the code and the data
> > structures' initial values to avoid creating relocations.
> 
> Could you elaborate on this some more? I am not sure I see places in the
> secure launch asm that would be creating relocations like this.
> 
> Thank you,
> Ross
> 

You should see them if you do
readelf -r arch/x86/boot/compressed/vmlinux

In terms of the code, things like:

addl    %ebx, (sl_gdt_desc + 2)(%ebx)

will create a relocation, because the linker interprets this as wanting
the runtime address of sl_gdt_desc, rather than just the offset from
startup_32.

https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/tree/arch/x86/boot/compressed/head_64.S#n48

has a comment with some explanation and a macro that the 32-bit code in
startup_32 uses to avoid creating relocations.

Since the SL code is in a different assembler file (and a different
section), you can't directly use the same macro. I would suggest getting
rid of sl_stub_entry and entering directly at sl_stub, and then the code
in sl_stub.S can use sl_stub for the base address, defining the rva()
macro there as

#define rva(X) ((X) - sl_stub)

You will also need to avoid initializing data with symbol addresses.

.long mle_header
.long sl_stub_entry
.long sl_gdt

will create relocations. The third one is easy, just replace it with
sl_gdt - sl_gdt_desc and initialize it at runtime with

leal    rva(sl_gdt_desc)(%ebx), %eax
addl    %eax, 2(%eax)
lgdt    (%eax)
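
With the descriptor itself laid out along these lines (sketch; sl_gdt
and a hypothetical sl_gdt_end label bracket the GDT):

	SYM_DATA_START_LOCAL(sl_gdt_desc)
		.word	sl_gdt_end - sl_gdt - 1	/* GDT limit */
		.long	sl_gdt - sl_gdt_desc	/* link-time offset, no relocation */
	SYM_DATA_END(sl_gdt_desc)

the addl above converts the stored offset into the runtime address of
sl_gdt before the lgdt.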

The other two are more messy, unfortunately there is no easy way to tell
the linker what we want here. The other entry point addresses (for the
EFI stub) are populated in a post-processing step after the compressed
kernel has been linked, we could teach it to also update kernel_info.

Without that, for kernel_info, you could change it to store the offset
of the MLE header from kernel_info, instead of from the start of the
image.

For the MLE header, it could be moved to .head.text in head_64.S, and
initialized with
.long rva(sl_stub)
This will also let it be placed at a fixed offset from startup_32, so
that kernel_info can just be populated with a constant.


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-24 Thread Arvind Sankar
On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
> The Secure Launch (SL) stub provides the entry point for Intel TXT (and
> later AMD SKINIT) to vector to during the late launch. The symbol
> sl_stub_entry is that entry point and its offset into the kernel is
> conveyed to the launching code using the MLE (Measured Launch
> Environment) header in the structure named mle_header. The offset of the
> MLE header is set in the kernel_info. The routine sl_stub contains the
> very early late launch setup code responsible for setting up the basic
> environment to allow the normal kernel startup_32 code to proceed. It is
> also responsible for properly waking and handling the APs on Intel
> platforms. The routine sl_main which runs after entering 64b mode is
> responsible for measuring configuration and module information before
> it is used like the boot params, the kernel command line, the TXT heap,
> an external initramfs, etc.
> 
> Signed-off-by: Ross Philipson 

Which version of the kernel is this based on?

> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> index 97d37f0..42043bf 100644
> --- a/arch/x86/boot/compressed/head_64.S
> +++ b/arch/x86/boot/compressed/head_64.S
> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
>  SYM_FUNC_END(efi32_stub_entry)
>  #endif
>  
> +#ifdef CONFIG_SECURE_LAUNCH
> +SYM_FUNC_START(sl_stub_entry)
> + /*
> +  * On entry, %ebx has the entry abs offset to sl_stub_entry. To
> +  * find the beginning of where we are loaded, sub off from the
> +  * beginning.
> +  */

This requirement should be added to the documentation. Is it necessary
or can this stub just figure out the address the same way as the other
32-bit entry points, using the scratch space in bootparams as a little
stack?
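
For reference, startup_32 currently does it roughly like this (%esi
holds the boot_params pointer on entry):

	leal	(BP_scratch+4)(%esi), %esp	/* scratch field as a 4-byte stack */
	call	1f
1:	popl	%ebp				/* %ebp = runtime address of 1: */
	subl	$ rva(1b), %ebp			/* %ebp = runtime address of startup_32 */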

> +	leal	(startup_32 - sl_stub_entry)(%ebx), %ebx
> +
> + /* More room to work in sl_stub in the text section */
> + jmp sl_stub
> +
> +SYM_FUNC_END(sl_stub_entry)
> +#endif
> +
>   .code64
>   .org 0x200
>  SYM_CODE_START(startup_64)
> @@ -537,6 +552,25 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
>	shrq	$3, %rcx
>   rep stosq
>  
> +#ifdef CONFIG_SECURE_LAUNCH
> + /*
> +  * Have to do the final early sl stub work in 64b area.
> +  *
> +  * *** NOTE ***
> +  *
> +  * Several boot params get used before we get a chance to measure
> +  * them in this call. This is a known issue and we currently don't
> +  * have a solution. The scratch field doesn't matter and loadflags
> +  * have KEEP_SEGMENTS set by the stub code. There is no obvious way
> +  * to do anything about the use of kernel_alignment or init_size
> +  * though these seem low risk.
> +  */

There are various fields in bootparams that depend on where the
kernel/initrd and cmdline are loaded in memory. If the entire bootparams
page is getting measured, does that mean they all have to be at fixed
addresses on every boot?

Also KEEP_SEGMENTS support is gone from the kernel since v5.7, since it
was unused. startup_32 now always loads a GDT and then the segment
registers. I think this should be ok for you as the only thing the flag
used to do in the 64-bit kernel was to stop startup_32 from blindly
loading __BOOT_DS into the segment registers before it had setup its own
GDT.

For the 32-bit assembler code that's being added, tip/master now has
changes that prevent the compressed kernel from having any runtime
relocations.  You'll need to revise some of the code and the data
structures' initial values to avoid creating relocations.

Thanks.


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-21 Thread Arvind Sankar
On Wed, Oct 21, 2020 at 05:28:33PM +0200, Daniel Kiper wrote:
> On Mon, Oct 19, 2020 at 01:18:22PM -0400, Arvind Sankar wrote:
> > On Mon, Oct 19, 2020 at 04:51:53PM +0200, Daniel Kiper wrote:
> > > On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
> > > > On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> > > > >
> > > > > [...]
> > > > >
> > > > > Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
> > > > > Anyway, is it acceptable?
> > >
> > > What do you think about my MLE_HEADER_OFFSET and related stuff proposal?
> > >
> >
> > I'm wondering if it would be easier to just allow relocations in these
> > special "header" sections. I need to check how easy/hard it is to do
> > that without triggering linker warnings.
> 
> Ross and I are still bouncing some ideas. We came to the conclusion that
> putting mle_header into kernel .rodata.kernel_info section or even
> arch/x86/boot/compressed/kernel_info.S file would be the easiest thing
> to do at this point. Of course I would suggest some renaming too. E.g.
> .rodata.kernel_info to .rodata.kernel_headers, etc. Does it make sense
> for you?
> 
> Daniel

I haven't been able to come up with any different options that don't
require post-processing of the kernel image. Allowing relocations in
specific sections seems not to be possible with lld, and would anyway
require the fields to be 64 bits in size, so it doesn't really help.

Putting mle_header into kernel_info seems like a reasonable thing to me,
and if you do that, putting it into kernel_info.S would seem to be
necessary?  Would you also have a fixed field with the offset of the
mle_header from kernel_info?  That seems nicer than having the
bootloader scan the variable data for magic strings.
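
Concretely, kernel_info.S could grow something like this (sketch; the
field's placement and name are up for discussion):

	/* Offset of mle_header from kernel_info; 0 if not secure-launch enabled */
	.long	mle_header - kernel_info

which stays a plain constant at link time as long as both symbols end
up in the same section.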


Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header

2020-10-29 Thread Arvind Sankar
On Thu, Oct 29, 2020 at 09:41:13PM +0100, Thomas Gleixner wrote:
> On Thu, Oct 29 2020 at 17:59, Paolo Bonzini wrote:
> > On 29/10/20 17:56, Arvind Sankar wrote:
> >>> For those two just add:
> >>>   struct apic *apic = x86_system_apic;
> >>> before all the assignments.
> >>> Less churn and much better code.
> >>>
> >> Why would it be better code?
> >> 
> >
> > I think he means the compiler produces better code, because it won't
> > read the global variable repeatedly.  Not sure if that's true,(*) but I
> > think I do prefer that version if Arnd wants to do that tweak.
> 
> It's not true.
> 
>  foo *p = bar;
> 
>  p->a = 1;
>  p->b = 2;
> 
> The compiler is free to reload bar after accessing p->a and with
> 
> bar->a = 1;
> bar->b = 2;
> 
> it can either cache bar in a register or reread it after bar->a
> 
> The generated code is the same as long as there is no reason to reload,
> e.g. register pressure.
> 
> Thanks,
> 
> tglx

It's not quite the same.

https://godbolt.org/z/4dzPbM

With -fno-strict-aliasing, the compiler reloads the pointer if you write
to the start of what it points to, but not if you write to later
elements.
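
A stripped-down version of the comparison, with made-up names (compile
with -O2 -fno-strict-aliasing, as the kernel does):

struct apic { int a; int b; };
struct apic *bar;

void assign_direct(void)
{
	bar->a = 1;	/* this store may alias 'bar' itself... */
	bar->b = 2;	/* ...so 'bar' gets reloaded before this store */
}

void assign_cached(void)
{
	struct apic *p = bar;	/* single load of the global */

	p->a = 1;	/* stores go through the local; no reload needed */
	p->b = 2;
}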


Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header

2020-10-29 Thread Arvind Sankar
On Thu, Oct 29, 2020 at 05:59:54PM +0100, Paolo Bonzini wrote:
> On 29/10/20 17:56, Arvind Sankar wrote:
> >> For those two just add:
> >>struct apic *apic = x86_system_apic;
> >> before all the assignments.
> >> Less churn and much better code.
> >>
> > Why would it be better code?
> > 
> 
> I think he means the compiler produces better code, because it won't
> read the global variable repeatedly.  Not sure if that's true,(*) but I
> think I do prefer that version if Arnd wants to do that tweak.
> 
> Paolo
> 
> (*) if it is, it might also be due to Linux using -fno-strict-aliasing
> 

Nope, it doesn't read it multiple times. I guess it still assumes that
apic isn't in the middle of what it points to: it would reload the
address if the first element of *apic was modified, but doesn't for
other elements. Interesting.


Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header

2020-10-29 Thread Arvind Sankar
On Thu, Oct 29, 2020 at 03:05:31PM +, David Laight wrote:
> From: Arnd Bergmann
> > Sent: 28 October 2020 21:21
> > 
> > From: Arnd Bergmann 
> > 
> > There are hundreds of warnings in a W=2 build about a local
> > variable shadowing the global 'apic' definition:
> > 
> > arch/x86/kvm/lapic.h:149:65: warning: declaration of 'apic' shadows a global declaration [-Wshadow]
> > 
> > Avoid this by renaming the global 'apic' variable to the more descriptive
> > 'x86_system_apic'. It was originally called 'genapic', but both that
> > and the current 'apic' seem to be a little overly generic for a global
> > variable.
> > 
> > Fixes: c48f14966cc4 ("KVM: inline kvm_apic_present() and kvm_lapic_enabled()")
> > Fixes: c8d46cf06dc2 ("x86: rename 'genapic' to 'apic'")
> > Signed-off-by: Arnd Bergmann 
> > ---
> > v2: rename the global instead of the local variable in the header
> ...
> > diff --git a/arch/x86/hyperv/hv_apic.c b/arch/x86/hyperv/hv_apic.c
> > index 284e73661a18..33e2dc78ca11 100644
> > --- a/arch/x86/hyperv/hv_apic.c
> > +++ b/arch/x86/hyperv/hv_apic.c
> > @@ -259,14 +259,14 @@ void __init hv_apic_init(void)
> > /*
> >  * Set the IPI entry points.
> >  */
> > -   orig_apic = *apic;
> > -
> > -   apic->send_IPI = hv_send_ipi;
> > -   apic->send_IPI_mask = hv_send_ipi_mask;
> > -   apic->send_IPI_mask_allbutself = hv_send_ipi_mask_allbutself;
> > -   apic->send_IPI_allbutself = hv_send_ipi_allbutself;
> > -   apic->send_IPI_all = hv_send_ipi_all;
> > -   apic->send_IPI_self = hv_send_ipi_self;
> > +   orig_apic = *x86_system_apic;
> > +
> > +   x86_system_apic->send_IPI = hv_send_ipi;
> > +   x86_system_apic->send_IPI_mask = hv_send_ipi_mask;
> > +   x86_system_apic->send_IPI_mask_allbutself = hv_send_ipi_mask_allbutself;
> > +   x86_system_apic->send_IPI_allbutself = hv_send_ipi_allbutself;
> > +   x86_system_apic->send_IPI_all = hv_send_ipi_all;
> > +   x86_system_apic->send_IPI_self = hv_send_ipi_self;
> > }
> > 
> > if (ms_hyperv.hints & HV_X64_APIC_ACCESS_RECOMMENDED) {
> > @@ -285,10 +285,10 @@ void __init hv_apic_init(void)
> >  */
> > apic_set_eoi_write(hv_apic_eoi_write);
> > if (!x2apic_enabled()) {
> > -   apic->read  = hv_apic_read;
> > -   apic->write = hv_apic_write;
> > -   apic->icr_write = hv_apic_icr_write;
> > -   apic->icr_read  = hv_apic_icr_read;
> > +   x86_system_apic->read  = hv_apic_read;
> > +   x86_system_apic->write = hv_apic_write;
> > +   x86_system_apic->icr_write = hv_apic_icr_write;
> > +   x86_system_apic->icr_read  = hv_apic_icr_read;
> > }
> 
> For those two just add:
>   struct apic *apic = x86_system_apic;
> before all the assignments.
> Less churn and much better code.
> 

Why would it be better code?