Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-16 Thread Andrew Cooper
On 17/11/2016 00:00, Boris Ostrovsky wrote:
>> When we want to enable ACPI vcpu hotplug for HVM guests, 
 What do you mean by "when"? We *are* doing ACPI hotplug for HVM guests,
 aren't we?
>>> Are we?  If so, how?
>>>
>>> I don't see any toolstack or qemu code able to cope with ACPI CPU
>>> hotplug.  I can definitely see ACPI PCI hotplug in qemu, but that does
>>> make sense.
>> piix4_acpi_system_hot_add_init():
>>     acpi_cpu_hotplug_init(parent, OBJECT(s), &s->gpe_cpu,
>>                           PIIX4_CPU_HOTPLUG_IO_BASE);
>>
>>
 Or are you thinking about moving this functionality to the hypervisor?
>>> As an aside, we need to move some part of PCI hotplug into the
>>> hypervisor longterm.  At the moment, any new entity coming along and
>>> attaching to an ioreq server still needs to negotiate with Qemu to make
>>> the device appear.  This is awkward but doable if all device models are
>>> in dom0, but is far harder if the device models are in different domains.
>>>
>>> As for CPU hotplug, (if I have indeed overlooked something), Qemu has no
>>> business in this matter. 
>> Yes. And if we are going to do it for PVH we might as well do it for HVM
>> --- I think most of the code will be the same, save for how SCI is sent.
>
> So I discovered that we actually cannot unplug an HVM VCPU with qemu,
> there is no support for that via QMP (which is what we use).
>
> 'xl vcpu-set  N' is a nop when we unplug.

Lovely!

Sounds like an even better reason to implement it properly when someone
has some TUITs.

Anyway, so long as the PVH implementation will be clean to reuse when
someone gets time to retrofit it to plain HVM guests, I am happy.

~Andrew



Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-16 Thread Boris Ostrovsky

>
> When we want to enable ACPI vcpu hotplug for HVM guests, 
>>> What do you mean by "when"? We *are* doing ACPI hotplug for HVM guests,
>>> aren't we?
>> Are we?  If so, how?
>>
>> I don't see any toolstack or qemu code able to cope with ACPI CPU
>> hotplug.  I can definitely see ACPI PCI hotplug in qemu, but that does
>> make sense.
> piix4_acpi_system_hot_add_init():
>     acpi_cpu_hotplug_init(parent, OBJECT(s), &s->gpe_cpu,
>                           PIIX4_CPU_HOTPLUG_IO_BASE);
>
>
>>> Or are you thinking about moving this functionality to the hypervisor?
>> As an aside, we need to move some part of PCI hotplug into the
>> hypervisor longterm.  At the moment, any new entity coming along and
>> attaching to an ioreq server still needs to negotiate with Qemu to make
>> the device appear.  This is awkward but doable if all device models are
>> in dom0, but is far harder if the device models are in different domains.
>>
>> As for CPU hotplug, (if I have indeed overlooked something), Qemu has no
>> business in this matter. 
> Yes. And if we are going to do it for PVH we might as well do it for HVM
> --- I think most of the code will be the same, save for how SCI is sent.


So I discovered that we actually cannot unplug an HVM VCPU with qemu,
there is no support for that via QMP (which is what we use).

'xl vcpu-set  N' is a nop when we unplug.


-boris


>
> -boris
>
>>  The device model exists to be an
>> implementation of an LPC bridge, and is not responsible for any CPU
>> related functionality; Xen does all vcpu handling.
>>
>>
>> The Xen project and community have had a very rich history of hacking
>> things up in the past, and frankly, it shows.  I want to ensure that
>> development progresses in an architecturally clean and appropriate
>> direction, especially if this enables us to remove some of the duct tape
>> holding pre-existing features together.
>>
>> ~Andrew
>





Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-16 Thread Jan Beulich
>>> On 15.11.16 at 20:19,  wrote:
> On 15/11/16 15:56, Jan Beulich wrote:
> On 15.11.16 at 16:44,  wrote:
>>> On 11/15/2016 10:17 AM, Jan Beulich wrote:
> The other option was XEN_X86_EMU_ACPI. Would it be better?
 As that's a little too wide (and I think someone else had also
 disliked it for that reason), how about XEN_X86_EMU_ACPI_FF
 (for "fixed features"), or if that's still too wide, break things up
 (PM1a, PM1b, PM2, TMR, GPE0, GPE1)?
>>> I think this may be a bit too fine-grained. Fixed-features would be
>>> good, but is GPE block considered part of fixed features?
>> See figure 4-12 in ACPI 6.1: GPE{0,1} are included there, and the
>> text ahead of this makes it pretty clear that altogether they're
>> being called fixed hardware register blocks. So if you consider FF
>> misleading, FHRB would be another option.
> 
> Please can we also consider a naming appropriate for joint use with
> HVM guests as well.
> 
> For PVH, (if enabled), Xen handles all (implemented) fixed function
> registers.
> 
> For HVM, Xen already intercepts and interposes on the PM1a_STS and
> PM1a_EN registers heading towards qemu, for the apparent purpose of
> raising SCIs on behalf of qemu.
> 
> When we want to enable ACPI vcpu hotplug for HVM guests, Xen will have
> to maintain the current intercept behaviour for qemu, but also take on
> the PM1b role entirely.

PM1b? There's no such thing right now, and it's also not being added
by Boris' series, so I don't know why you're thinking of making it
a requirement to have (and be handled in Xen). Yes, PM1a and PM1b
are intentionally split so that two distinct parties (on raw hardware:
chips) could each take on part of the combined functionality. But, if
at all, that should have been leveraged when the implementation was
done originally (to separate Xen's and qemu's parts); I don't see how
it matters now (unless you're again referring to some future overhaul).

Jan




Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-15 Thread Boris Ostrovsky
On 11/15/2016 03:07 PM, Andrew Cooper wrote:
> On 15/11/16 19:38, Boris Ostrovsky wrote:
>> On 11/15/2016 02:19 PM, Andrew Cooper wrote:
>>> On 15/11/16 15:56, Jan Beulich wrote:
>>> On 15.11.16 at 16:44,  wrote:
> On 11/15/2016 10:17 AM, Jan Beulich wrote:
>>> The other option was XEN_X86_EMU_ACPI. Would it be better?
>> As that's a little too wide (and I think someone else had also
>> disliked it for that reason), how about XEN_X86_EMU_ACPI_FF
>> (for "fixed features"), or if that's still too wide, break things up
>> (PM1a, PM1b, PM2, TMR, GPE0, GPE1)?
> I think this may be a bit too fine-grained. Fixed-features would be
> good, but is GPE block considered part of fixed features?
 See figure 4-12 in ACPI 6.1: GPE{0,1} are included there, and the
 text ahead of this makes it pretty clear that altogether they're
 being called fixed hardware register blocks. So if you consider FF
 misleading, FHRB would be another option.
>>> Please can we also consider a naming appropriate for joint use with
>>> HVM guests as well.
>>>
>>> For PVH, (if enabled), Xen handles all (implemented) fixed function
>>> registers.
>>>
>>> For HVM, Xen already intercepts and interposes on the PM1a_STS and
>>> PM1a_EN registers heading towards qemu, for the apparent purpose of
>>> raising SCIs on behalf of qemu.
>>>
>>> When we want to enable ACPI vcpu hotplug for HVM guests, 
>> What do you mean by "when"? We *are* doing ACPI hotplug for HVM guests,
>> aren't we?
> Are we?  If so, how?
>
> I don't see any toolstack or qemu code able to cope with ACPI CPU
> hotplug.  I can definitely see ACPI PCI hotplug in qemu, but that does
> make sense.

piix4_acpi_system_hot_add_init():
    acpi_cpu_hotplug_init(parent, OBJECT(s), &s->gpe_cpu,
                          PIIX4_CPU_HOTPLUG_IO_BASE);


>
>> Or are you thinking about moving this functionality to the hypervisor?
> As an aside, we need to move some part of PCI hotplug into the
> hypervisor longterm.  At the moment, any new entity coming along and
> attaching to an ioreq server still needs to negotiate with Qemu to make
> the device appear.  This is awkward but doable if all device models are
> in dom0, but is far harder if the device models are in different domains.
>
> As for CPU hotplug, (if I have indeed overlooked something), Qemu has no
> business in this matter. 

Yes. And if we are going to do it for PVH we might as well do it for HVM
--- I think most of the code will be the same, save for how SCI is sent.

-boris

>  The device model exists to be an
> implementation of an LPC bridge, and is not responsible for any CPU
> related functionality; Xen does all vcpu handling.
>
>
> The Xen project and community have had a very rich history of hacking
> things up in the past, and frankly, it shows.  I want to ensure that
> development progresses in an architecturally clean and appropriate
> direction, especially if this enables us to remove some of the duct tape
> holding pre-existing features together.
>
> ~Andrew





Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-15 Thread Andrew Cooper
On 15/11/16 19:38, Boris Ostrovsky wrote:
> On 11/15/2016 02:19 PM, Andrew Cooper wrote:
>> On 15/11/16 15:56, Jan Beulich wrote:
>> On 15.11.16 at 16:44,  wrote:
 On 11/15/2016 10:17 AM, Jan Beulich wrote:
>> The other option was XEN_X86_EMU_ACPI. Would it be better?
> As that's a little too wide (and I think someone else had also
> disliked it for that reason), how about XEN_X86_EMU_ACPI_FF
> (for "fixed features"), or if that's still too wide, break things up
> (PM1a, PM1b, PM2, TMR, GPE0, GPE1)?
 I think this may be a bit too fine-grained. Fixed-features would be
 good, but is GPE block considered part of fixed features?
>>> See figure 4-12 in ACPI 6.1: GPE{0,1} are included there, and the
>>> text ahead of this makes it pretty clear that altogether they're
>>> being called fixed hardware register blocks. So if you consider FF
>>> misleading, FHRB would be another option.
>> Please can we also consider a naming appropriate for joint use with
>> HVM guests as well.
>>
>> For PVH, (if enabled), Xen handles all (implemented) fixed function
>> registers.
>>
>> For HVM, Xen already intercepts and interposes on the PM1a_STS and
>> PM1a_EN registers heading towards qemu, for the apparent purpose of
>> raising SCIs on behalf of qemu.
>>
>> When we want to enable ACPI vcpu hotplug for HVM guests, 
> What do you mean by "when"? We *are* doing ACPI hotplug for HVM guests,
> aren't we?

Are we?  If so, how?

I don't see any toolstack or qemu code able to cope with ACPI CPU
hotplug.  I can definitely see ACPI PCI hotplug in qemu, but that does
make sense.

> Or are you thinking about moving this functionality to the hypervisor?

As an aside, we need to move some part of PCI hotplug into the
hypervisor longterm.  At the moment, any new entity coming along and
attaching to an ioreq server still needs to negotiate with Qemu to make
the device appear.  This is awkward but doable if all device models are
in dom0, but is far harder if the device models are in different domains.

As for CPU hotplug, (if I have indeed overlooked something), Qemu has no
business in this matter.  The device model exists to be an
implementation of an LPC bridge, and is not responsible for any CPU
related functionality; Xen does all vcpu handling.


The Xen project and community have had a very rich history of hacking
things up in the past, and frankly, it shows.  I want to ensure that
development progresses in an architecturally clean and appropriate
direction, especially if this enables us to remove some of the duct tape
holding pre-existing features together.

~Andrew



Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-15 Thread Boris Ostrovsky
On 11/15/2016 02:19 PM, Andrew Cooper wrote:
> On 15/11/16 15:56, Jan Beulich wrote:
> On 15.11.16 at 16:44,  wrote:
>>> On 11/15/2016 10:17 AM, Jan Beulich wrote:
> The other option was XEN_X86_EMU_ACPI. Would it be better?
 As that's a little too wide (and I think someone else had also
 disliked it for that reason), how about XEN_X86_EMU_ACPI_FF
 (for "fixed features"), or if that's still too wide, break things up
 (PM1a, PM1b, PM2, TMR, GPE0, GPE1)?
>>> I think this may be a bit too fine-grained. Fixed-features would be
>>> good, but is GPE block considered part of fixed features?
>> See figure 4-12 in ACPI 6.1: GPE{0,1} are included there, and the
>> text ahead of this makes it pretty clear that altogether they're
>> being called fixed hardware register blocks. So if you consider FF
>> misleading, FHRB would be another option.
> Please can we also consider a naming appropriate for joint use with
> HVM guests as well.
>
> For PVH, (if enabled), Xen handles all (implemented) fixed function
> registers.
>
> For HVM, Xen already intercepts and interposes on the PM1a_STS and
> PM1a_EN registers heading towards qemu, for the apparent purpose of
> raising SCIs on behalf of qemu.
>
> When we want to enable ACPI vcpu hotplug for HVM guests, 

What do you mean by "when"? We *are* doing ACPI hotplug for HVM guests,
aren't we?

Or are you thinking about moving this functionality to the hypervisor?

-boris

> Xen will have
> to maintain the current intercept behaviour for qemu, but also take on
> the PM1b role entirely.
>
> ~Andrew




Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-15 Thread Andrew Cooper
On 15/11/16 15:56, Jan Beulich wrote:
 On 15.11.16 at 16:44,  wrote:
>> On 11/15/2016 10:17 AM, Jan Beulich wrote:
 The other option was XEN_X86_EMU_ACPI. Would it be better?
>>> As that's a little too wide (and I think someone else had also
>>> disliked it for that reason), how about XEN_X86_EMU_ACPI_FF
>>> (for "fixed features"), or if that's still too wide, break things up
>>> (PM1a, PM1b, PM2, TMR, GPE0, GPE1)?
>> I think this may be a bit too fine-grained. Fixed-features would be
>> good, but is GPE block considered part of fixed features?
> See figure 4-12 in ACPI 6.1: GPE{0,1} are included there, and the
> text ahead of this makes it pretty clear that altogether they're
> being called fixed hardware register blocks. So if you consider FF
> misleading, FHRB would be another option.

Please can we also consider a naming appropriate for joint use with
HVM guests as well.

For PVH, (if enabled), Xen handles all (implemented) fixed function
registers.

For HVM, Xen already intercepts and interposes on the PM1a_STS and
PM1a_EN registers heading towards qemu, for the apparent purpose of
raising SCIs on behalf of qemu.

When we want to enable ACPI vcpu hotplug for HVM guests, Xen will have
to maintain the current intercept behaviour for qemu, but also take on
the PM1b role entirely.

~Andrew



Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-15 Thread Jan Beulich
>>> On 15.11.16 at 16:44,  wrote:
> On 11/15/2016 10:17 AM, Jan Beulich wrote:
>>> The other option was XEN_X86_EMU_ACPI. Would it be better?
>> As that's a little too wide (and I think someone else had also
>> disliked it for that reason), how about XEN_X86_EMU_ACPI_FF
>> (for "fixed features"), or if that's still too wide, break things up
>> (PM1a, PM1b, PM2, TMR, GPE0, GPE1)?
> 
> I think this may be a bit too fine-grained. Fixed-features would be
> good, but is GPE block considered part of fixed features?

See figure 4-12 in ACPI 6.1: GPE{0,1} are included there, and the
text ahead of this makes it pretty clear that altogether they're
being called fixed hardware register blocks. So if you consider FF
misleading, FHRB would be another option.

Jan




Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-15 Thread Boris Ostrovsky
On 11/15/2016 10:17 AM, Jan Beulich wrote:
>> The other option was XEN_X86_EMU_ACPI. Would it be better?
> As that's a little too wide (and I think someone else had also
> disliked it for that reason), how about XEN_X86_EMU_ACPI_FF
> (for "fixed features"), or if that's still too wide, break things up
> (PM1a, PM1b, PM2, TMR, GPE0, GPE1)?

I think this may be a bit too fine-grained. Fixed-features would be
good, but is GPE block considered part of fixed features?

-boris




Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-15 Thread Jan Beulich
>>> On 15.11.16 at 15:55,  wrote:
> On 11/15/2016 04:24 AM, Jan Beulich wrote:
> On 09.11.16 at 15:39,  wrote:
>>> --- a/xen/arch/x86/hvm/ioreq.c
>>> +++ b/xen/arch/x86/hvm/ioreq.c
>>> @@ -1383,6 +1383,78 @@ static int hvm_access_cf8(
>>>  static int acpi_ioaccess(
>>>  int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>>>  {
>>> +unsigned int i;
>>> +unsigned int bits = bytes * 8;
>>> +unsigned int idx = port & 3;
>>> +uint8_t *reg = NULL;
>>> +bool is_cpu_map = false;
>>> +struct domain *currd = current->domain;
>>> +
>>> +BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
>>> + (ACPI_GPE0_BLK_LEN_V1 != 4));
>>> +
>>> +if ( has_ioreq_cpuhp(currd) )
>>> +return X86EMUL_UNHANDLEABLE;
>> Hmm, so it seems you indeed mean the flag to have the inverse sense
>> of what I would have expected, presumably in order for HVM guests
>> to continue to have all emulation flags set. I think that's a little
>> unfortunate, or at least the name of flag and predicate are somewhat
>> misleading (as there's no specific CPU hotplug related ioreq).
> 
> The other option was XEN_X86_EMU_ACPI. Would it be better?

As that's a little too wide (and I think someone else had also
disliked it for that reason), how about XEN_X86_EMU_ACPI_FF
(for "fixed features"), or if that's still too wide, break things up
(PM1a, PM1b, PM2, TMR, GPE0, GPE1)?

Jan




Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-15 Thread Boris Ostrovsky
On 11/15/2016 04:24 AM, Jan Beulich wrote:
 On 09.11.16 at 15:39,  wrote:
>> --- a/xen/arch/x86/hvm/ioreq.c
>> +++ b/xen/arch/x86/hvm/ioreq.c
>> @@ -1383,6 +1383,78 @@ static int hvm_access_cf8(
>>  static int acpi_ioaccess(
>>  int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>>  {
>> +unsigned int i;
>> +unsigned int bits = bytes * 8;
>> +unsigned int idx = port & 3;
>> +uint8_t *reg = NULL;
>> +bool is_cpu_map = false;
>> +struct domain *currd = current->domain;
>> +
>> +BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
>> + (ACPI_GPE0_BLK_LEN_V1 != 4));
>> +
>> +if ( has_ioreq_cpuhp(currd) )
>> +return X86EMUL_UNHANDLEABLE;
> Hmm, so it seems you indeed mean the flag to have the inverse sense
> of what I would have expected, presumably in order for HVM guests
> to continue to have all emulation flags set. I think that's a little
> unfortunate, or at least the name of flag and predicate are somewhat
> misleading (as there's no specific CPU hotplug related ioreq).

The other option was XEN_X86_EMU_ACPI. Would it be better?

>
>> +if ( is_cpu_map )
>> +{
>> +unsigned int first_bit, last_bit;
>> +
>> +first_bit = (port - ACPI_CPU_MAP) * 8;
>> +last_bit = min(currd->arch.avail_vcpus, first_bit + bits);
>> +for ( i = first_bit; i < last_bit; i++ )
>> +*val |= (1U << (i - first_bit));
>> +}
>> +else
>> +memcpy(val, &reg[idx], bytes);
>> +}
>> +else
>> +{
>> +if ( is_cpu_map )
>> +/*
>> + * CPU map is only read by DSDT's PRSC method and should never
>> + * be written by a guest.
>> + */
>> +return X86EMUL_UNHANDLEABLE;
>> +
>> +/* Write either status or enable register. */
>> +if ( (bytes > 2) || ((bytes == 2) && (port & 1)) )
>> +return X86EMUL_UNHANDLEABLE;
>> +
>> +if ( idx < 2 ) /* status, write 1 to clear. */
>> +{
>> +reg[idx] &= ~(*val & 0xff);
>> +if ( bytes == 2 )
>> +reg[idx + 1] &= ~((*val >> 8) & 0xff);
>> +}
>> +else   /* enable */
>> +memcpy(&reg[idx], val, bytes);
>> +}
> Overall - how does this implementation match up with the following
> requirements from the spec:
>
> ● Reserved or unimplemented bits always return zero (control or enable).
> ● Writes to reserved or unimplemented bits have no effect.
>
> To me it looks as if at this point all bits are reserved/unimplemented.

We do have one bit that we need --- bit 2 of GPE --- but the rest indeed
looks like it is unused. I'll check whether there are any required bits
to be supported, and if not, only checking for bit 2 will make things
slightly simpler here.

-boris




Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-15 Thread Jan Beulich
>>> On 09.11.16 at 15:39,  wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1383,6 +1383,78 @@ static int hvm_access_cf8(
>  static int acpi_ioaccess(
>  int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>  {
> +unsigned int i;
> +unsigned int bits = bytes * 8;
> +unsigned int idx = port & 3;
> +uint8_t *reg = NULL;
> +bool is_cpu_map = false;
> +struct domain *currd = current->domain;
> +
> +BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
> + (ACPI_GPE0_BLK_LEN_V1 != 4));
> +
> +if ( has_ioreq_cpuhp(currd) )
> +return X86EMUL_UNHANDLEABLE;

Hmm, so it seems you indeed mean the flag to have the inverse sense
of what I would have expected, presumably in order for HVM guests
to continue to have all emulation flags set. I think that's a little
unfortunate, or at least the name of flag and predicate are somewhat
misleading (as there's no specific CPU hotplug related ioreq).

In any event, with the handler getting registered only when 
!has_ioreq_cpuhp() I think this should be an ASSERT() - iirc the
emulation flags can be set only once at domain creation time.
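
In case it helps, a minimal sketch of that suggestion, assuming the
handler really is registered only when !has_ioreq_cpuhp(), with "currd"
as in the patch:

    /* Emulation flags are fixed at domain creation and the handler is
     * only registered for !has_ioreq_cpuhp() domains, so this should
     * never fire. */
    ASSERT(!has_ioreq_cpuhp(currd));

in place of the early return above.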

> +switch (port)
> +{
> +case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
> +(ACPI_PM1A_EVT_BLK_ADDRESS_V1 + ACPI_PM1A_EVT_BLK_LEN - 1):

Please align the opening parenthesis with the respective expression
on the previous line (and really the parentheses aren't needed here,
so you could just align the two starting A-s).
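
As a sketch, the label would then read:

    case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
         ACPI_PM1A_EVT_BLK_ADDRESS_V1 + ACPI_PM1A_EVT_BLK_LEN - 1:

i.e. parentheses dropped and the two starting A-s lined up.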

> +reg = currd->arch.hvm_domain.acpi_io.pm1a;
> +break;
> +case ACPI_GPE0_BLK_ADDRESS_V1 ...
> +(ACPI_GPE0_BLK_ADDRESS_V1 + ACPI_GPE0_BLK_LEN_V1 - 1):
> +reg = currd->arch.hvm_domain.acpi_io.gpe;
> +break;
> +case ACPI_CPU_MAP ... (ACPI_CPU_MAP + ACPI_CPU_MAP_LEN - 1):
> +is_cpu_map = true;
> +break;
> +default:
> +return X86EMUL_UNHANDLEABLE;
> +}

Blank lines between non-fall-through case statements please.

> +if ( bytes == 0 )
> +return X86EMUL_OKAY;
> +
> +if ( dir == IOREQ_READ )
> +{
> +*val &= ~((1U << bits) - 1);

Undefined behavior for bits == 32.
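
A minimal sketch of one way to avoid the out-of-range shift would be to
special-case a full-width access:

    uint32_t mask = bits < 32 ? (1U << bits) - 1 : ~0U;

    *val &= ~mask;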

> +if ( is_cpu_map )
> +{
> +unsigned int first_bit, last_bit;
> +
> +first_bit = (port - ACPI_CPU_MAP) * 8;
> +last_bit = min(currd->arch.avail_vcpus, first_bit + bits);
> +for ( i = first_bit; i < last_bit; i++ )
> +*val |= (1U << (i - first_bit));
> +}
> +else
> +memcpy(val, &reg[idx], bytes);
> +}
> +else
> +{
> +if ( is_cpu_map )
> +/*
> + * CPU map is only read by DSDT's PRSC method and should never
> + * be written by a guest.
> + */
> +return X86EMUL_UNHANDLEABLE;
> +
> +/* Write either status or enable register. */
> +if ( (bytes > 2) || ((bytes == 2) && (port & 1)) )
> +return X86EMUL_UNHANDLEABLE;
> +
> +if ( idx < 2 ) /* status, write 1 to clear. */
> +{
> +reg[idx] &= ~(*val & 0xff);
> +if ( bytes == 2 )
> +reg[idx + 1] &= ~((*val >> 8) & 0xff);
> +}
> +else   /* enable */
> +memcpy(&reg[idx], val, bytes);
> +}

Overall - how does this implementation match up with the following
requirements from the spec:

● Reserved or unimplemented bits always return zero (control or enable).
● Writes to reserved or unimplemented bits have no effect.

To me it looks as if at this point all bits are reserved/unimplemented.
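
A sketch of one way those requirements could be met, assuming, as Boris
notes in his reply, that only bit 2 of the GPE0 block is needed; the
mask values here are hypothetical and not part of the patch:

    /* Hypothetical per-byte masks of implemented GPE0 bits: bit 2 of
     * the first status byte and of the first enable byte.  Reserved
     * bits then read as zero and ignore writes. */
    static const uint8_t gpe0_impl[4] = { 0x04, 0x00, 0x04, 0x00 };

    if ( idx < 2 )                /* status: write 1 to clear */
        reg[idx] &= ~(*val & gpe0_impl[idx]);
    else                          /* enable: implemented bits only */
        reg[idx] = *val & gpe0_impl[idx];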

>  return X86EMUL_OKAY;

So regarding my comment on the previous patch: That one should
fail the call unconditionally, and the one here should switch to
returning success.

Jan



Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-11 Thread Konrad Rzeszutek Wilk
On Wed, Nov 09, 2016 at 09:39:56AM -0500, Boris Ostrovsky wrote:
> Signed-off-by: Boris Ostrovsky 
> ---
> CC: Paul Durrant 
> ---
> Changes in v2:
> * Use 'true/false' values for bools
> 
> 
>  xen/arch/x86/hvm/ioreq.c | 72 
> 
>  1 file changed, 72 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index e6ff48f..3ef01cf 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1383,6 +1383,78 @@ static int hvm_access_cf8(
>  static int acpi_ioaccess(
>  int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>  {
> +unsigned int i;
> +unsigned int bits = bytes * 8;
> +unsigned int idx = port & 3;
> +uint8_t *reg = NULL;
> +bool is_cpu_map = false;
> +struct domain *currd = current->domain;
> +
> +BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
> + (ACPI_GPE0_BLK_LEN_V1 != 4));
> +
> +if ( has_ioreq_cpuhp(currd) )
> +return X86EMUL_UNHANDLEABLE;
> +
> +switch (port)

Spaces around 'port'
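
I.e., per the usual Xen coding style:

    switch ( port )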

otherwise Reviewed-by: Konrad Rzeszutek Wilk 



Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests

2016-11-09 Thread Paul Durrant
> -Original Message-
> From: Boris Ostrovsky [mailto:boris.ostrov...@oracle.com]
> Sent: 09 November 2016 14:40
> To: xen-devel@lists.xen.org
> Cc: jbeul...@suse.com; Andrew Cooper ;
> Wei Liu ; Ian Jackson ; Roger
> Pau Monne ; Boris Ostrovsky
> ; Paul Durrant 
> Subject: [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests
> 
> Signed-off-by: Boris Ostrovsky 
> ---
> CC: Paul Durrant 

Reviewed-by: Paul Durrant 

> ---
> Changes in v2:
> * Use 'true/false' values for bools
> 
> 
>  xen/arch/x86/hvm/ioreq.c | 72
> 
>  1 file changed, 72 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index e6ff48f..3ef01cf 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1383,6 +1383,78 @@ static int hvm_access_cf8(
>  static int acpi_ioaccess(
>  int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>  {
> +unsigned int i;
> +unsigned int bits = bytes * 8;
> +unsigned int idx = port & 3;
> +uint8_t *reg = NULL;
> +bool is_cpu_map = false;
> +struct domain *currd = current->domain;
> +
> +BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
> + (ACPI_GPE0_BLK_LEN_V1 != 4));
> +
> +if ( has_ioreq_cpuhp(currd) )
> +return X86EMUL_UNHANDLEABLE;
> +
> +switch (port)
> +{
> +case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
> +(ACPI_PM1A_EVT_BLK_ADDRESS_V1 + ACPI_PM1A_EVT_BLK_LEN - 1):
> +reg = currd->arch.hvm_domain.acpi_io.pm1a;
> +break;
> +case ACPI_GPE0_BLK_ADDRESS_V1 ...
> +(ACPI_GPE0_BLK_ADDRESS_V1 + ACPI_GPE0_BLK_LEN_V1 - 1):
> +reg = currd->arch.hvm_domain.acpi_io.gpe;
> +break;
> +case ACPI_CPU_MAP ... (ACPI_CPU_MAP + ACPI_CPU_MAP_LEN - 1):
> +is_cpu_map = true;
> +break;
> +default:
> +return X86EMUL_UNHANDLEABLE;
> +}
> +
> +if ( bytes == 0 )
> +return X86EMUL_OKAY;
> +
> +if ( dir == IOREQ_READ )
> +{
> +*val &= ~((1U << bits) - 1);
> +
> +if ( is_cpu_map )
> +{
> +unsigned int first_bit, last_bit;
> +
> +first_bit = (port - ACPI_CPU_MAP) * 8;
> +last_bit = min(currd->arch.avail_vcpus, first_bit + bits);
> +for ( i = first_bit; i < last_bit; i++ )
> +*val |= (1U << (i - first_bit));
> +}
> +else
> +memcpy(val, &reg[idx], bytes);
> +}
> +else
> +{
> +if ( is_cpu_map )
> +/*
> + * CPU map is only read by DSDT's PRSC method and should never
> + * be written by a guest.
> + */
> +return X86EMUL_UNHANDLEABLE;
> +
> +/* Write either status or enable register. */
> +if ( (bytes > 2) || ((bytes == 2) && (port & 1)) )
> +return X86EMUL_UNHANDLEABLE;
> +
> +if ( idx < 2 ) /* status, write 1 to clear. */
> +{
> +reg[idx] &= ~(*val & 0xff);
> +if ( bytes == 2 )
> +reg[idx + 1] &= ~((*val >> 8) & 0xff);
> +}
> +else   /* enable */
> +memcpy(&reg[idx], val, bytes);
> +}
> +
>  return X86EMUL_OKAY;
>  }
> 
> --
> 2.7.4
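
For illustration only, a self-contained model of the CPU-map read path
in the patch above (the standalone framing and names are mine, not part
of the series):

    #include <stdint.h>
    #include <stdio.h>

    /* Models the IOREQ_READ branch for ACPI_CPU_MAP: each available
     * vCPU shows up as a set bit in the returned map. */
    static uint32_t cpu_map_read(unsigned int avail_vcpus,
                                 unsigned int offset, unsigned int bytes)
    {
        unsigned int first_bit = offset * 8;
        unsigned int last_bit = first_bit + bytes * 8;
        uint32_t val = 0;
        unsigned int i;

        if ( last_bit > avail_vcpus )
            last_bit = avail_vcpus;
        for ( i = first_bit; i < last_bit; i++ )
            val |= 1U << (i - first_bit);

        return val;
    }

    int main(void)
    {
        /* 6 available vCPUs, 1-byte read at map offset 0 -> 0x3f. */
        printf("%#x\n", cpu_map_read(6, 0, 1));
        return 0;
    }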

