On Thu, 10 Oct 2019 16:20:39 -0300
Eduardo Habkost <ehabk...@redhat.com> wrote:

> On Thu, Oct 10, 2019 at 05:57:54PM +0200, Igor Mammedov wrote:
> > On Thu, 10 Oct 2019 09:59:42 -0400
> > "Michael S. Tsirkin" <m...@redhat.com> wrote:
> >   
> > > On Thu, Oct 10, 2019 at 03:39:12PM +0200, Igor Mammedov wrote:  
> > > > On Thu, 10 Oct 2019 05:56:55 -0400
> > > > "Michael S. Tsirkin" <m...@redhat.com> wrote:
> > > >   
> > > > > On Wed, Oct 09, 2019 at 09:22:49AM -0400, Igor Mammedov wrote:  
> > > > > > As an alternative to passing topology info to firmware via new
> > > > > > fwcfg files, so it could recreate APIC IDs based on it and the
> > > > > > order CPUs are enumerated in, extend the CPU hotplug interface
> > > > > > to return the APIC ID as the response to the new command
> > > > > > CPHP_GET_CPU_ID_CMD.
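For concreteness, the select/command/read sequence such a command implies can be mocked as below. The offsets in the comments and the command number (3) are illustrative assumptions for this sketch, not the actual register layout of the QEMU CPU hotplug block:

```python
# Minimal mock of a select/command/read register-block protocol like the
# CPU hotplug one discussed above.  Offsets and the command value are
# illustrative assumptions, not the real hardware layout.

class CpuHotplugRegs:
    """Models the guest-visible side: write a CPU selector, write a
    command, then read the command data back."""

    GET_CPU_ID = 3  # hypothetical command number for illustration

    def __init__(self, apic_ids):
        # cpu_selector -> APIC ID: the mapping firmware needs to recover
        self.apic_ids = apic_ids
        self.selector = 0
        self.command = None

    def write_selector(self, value):
        # e.g. a 32-bit write at the selector offset
        self.selector = value

    def write_command(self, cmd):
        # e.g. a byte write at the command offset
        self.command = cmd

    def read_command_data(self):
        # e.g. a 32-bit read of the command-data field
        if self.command == self.GET_CPU_ID:
            return self.apic_ids[self.selector]
        raise NotImplementedError("other commands not mocked")


# Firmware-side usage: enumerate selectors 0..N-1 and recover the
# (possibly sparse) APIC IDs without any static topology knowledge.
regs = CpuHotplugRegs({0: 0, 1: 1, 2: 4, 3: 5})
ids = []
for sel in range(4):
    regs.write_selector(sel)
    regs.write_command(CpuHotplugRegs.GET_CPU_ID)
    ids.append(regs.read_command_data())
print(ids)  # [0, 1, 4, 5]
```

The point of the sketch is only the protocol shape: because the selector is what firmware iterates over, one extra command suffices to hand it the selector-to-APIC-ID mapping.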
> > > > > 
> > > > > One big piece missing here is motivation:  
> > > > I thought the only willing reader was Laszlo (who is aware of the
> > > > context), so I skipped the details and confused others :/
> > > >   
> > > > > Who's going to use this interface?  
> > > > In its current state it's for firmware, since ACPI tables can cheat
> > > > by having APIC IDs statically built in.
> > > > 
> > > > If we were creating CPU objects in ACPI dynamically
> > > > we would be using this command as well.  
> > > 
> > > I'm not sure how it's even possible to create devices dynamically. Well
> > > I guess it's possible with LoadTable. Is this what you had in
> > > mind?  
> > 
> > Yep. I've even played with this shiny toy and I can say it's a very
> > tempting one. On the other hand, even putting aside the problem of
> > legacy OSes not working with it, it's hard to debug and reproduce
> > issues compared to static tables.
> > So from a maintenance PoV I dislike it enough to be against it.
> > 
> >   
> > > > It would save us quite a bit of space in the ACPI blob, but it
> > > > would be a pain to debug and diagnose problems in the ACPI
> > > > tables, so I'd rather stay with static CPU descriptions in the
> > > > ACPI tables for the sake of maintenance.
> > > > > So far CPU hotplug was used by the ACPI, so we didn't
> > > > > really commit to a fixed interface too strongly.
> > > > > 
> > > > > Is this a replacement to Laszlo's fw cfg interface?
> > > > > If yes, is the idea that OVMF is going to depend on CPU hotplug
> > > > > directly then?
> > > > > It does not depend on it now, does it?
> > > > It doesn't, but then it doesn't support CPU hotplug either.
> > > > OVMF (SMM) needs to cooperate with QEMU *and* ACPI tables to
> > > > perform the task, and using the same interface/code path between
> > > > all involved parties makes the task easier, with the least amount
> > > > of duplicated interfaces, and more robust.
> > > > 
> > > > Re-implementing an alternative interface for firmware (fwcfg or
> > > > whatnot) would work as well, but it's only a question of time
> > > > before ACPI and this new interface disagree on how the world works
> > > > and the process falls apart.
> > > 
> > > Then we should consider switching ACPI to use fw_cfg.
> > > Or build another interface that can scale.
> > 
> > Could be an option; it would be a pain to write a driver in AML for
> > fwcfg access, though.
> > (I looked at the possibility of accessing fwcfg from AML about a year
> > ago and gave up. I'm definitely not volunteering for a second attempt
> > and can't even estimate if it's a viable approach.)
> > 
> > But what scaling issue are you talking about, exactly?
> > With the current CPU hotplug interface we can handle up to UINT32_MAX
> > CPUs, and extend the interface without needing to increase the IO
> > window we are using now.
> > 
> > Granted, IO access is not the fastest compared to fwcfg in DMA mode,
> > but we are already doing a stop-machine when switching to SMM, which
> > is orders of magnitude slower.
> > The consensus was to compromise on the speed of CPU hotplug versus a
> > more complex and more problematic unicast SMM mode in OVMF (I can't
> > find the particular email, but we have already discussed it with
> > Laszlo, when I considered ways to optimize hotplug speed).
> 
> If we were designing the interface from the ground up, I would
> agree with Michael.  But I don't see why we would reimplement
> everything from scratch now, if just providing the
> cpu_selector => cpu_hardware_id mapping to firmware is enough to
> make the existing interface work.
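To illustrate why the cpu_selector => cpu_hardware_id mapping is needed at all on x86: APIC IDs encode the topology in bit fields sized to the next power of two, so they are sparse whenever a count (cores per socket, threads per core, ...) is not a power of two, and firmware cannot simply equate the enumeration index with the APIC ID. A sketch of the standard construction, using the usual field-width rule (bits needed to encode 0..count-1):

```python
# Standard x86-style APIC ID construction: each topology level gets a
# bit field wide enough for its count, rounded up to whole bits.

def bits_for(count):
    # number of bits needed to encode values 0..count-1 (0 when count == 1)
    return max(0, (count - 1).bit_length())

def apic_id(socket, core, thread, cores, threads):
    # pack thread in the low bits, then core, then socket
    return (socket << (bits_for(cores) + bits_for(threads))
            | core << bits_for(threads)
            | thread)

# 2 sockets x 3 cores x 1 thread: 3 cores need 2 bits, so APIC ID 3 is
# unused and enumeration index diverges from APIC ID.
ids = [apic_id(s, c, 0, 3, 1) for s in range(2) for c in range(3)]
print(ids)  # [0, 1, 2, 4, 5, 6]
```

With such holes in the ID space, either firmware re-derives the packing rules from topology counts, or (as in this series) it just asks QEMU for the ID per selector.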
> 
> If somebody is really unhappy with the current interface and
> wants to implement a new purely fw_cfg-based one (and write the
> corresponding ACPI code), they would be welcome.  I just don't
> see why we should spend our time doing that now.

Right, we can give fwcfg a shot the next time we try to allocate a
new register block for a new PV interface, assuming it suits the
interface requirements.
