Hey all,

Yeah, I don’t disagree, Laszlo. While developing these patches, it was
pretty confusing to determine which parts of PlatformInitLib were
compatible with CloudHv and which parts weren’t.

I have no idea how much work would be involved in splitting out
cloud-hypervisor code. It would be good to hear input from the
cloud-hypervisor maintainers on this.

Best,
Thomas

On Mon, Jan 15, 2024 at 9:52 AM Ard Biesheuvel <a...@kernel.org> wrote:

> On Mon, 15 Jan 2024 at 18:40, Laszlo Ersek <ler...@redhat.com> wrote:
> >
> > On 1/12/24 19:31, Thomas Barrett wrote:
> > > This series adds support for cloud-hypervisor guests with >1TiB
> > > of RAM to OVMF. The same bug was fixed for QEMU back in 2017
> > > (https://bugzilla.redhat.com/show_bug.cgi?id=1468526), but it is
> > > still occurring for CloudHv guests because the PlatformScanE820
> > > utility does not currently support CloudHv.
> > >
> > > My working branch for these changes can be found here:
> > > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
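> > >
> > > For reference, the shape of the change is a hypervisor dispatch at
> > > the top of the PlatformScanE820 utility. A rough sketch only, not
> > > the literal patch (CloudHvScanMemmap and QemuFwCfgScanE820 are
> > > illustrative placeholder names, not functions in the tree):
> > >
> > >   STATIC
> > >   EFI_STATUS
> > >   PlatformScanE820 (
> > >     IN      E820_SCAN_CALLBACK     Callback,
> > >     IN OUT  EFI_HOB_PLATFORM_INFO  *PlatformInfoHob
> > >     )
> > >   {
> > >     if (PlatformInfoHob->HostBridgeDevId == CLOUDHV_DEVICE_ID) {
> > >       //
> > >       // CloudHv does not provide QEMU's fw_cfg "etc/e820" file, so
> > >       // walk the memory map the VMM hands to the guest, feeding
> > >       // each entry to Callback (placeholder helper).
> > >       //
> > >       return CloudHvScanMemmap (Callback, PlatformInfoHob);
> > >     }
> > >
> > >     //
> > >     // QEMU path: iterate over the E820 entries read via fw_cfg
> > >     // (placeholder helper standing in for the existing logic).
> > >     //
> > >     return QemuFwCfgScanE820 (Callback, PlatformInfoHob);
> > >   }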
> > >
> > > Cc: Anatol Belski <anbel...@linux.microsoft.com>
> > > Cc: Ard Biesheuvel <ardb+tianoc...@kernel.org>
> > > Cc: Gerd Hoffmann <kra...@redhat.com>
> > > Cc: Jianyong Wu <jianyong...@arm.com>
> > > Cc: Jiewen Yao <jiewen....@intel.com>
> > > Cc: Laszlo Ersek <ler...@redhat.com>
> > > Cc: Rob Bradford <rbradf...@rivosinc.com>
> > >
> > > Thomas Barrett (3):
> > >   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> > >   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> > >   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> > >
> > >  OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
> > >  OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
> > >  2 files changed, 79 insertions(+), 30 deletions(-)
> > >
> >
> > Thanks for posting v3, this one looks well-formed, so I can make some
> > comments.
> >
> > TBH, I'm super uncomfortable with a bunch of new CLOUDHV_DEVICE_ID
> > branches introduced to PlatformInitLib.
> >
> > OVMF supports multiple hypervisors, and the general idea has been that a
> > single module (or even a single platform DSC) should preferably not
> > attempt to support multiple hypervisors. That's why we have a separate
> > Xen platform, and a large number of Xen-customized, standalone modules.
> >
> > The idea with this is that any particular developer is very unlikely to
> > develop for, and test on, multiple hypervisors. Therefore unification (=
> > elimination of all possible code duplication) between distinct
> > hypervisor code snippets is actually detrimental to maintenance,
> > because it technically enables a developer to regress a platform that
> > they never actively work with.
> >
> > I believe bhyve is similarly separated out (just like Xen).
> >
> > It's one thing to collect support for multiple board types (machine
> > types) for QEMU into a given module; that's still the same hypervisor --
> > testing only requires a different "-M" option on the qemu command line.
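> >
> > For example, the same OVMF binary can be exercised on both boards;
> > only the machine option changes (other flags elided):
> >
> >   qemu-system-x86_64 -M pc  -pflash OVMF.fd ...
> >   qemu-system-x86_64 -M q35 -pflash OVMF.fd ...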
> >
> > But fusing Cloud Hypervisor code with QEMU code makes me fidget in my
> > seat.
> >
> > I've now grepped the OvmfPkg directory tree for existing instances of
> > CLOUDHV_DEVICE_ID, and I very much dislike the result list. It has
> > seeped into a whole bunch of modules that should be QEMU-only. If you
> > need those modules customized for CloudHv, it'd be best to detach them
> > for CloudHv, eliminate the dead code (such as anything that depends
> > on QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
> > Both communities will benefit. Again, this is all based on the empirical
> > fact that QEMU and CloudHv developers don't tend to cross-develop.
> >
> > I understand that it's always just a small addition; in each specific
> > case, it's just one or two more "if"s in common code. But the end result
> > is terrible to maintain in the long term.
> >
> > Of course this all depends on having a separate platform DSC for
> > CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".
> >
> > So I'd suggest (a) isolating the current CloudHv logic into new library
> > instances, drivers, etc., and (b) adding this enhancement to CloudHv's
> > own instance of PlatformInitLib.
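> >
> > Roughly, and with an illustrative placeholder name (CloudHvScanMemmap
> > is not a real function in the tree), CloudHv's own instance of
> > PlatformScanE820 could reduce to the following, with
> > "CloudHv/CloudHvX64.dsc" resolving PlatformInitLib to that instance
> > in its [LibraryClasses] section:
> >
> >   EFI_STATUS
> >   PlatformScanE820 (
> >     IN      E820_SCAN_CALLBACK     Callback,
> >     IN OUT  EFI_HOB_PLATFORM_INFO  *PlatformInfoHob
> >     )
> >   {
> >     //
> >     // No fw_cfg fallback, no QEMU dead code; *enforce* that the
> >     // underlying platform really is CloudHv.
> >     //
> >     ASSERT (PlatformInfoHob->HostBridgeDevId == CLOUDHV_DEVICE_ID);
> >
> >     return CloudHvScanMemmap (Callback, PlatformInfoHob);
> >   }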
> >
> > Counter-arguments, objections etc welcome -- feel free to prove me wrong
> > (e.g. whatever I'm saying about prior art / status quo).
> >
> > Also I'm not necessarily blocking this patch set; if others don't mind
> > this, they can ACK it (and I won't NACK).
> >
>
> Thanks Laszlo.
>
> I'm afraid I seem to have made your last point moot :-)
>
> But I agree with your points above: EDK2 makes the fork&tweak approach
> very easy, so CloudHv can keep its own versions of many of the QEMU
> specific glue libraries. It does, however, require a certain degree of
> hygiene on the part of the developer to introduce abstractions where
> possible, to avoid forking huge drivers or libraries for a 2-line
> delta between QEMU and CloudHv.
>
> So let's decree that future CloudHv contributions that follow this old
> pattern will not be considered until/unless there is some confidence
> on our part that there is a long-term plan in motion that cleans this
> all up and repays the accumulated technical debt.
>

