On 03/31/16 23:10, Mahan, Patrick wrote:
> Laszlo,
> 
> What is FLR, I must admit my ignorance to that acronym.

Function Level Reset.

https://en.wikipedia.org/wiki/X86_virtualization#FLR

> I'm not fully up on using a VM as s development environment.  Does the NIC 
> need to have it's linux driver loaded
> on the linux hypervisor?  Or can the VM access the PCIe directly?

On the host, the native driver must not be bound to the device you
intend to assign. Either the driver module should be blacklisted, or the
device should be claimed by the pci-stub driver first. (You can read
more about this in the blog posts I'll link below.)
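As a rough sketch, claiming the device with pci-stub looks like this.
(The vendor:device ID 8086:10d3 and the PCI address 0000:03:00.0 are
placeholders; substitute whatever "lspci -nn" reports for your NIC.)

```shell
# Load the stub driver, teach it the NIC's vendor:device ID,
# detach the native driver, then bind the stub.
modprobe pci-stub
echo "8086 10d3" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo 0000:03:00.0 > /sys/bus/pci/drivers/pci-stub/bind
```

These commands need root and only take effect until the next reboot;
the blog posts below cover making the binding persistent.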

> Can VirtualBox be used (I already have it for another on going project), but 
> I can go to QEMU.

I've never used VirtualBox. As far as I know, this kind of device
assignment is unique to QEMU + KVM + VFIO.

> I am running Ubuntu 14.04.4 currently, though my kernel guy is telling me I 
> need to switch to Redhat :-).

I have zero experience with Ubuntu.

Personally, I would recommend starting with a fresh Fedora host.

My colleague Alex Williamson (VFIO maintainer in Linux and QEMU) has
written an extensive series of blog posts on device assignment (albeit
with a focus on GPUs, not NICs). Please see:

http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part-1-hardware.html
http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part-2.html
http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part-3-host.html
http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part-4-our-first.html
http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part-5-vga-mode.html

If you have further questions, the vfio-users mailing list is the place
to ask them:

https://www.redhat.com/mailman/listinfo/vfio-users

When assigning your NIC to the guest, it is possible to present your own
under-development UEFI driver to the guest, as if it had been flashed
into the NIC's ROM BAR. You can see an example here (look for
"/etc/fake/boot.bin"):

http://libvirt.org/formatdomain.html#elementsHostDev

  rom

    The rom element is used to change how a PCI device's ROM is
    presented to the guest. The optional bar attribute can be set to
    "on" or "off", and determines whether or not the device's ROM will
    be visible in the guest's memory map. (In PCI documentation, the
    "rombar" setting controls the presence of the Base Address Register
    for the ROM). If no rom bar is specified, the qemu default will be
    used (older versions of qemu used a default of "off", while newer
    qemus have a default of "on"). Since 0.9.7 (QEMU and KVM only). The
    optional file attribute contains an absolute path to a binary file
    to be presented to the guest as the device's ROM BIOS. This can be
    useful, for example, to provide a PXE boot ROM for a virtual
    function of an sr-iov capable ethernet device (which has no boot
    ROMs for the VFs). Since 0.9.10 (QEMU and KVM only).
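Putting that together, the hostdev element in the domain XML might look
like the following sketch. (The PCI address is a placeholder for your
NIC's host address; the /etc/fake/boot.bin path is the same example
used in the libvirt documentation.)

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
  <rom bar='on' file='/etc/fake/boot.bin'/>
</hostdev>
```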

This should allow you to quickly iterate between rebuilding the driver
on the host, and testing it with OVMF: the UEFI driver will be
automatically dispatched from the physical NIC's (faked) ROM BAR; you
won't have to copy the UEFI driver binary to a virtual disk image again
and again.
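If you run QEMU by hand rather than through libvirt, the equivalent
knob is the "romfile" property of the vfio-pci device. A sketch, with
placeholder paths and PCI address; note that the file should be a PCI
option ROM image wrapping your driver (BaseTools' EfiRom utility can
produce one from the .efi binary):

```shell
qemu-system-x86_64 \
  -machine q35,accel=kvm \
  -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd \
  -device vfio-pci,host=0000:03:00.0,romfile=MyNicDriver.rom
```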

Thanks
Laszlo

> 
> Thanks for the pointers,
> 
> Patrick
> 
> ________________________________________
> From: Laszlo Ersek <ler...@redhat.com>
> Sent: Thursday, March 31, 2016 12:34 PM
> To: Mahan, Patrick
> Cc: edk2-devel@lists.01.org; Alex Williamson
> Subject: Re: [edk2] Recommend platforms for doing Intel UEFI driver 
> development
> 
> On 03/31/16 20:52, Mahan, Patrick wrote:
>> I believe I have seen this mentioned here before, but I cannot find the 
>> answer via google or gmane.  But
>> there was a developer package that consisted of an Intel motherboard and 
>> software that allowed you to
>> do UEFI Driver development.  Not sure but I seem to recall it was some 
>> third-party.
>>
>> What are the recommend platforms for doing Intel 32/64 UEFI driver 
>> development.
>>
>> Note: I am currently working on a Dell 5810 running UEFI from AMI but I am 
>> having some issues like controlling
>> BDS, etc.
> 
> I don't expect the following to match your use case, but for
> completeness's sake, I'll recommend it anyway:
> 
> If the device that you are writing a UEFI driver for is a discrete
> PCI(e) card, then chances are it will work with virtualization and
> device assignment on x86_64 hosts running a fresh Linux kernel (KVM &
> VFIO) and QEMU.
> 
> This could allow you to develop and run your UEFI driver on top of OVMF,
> in a QEMU/KVM virtual machine. The benefits should be the usual ones of
> virtualization: if you crash or need to reboot for another reason, you
> can simply destroy and relaunch the VM, without power-cycling a physical
> box. (VFIO should handle the resetting of the assigned physical device
> for you, assuming the device supports FLR.) No need to worry about a
> physical machine's filesystem(s), and so on.
> 
> You could also hack OVMF's BDS (consisting of IntelFrameworkModulePkg's
> BdsDxe and OvmfPkg's PlatformBdsLib) any way you like.
> 
> Once you were pleased with the UEFI driver, you could even boot Linux or
> Windows in the guest, and test the OS-native drivers. In this case,
> simply forcing off and restarting the VM (if there was trouble) wouldn't
> do much good to your (virtual) Linux or Windows filesystems, but then
> again, you could snapshot the virtual disk in a known good state (with
> the VM not running), before you start experimenting. Then, if you have
> to force off the VM for any reason, you can quickly restore the virtual
> disk to the known-good snapshot.
> 
> Thanks
> Laszlo
> 

_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel