On 8/17/22 10:20 AM, Thompson, Corey via lists.yoctoproject.org wrote:
Thanks for the quick reply.  I guess your response highlights two things
that aren't entirely clear to me.


On Tue, Aug 16, 2022 at 10:17:26PM -0500, Mark Hatle wrote:
When using rel-v2022.1, we have only tested the configuration exactly as it
is defined.

First...

I'm struggling to work out how OpenEmbedded intended for runqemu to work
from a user standpoint, so bear with me on this one.  It seems to me
that some of these settings don't belong in a build script, but should
be configured by the user when they choose to launch QEMU.  By burning
these parameters into the build recipes, isn't everyone forced into a
particular configuration?

runqemu --help ....

The default "should work" for most people. Then you override the defaults by passing specific options.

Options for the rootfs, the kernel to boot, etc. come from the bitbake variables that (as far as I know) all start with QB_, and are collected in the tmp/deploy/images/qemuboot-.... file.
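For example, runqemu reads those QB_ settings back out of an INI-style conf file. A rough sketch of that parsing (the section and key names here are assumptions modeled on a typical generated qemuboot file, not taken from runqemu itself -- check your own deploy directory):

```python
# Hypothetical sketch of reading QB_* settings from a qemuboot conf
# file. Section name and keys are assumptions for illustration.
import configparser
import io

SAMPLE_CONF = """\
[config_bsp]
qb_mem = -m 256
qb_machine = -machine virt
qb_opt_append = -device qemu-xhci
"""

def load_qb_vars(text):
    """Parse an INI-style qemuboot conf and return the qb_* settings."""
    cp = configparser.ConfigParser()
    cp.read_file(io.StringIO(text))
    section = cp.sections()[0]
    return {k: v for k, v in cp.items(section) if k.startswith("qb_")}

qb = load_qb_vars(SAMPLE_CONF)
print(qb["qb_mem"])         # the value runqemu would pass as memory args
print(qb["qb_opt_append"])  # extra options appended to the command line
```

Anything you pass on the runqemu command line overrides what comes out of this file, which is why the defaults "just work" but remain adjustable.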

But having said that, I also don't understand why there are Xilinx
recipes overriding some of the OpenEmbedded recipes.  When I compare
qemu-xilinx-helper-native_1.0.bb with the qemu-helper-native_1.0.bb from
oe-core, the Xilinx version appears to do the same thing, having only
removed functionality.  What was wrong with qemu-helper-native_1.0.bb
that it needed replacement?

"official" QEMU runs a single instance of QEMU, that targets an individual 
board.

"xilinx" QEMU (fork) runs multiple copies of QEMU, that emulate various parts of the SoC/board. As a group the whole thing is being emulated.

As for the helper scripting, I wasn't directly involved in that work, so I'm not sure why the qemu-helper vs qemu-xilinx-helper split was invented. It's something I can look at in the future, but right now we're kind of stuck with how it's implemented.

On the "run" side, runqemu does a bunch of generic setup, and then calls various helper progrms. The qemu-xilinx-helper-native (and it's dependencies) provide scripting that the runqemu calls that translate the overall command line to split it and execute the multiple qemu components and then "connect" them together (with sockets if I remember right).

As a side note, I was wondering how qemu-oe-bridge-helper worked,
because in my experience qemu-bridge-helper typically needs the setuid
bit set in order to work.  I was considering simply updating my bbclass

runqemu, if it does something that needs root access, is supposed to try to execute the action using 'sudo'. If your user has unconditional sudo access (or is within the time limit of a previous sudo action), no password is required; otherwise sudo will prompt you or deny you.

Setting anything setuid is incorrect. Nothing within the Yocto Project should be that way. You should only ever have the automation run the runqemu helper components. For a user who does not (and never will) have root access, typically SLiRP is used, since that can all be emulated in userspace.

to specify the host's /usr/libexec/qemu-bridge-helper rather than
qemu-oe-bridge-helper, when I decided to actually look at
qemu-oe-bridge-helper and discovered that it's only a shell script to
find the host's qemu-bridge-helper path.  So now I'm even more at a loss
as to why meta-xilinx would override this recipe and inhibit the deployment
of this simple script.

I don't know why it's implemented the way it is. Either it was an oversight (not included) or it didn't work for some reason and was disabled. If I had to guess, probably the former. Unless we have a specific test case for an action, it's not going to get tested.

I personally use slirp and tun (default), but not tap.


However, if you look at "honister-next", you should see we have reworked
this. I use both tun and slirp regularly, I can't say I've tried to use tap.
So if something is missing that is required for it to work, can you please
open a bug on github.com/Xilinx/meta-xilinx and explain _exactly_ how you
would expect to setup and use it?

Sure, I will experiment with some workaround on my end and when I settle
on a solution that I am happy with, I'll open a bug on GitHub and share
what I'm doing.  The short version (in case this reveals that I'm
misunderstanding something) is that I simply search QB_OPT_APPEND for
this text:

     user,tftp=${DEPLOY_DIR_IMAGE}

I believe this was removed so that tun worked properly. We don't force a specific implementation any longer. Even if a tftp boot option IS defined, it wouldn't default to 'user'; it should attach to the configured 'netdev' instead, so that the backend can be variable.

And replace it with this text:

     bridge,br=virbr0,helper=${STAGING_BINDIR_NATIVE}/qemu-oe-bridge-helper
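In a bbclass, that substitution would amount to a string replace on QB_OPT_APPEND, something like the following sketch (the paths shown are placeholders; in a real bbclass they come from bitbake's expanded DEPLOY_DIR_IMAGE and STAGING_BINDIR_NATIVE values):

```python
# Sketch of the substitution described above, with placeholder paths
# standing in for the bitbake-expanded variables.
deploy_dir_image = "/build/tmp/deploy/images/my-machine"
staging_bindir_native = "/build/tmp/sysroots-native/usr/bin"

qb_opt_append = "-netdev user,tftp=%s,id=net0" % deploy_dir_image

old = "user,tftp=%s" % deploy_dir_image
new = ("bridge,br=virbr0,helper=%s/qemu-oe-bridge-helper"
       % staging_bindir_native)

qb_opt_append = qb_opt_append.replace(old, new)
print(qb_opt_append)  # the slirp backend is swapped for the bridge one
```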

I'm not sure this will work. That kind of hard coding should be limited to either runqemu itself or something else.

Looking in the runqemu code:

    def setup_net_bridge(self):
        self.set('NETWORK_CMD',
                 '-netdev bridge,br=%s,id=net0,helper=%s '
                 '-device virtio-net-pci,netdev=net0 '
                 % (self.net_bridge,
                    os.path.join(self.bindir_native, 'qemu-oe-bridge-helper')))

That is the right way, and appears to already be there. Hardcoding the above would prevent someone from using slirp or tun in the future.
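For reference, a standalone sketch of what that format string expands to, with placeholder values for the bridge name and native bindir (this is an illustration, not runqemu itself):

```python
# Expand the NETWORK_CMD format string from runqemu's setup_net_bridge,
# using placeholder values in place of the real runqemu attributes.
import os

net_bridge = "virbr0"
bindir_native = "/build/tmp/sysroots-native/usr/bin"  # placeholder

network_cmd = ('-netdev bridge,br=%s,id=net0,helper=%s '
               '-device virtio-net-pci,netdev=net0 '
               % (net_bridge,
                  os.path.join(bindir_native, 'qemu-oe-bridge-helper')))
print(network_cmd)
```

Because the bridge name and helper path are substituted at run time, nothing is hardcoded into the image recipes, and the same build can still be booted with slirp or tun.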


Of course, virbr0 is a virtual bridge which is expected to already exist on
our workstations.  This is why it doesn't sit well with me for this sort
of host-specific configuration to appear in a build recipe.

runqemu will allow multiple qemu sessions to be started in parallel with different independent machines. It's used for both interactive development as well as automated use-cases (which can run in parallel either in the same "build", or via multiple builds.)

All of the questions and details above are why I try to get use-cases from people for complex problems. If I can understand how you expect to use it, then I can determine whether the regular Yocto Project can do it -- and if so, how. Once I know that, I can pivot to the "Xilinx" version of QEMU and make sure it can do the same -- or clearly document WHY it can't.

Ultimately my goal is to enable QEMU functionality so that we can run the Yocto Project test cases, and enable Yocto Project style user development. (We're not there yet.) Today most of the QEMU work is handled by PetaLinux, which does NOT use runqemu.



(I'm hoping tomorrow I'll finally move honister-next to honister...)

Second...

The question is what you are using it for.

If you are tracking official Xilinx releases, then you need to be on rel-v.... Otherwise you are NOT on an official release. But if you are tracking an official release, be aware it's effectively frozen after we release, so it's up to you to track bug fixes, security fixes, feature enhancements, etc. (We do occasional updates, but they're usually for what I'd call "late binding features": enabling boards or other components that were not ready in time for the original release.)

This snapshot in time is static and will get stale. Which is why I _ALWAYS_ recommend you work with an OSV: either contract a commercial one, or staff an internal one. Once you go down this path you _WILL_ fork it and have to make your own decisions as far as testing and maintenance go.

With the rel-v* branches, Vivado is the primary integration point. EVERYTHING is tied back to a specific Vivado version.

Would you recommend that I track the Yocto release named branches
instead of the Xilinx release named branches?  At first I chose
rel-v2021.2, and now rel-v2022.1, only because I'm following the Vivado
release that other members of our team are using.  Which do you think
makes the most sense?

Yocto branches track Yocto Project work, and are "best effort" and often previews of the future. They will NOT be tagged to a specific Vivado release. So I would expect that "honister" will move from Vivado 2022.1 to 2022.2 once 2022.2 is released. While master (when it goes live) will go from 2022.1 -> 2022.2 -> 2023.1 -> 2023.2 -> .....

So don't expect the Vivado version to be 'fixed' in the YP branches. There the Yocto Project is the primary integration point; Vivado is just a detail, and one that _WILL_ change.


Also, the only support you can get for the Yocto Project branches is here on the mailing list. For the rel-v* branches you can ask the broader Xilinx organization for help.

--Mark


Thanks,
Corey





View/Reply Online (#5043): https://lists.yoctoproject.org/g/meta-xilinx/message/5043