Re: Re: shm as domain to virt-viewer protocol? (Daniel P. Berrangé)
Hi David,

On Fri, Mar 27, 2020 at 9:52 PM David Geise wrote:
[..]
> I'm still just skimming & sizing things, but it appears the options are:
>
> 1. Fork viewer at some level and become a downstream source-code consumer;
> don't worry about compatibility, etc., but don't expect pull requests to be
> accepted. I.e. add to the open-source jungle.
>
> 2. Similar to #1, but minimize the jungle effect by forking only
> Gtk-Spice. To support this downstream environment, try to confine changes to
> a hook into a separate module that implements shm I/O. Limit shm usage to
> guest->viewer bitmap transfers. Try to avoid memcpys; ideally point shm
> directly into the video card's output buffer to minimize CPU/bus overhead if
> possible. Alternately use RLE or other low-overhead compression. The
> virt-viewer project accepting PRs is possible but unlikely.
>
> 3. I haven't looked at the spice API yet, but look for the opportunity to
> re-implement spice on a shm transport that would partially or entirely
> replace TCP for standalone scenarios. This seems unlikely unless spice has a
> good transport abstraction layer.
>
> 4. Look for extensibility hooks in the spice API itself which could be
> leveraged to implement a 'sideband' shm I/O to offload the high-bandwidth
> traffic (screen refresh) without changing spice itself. Just need some
> way to transfer a pointer & probably a few event notifications.
>
> Your thoughts?

As you probably realize, this is not as simple as it may look. There are various approaches to doing hw-accelerated rendering, and various combinations are possible with virtio-gpu, vGPU, and passthrough, spice or not. I think there has been some discussion about combining virtio-gpu & passthrough to do better GPU display scraping somehow. But for now, it is probably easier to set up some process in the guest, like looking-glass/streaming-agent, and hand that display content to the host via different means (ivshmem is probably easy, but fragile/hackish).

At the client end, I would highly recommend reusing the dmabuf mechanism we have in place to share GPU buffers/displays from qemu. That dmabuf mechanism is also used by the spice client and by vGPU, so in theory you won't have to touch spice, spice-gtk, or virt-viewer. In other words, basic screen scraping could be done by modifying the guest side and qemu only. But I believe there are better plans being discussed; I think the discussion should be moved to qemu.
Re: Re: shm as domain to virt-viewer protocol? (Daniel P. Berrangé)
Thanks for the thoughtful response, Mr. Berrangé. I don't want to try the list's patience; I'm somewhat new to the ecosystem, so apologies if my posts are a bit naïve. Feel free to let me know if this conversation is better handled via another channel. And thanks so much for your blog; I just discovered it - so much valuable info!

Please, anyone feel free to correct me if I'm wrong here, but although I agree that creating a parallel implementation isn't ideal, I took a quick skim of virtio-gpu, and my assessment is that its focus on OpenGL/Vulkan and its lack of priority on supporting Windows/DirectX mean that avenue is probably not worth holding out much hope for, at least not in the near term. And beyond the code issues, graphics card vendors don't seem at all interested in opening up their proprietary datacenter-focused APIs to consumer workstation scenarios, making the situation even more problematic. Therefore, despite the negatives, passthrough GPU appears to be a reasonably viable path in the short term for better Windows + low-latency support.

I agree my proposal isn't the ideal long-term solution, but an easy-to-install, well-integrated, highly-optimized low-latency Windows solution might be quite popular and gain community support. It fills a gap while the ultimate solution is developed, and it should be a lot easier to implement than a shared-GPU approach that I doubt will ever support DirectX in a meaningful way.

I'm still just skimming & sizing things, but it appears the options are:

1. Fork viewer at some level and become a downstream source-code consumer; don't worry about compatibility, etc., but don't expect pull requests to be accepted. I.e. add to the open-source jungle.

2. Similar to #1, but minimize the jungle effect by forking only Gtk-Spice. To support this downstream environment, try to confine changes to a hook into a separate module that implements shm I/O. Limit shm usage to guest->viewer bitmap transfers. Try to avoid memcpys; ideally point shm directly into the video card's output buffer to minimize CPU/bus overhead if possible. Alternately use RLE or other low-overhead compression. The virt-viewer project accepting PRs is possible but unlikely.

3. I haven't looked at the spice API yet, but look for the opportunity to re-implement spice on a shm transport that would partially or entirely replace TCP for standalone scenarios. This seems unlikely unless spice has a good transport abstraction layer.

4. Look for extensibility hooks in the spice API itself which could be leveraged to implement a 'sideband' shm I/O to offload the high-bandwidth traffic (screen refresh) without changing spice itself. Just need some way to transfer a pointer & probably a few event notifications.

Am I missing anything? If not, it seems the path forward is to check the feasibility of #4, then #3, probably ending up at #2. #2 seems easiest to implement in many ways; I can just fork & sideband my own little project until it succeeds or fails. #2 also seems the easiest for a single part-time developer, basically just folding the ideas from looking-glass into a solution that's more integrated into the existing virt-viewer / guest driver stack. The only downside is Windows driver signing; obviously use developer mode to start, then either I can try to get the driver through Microsoft's driver certification process (I haven't done driver development since Windows 2000), or ideally Red Hat would adopt & distribute the code.

In general I'd rather avoid the community politics of trying to do something high-impact like proposing API changes to some community's baby like spice or VNC. I attempted to contribute to the Linux kernel something like 25 years ago, was publicly shamed by Linus himself, and ended up leaving the open-source community and Linux. I'd like to split the difference between looking-glass's "go it 100% alone" approach and the borg approach. I want to try to give back, ideally get it mainstreamed; if it gains traction, great, but if it doesn't, that's OK too.

Your thoughts?
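[Editor's note: the shm 'sideband' in option #2 ultimately boils down to both sides mapping the same memory. A minimal sketch of that plumbing using plain POSIX shared memory follows; the segment name and sizes are made up for illustration and are not part of any virt-viewer or spice API, and a real ivshmem-backed path would map a PCI BAR rather than a named segment.]

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical segment name, for illustration only. */
#define FRAME_SHM "/vv_frame_demo"

/*
 * Map the shared frame segment.  The producer (guest-side agent) passes
 * create=1 to create and size the segment; the consumer (viewer) passes
 * create=0 and just maps it.  Both sides then see the same pages, so
 * bitmap data crosses with no socket round-trip and no extra memcpy
 * beyond the producer's initial write.
 */
static unsigned char *frame_map(size_t size, int create)
{
    int fd = shm_open(FRAME_SHM, O_RDWR | (create ? O_CREAT : 0), 0600);
    if (fd < 0)
        return NULL;
    if (create && ftruncate(fd, (off_t)size) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd); /* the mapping keeps the segment alive after close */
    return p == MAP_FAILED ? NULL : (unsigned char *)p;
}
```

In a real design the segment would also carry a small header (frame geometry, sequence counter) plus a doorbell (an eventfd or a spice message) so the viewer knows when a frame is complete, similar in spirit to what looking-glass does over its shared-memory device.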
Re: Adding an "Enable Launch Security" checkbox to the Memory Details dialog
On Fri, 2020-03-27 at 12:13 -0400, Cole Robinson wrote:
> CCing Erik who knows more about that launchSecurity/sev than I do
>
> On 3/27/20 11:44 AM, Charles Arnold wrote:
> > What is the opinion of adding a checkbox called "Enable Launch
> > Security" under the 'Current allocation' and 'Maximum allocation' boxes
> > on the Details->Memory dialog? It would only be enabled if libvirt
> > detected support for it.
>
> Provided libvirt capabilities report everything we need to know as to
> whether it's really supported on the host and will actually work, and
> there's a sensible, noncontroversial set of defaults we can fill in, then
> a single checkbox is worth considering. It's certainly an advanced
> feature, but it's also getting more and more mention these days, so maybe
> it's good to get out ahead of any future RFEs.
>
> But if we can boil it down to being that simple, I guess the question is
> whether a checkbox in the UI is valuable when users can use 'virt-xml
> VMNAME --edit --launchSecurity sev' to fill in the same default values.
> I guess it depends on who we expect will want to use this option. We
> should think about how it fits the UI philosophy/DESIGN.md:
>
> https://github.com/virt-manager/virt-manager/blob/master/DESIGN.md

As I look this over, here are my thoughts.

How many users do we expect will use it: It is a relatively new feature in libvirt based on newer hardware, so I'm not sure we can answer this now.

How critical is it for users who need/want it: Definitely not a blocker. I view it, as your comment above puts it, as "good to get out ahead of any future RFEs."

How self-explanatory is the feature: The name itself may not be self-explanatory to the average user; this is more of an intermediate or advanced feature. It is well documented on the libvirt domain XML format pages.

How dangerous or difficult to use is the feature: Just a checkbox at the virt-manager level (at this point), but there appear to be other issues below, in libvirt or QEMU, that Daniel pointed out.

How much work is it to maintain and test: Minimal, IMO, although it may evolve over time. The "Enable Launch Security" string would need to be translated.

How much work is it to implement: I've already coded it up, and my current implementation didn't seem too hard. What I have does rely on libvirt reporting whether there is support: if yes, the checkbox is enabled; if no, it is not.

- Charles
Re: Adding an "Enable Launch Security" checkbox to the Memory Details dialog
On Fri, Mar 27, 2020 at 12:13:09PM -0400, Cole Robinson wrote:
> CCing Erik who knows more about that launchSecurity/sev than I do
>
> On 3/27/20 11:44 AM, Charles Arnold wrote:
> > What is the opinion of adding a checkbox called "Enable Launch
> > Security" under the 'Current allocation' and 'Maximum allocation' boxes
> > on the Details->Memory dialog? It would only be enabled if libvirt
> > detected support for it.
>
> Provided libvirt capabilities report everything we need to know as to
> whether it's really supported on the host and will actually work, and
> there's a sensible, noncontroversial set of defaults we can fill in, then
> a single checkbox is worth considering. It's certainly an advanced
> feature, but it's also getting more and more mention these days, so maybe
> it's good to get out ahead of any future RFEs.

Two issues right now. First, there is a ridiculously low limit of 15 VMs on first-generation CPUs, though that is perhaps not a huge problem for typical scenarios using virt-manager. Second, while libvirt reports whether the feature exists & is supported in QEMU, QEMU is lying to us, because it isn't checking whether kvm-amd actually allows the feature to be used.

https://bugzilla.redhat.com/show_bug.cgi?id=1689202
https://bugzilla.redhat.com/show_bug.cgi?id=1731439

As long as the checkbox isn't enabled by default, it's probably OK to ignore those two issues.

Regards,
Daniel

-- 
|: https://berrange.com       -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org        -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
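[Editor's note: the host-side support signal under discussion is what libvirt exposes via `virsh domcapabilities`. When QEMU claims SEV support, the output contains an element roughly like the following (values illustrative, typical of first-generation EPYC); per the bug reports above, `supported='yes'` can appear even when kvm-amd has SEV disabled:]

```xml
<features>
  <sev supported='yes'>
    <cbitpos>47</cbitpos>
    <reducedPhysBits>1</reducedPhysBits>
  </sev>
</features>
```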
Re: Adding an "Enable Launch Security" checkbox to the Memory Details dialog
CCing Erik who knows more about that launchSecurity/sev than I do

On 3/27/20 11:44 AM, Charles Arnold wrote:
> What is the opinion of adding a checkbox called "Enable Launch
> Security" under the 'Current allocation' and 'Maximum allocation' boxes
> on the Details->Memory dialog? It would only be enabled if libvirt
> detected support for it.

Provided libvirt capabilities report everything we need to know as to whether it's really supported on the host and will actually work, and there's a sensible, noncontroversial set of defaults we can fill in, then a single checkbox is worth considering. It's certainly an advanced feature, but it's also getting more and more mention these days, so maybe it's good to get out ahead of any future RFEs.

But if we can boil it down to being that simple, I guess the question is whether a checkbox in the UI is valuable when users can use 'virt-xml VMNAME --edit --launchSecurity sev' to fill in the same default values. I guess it depends on who we expect will want to use this option. We should think about how it fits the UI philosophy/DESIGN.md:

https://github.com/virt-manager/virt-manager/blob/master/DESIGN.md

- Cole
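[Editor's note: for readers unfamiliar with the feature, the virt-xml invocation mentioned above ends up filling in a domain XML element along these lines; the values shown are illustrative defaults, not required settings, and the exact numbers depend on the host CPU:]

```xml
<domain>
  <launchSecurity type='sev'>
    <policy>0x0003</policy>
    <cbitpos>47</cbitpos>
    <reducedPhysBits>1</reducedPhysBits>
  </launchSecurity>
</domain>
```

A checkbox in virt-manager would presumably emit this element with host-appropriate defaults, which is why the discussion hinges on libvirt reporting accurate capability data.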
Adding an "Enable Launch Security" checkbox to the Memory Details dialog
What is the opinion of adding a checkbox called "Enable Launch Security" under the 'Current allocation' and 'Maximum allocation' boxes on the Details->Memory dialog? It would only be enabled if libvirt detected support for it. - Charles