Requesting Xen community's feedback on unikernelized driver domains

2022-03-21 Thread tosher 1
Hello,

This email is to request the Xen community's feedback on our work on implementing 
Xen’s driver domains using the unikernel virtual machine model (as opposed to 
using general-purpose OSs like Linux) to reduce the attack surface, among other 
benefits. The effort, called Kite, has implemented driver domains for network 
and storage PV drivers using NetBSD’s rumprun unikernel.

Details of the work are available in a paper that will soon appear at the 2022 
ACM European Conference on Computer Systems (EuroSys’22), available here: 
https://www.ssrg.ece.vt.edu/papers/eurosys22.pdf.

Kite’s source code is available at: https://github.com/ssrg-vt/kite/.

We would love to hear the community’s thoughts and feedback.

Thank you, and looking forward to hearing from you,

Mehrab


P.S. This email is cross-posted to the xen-users mailing list 
(https://lists.xenproject.org/archives/html/xen-users/2022-03/msg00013.html).



Re: PCI passthrough support for PVH mode

2022-02-10 Thread tosher 1
Hi Julien,

Thanks for the clarification!

Regards,
Mehrab

On Thursday, February 10, 2022, 06:12:53 PM EST, Julien Grall wrote:

Hi Bertrand,

On 10/02/2022 08:32, Bertrand Marquis wrote:
>> On 10 Feb 2022, at 07:22, tosher 1  wrote:
>>
>> Hi Jan,
>>
>> Thanks for letting me know the status.
>>
>> I am wondering if PCI passthrough is at least available on Arm for other
>> virtualization modes like PV, HVM, or PVHVM. For example, is it possible for
>> someone to attach a PCI device to a guest domain on an Arm machine and use
>> that domain as a driver domain, like we can do with Xen on x86?
> 
> On Arm there is only one virtualization mode, which is equivalent to x86 HVM.


I would like to correct this. Arm guests are closer to x86 PVH 
than to HVM. For more details, see:

https://wiki.xenproject.org/wiki/Understanding_the_Virtualization_Spectrum#PVH:

This is also why we need a brand new solution for PCI passthrough rather 
than piggybacking on what was done for HVM in QEMU :).

Cheers,

-- 
Julien Grall




Re: PCI passthrough support for PVH mode

2022-02-10 Thread tosher 1
Hi Bertrand and Jan,

I thought PCI passthrough was a work in progress only for PVH mode on Arm and 
x86. However, it seems there are some differences. Thanks for the clarifications.

Regards,
Mehrab

On Thursday, February 10, 2022, 03:32:19 AM EST, Bertrand Marquis wrote:

Hi Mehrab,

> On 10 Feb 2022, at 07:22, tosher 1  wrote:
> 
> Hi Jan,
> 
> Thanks for letting me know the status.
> 
> I am wondering if PCI passthrough is at least available on Arm for other
> virtualization modes like PV, HVM, or PVHVM. For example, is it possible for
> someone to attach a PCI device to a guest domain on an Arm machine and use
> that domain as a driver domain, like we can do with Xen on x86?

On Arm there is only one virtualization mode, which is equivalent to x86 HVM.

Regarding PCI passthrough on Arm, this is currently a work in progress that is 
being upstreamed, so it is not available for now.
Once it is merged into Xen, it will be possible to assign a PCI device to a 
guest, so you could then create a “driver domain” using that functionality.

Regards
Bertrand


> 
> Please let me know.
> 
> Regards,
> Mehrab
> 
> On Monday, February 7, 2022, 02:57:45 AM EST, Jan Beulich wrote:
> 
> On 06.02.2022 06:59, tosher 1 wrote:
> 
>> Back in 2020, I was inquiring into the status of PCI passthrough
>> support for PVH guests. At that time, Arm people were working on using vPCI
>> for guest VMs. The expectation was to port that implementation to x86 once
>> ready.
>> 
>> I was wondering if there is any update on this. Does Xen support PCI 
>> passthrough for PVH mode yet? Please let me know.
> 
> 
> The Arm work is still WIP, and to what extent it's going to be
> straightforward to re-use that code for x86 is still unclear (afaict at
> least).
> 
> Jan




Re: PCI passthrough support for PVH mode

2022-02-09 Thread tosher 1
Hi Jan,

Thanks for letting me know the status.

I am wondering if PCI passthrough is at least available on Arm for other 
virtualization modes like PV, HVM, or PVHVM. For example, is it possible for 
someone to attach a PCI device to a guest domain on an Arm machine and use that 
domain as a driver domain, like we can do with Xen on x86?

Please let me know.

Regards,
Mehrab

On Monday, February 7, 2022, 02:57:45 AM EST, Jan Beulich wrote:

On 06.02.2022 06:59, tosher 1 wrote:

> Back in 2020, I was inquiring into the status of PCI passthrough
> support for PVH guests. At that time, Arm people were working on using vPCI
> for guest VMs. The expectation was to port that implementation to x86 once
> ready.
> 
> I was wondering if there is any update on this. Does Xen support PCI 
> passthrough for PVH mode yet? Please let me know.


The Arm work is still WIP, and to what extent it's going to be straightforward
to re-use that code for x86 is still unclear (afaict at least).

Jan





PCI passthrough support for PVH mode

2022-02-05 Thread tosher 1
Hi,

Back in 2020, I was inquiring into the status of PCI passthrough support for 
PVH guests. At that time, Arm people were working on using vPCI for guest VMs. 
The expectation was to port that implementation to x86 once ready.

I was wondering if there is any update on this. Does Xen support PCI 
passthrough for PVH mode yet? Please let me know.

Thanks,
Mehrab



Re: PVH mode PCI passthrough status

2020-12-29 Thread tosher 1
Hi Roger,

> I think you meant PVH mode in the sentence above instead of PVM?

Sorry, that was a typo. I meant PVH.

> Arm folks are working on using vPCI for domUs, which could easily be picked 
> up by x86 once ready. There's also the option to import xenpt [0] from Paul 
> Durrant and use it with PVH, but it will likely require some work.

Thanks for your response. Do you have any timeline in mind for when support for 
x86 will be available? A rough estimate would help me with planning.

Thanks,
Mehrab




PVH mode PCI passthrough status

2020-12-28 Thread tosher 1
Hi,

As of Xen 4.10, PCI passthrough support was not available in PVH mode. I was 
wondering if PCI passthrough support was added in a later version.

It would be great to know the latest status of PCI passthrough support for the 
Xen PVM mode. Please let me know if you have any updates on this.

Thanks,
Mehrab



Re: Xen network domain performance for 10Gb NIC

2020-04-28 Thread tosher 1
> Do you get the expected performance from the driver domain when not
> using it as a backend? Ie: running the iperf benchmarks directly on
> the driver domain and not on the guest.


Yes, the bandwidth between the driver domain and the client machine is close to 
10Gbits/sec.



Re: Xen network domain performance for 10Gb NIC

2020-04-27 Thread tosher 1
> Driver domains with passthrough devices need to perform IOMMU
> operations in order to add/remove page table entries when doing grant
> maps (ie: IOMMU TLB flushes), while dom0 doesn't need to because it
> has the whole RAM identity mapped in the IOMMU tables. Depending on
> how fast your IOMMU is and what capabilities it has doing such
> operations can introduce a significant amount of overhead.

That makes sense to me. Do you know, in general, how to measure IOMMU 
performance, and which IOMMU features/properties can contribute to better 
network throughput? Please let me know.
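
One crude check I can think of is to watch per-domain CPU consumption with 
xentop while iperf3 is running, to see whether the overhead at least shows up 
as CPU time. In dom0 (the sampling options are just an example):

# xentop -b -d 1 -i 30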

Thanks!



Re: Xen network domain performance for 10Gb NIC

2020-04-27 Thread tosher 1
On Monday, April 27, 2020, 1:28:13 AM EDT, Jürgen Groß wrote:

> Is the driver domain PV or HVM?

The driver domain is HVM.

> How many vcpus do dom0, the driver domain and the guest have?

Dom0 has 12 vcpus, pinned. Both the driver domain and the guest have 4 vcpus, 
pinned as well.
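
For reference, pinning like this can be done with the 'cpus' option in the 
guest config or with 'xl vcpu-pin' at runtime; the values below are an 
illustration, not my exact setup:

vcpus = 4
cpus = "4-7"

# xl vcpu-pin my-guest all 4-7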


> Juergen



Xen network domain performance for 10Gb NIC

2020-04-26 Thread tosher 1
Hi everyone,

Lately, I have been experimenting with 10Gb NIC performance on Xen domains. I 
have found that network performance is very poor for PV networking when a 
driver domain is used as a network backend.

My experimental setup is two machines connected by a 10Gb network: a server 
running the Xen hypervisor and a desktop machine acting as the client. Ubuntu 
18.04.3 LTS runs on Dom0, the DomUs, the driver domain, and the client desktop; 
the Xen version is 4.9. I measured network bandwidth using iPerf3.

The network bandwidth between a DomU using Dom0 as the backend and the client 
desktop is about 9.39 Gbits/sec. However, when I use a network driver domain, 
which has the 10Gb NIC via PCI passthrough, the bandwidth between the DomU and 
the client desktop is about 2.41 Gbits/sec in one direction and 4.48 Gbits/sec 
in the other. Here, by direction, I mean the client-to-server direction for 
iPerf3.
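
For reference, the two directions were measured with plain iperf3 runs along 
these lines, with the server on one end and the client on the other (the IP 
address is a placeholder):

# iperf3 -s
# iperf3 -c 192.168.1.2
# iperf3 -c 192.168.1.2 -R

where -R reverses the direction so that the server sends to the client.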

These results indicate a huge performance degradation, which is unexpected. I 
am wondering if I am missing any key points I should have taken care of, or if 
there is any tweak I can apply.

Thanks,
Mehrab



Re: [Xen-devel] HVM Driver Domain

2020-01-30 Thread tosher 1
> 'xl devd' should add the backend interfaces (vifX.Y) to the bridge if 
> properly configured, as it should be calling the hotplug scripts to do that.

Yes, running 'xl devd' in the driver domain before launching the DomU solved 
the bridge issue. Thanks a lot.

So, for the people who end up reading this thread, here is a lesson I have 
learned.

We need 'xl devd' running for the bridge to work, and to get the 'xl' program 
in the driver domain we may, depending on the distro being used, need to 
install xen-utils or other packages. However, along with xl, these packages may 
end up installing the Xen hypervisor and updating GRUB accordingly. As a 
result, there is a chance that, on the next boot, the OS will boot into the 
hypervisor. In that case, the driver domain won't work. Therefore, be careful 
not to run the driver domain as a Dom0: change the default boot entry to the 
regular Linux kernel, or delete the Xen image from /boot and update GRUB. After 
booting the regular Linux kernel, make sure the bridge is set up correctly and 
'xl devd' is running to have the network driver domain working.
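
A quick way to sanity-check all of the above (the bridge name xenbr1 is from 
my earlier mails; the commands are illustrative):

# cat /proc/xen/capabilities
# xl devd
# brctl show xenbr1

If the first command prints "control_d", the domain has accidentally booted 
into Xen as a Dom0; the last one confirms the vif interfaces show up on the 
bridge after the DomU is launched.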

Regards,
Mehrab




Re: [Xen-devel] HVM Driver Domain

2020-01-29 Thread tosher 1
> BTW, are you creating the driver domain with 'driver_domain=1' in the xl
> config file?

No, I wasn't aware of the 'driver_domain' configuration option before, and this 
is what I was missing. With this configuration option, I was able to make the 
HVM driver domain work. However, the PV driver domain worked fine without this 
option configured.
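
So, for anyone following along, the fix amounts to one extra line in the 
driver domain's xl config (the rest of the config stays as in my earlier mail):

driver_domain = 1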

> Are you sure this command is run on the driver domain?

Since I had installed xen-utils in the driver domain for the bridge to work, it 
installed the Xen hypervisor in the driver domain. As a result, my driver 
domain became another Dom0. Realizing that, I ran regular Ubuntu in the driver 
domain instead. This was another key point in making the driver domain work.

Thanks for all your help, which made it possible for me to test the HVM driver 
domain.

One last thing: backend interfaces are not being added to the bridge 
automatically. Do you think it is because regular Ubuntu doesn't have the Xen 
vif scripts? If so, what is the proper thing to do in this case?

Please let me know.

Thanks,
Mehrab



Re: [Xen-devel] HVM Driver Domain

2020-01-27 Thread tosher 1
Roger,

> You can also start xl devd manually, as that will give you verbose
> output of whats going on. In the driver domain:

> # killall xl
> # xl -vvv devd -F

> That should give you detailed output of what's going on in the driver
> domain, can you paste the output you get from the driver domain when
> you try to start the failed guest?

I ran both commands in the driver domain. Before starting the DomU, I get the 
following verbose output.

# sudo xl -vvv devd -F
libxl: debug: libxl_device.c:1733:libxl_device_events_handler: Domain 0:ao 
0x556e3e940ef0: create: how=(nil) callback=(nil) poller=0x556e3e940c10
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch 
w=0x7ffca33549d8 wpath=/local/domain/0/backend token=3/0: register slotnum=3
libxl: debug: libxl_device.c:1790:libxl_device_events_handler: Domain 0:ao 
0x556e3e940ef0: inprogress: poller=0x556e3e940c10, flags=i
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x7ffca33549d8 
wpath=/local/domain/0/backend token=3/0: event epath=/local/domain/0/backend
libxl: debug: libxl_event.c:2227:libxl__nested_ao_create: ao 0x556e3e940600: 
nested ao, parent 0x556e3e940ef0
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0x556e3e940600: destroy

I know this is not exactly what you asked for. Unfortunately, I don't see any 
other verbose output when I try to start the DomU. The error messages I get 
from the failed DomU launch are the following, where the driver domain ID is 7.

libxl: error: libxl_device.c:1075:device_backend_callback: Domain 8:unable to 
add device with path /local/domain/7/backend/vif/8/0
libxl: error: libxl_create.c:1458:domcreate_attach_devices: Domain 8:unable to 
add nic devices
libxl: error: libxl_device.c:1075:device_backend_callback: Domain 8:unable to 
remove device with path /local/domain/7/backend/vif/8/0
libxl: error: libxl_domain.c:1075:devices_destroy_cb: Domain 
8:libxl__devices_destroy failed
libxl: error: libxl_domain.c:1003:libxl__destroy_domid: Domain 8:Non-existant 
domain
libxl: error: libxl_domain.c:962:domain_destroy_callback: Domain 8:Unable to 
destroy guest
libxl: error: libxl_domain.c:889:domain_destroy_cb: Domain 8:Destruction of 
domain failed


On the other hand, if I run devd in Dom0, I get a lot of verbose output when I 
try to launch the DomU that depends on the driver domain for networking. I am 
not sure if I should paste it here. Please let me know what you think.

Thanks,
Mehrab



Re: [Xen-devel] HVM Driver Domain

2020-01-24 Thread tosher 1
> > builder = "hvm"
> > name = "ubuntu-doment-hvm"

> This name...

> > vif = [ 'backend=ubuntu-domnet-hvm,bridge=xenbr1' ]

> ...and this name don't match.


Jason,

Thanks for pointing this out. I feel very stupid. However, the problem is not 
solved yet, but I was able to get to the next step with the devd approach 
suggested by Roger. I will keep you in CC when I reply to Roger with a detailed 
log.

Regards,
Mehrab



Re: [Xen-devel] HVM Driver Domain

2020-01-23 Thread tosher 1


I wasn't able to make the HVM driver domain work even with the latest Xen 
version, which is 4.14. I see the 'xendriverdomain' script in the /etc/init.d/ 
directory, but it doesn't seem to be running in the background.
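
Presumably it can be started by hand with something like the following, though 
I haven't verified whether that changes anything:

# /etc/init.d/xendriverdomain start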

On the other hand, I see the official "Qubes OS Architecture" document 
(https://www.qubes-os.org/attachment/wiki/QubesArchitecture/arch-spec-0.3.pdf) 
defines the driver domain as follows.

"A driver domain is an unprivileged PV-domain that has been securely granted 
access to certain PCI device (e.g. the network card or disk controller) using 
Intel VT-d." - Page 12

Moreover, section 6.1 reads "The network domain is granted direct access to the 
networking hardware, e.g. the WiFi or ethernet card. Besides, it is a regular 
unprivileged PV domain."

Maybe you guys later moved from PV to HVM driver domains. Would you please 
share the Xen config you use for the network driver domain?

Thanks,
Mehrab



Re: [Xen-devel] HVM Driver Domain

2020-01-22 Thread tosher 1

> I don't see what is wrong here. Are you sure the backend domain is running?

If you mean the HVM network driver domain then yes, I am running the backend 
domain.

> Probably irrelevant at this stage, but do you have "xendriverdomain" service
> running in the backend?

I do not have this service running. However, my PV network driver domain works 
fine even though this service is not running.

What version of Xen are you using that has the xendriverdomain service?


Re: [Xen-devel] HVM Driver Domain

2020-01-22 Thread tosher 1
Hi Marek,

Thanks for your response. The server machine I am using for this setup is an 
x86_64 Intel Xeon. For the Dom0, I am using Ubuntu 18.04 running on kernel 
version 5.0.0-37-generic. My Xen version is 4.9.2. 

For the HVM driver domain, I am using Ubuntu 18.04 running on kernel version 
5.0.0-23-generic. I am doing a NIC PCI passthrough to this domain. The Xen 
config file for this domain looks like the following.

builder = "hvm"
name = "ubuntu-doment-hvm"
memory = "2048"
pci = [ '01:00.0,permissive=1' ]
vcpus = 1
disk = ['phy:/dev/vg/ubuntu-hvm,hda,w']
vnc = 1
boot="c"

I have installed xen-tools version 4.7 in this driver domain so that the vif 
scripts work. The network configuration looks like the following, where ens5f0 
is the interface name of the NIC I passed through.

auto lo
iface lo inet loopback

iface ens5f0 inet manual

auto xenbr1
iface xenbr1 inet static
    bridge_ports ens5f0
    address 192.168.1.3
    netmask 255.255.255.0
    gateway 192.168.1.1

The Xen config file for the DomU is as follows.

name = "ubuntu_on_ubuntu"
bootloader = "/usr/lib/xen-4.9/bin/pygrub"
memory = 1024
vcpus = 1
vif = [ 'backend=ubuntu-domnet-hvm,bridge=xenbr1' ]
disk = [ '/dev/vg/lv_vm_ubuntu_guest,raw,xvda,rw' ]

When I try to launch this DomU, I get the following error.

libxl: error: libxl_nic.c:652:libxl__device_nic_set_devids: Domain 31:Unable to 
set nic defaults for nic 0.

Are these configurations very different from what you do for Qubes? Please let 
me know your thoughts.

Thanks,
Mehrab


[Xen-devel] HVM Driver Domain

2020-01-20 Thread tosher 1
Hi all,

I was doing some experiments on the Xen network driver domain using Ubuntu 
18.04. The driver domain works fine when I run it in PV mode. However, I wasn't 
able to make the driver domain work when I run it in HVM mode. I get the 
following error when I want my DomU to use the HVM driver domain as its network 
backend.

libxl: error: libxl_nic.c:652:libxl__device_nic_set_devids: Domain 25:Unable to 
set nic defaults for nic 0

Other than this, I didn't get any log messages regarding this failure from 
dmesg, 'xl dmesg', or the files in the /var/log/xen/ directory. Therefore, I 
was wondering if it is even possible to create an HVM driver domain. Please let 
me know what you think.

Thanks,
Mehrab


Re: [Xen-devel] How PV frontend and backend initializes?

2019-10-16 Thread tosher 1

Anthony and Roger, thanks for your informative responses. They helped a lot.


> I'm however unsure by what you mean with instance, so you might have
> to clarify exactly what you mean in order to get a more concise
> reply.

Let's say there are two DomUs, and their respective network interfaces are 
xenbr0 and xenbr1. Therefore, there are supposed to be two PV netback instances 
running in Dom0 (or the driver domain): one for xenbr0 and another for xenbr1. 
By the term instance, I am referring to these drivers. If another interface, 
xenbr3, comes along later, there will be a third instance of the backend 
driver. I was wondering how and when these multiple instances are created.
Now, as you pointed to the xen toolstack, I explored xl/libxl a little bit. I 
realized for two separate devices, libxl creates two different paths both for 
the frontend and backend. The OSes keeps watching xenstore paths. If an OS 
finds a device of the type it is interested in, it creates the instance of the 
corresponding driver (frontend or backend) if the device is not initialized 
already. The path is the parameter which make one instance different from the 
others.

Please let me know if I understood it wrong. Thanks!



[Xen-devel] How PV frontend and backend initializes?

2019-10-05 Thread tosher 1
I was trying to understand the following things regarding the PV driver.

1. Who creates the frontend and backend instances?
2. When are these instances created?
3. How are the xenbus directories created? What is the hierarchy of the 
directories?
4. What is the role of "vifname", and who sets it?

Please let me know if you can help with these questions or can direct me to 
some resources.
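
For question 4, the kind of usage I mean is a guest config line like the 
following (names made up), where vifname appears to choose the name of the 
backend interface:

vif = [ 'bridge=xenbr0,vifname=vif-guest1' ]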

Thanks
Mehrab


[Xen-devel] Running Driver Domain in different mode and hypervisor

2019-02-27 Thread tosher 1
Hi guys,

Lately, I have been trying to play with the Xen driver domain. I have been able 
to make it work when both the driver domain OS and the guest OS run as 
paravirtualized (PV) machines. However, it doesn't work when either of them is 
a hardware-virtualized machine (HVM). Therefore, I have the following questions 
now.

1. Is it possible to have the driver domain and the guests that use the 
corresponding drivers run in modes other than PV, like HVM or PVHVM? If yes, 
what are the possible combinations?
2. Is it possible to use virtIO for paravirtualization where the underlying 
hypervisor is Xen? If yes, can we run a driver domain using virtIO?
3. Is it possible to have a driver domain where the underlying hypervisor is 
KVM instead of Xen?

The last question may not be directly related to Xen. However, I am tempted to 
ask because good ideas like PV were first introduced in Xen and other 
hypervisors later adopted the design.

Please let me know your valuable opinion, and feel free to provide any links 
where I can study these matters further.

Thanks,
Mehrab