Very interesting.

If we had an X server (we do, don't we?), could I run a web browser through this?



On Jun 13, 2025, at 10:59 AM, ron minnich <[email protected]> wrote:


In my IWP9 talk I mentioned the Linux VM appliance we built for Akaros when I was at Google, and I've got a very simple version working on 9front vmx now, tested on my T420.

It needs a kernel and a u-root (github.com/u-root/u-root) initramfs with my sidecore command (github.com/u-root/sidecore).

sidecore is a modified cpu client which provides to the cpu server, over 9p or NFS, a union of the client namespace (as in cpu) and a cpio archive. The cpio is typically a flattened Docker container, created using a tool I wrote in github.com/u-root/sidecore-images.

I did all this work while at Samsung, so we could have cpu on Windows. Windows does not have things Linux wants, like symlinks, device specials, and so on; sidecore provides them via the cpio file.

So, to make this work, you need a kernel, an initramfs (usually compiled into the kernel), and the sidecore command, which runs on Plan 9. I can provide more instructions to interested parties, but overall it's pretty easy. What it lacks is polish.
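The "compiled into the kernel" part is the standard Linux kconfig mechanism, for anyone who hasn't done it; roughly (the path here is illustrative, not the one I use):

```
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="/path/to/initramfs.cpio"
```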

There is an rc script, appliance, which does this: 
vmx -c /fd/0,/fd/1 -M 1024M -n ether1 /usr/glenda/plan9cpulinux
Once that starts, in the guest, you need to do something like this:
ip addr add 192.168.0.187/24 dev eth0
ip link set dev eth0 up
cpud

Then the appliance is ready for use.
On the plan 9 side, there is a command, linux, which looks like this:

ramfs -S sidecore
mount -ac /srv/sidecore /
mkdir -p /home/glenda
bind /usr/glenda /home/glenda

SIDECORE_ARCH=amd64
SIDECORE_DISTRO=py3-numpi
SIDECORE_KEYFILE=/usr/glenda/.ssh/cpu_rsa
HOME=/home/glenda
PWD=/home/glenda

sidecore -sp 17010 '-9p=false' '-nfs=true'  192.168.0.187 $*

The ramfs setup is so our home is /home/glenda, which Linux distros seem to want.
The various environment variables tell sidecore where to get its key, the architecture of the container, and the desired distro.
sidecore looks for a file of the form:
$home/sidecore-images/$(SIDECORE_ARCH)-$(SIDECORE_DISTRO)@$(SIDECORE_VERSION).cpio
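To make the lookup concrete, here is the expansion with the values from the script above, assuming $home is /usr/glenda and that the default SIDECORE_VERSION is latest (my reading of "default is latest" below); this is POSIX shell for illustration, the script itself is rc:

```shell
# Illustrating the image filename sidecore looks for.
# home=/usr/glenda and SIDECORE_VERSION=latest are assumptions here;
# the other values come straight from the rc script.
home=/usr/glenda
SIDECORE_ARCH=amd64
SIDECORE_DISTRO=py3-numpi
SIDECORE_VERSION=latest
echo "$home/sidecore-images/$SIDECORE_ARCH-$SIDECORE_DISTRO@$SIDECORE_VERSION.cpio"
# prints /usr/glenda/sidecore-images/amd64-py3-numpi@latest.cpio
```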

So, a few things: it works across architectures, you can have any distro you want, and you can have different versions (the default is latest).

I've used this from OS X, Linux, and Windows to RISC-V, and now it works from Plan 9 to a vmx guest. Note also that the guest can be FreeBSD.

Now we get to the kind-of-crude part, sorry about this; the Plan 9 support is a little unbaked.
Part of the issue is ... ptys. Anyway ...

I run the rc script:
linux

# this gets you a non-interactive shell with no prompt. Sorry.
# all the shells stat fd 0 and, if it is not a tty, assume they are non-interactive.
# the golang cpu sets up the namespace in /tmp/cpu, not /mnt/term.
# Step 1: get device mounted into /tmp/cpu
mount --bind /dev /tmp/cpu/dev
# Step 2: chroot into the namespace provided by the sidecore command
chroot /tmp/cpu bash -i
# Step 3: run python3
root@(none):/# python3 -c 'print("Hello!")'
python3 -c 'print("Hello!")'
# What is this stuff? I don't know.
2025/06/13 07:11:38 [ERROR] failing create to indicate lack of support for 'exclusive' mode.
2025/06/13 07:11:38 [ERROR] Error Creating: permission denied
2025/06/13 07:11:38 [ERROR] failing create to indicate lack of support for 'exclusive' mode.
2025/06/13 07:11:38 [ERROR] Error Creating: permission denied
2025/06/13 07:11:38 [ERROR] failing create to indicate lack of support for 'exclusive' mode.
2025/06/13 07:11:38 [ERROR] Error Creating: permission denied
# python output
Hello!
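The tty check described in the comments above, the reason you get no prompt, boils down to this in shell terms (a sketch of the behavior, not the shells' actual code):

```shell
# What the shells are doing, roughly: stat fd 0 and check
# whether it is a terminal; if not, behave non-interactively.
if [ -t 0 ]; then
    echo interactive
else
    echo non-interactive
fi
```

Since the golang cpu transport isn't a pty, the check fails and bash comes up promptless until you force it with -i.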

I have a cpio container image that provides python3 and numpy.

The big picture here: you can run Linux programs easily. It should be as easy to run these programs in a VM as it is to run any command. You can, e.g.,
du -a | linux wc # but why? :-)
and have that work.

My next step is to finish up the "VMs via libthread" work, then get back to SMP guests, but I thought this initial work might be of interest.

BTW, the golang root file system I use has been used in Google and ByteDance firmware for 5+ years, and is load-bearing on a few million data center nodes. The golang cpu/cpud is an integral part of the Google and ARM confidential compute stacks. It's pretty real, and now it's on Plan 9 too.
