On Thu, 11 Dec 2025, Marek Marczykowski-Górecki wrote:
> On Wed, Dec 10, 2025 at 01:58:44PM -0800, Stefano Stabellini wrote:
> > On Wed, 10 Dec 2025, Marek Marczykowski-Górecki wrote:
> > > > > > > +    mkfs.ext4 -d . ../domU-rootfs.img 1024M
> > > > > > 
> > > > > > Do we really need 1GB? I would rather use a smaller size if
> > > > > > possible, and consume as few resources as possible on the build
> > > > > > server, as we might run a few of these jobs in parallel one day
> > > > > > soon.
> > > > > 
> > > > > This will be a sparse file, so it won't really use all the space.
> > > > > But this size is the upper bound of what can be put inside.
> > > > > That said, it's worth checking whether sparse files work properly
> > > > > on all runners in /build. AFAIR some older docker versions had
> > > > > issues with that (was it aufs not supporting sparse files?).
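Whether a given filesystem preserves sparseness can be checked directly. A minimal sketch (the file name is a placeholder; running it inside /build on a runner would test that location):

```shell
# Quick sparse-file check: create a 1 GiB hole-only file and compare its
# apparent size with the blocks actually allocated on disk.
f=$(mktemp sparse-check.XXXXXX)
truncate -s 1G "$f"
apparent=$(stat -c %s "$f")
allocated=$(( $(stat -c %b "$f") * $(stat -c %B "$f") ))
echo "apparent=$apparent allocated=$allocated"
rm -f "$f"
```

On a filesystem without sparse-file support (as aufs reportedly lacked), the allocated size would be close to the full 1 GiB instead of a few KiB.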
> > > > 
> > > > I ran the same command on my local baremetal Ubuntu dev environment
> > > > (arm64) and it created a new file of the size passed on the command
> > > > line (1GB in this case). It looks like they are not sparse on my end. If
> > > > the result depends on versions and configurations, I would rather err on
> > > > the side of caution and use the smallest possible number that works.
> > > 
> > > Hm, interesting. What filesystem is that on?
> > > 
> > > On my side it's definitely sparse (ext4):
> > > 
> > >     [user@disp8129 Downloads]$ du -sch
> > >     12K   .
> > >     12K   total
> > >     [user@disp8129 Downloads]$ mkfs.ext4 -d . ../domU-rootfs.img 1024M
> > >     mke2fs 1.47.2 (1-Jan-2025)
> > >     Creating regular file ../domU-rootfs.img
> > >     Creating filesystem with 262144 4k blocks and 65536 inodes
> > >     Filesystem UUID: f50a5dfe-4dcf-4f3e-82d0-3dc54a788ab0
> > >     Superblock backups stored on blocks: 
> > >         32768, 98304, 163840, 229376
> > > 
> > >     Allocating group tables: done                            
> > >     Writing inode tables: done                            
> > >     Creating journal (8192 blocks): done
> > >     Copying files into the device: done
> > >     Writing superblocks and filesystem accounting information: done
> > > 
> > >     [user@disp8129 Downloads]$ ls -lhs ../domU-rootfs.img 
> > >     33M -rw-r--r--. 1 user user 1.0G Dec 10 21:45 ../domU-rootfs.img
> > 
> > I went and checked two of the runners, one ARM and one x86, and it looks
> > like they support sparse files both outside and inside containers. They
> > should all have the same configuration, so I think we can assume they
> > support sparse files appropriately.
> > 
> > So it looks like it is OK. But could you please add an in-code comment
> > to highlight that the file created will be sparse?
> 
> Sure.
> 
> > > > > > Moreover, this script will be run inside a container, which means
> > > > > > this data is probably in RAM.
> > > > > 
> > > > > Are runners configured to use tmpfs for /build? I don't think it's the
> > > > > default.
> > > > 
> > > > I don't know for sure, they are just using the default. My goal was to
> > > > make our solution more reliable as defaults and configurations might
> > > > change.
> > > > 
> > > > 
> > > > > > The underlying rootfs is 25M on both ARM and x86. This should be
> > > > > > at most 50M.
> > > > > 
> > > > > Rootfs itself is small, but for driver domains it needs to include
> > > > > the toolstack too, and xen-tools.cpio is over 600MB (for a debug
> > > > > build). I might be able to pick just the parts needed for the
> > > > > driver domain (xl with its deps, maybe some startup scripts,
> > > > > probably a few more files), but it's rather fragile.
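The "xl with its deps" approach could be sketched with ldd. This is a hypothetical illustration (using /bin/sh as a stand-in for xl so it stays self-contained), and it shows why the approach is fragile: it misses dlopen'ed libraries, helper scripts, and runtime data entirely.

```shell
# Copy a binary plus the shared libraries ldd reports into a minimal
# rootfs skeleton. /bin/sh stands in for xl here.
bin=/bin/sh
dest=$(mktemp -d)
mkdir -p "$dest/bin"
cp "$bin" "$dest/bin/"
# ldd lines are either "libfoo => /path/libfoo (addr)" or "/path/ld-linux (addr)".
ldd "$bin" | awk '$2 == "=>" { print $3 } $1 ~ /^\// { print $1 }' \
  | while read -r lib; do
      # Skip pseudo-entries like linux-vdso that have no on-disk path.
      [ -f "$lib" ] && cp --parents "$lib" "$dest/"
    done
ls "$dest/bin"
```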
> > > > 
> > > > My first thought is to avoid creating a 1GB file in all cases when it
> > > > might only be needed for certain individual tests. Now, I realize
> > > > that this script might end up being used only in driver domain tests,
> > > > but if not,
> > > 
> > > Indeed this script is specifically about the driver domains test.
> > > 
> > > > I
> > > > would say use the smallest number depending on the tests, especially
> > > > as there seems to be a huge difference, e.g. 25MB versus 600MB.
> > > > 
> > > > My second thought is that 600MB for just the Xen tools is way too
> > > > large. I have alpine linux rootfs'es with just the Xen tools
> > > > installed that are below 50MB total. I am confused about how we get
> > > > to 600MB. It might be due to QEMU and its dependencies, but still,
> > > > going from 25MB to 600MB is incredible!
> > > 
> > > Indeed it's mostly about QEMU (its main binary itself takes 55MB),
> > > including all bundled firmware etc. (various flavors of edk2 alone
> > > take 270MB). There is also usr/lib/debug, which takes 85MB.
> > > But then, usr/lib/libxen* combined takes almost 50MB.
> > > 
> > > OTOH, non-debug xen-tools.cpio takes "just" 130MB.
> > 
> > Can we use the non-debug xen-tools.cpio 
> 
> I can use the non-debug one. While a debug build of the hypervisor
> changes quite a lot in terms of test output details, the purpose of this
> test is mostly to exercise the toolstack and frontend drivers - and
> there a debug build doesn't change much.
> 
> > and also can we remove all the
> > bundled firmware? Do we really need EDK2, for instance?
> > 
> > I don't think it is worth doing a detailed analysis of what binaries
> > to keep and what to remove, but at least removing the unnecessary
> > in-guest firmware and ideally choosing a non-debug build sounds
> > reasonable?
> 
> Excluding QEMU _for now_ makes sense. But there might be a day when we'd
> like to test QEMU backends in a driver domain and/or a domU booted via
> UEFI (IIUC such a configuration has a PV frontend in EDK2 - at least for
> the disk - and it makes sense to test whether it works with driver
> domains).

Ok, in that case, let's go with excluding QEMU and EDK2. While there
might be cases in the future where one or both are needed, I don't think
it is a good idea to increase the rootfs size for all tests, including
the ones where they are not needed.
