On 9/5/25 12:37, Andreas Hasenack wrote:
> Here are some more thoughts from me. This took me a long time to write,
> read, and rewrite. Consider this me brainstorming on pros and cons. I
> like apparmor and am happy to see a push to confine more applications,
> and thanks for offering a strategy for doing that.

Thank you for taking the time, it is helpful.
>> The number of policies in this package is very large. When no policy cache
>> exists (as on first installation), building it can be very long. Even when
>> a cache exists, loading all policies is not instantaneous.
>
> The upgrade took double the time, and there were no changes. I just
> repeated "dpkg -i" with the same package.

Strange, definitely something to dig into.

> I also noticed the package doesn't use dh_apparmor, so no dh_apparmor
> snippets are in the rendered postinst, maybe that is where some debhelper
> smarts are missing. I didn't investigate further.

Most of the smarts are actually in the parser. With that said, now that the
profiles are split out from the main apparmor package, it does make sense to
look at using dh_apparmor.

>> - It allows the AppArmor team to carefully review profiles, maintain them,
>> and ensure their coherency and how they interact with each other.
>> - It allows decoupling profiles from the application maintainers, who don't
>> necessarily have the necessary AppArmor knowledge.
>
> But I think we would want coupling. Without it, the profile can evolve
> in one direction, and the application in another, and the confinement

Sadly, from experience that happens when coupled too. Coupling has also led to
a different kind of drift, where the upstream profile is being updated
differently from the Ubuntu one. Sometimes that is the Ubuntu profile getting
revised and better suited to Ubuntu, but often the upstream version is
evolving faster. We have also had the problem of Ubuntu package versions being
updated in less than secure ways just to make problems go away. We are
certainly partly to blame for this. There needs to be an active sync
happening, and we need tooling and tracking to make sure it is happening.

> will break. You could have an old version of the profile installed and a
> new version of the application package installed. Users could have

Yes, and again from experience the reverse is also true.
That is, the profile doesn't get updated in the package, but the upstream
profile has been.

> pinned an application package to a specific version. Users could want a
> profile fix from bin:apparmor.d-N+1, but keep another profile shipped in
> bin:apparmor.d-N for another application because in N+1 it broke their
> use case.

Yes. A very valid point.

>> - That also allows updating profiles without needing to update the
>> application package.
>
> Conversely, you would be updating a package that ships 1500 profiles.
> You would be fixing a bug for one profile, and could be introducing a
> bug in another (bugs happen).

Possible, but much less likely than the reverse. With profiles, bugs tend to
stay localized to the given profile, unless the bug is introduced in the
abstractions/tunables, or in cross-domain rules. Abstraction/tunable bugs will
affect all profiles, whether packaged together or separately. The cross-domain
rules case strongly favors updating the profiles together.

> I think at the core I have two objections to this whole approach:
>
> 1) all profiles loaded even when not needed, leading to the problems in
> comment #6. You explained several optimizations, but to me the best
> optimization is to not load what is not needed :)

Fair point. With that said, the outlined optimizations are still needed. Even
if every profile was split out into the various packages, you would still have
100s of profiles to load.

> 2) decoupling with the application: high risk of the profile being meant for
> one version of the app, but a later one has different requirements that do
> not match the profile anymore. This discrepancy looks easier and quicker to
> catch if the profile is together with the application. The risk of updates
> to this single-package approach also seems much higher.

Yes, there is more risk here. Experience has shown this to not be as much of a
problem as one might fear, and that coupling has caused its own drift
problems.
Profiles really don't exist in isolation anymore, especially as more of the
system becomes confined. Cross-domain interactions favour moving policy as a
unit. Application updates obviously favour keeping the profile with the
application.

Another variable not discussed is confinement models. Switching confinement
models again favours keeping policy as a unit, or at least tightly synced, due
to cross-domain interactions. Are we going to be using different confinement
models? Yes, we really need to move towards this. There will be standard
confinement, a looser classic/developer environment, and even more restrictive
secure or MLS-style environments. Using things like conditionals we can
certainly still split profiles out into various application packages, but
switching between confinement models does favour keeping policy together.

The reality is there will have to be some kind of mix, and we need to figure
out how best to keep the profiles in sync: making sure updates are flowing
into upstream and from upstream back into Ubuntu, and, where appropriate,
having the Ubuntu versions keep a delta, etc.

> Now, you make a good point about package maintainers not necessarily
> having the apparmor knowledge, or even a desire to confine their
> application. Us suddenly injecting an apparmor profile into their

Yeah, we have had very bad luck with this over the years.

> package is rude and disruptive. And we would also have potentially up to
> 1500 new delta pieces added to debian packages.

Yep.

> How can we crack this nut?

A hybrid approach, with better and more tooling. I wouldn't say no to a lot
more people working on the problem either ;)

> Have you guys thought of ways to still ship all profiles in a separate
> binary package, but not load them unless they are needed? Unless the
> application they are meant to confine is installed? Can we play some

Yes. There are a couple of ways to do this.
Probably the easiest is setting disable symlinks, but having a local tunables
directory where a boolean file can be dropped in also works. Possible, but not
as good, solutions involve installing the profiles to an alternate source
location and having packages either copy them, or make symlinks to them, when
the package is installed.

> tricks with triggers?

Indeed, triggers are interesting. We were already looking at them to trigger
policy compiles on kernel install, so that we can hopefully have policy cached
at boot. But I hadn't considered using them to enable/disable profiles for a
given package.

> I guess similar problems and discussions were had in the past about the
> kernel modules package (we have two binary packages for kernel modules
> IIRC), and linux-firmware (which also installs a whole bunch of binary
> blobs regardless if you have that hardware or not: you *could* have it
> in the future). But none of these are loaded by default: they are just
> files available on disk, in case they are needed.

Right.

> Some other thoughts:
>
> a) a promotion plan: what happens once a profile matures, and can be shipped
> with the application? What are the conditions? What packaging changes will
> be needed then? We will have to add careful breaks/replaces, following
> https://wiki.debian.org/PackageTransition to avoid conflicts like in comment
> #5

We don't have a good metric to determine when a profile is mature, but it will
certainly be tied to bugs/feedback and how often it is updated. The biggest
condition is the support of the package maintainer. Other conditions would be
around to what degree a profile/package is a leaf or a node (something with
lots of dependencies and cross-domain interactions). Profiles moving out of
the main src/binary are going to need an annotation about which package
installs them. It might be possible that the breaks/replaces could serve as
the annotation.
Ideally the upstream version of the profile will remain in the source tarball,
and when a profile is moved, this entails dropping it from the install list
and adding the necessary breaks/replaces. The other part I want is some
tooling we can run that checks the source versions that are not being
installed against the profiles that have been split out into other packages,
so that we can periodically (maybe on every update of the source package) run
a sync, use that to feed updates back to upstream, and submit updates (when
needed) to the other packages.

> b) or is the plan to always ship the profile in the distro via
> bin:apparmor.d, to be available in case the application package is
> installed, and never ship it in the application package itself? Counting on
> the fact that the current installation times can be made faster and have it
> consume less memory?

This is a possibility, though probably in a slightly more flexible
incarnation, and ideally still not loading profiles that we don't need to.
The reality is optimizations are only going to get us so far: loading anything
that isn't needed takes more cpu and memory. There are practicalities to
consider, but we should be working towards the best we can achieve within the
constraints we have.

In this scenario we would use an overlay, something we want to introduce
anyway. The overlay would become something like local profiles : application
packaged profiles : base apparmor profiles. This would allow us to install a
base source profile and still allow applications to install profiles. Ideally
we would still be coordinating/syncing between the application and the
apparmor-packaged version of a profile, but giving an overlay layer to
packaging opens up some flexibility. To disable profiles, either disable
symlinks or whiteouts in the overlay could be used. I am not sure which is the
best mechanism within the Ubuntu/debian packaging, but I would assume we could
use either triggers or dh_apparmor depending on the situation.
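To make the disable-symlink mechanism mentioned above concrete, here is a toy
sketch you can run against a scratch directory. It is not the real init code:
the profile name is made up, and the loop only emulates the skip check the
AppArmor init glue performs when walking /etc/apparmor.d.

```shell
# Toy sketch of the disable-symlink mechanism, using a scratch directory so it
# can be tried without touching the real /etc/apparmor.d.
APPARMOR_D=$(mktemp -d)
mkdir -p "$APPARMOR_D/disable"
printf 'profile example /usr/bin/example {\n}\n' > "$APPARMOR_D/usr.bin.example"

# Disabling a shipped profile = symlinking it into disable/
ln -s "$APPARMOR_D/usr.bin.example" "$APPARMOR_D/disable/usr.bin.example"

# Emulate the check performed when walking the profile directory: anything
# with a matching entry under disable/ is skipped rather than loaded.
for p in "$APPARMOR_D"/*; do
    [ -f "$p" ] || continue
    name=$(basename "$p")
    if [ -e "$APPARMOR_D/disable/$name" ]; then
        echo "Skipping profile in $APPARMOR_D/disable: $name"
    else
        echo "Would load: $name"
    fi
done
```

This is the same check that produces the "Skipping profile in
/etc/apparmor.d/disable: ..." messages in the installation logs quoted later
in this bug; a dpkg trigger or a dh_apparmor-generated maintainer-script
snippet could create or remove such symlinks as application packages come and
go.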
> c) testing plan: how can the profiles from src:apparmor.d be tested? We
> would have to have an autopkgtest in src:apparmor.d for package bin:FOO that
> would install *both* bin:apparmor.d and bin:FOO, and from your comments
> looks like that fails due to OOM, going back to the optimization problem.

Yes, they can be tested. We have been running autopkgtests for the profiles in
apparmor.d, and yes, there is an OOM issue for some packages with the current
defaults. The OOMs can be dealt with by increasing memory, or by not loading
the whole profile set for a given test. We are at a painful, less than ideal
stage atm, but with a combination of optimizations and packaging work we
should be able to fix the OOM issues.

> d) what about more restricted systems like raspberry PIs, are they out of
> scope for this package, at this stage?

Definitely opt in to the pain atm. Long term they are in scope, but we need to
address the core issues first. Hopefully the current set of optimizations plus
getting the packaging sorted out will be enough for raspberry PIs. But some
restricted systems really are going to need more work: more optimization, new
options for dividing policy, shipping precompiled policy (another form of
optimization not previously discussed).

> e) What will SRUs look like for src:apparmor.d? How many profiles would you
> be updating in one go? How many applications would have to be tested
> separately?

Ideally a small set. SRUs are painful. Doing an SRU per profile that needs to
be updated would just be crazy, but packing too much into an SRU is also bad.
Testing would depend on which profiles are being updated. For a leaf
profile/application you might get away with testing just a single application.
But if we update something more core, say systemd (as a worst-case scenario,
which isn't actually confined by the current apparmor.d packaging), there
would have to be extensive testing.
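On the testing plan in (c): a stanza along these lines in a hypothetical
debian/tests/control for src:apparmor.d could exercise one confined
application with the full profile set installed. The test name and the "foo"
package are placeholders, not anything that exists today.

```
# Hypothetical debian/tests/control stanza; "confined-foo" and "foo" are
# placeholders for a per-application test and the application package.
Tests: confined-foo
Depends: apparmor.d, foo
Restrictions: needs-root, isolation-machine, allow-stderr
```

isolation-machine is probably warranted because loading policy affects the
whole kernel, and per-application stanzas like this would let a test avoid
loading the full profile set, which is one way around the OOM issue.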
> f) What happens if I have a host spawning dozens of LXD containers, and all
> those containers install bin:apparmor.d? "Don't do it"? :)

Indeed, atm this is opt into the pain. Long term, another optimization comes
into play. The kernel load does a dedup check. Currently this just drops
duplicate loads, saving on the whole replacement dance. However, we are doing
fairly fine-grained reference counting in the kernel with an eye towards
sharing between different profiles. We are close to the point where profiles
loaded together will be able to share components. Once we get there, dedup can
be extended, and the container could pick up a reference count to units
already loaded.

> I also understand this is following an upstream project, which has all
> these profiles in a git repository/tarball, and having one source debian
> package mimicking that makes sense. But even with optimizations, unless
> they are really fantastic, I don't see right now what this will look

Well, cumulatively they will be. I have some educated guesses, and size will
see a bigger improvement than time, but any one increment won't be enough.
It's going to take some time, and like you said, the best optimization is
just don't load it if it isn't used.

> like in the long term.

So I think distro packaging is different than the upstream source. As a
distro we do packaging however makes best sense for us. This may mean more
packaging work on our end. With all that said, mimicking the upstream
packaging here was deliberate, but only as a first step. The upstream
packaging philosophy is trying to provide a base for as many distros as
possible with as little work for the upstream as possible. A distro willing
to put some work in can and should improve the packaging and make it fit the
distro's needs. Hopefully some of the work we do on the Ubuntu and even
debian side can be fed back into the upstream side to improve its work as
well.
> Now, I'm not the final word on this. This just appeared on my radar for

No, but your feedback is very important. You come at it from a different
angle than we do, which gives a different set of data points to consider.

> sponsorship reasons, and I have a passion for application confinement,
> having written some apparmor profiles in the recent past. I truly
> welcome others to join the discussion, and have no objections to be
> proven wrong.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apparmor in Ubuntu.
https://bugs.launchpad.net/bugs/2121409

Title:
  [FFE] add a new apparmor.d package containing several apparmor profiles

Status in apparmor package in Ubuntu:
  Triaged

Bug description:
  ## FFE ##

  This is a Feature Freeze Exception request for questing for the apparmor
  package and for a new source package called apparmor.d:

  I'd like to add a new source package called apparmor.d which contains over
  1500 profiles from the upstream project apparmor.d [1]

  These profiles will be added in "complain" mode, which means that for a
  given action, if the profile rules do not grant permission, the action will
  still be allowed, but the violation will be logged with a tag marking the
  access as ALLOWED. This is done because we want to test these profiles and
  enable others to test and add new rules to eventually improve the profiles.

  By adding these profiles in a new package which is not installed by
  default, regular users will not be affected, but users who would like to
  test and contribute to the profiles can install it.

  We want to add these profiles, even in complain mode, as a new package (and
  not part of the apparmor package) because labeling certain binaries could
  cause issues with existing policy, especially those that use "peer".
  Additionally, the large number of profiles does take a while for the parser
  to compile on the first boot.
  After that, a cached version of the profiles can be loaded directly into
  the kernel by the parser, which takes considerably less time. Note again
  that apparmor.d will not be installed by default, so this will only affect
  users that choose to install it.

  The benefit of this change is the ability to increase the amount of testing
  for these profiles, which will then enable us to eventually ship them in
  enforce mode. More profiles means more confined applications, which could
  lead to higher security. This is the first step towards that.

  This FFE also includes the apparmor package because we want to change its
  suggestion from the apparmor-profiles-extra package, which is no longer
  maintained and will be deprecated in the future, to the new apparmor.d.

  This is the PPA containing a built version of apparmor and apparmor.d:
  https://launchpad.net/~georgiag/+archive/ubuntu/apparmor.dinapparmor5/

  These are the installation logs:

  georgia@sec2-questing-amd64:~/qrt-test-apparmor$ sudo apt install apparmor.d
  The following packages were automatically installed and are no longer required:
    apg libllvm19 linux-headers-6.15.0-3-generic xbitmaps cpp-14 libopengl0
    linux-modules-6.15.0-3-generic xinit cpp-14-x86-64-linux-gnu libsframe1
    linux-tools-6.15.0-3 xorg gcc-14-base libxcb-damage0
    linux-tools-6.15.0-3-generic libclang1-19 libxkbcommon-x11-0 x11-apps
    libglu1-mesa linux-headers-6.15.0-3 x11-session-utils
  Use 'sudo apt autoremove' to remove them.
  Upgrading: apparmor
  Installing: apparmor.d
  Summary: Upgrading: 1, Installing: 1, Removing: 0, Not Upgrading: 86
  Download size: 1,116 kB
  Space needed: 3,418 kB / 6,269 MB available
  Continue? [Y/n]
  WARNING: The following packages cannot be authenticated! apparmor apparmor.d
  Install these packages without verification?
  [y/N] y
  Get:1 http://192.168.122.1/debs/testing questing/ apparmor 5.0.0~alpha1-0ubuntu5 [853 kB]
  Get:2 http://192.168.122.1/debs/testing questing/ apparmor.d 0.015-1ubuntu1 [264 kB]
  Fetched 1,116 kB in 0s (20.6 MB/s)
  Preconfiguring packages ...
  (Reading database ... 240702 files and directories currently installed.)
  Preparing to unpack .../apparmor_5.0.0~alpha1-0ubuntu5_amd64.deb ...
  Unpacking apparmor (5.0.0~alpha1-0ubuntu5) over (5.0.0~alpha1-0ubuntu4) ...
  Selecting previously unselected package apparmor.d.
  Preparing to unpack .../apparmor.d_0.015-1ubuntu1_amd64.deb ...
  Unpacking apparmor.d (0.015-1ubuntu1) ...
  Setting up apparmor (5.0.0~alpha1-0ubuntu5) ...
  Installing new version of config file /etc/apparmor.d/hostname ...
  Reloading AppArmor profiles
  Skipping profile in /etc/apparmor.d/disable: brave
  Skipping profile in /etc/apparmor.d/disable: chrome
  Skipping profile in /etc/apparmor.d/disable: chromium
  Skipping profile in /etc/apparmor.d/disable: dig
  Skipping profile in /etc/apparmor.d/disable: element-desktop
  Skipping profile in /etc/apparmor.d/disable: epiphany
  Skipping profile in /etc/apparmor.d/disable: firefox
  Skipping profile in /etc/apparmor.d/disable: flatpak
  Skipping profile in /etc/apparmor.d/disable: foliate
  Skipping profile in /etc/apparmor.d/disable: free
  Skipping profile in /etc/apparmor.d/disable: fusermount3
  Skipping profile in /etc/apparmor.d/disable: hostname
  Skipping profile in /etc/apparmor.d/disable: locale
  Skipping profile in /etc/apparmor.d/disable: loupe
  Skipping profile in /etc/apparmor.d/disable: lsblk
  Skipping profile in /etc/apparmor.d/disable: lsusb
  Skipping profile in /etc/apparmor.d/disable: msedge
  Skipping profile in /etc/apparmor.d/disable: nslookup
  Skipping profile in /etc/apparmor.d/disable: openvpn
  Skipping profile in /etc/apparmor.d/disable: opera
  Skipping profile in /etc/apparmor.d/disable: os-prober
  Skipping profile in /etc/apparmor.d/disable: plasmashell
  Skipping profile in /etc/apparmor.d/disable: signal-desktop
  Skipping profile in /etc/apparmor.d/disable: slirp4netns
  Skipping profile in /etc/apparmor.d/disable: steam
  Skipping profile in /etc/apparmor.d/disable: systemd-coredump
  Skipping profile in /etc/apparmor.d/disable: systemd-detect-virt
  Skipping profile in /etc/apparmor.d/disable: thunderbird
  Skipping profile in /etc/apparmor.d/disable: transmission
  Skipping profile in /etc/apparmor.d/disable: unix-chkpwd
  Warning: found usr.sbin.sssd in /etc/apparmor.d/force-complain, forcing complain mode
  Warning from /etc/apparmor.d (/etc/apparmor.d/usr.sbin.sssd line 69): Caching disabled for: 'usr.sbin.sssd' due to force complain
  Skipping profile in /etc/apparmor.d/disable: virtiofsd
  Skipping profile in /etc/apparmor.d/disable: wg
  Skipping profile in /etc/apparmor.d/disable: wg-quick
  Skipping profile in /etc/apparmor.d/disable: who
  Setting up apparmor.d (0.015-1ubuntu1) ...
  Processing triggers for systemd (257.7-1ubuntu3) ...
  Processing triggers for man-db (2.13.1-1) ...
  Processing triggers for procps (2:4.0.4-8ubuntu2) ...
  georgia@sec2-questing-amd64:~/qrt-test-apparmor$ systemctl status apparmor
  ● apparmor.service - Load AppArmor profiles
       Loaded: loaded (/usr/lib/systemd/system/apparmor.service; enabled; preset: enabled)
       Active: active (exited) since Fri 2025-08-29 12:09:41 -03; 21min ago
   Invocation: 7acd3f71e5084f50a7893334f2c2addf
         Docs: man:apparmor(7)
               https://gitlab.com/apparmor/apparmor/wikis/home/
      Process: 13802 ExecReload=/lib/apparmor/apparmor.systemd reload (code=exited, status=0/SUCCESS)
     Main PID: 535 (code=exited, status=0/SUCCESS)
     Mem peak: 156.1M (swap: 268K)
          CPU: 5min 18.046s

  Aug 29 12:29:57 sec2-questing-amd64 apparmor.systemd[15293]: Skipping profile in /etc/apparmor.d/d>
  Aug 29 12:30:02 sec2-questing-amd64 apparmor.systemd[15328]: Skipping profile in /etc/apparmor.d/d>
  Aug 29 12:30:05 sec2-questing-amd64 apparmor.systemd[15373]: Skipping profile in /etc/apparmor.d/d>
  Aug 29 12:30:08 sec2-questing-amd64 apparmor.systemd[15437]: Warning: found usr.sbin.sssd in /etc/>
  Aug 29 12:30:08 sec2-questing-amd64 apparmor.systemd[15437]: Warning from /etc/apparmor.d (/etc/ap>
  Aug 29 12:30:13 sec2-questing-amd64 apparmor.systemd[15456]: Skipping profile in /etc/apparmor.d/d>
  Aug 29 12:30:19 sec2-questing-amd64 apparmor.systemd[15483]: Skipping profile in /etc/apparmor.d/d>
  Aug 29 12:30:19 sec2-questing-amd64 apparmor.systemd[15484]: Skipping profile in /etc/apparmor.d/d>
  Aug 29 12:30:19 sec2-questing-amd64 apparmor.systemd[15492]: Skipping profile in /etc/apparmor.d/d>
  Aug 29 12:30:31 sec2-questing-amd64 systemd[1]: Reloaded apparmor.service - Load AppArmor profiles.
  For testing, I ran the QA Regression Tests [2]:

  Steps:
  $ git clone https://git.launchpad.net/qa-regression-testing
  $ ./scripts/make-test-tarball ./scripts/test-apparmor.py
  Copying: test-apparmor.py
  Copying: testlib.py
  Copying: install-packages
  Copying: packages-helper
  Copying: apparmor/
  Test files: /tmp/qrt-test-apparmor.tar.gz

  To run, first install the apparmor.d package introduced in this FFE, then
  copy the tarball somewhere, then do:

  $ tar -zxf qrt-test-apparmor.tar.gz
  $ cd ./qrt-test-apparmor
  $ sudo ./install-packages test-apparmor.py
  $ ./test-apparmor.py -v

  This script runs various tests against the installed apparmor package.

  The result was:

  FAILED: disconnected_mount_complain socketpair
  make: *** [Makefile:487: alltests] Error 1
  ----------------------------------------------------------------------
  Ran 62 tests in 3949.185s

  FAILED (failures=1, skipped=4)

  Note that these failures are not related to the apparmor.d package and are
  also reproducible with apparmor version 5.0.0~alpha1-0ubuntu4 from the
  archive.

  [1] https://github.com/roddhjav/apparmor.d
  [2] https://git.launchpad.net/qa-regression-testing/tree/scripts/test-apparmor.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2121409/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp

