On Tue, Apr 07, 2020 at 02:33:20PM +0200, Pino Toscano wrote:
> On Tuesday, 7 April 2020 14:18:47 CEST Richard W.M. Jones wrote:
> > On Tue, Apr 07, 2020 at 01:25:02PM +0200, Pino Toscano wrote:
> > > The important thing is still that you need to have space for the
> > > temporary files somewhere: be it /var/tmp, /mnt/scratch, whatever.
> > > Because of this, and the fact that usually containers are created
> > > fresh, the cache of the supermin appliance starts to make little
> > > sense, and then a very simple solution is to point libguestfs to
> > > that extra space:
> > >
> > > $ dir=$(mktemp -d /path/to/big/temporary/space/libguestfs.XXXXXX)
> > > $ export LIBGUESTFS_CACHEDIR=$dir
> > > $ export TMPDIR=$dir  # optionally
> > > $ libguestfs-test-tool
> > > [etc]
> > > $ rm -rf $dir
> > >
> > > Easy to use, already doable, solves all the issues.
> >
> > So AIUI there are a few problems with this (although I'm still
> > investigating and trying to make a local reproducer):
> >
> > - The external large space may be on NFS, with the usual problems
> >   there like root_squash, no SELinux labels, slowness.  This means
> >   it's not suitable for the appliance, but might nevertheless be
> >   suitable for overlay files.
>
> If that is the only big storage space attached to a container, I do
> not see any alternative than to use it, with all the caveats
> associated.
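To make Pino's suggestion above a bit safer against abrupt exits (the
theme of this thread), the per-instance directory can be removed from a
trap. This is only a sketch: $SCRATCH stands in for whatever big mount
the container actually has, and I have not tested it against virt-v2v
itself.

```shell
#!/bin/sh
# Sketch: per-instance libguestfs scratch directory with cleanup.
# $SCRATCH is a placeholder for the big mount; default to /var/tmp.
scratch=${SCRATCH:-/var/tmp}

# One private directory per instance, so parallel containers sharing
# the same mount cannot step on each other's cache.
dir=$(mktemp -d "$scratch/libguestfs.XXXXXX")

# Remove the directory even if the process dies mid-run.
trap 'rm -rf "$dir"' EXIT INT TERM

export LIBGUESTFS_CACHEDIR="$dir"
export TMPDIR="$dir"   # optionally

# The real workload would run here, e.g.:
if command -v libguestfs-test-tool >/dev/null 2>&1; then
    libguestfs-test-tool
fi
```

This does not help when the whole container is killed, but it at least
covers normal failures of the tool itself.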
I have to agree with this. You cannot avoid situations where the
appliance is on a network drive. If there are any inherent risks, the
best you can do is let the user know about them (documentation?).

> Also, if we take as a possible scenario the situation where /var/tmp
> is not that big, then we need to consider that it may not be big
> enough to even store the cached supermin appliance (which is more
> than 300/350MB).
>
> > - The external large space may be shared with other containers, and
> >   I'm not convinced that our locking in supermin will be safe if
> >   multiple parallel instances start up at the same time.  We
> >   certainly never tested it, and don't currently advise it.
>
> That's why my suggestion above creates a specific temporary directory
> for each container: even with a shared /var/tmp, the caches will not
> step on each other's toes. This is something that this separate
> cachedir for virt-v2v does not solve at all.

Currently we don't share the temporary volume between instances in
KubeVirt, but the assumption that this can happen is valid and should
be considered.

> > > This whole problem started from a QE report on leftover files
> > > after failed migrations: bz#1820282.
> >
> > (I should note also there are two bugs which I personally think we
> > can solve with the same fix, but they are completely different
> > bugs.)
>
> I still do not understand how these changes have anything to do with
> bug 1814611, which in an offline discussion we found out has mostly
> two causes:
> - the way the NFS storage is mounted over /var/tmp in the container,
>   so what you create as root does not really have the expected
>   UID/GID
> - the fixed appliance in the container was not actually used, and
>   thus a rebuild of the supermin appliance was attempted, failing
>   due to the first issue

I am still not convinced this is the case. Based on the logs I shared
in private email, I still believe that the fixed appliance was used
properly.
You assumed that the appliance is not used because the cache directory
is being created, but as I also pointed out, the cache directory is
created in all situations because qemu temporary files are stored
there [1][2].

> Can you please explain to me exactly how switching the location of
> temporary files (which were not mentioned in the bug at all) will fix
> this situation?

For the particular BZ 1814611: if we keep the fixed appliance, we can
move the cache directory to somewhere other than /var/tmp (just for
the qemu files), and keep /var/tmp only for the overlay files. But I
guess this is more of a hack than the intended usage.

> > > What this report doesn't say, however, is that besides the
> > > mentioned files that virt-v2v creates, there are also leftover
> > > files that libguestfs itself creates.  These files are usually
> > > downloaded from the guest for the inspection, and generally not
> > > that big compared to e.g. the overlays that virt-v2v creates.
> > > Nonetheless, an abrupt exit of virt-v2v will leave those in
> > > place, and they will still slowly fill up the space on /var/tmp
> > > (or whatever is the location of $LIBGUESTFS_CACHEDIR).
> >
> > I guess that small files being left around aren't really a problem.
> > The problem they have is large files being left around, and I can
> > understand why that would be an issue and not the small files.
>
> Nobody is saying that the leftover files are not a problem.  I'm
> saying that also the small files are a sort of problem -- sure, less
> critical, however still there and ready to show up any time,
> especially if the concern is the space of /var/tmp.

Isn't that the reason why things like tmpreaper exist?
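For the small leftover files, something as crude as the following
could stand in for tmpreaper until a proper fix lands. The name
pattern and age threshold are assumptions on my part, not something
libguestfs guarantees; check what your build actually creates under
/var/tmp before running it.

```shell
# Crude tmpreaper stand-in: remove libguestfs temporary directories
# under /var/tmp that have not been modified for more than a day.
# The 'libguestfs*' pattern and the +1 age are assumptions; adjust
# them to what your libguestfs build leaves behind.
find /var/tmp -maxdepth 1 -type d -name 'libguestfs*' -mtime +1 \
    -exec rm -rf {} +
```

Run from cron (or a Kubernetes CronJob in the KubeVirt case), this
would keep the slow accumulation of small files in check.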
Tomas

[1] https://github.com/libguestfs/libguestfs/blob/c2c11382bbeb4500f3388a31ffd08cfc18b0de40/lib/qemu.c#L162
[2] https://github.com/libguestfs/libguestfs/blob/c2c11382bbeb4500f3388a31ffd08cfc18b0de40/lib/launch-direct.c#L417

_______________________________________________
Libguestfs mailing list
Libguestfs@redhat.com
https://www.redhat.com/mailman/listinfo/libguestfs