Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-13 Thread Johannes Schauer Marin Rodrigues
Hi,

Quoting Barak A. Pearlmutter (2024-05-13 10:47:43)
> > I'd like to hear some arguments *in favour* of making this change.
> > Alignment with systemd-upstream, reduced package maintenance burden
> > are two that I can think of, but perhaps I've missed more. These two,
> > IMHO, are significantly outweighed by the risks.
> Let me see if I understand the arguments being made in favor.

thank you, I'd also like to understand them.

> 1. Compatibility with upstream. This means all the upstream logic is sort of
>imported by reference, so the below is mainly the upstream logic, as I
>understand it.

Yes, I also think that there is value in doing the same thing that
upstream or other distros are doing. There is a cost if Debian decides
to deviate from what others have decided should be the default.

> 2. Defend the system against buggy programs that leave debris in /var/tmp/,
>and against debris left there when programs are terminated prematurely.
>These are programs which use /var/tmp/ internally, but not as part of
>their API, so the user would have no particular way of knowing that they
>are leaving things there, would have no particular reason to check for and
>delete such files, and might not be able to easily recognize which files
>should be removed.

But I do not understand this as an advantage. In my mind it is quite the
opposite. Buggy programs which leave files in /tmp will now have their bugs
go unnoticed because the files get cleaned up by systemd. On the other
hand, we are now introducing new bugs into programs which should take an
flock on their temporary directory but do not do so yet. Imagine I had not
read debian-devel. How would I, as the mmdebstrap author, have noticed that
my tool, as a user of /tmp, should set up this flock? I can imagine the bug
report of somebody who has a weird problem with the chroot but is somehow
unable to reproduce it because it depends on the cleanup timing. Are the
bugs we are introducing by regularly cleaning up /tmp not potentially super
hard to diagnose, and might they thus just never get fixed? Is there an
effort to identify programs we ship that make long-lasting use of /tmp and
to file bugs so that they take an flock? And at the same time, we are now
ignoring the bugs in programs that leave files in /tmp and forget to clean
them up. Is this not a disadvantage of this change rather than an advantage?
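For concreteness, here is a minimal sketch of what every affected program would now have to do. It assumes the flock-based opt-out discussed in this thread (i.e. that the age-based cleanup skips directories held under a BSD flock); the tool name and paths are illustrative only:

```shell
# Hypothetical sketch: a tool like mmdebstrap would have to hold a BSD
# flock(2) on its scratch directory so that age-based /tmp cleanup
# (assumed to honor such locks) skips it while work is in progress.
workdir=$(mktemp -d /tmp/mytool.XXXXXX)
(
    flock 9        # take an exclusive lock, held while this subshell runs
    # ... long-running work inside "$workdir" goes here ...
) 9<"$workdir"     # open the directory itself to obtain a lockable fd
```

The lock is tied to the open file descriptor, so it is released automatically when the subshell exits, however the program terminates.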

> I looked at the upstream bug report
> https://github.com/systemd/systemd/issues/32674 which suggests that deletion
> of directory trees in /var/tmp/ be atomic, and trigger only when everything
> in the tree meets criteria for deletion. I added a comment suggesting that
> the policy be tweaked in two ways. (a) Use system-up time rather than
> wall-clock time for measuring file age, to address the "suspending or
> shutting down for over 30 days breaks running data processing scripts that
> uses /var/tmp/ for intermediate files" issue. I sort of have an invariant in
> my head, which is that suspending the computer doesn't break things, and also
> the whole point of /var/tmp/ is that files there are preserved across boots.
> And (b) check if a file is open by some process, the same way fuser(1) does,
> and if so, don't delete it. Could also preserve directories which are the
> current directory of some process, if you want to be even more user friendly.
> 
> The only response I got was "don't use temporary directories for things that
> you cannot afford to lose and recreate", which I don't really understand.
> It's like saying "you should have good backups, so it's not a problem to just
> delete anything in /home older than two days". Bottom line, it's not clear to
> me that upstream has really thought this through with users in mind. I'd
> suggest that Debian may wish to tweak the defaults on this stuff pretty hard
> to be more user friendly.

Thank you; the suspend issue is yet another problem created by this change.
If we want to weigh cost against benefit, do the benefits really outweigh
the cost? How costly is it to carry a patch in Debian and deviate from
upstream, versus all the problems that participants of this thread have now
listed? My gut feeling is that the cost of these hard-to-debug problems is
far greater than that of continuing to deviate from upstream and carry a
Debian-specific patch, no?

> - as discussed earlier, add /tmp/00-README and /var/tmp/00-README to explain
> this old-file-deletion policy

I think this is a really good idea.

Thanks!

cheers, josch



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-13 Thread Barak A. Pearlmutter
> I'd like to hear some arguments *in favour* of making this change.
> Alignment with systemd-upstream, reduced package maintenance burden
> are two that I can think of, but perhaps I've missed more. These two,
> IMHO, are significantly outweighed by the risks.

Let me see if I understand the arguments being made in favor.

1. Compatibility with upstream. This means all the upstream logic is
sort of imported by reference, so the below is mainly the upstream
logic, as I understand it.

2. Defend the system against buggy programs that leave debris in
/var/tmp/, and against debris left there when programs are terminated
prematurely. These are programs which use /var/tmp/ internally, but
not as part of their API, so the user would have no particular way of
knowing that they are leaving things there, would have no particular
reason to check for and delete such files, and might not be able to
easily recognize which files should be removed.

3. Defend the system against forgetful users, who create files in
/var/tmp/ but neglect to delete them when they're no longer needed.

Unfortunately 2 & 3 cannot be mechanically distinguished, so even if
you wanted to have separate policies for these two classes of files,
it's not really technically possible. So upstream is optimizing for
case (2), and suggests that /var/tmp/ be sort of reserved for programs
and scripts that are aware of this policy, with users not manually
creating files there.

I hope that's an accurate characterization.

I looked at the upstream bug report
https://github.com/systemd/systemd/issues/32674 which suggests that
deletion of directory trees in /var/tmp/ be atomic, and trigger only
when everything in the tree meets criteria for deletion. I added a
comment suggesting that the policy be tweaked in two ways. (a) Use
system-up time rather than wall-clock time for measuring file age, to
address the "suspending or shutting down for over 30 days breaks
running data processing scripts that uses /var/tmp/ for intermediate
files" issue. I sort of have an invariant in my head, which is that
suspending the computer doesn't break things, and also the whole point
of /var/tmp/ is that files there are preserved across boots. And (b)
check if a file is open by some process, the same way fuser(1) does,
and if so, don't delete it. Could also preserve directories which are
the current directory of some process, if you want to be even more
user friendly.

The only response I got was "don't use temporary directories for
things that you cannot afford to lose and recreate", which I don't
really understand. It's like saying "you should have good backups, so
it's not a problem to just delete anything in /home older than two
days". Bottom line, it's not clear to me that upstream has really
thought this through with users in mind. I'd suggest that Debian may
wish to tweak the defaults on this stuff pretty hard to be more user
friendly.

Here are a couple of cheap tweaks I'd suggest. My hope is that these
would avoid some of the worst-case scenarios discussed, while still
satisfying the goals 1/2/3 above, and be super easy to implement.

- lengthen the reap time for /var/tmp/ to eight weeks, since Europeans
often take six-week vacations.

- make a "tempdir" command, philosophically similar to tempfile in
debianutils, which creates a fresh directory in /var/tmp/ and drops
the user into a shell with that as the current directory, with the
directory flocked until that subshell terminates. It could take an
optional directory argument to lock, so you can get back into a
directory, and maybe options to run a program there before or instead
of the subshell.

- following a resume-from-suspend or boot, shut off the
delete-old-files-in-var-tmp mechanism for a while, maybe eight hours
or something like that. Maybe shorten the delay if it doesn't get to
run across multiple resume-or-boot cycles for a week or two.

- as discussed earlier, add /tmp/00-README and /var/tmp/00-README to
explain this old-file-deletion policy
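The "tempdir" idea in the second bullet could be sketched roughly as follows. This is purely hypothetical (no such debianutils tool exists), and it assumes the cleanup policy honors flock; the directory template and option handling are illustrative:

```shell
# Hypothetical "tempdir" helper: create (or reuse, via an optional first
# argument) a scratch directory, hold an flock on it for the whole
# session, and run a given command (or an interactive shell) inside it.
tempdir() {
    dir=${1:-$(mktemp -d /var/tmp/tempdir.XXXXXX)}
    if [ "$#" -gt 0 ]; then shift; fi   # remaining args: command to run
    (
        flock 9                 # lock held until this subshell exits
        cd "$dir" || exit 1
        if [ "$#" -gt 0 ]; then
            "$@"                # run the given command in the directory
        else
            "${SHELL:-/bin/sh}" # or drop the user into a shell there
        fi
    ) 9<"$dir"
}
```

Invoked as `tempdir` it creates and locks a fresh directory; `tempdir /var/tmp/existing.dir` would let you get back into (and re-lock) an earlier one.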



Re: Any volunteers for lintian co-maintenance?

2024-05-13 Thread Andrius Merkys

Hi Nilesh,

On 2024-05-10 21:04, Nilesh Patra wrote:

On Fri, May 10, 2024 at 05:58:24PM +0300, Andrius Merkys wrote:

Do you mean bugs on bugs.d.o, or are there other issues?


As you may have seen in the other emails, there are performance issues. Other
than that, there are 2 open RC bugs right now (fixed in salsa but not uploaded,
as I don't make uploads for lintian). A pile of MRs, filed at many points in
time, are pending review.


Thanks for the explanation.


I personally feel motivated to implement new features in lintian or go after
low hanging fruits, but I am somewhat driven away by the need to understand
lintian's internals. Is there a documentation on the data/control flow, or
class diagrams? This would help me a lot.


Not that I know of; I suppose Axel/Bastian may be able to answer this.


I see. I believe documentation would help attract more volunteers.
AFAIK, lintian's code is clearly written and nicely organised, and that
is enough for fixing localised issues or introducing new features
similar to already existing ones. However, before making larger changes
one has to familiarise oneself with the code, and that takes time,
especially for people not well-versed in Perl.


Best wishes,
Andrius



Re: De-vendoring gnulib in Debian packages

2024-05-12 Thread Theodore Ts'o
On Sun, May 12, 2024 at 04:27:06PM +0200, Simon Josefsson wrote:
> Going into detail, you use 'gzip -9n' but I use git-archive defaults
> which is the same as -n aka --no-name.  I agree adding -9 aka --best is
> an improvement.  Gnulib's maint.mk also add --rsyncable, would you agree
> that this is also an improvement?

I'm not convinced --rsyncable is an improvement.  It makes the
compressed object slightly larger, and in exchange, if the compressed
object changes slightly, it's possible that when you rsync the changed
file, it might be more efficient.  But in the case of PGP signed
release tarballs, the file is constant; it's never going to change,
and even if there are slight changes between say, e2fsprogs v1.47.0
and e2fsprogs v1.47.1, in practice, this is not something --rsyncable
can take advantage of, unless you manually copy
e2fsprogs-v1.47.0.tar.gz to e2fsprogs-v1.47.1.tar.gz, and then rsync
e2fsprogs-v1.47.1.tar.gz over it, and I don't think anyone is doing this,
either automatically or manually.

That being said, --rsyncable is mostly harmless, so I don't have
strong feelings about changing it to add or remove in someone's
release workflow.

> Right, there is no requirement for orig.tar.gz to be filtered.  But then
> the outcome depends on upstream, and I don't think we can convince all
> upstreams about these concerns.  Most upstreams prefer to ship
> pre-generated and vendored files in their tarballs, and will continue to
> do so.

Well, your blog entry does recognize some of the strong reasons why
upstreams will probably want to continue shipping them.  First of all,
not all compilation targets are guaranteed to have autoconf, automake,
et al. installed.  E2fsprogs is portable to Windows, MacOS, AIX,
Solaris, HPUX, NetBSD, FreeBSD, and GNU/Hurd, in addition to Linux.
If a package subscribes to the view that "all the world's Linux, and
nothing else exists / we have no interest in supporting anything else",
I'd ask the question: why are they using autoconf in the first place?  :-)

Secondly, I have gotten burned with older versions of either autoconf
or the aclocal macros changing in incompatible ways between versions.
So my practice is to check into git the configure script as generated
by autoconf on Debian testing, which is my development system; and if
it fails on anything else, or when a new version of autoconf or
automake, etc. causes my configure script to break, I can curse, and
fix it myself instead of inflicting the breakage on people who are
downloading and trying to compile e2fsprogs.

> Let's assume upstream doesn't ship minimized tarballs that are
> free from vendored or pre-generated files.  That's the case for most
> upstream tarballs in Debian today (including e2fsprogs, openssh,
> coreutils).  Without filtering that tarball we won't fulfil the goals I
> mentioned in the beginning of my post.  The downsides with not filtering
> include (somewhat repeating myself):
>
> ...

Your arguments are made in a very general way: there are potential
problems for _all_ autogenerated or vendored files.  However, I think
it's possible to simplify things by explicitly restricting the problem
domain to those files auto-generated by autoconf, automake, libtool,
etc.  For example, the argument that this opens things up for bugs
could be addressed by having common code in a debhelper script that
re-generates all of the autoconf and related files.  This addresses
your "tedious" and "fragile" arguments.

And if you are always regenerating those files, you don't need to
audit them, since they are going to be regenerated anyway, no?  And the
generated files from autoconf and friends have well-understood
licensing concerns.
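The debhelper route described here already largely exists as dh-autoreconf. A minimal debian/rules along those lines might look like this (a sketch only; with debhelper compat level 10 or later the autoreconf step is enabled by default anyway):

```make
#!/usr/bin/make -f
# Sketch: let dh-autoreconf regenerate configure, aclocal.m4,
# Makefile.in, etc. at build time and restore the shipped copies
# during clean.
%:
	dh $@ --with autoreconf
```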

And by the way, all of your concerns about vendored files, and all of
my arguments for why it's no big deal apply to gnulib source files,
too, no?  Why are you so insistent on saying that upstream must never,
ever ship vendored files --- but I don't believe you are making this
argument for gnulib?

Yes, it's simpler if we have procrustean rules of the form "everything
MUST be shared libraries" and "never, EVER have generated or vendored
files".  However, I think we're much better off if we have targeted
solutions which fix 80 to 90% of the cases.  We agree that gnulib isn't
going to be a shared library; but the argument in favor of that means
that there are exceptions, and I think we need to make similar
accommodations for files like configure and config.{guess,sub}.

Upstream *is* going to be shipping those files, and I don't think it's
worth it to deviate from upstream tarballs just to filter out those
files, even if it makes some things simpler from your perspective.  So I
do hear your arguments; it's just that, on balance, my opinion is that
it's not worth it.

Cheers,

- Ted



Re: De-vendoring gnulib in Debian packages

2024-05-12 Thread Russ Allbery
Ansgar   writes:

> In ecosystems like NPM, Cargo, Golang, Python and so on pinning to
> specific versions is also "explicitly intended to be used"; they just
> sometimes don't include convenience copies directly as they have tooling
> to download these (which is not allowed in Debian).

Yeah, this is a somewhat different case that isn't well-documented in
Policy at the moment.

> (Arguably Debian should use those more often as keeping all software at
> the same dependency version is a futile effort IMHO...)

There's a straight tradeoff with security effort: more security work is
required for every additional copy of a library that exists in Debian
stable.  (And, of course, some languages have better support for having
multiple simultaneously-installed versions of the same library than
others.  Python's support for this is not great; the ecosystem expectation
is that one uses separate virtualenvs, which don't really solve the Debian
build dependency problem.)

-- 
Russ Allbery (r...@debian.org)  



Re: De-vendoring gnulib in Debian packages

2024-05-12 Thread Ansgar 


Hi,

On Sun, 2024-05-12 at 08:41 -0700, Russ Allbery wrote:
> "Theodore Ts'o"  writes:
> > And yet, we seem to have given a pass for gnulib, probably because it
> > would be too awkward to enforce that rule *everywhere*, so apparently
> > we've turned a blind eye.
> 
> No, there's an explicit exception for cases like gnulib.  Policy 4.13:
> 
>     Some software packages include in their distribution convenience
>     copies of code from other software packages, generally so that users
>     compiling from source don’t have to download multiple packages. Debian
>     packages should not make use of these convenience copies unless the
>     included package is explicitly intended to be used in this way.

In ecosystems like NPM, Cargo, Golang, Python and so on pinning to
specific versions is also "explicitly intended to be used"; they just
sometimes don't include convenience copies directly as they have
tooling to download these (which is not allowed in Debian).

(Arguably Debian should use those more often as keeping all software at
the same dependency version is a futile effort IMHO...)

Gnulib is just older and targeted at the C ecosystem, which still has
worse tooling than pretty much everything else.

Ansgar




Re: De-vendoring gnulib in Debian packages

2024-05-12 Thread Russ Allbery
"Theodore Ts'o"  writes:

> The best solution to this is to encourage people to put those autoconf
> macros that they maintain manually, and that can't be supplied any
> other way, into acinclude.m4, which is now included by default by
> autoconf in addition to aclocal.m4.

Or use a subdirectory named something like m4, so that you can put each
conceptually separate macro in a separate file and not mush everything
together, and use:

AC_CONFIG_MACRO_DIR([m4])

(and set ACLOCAL_AMFLAGS = -I m4 in Makefile.am if you're also using
Automake).

> Note that how we treat gnulib is a bit differently from how we treat
> other C shared libraries, where we claim that *all* libraries must be
> dynamically linked, and that include source code by reference is against
> Debian Policy, precisely because of the toil needed to update all of the
> binary packages should some security vulnerability get discovered in
> the library which is either linked statically or included by code
> duplication.

> And yet, we seem to have given a pass for gnulib, probably because it
> would be too awkward to enforce that rule *everywhere*, so apparently
> we've turned a blind eye.

No, there's an explicit exception for cases like gnulib.  Policy 4.13:

Some software packages include in their distribution convenience
copies of code from other software packages, generally so that users
compiling from source don’t have to download multiple packages. Debian
packages should not make use of these convenience copies unless the
included package is explicitly intended to be used in this way.

-- 
Russ Allbery (r...@debian.org)  



Re: De-vendoring gnulib in Debian packages

2024-05-12 Thread Simon Josefsson
"Theodore Ts'o"  writes:

>> 1) Use upstream's PGP signed git-archive tarball.
>
> Here's how I do it in e2fsprogs which (a) makes the git-archive
> tarball be bit-for-bit reproducible given a particular git commit ID,
> and (b) minimizes the size of the tarball when stored using
> pristine-tar:
>
> https://github.com/tytso/e2fsprogs/blob/master/util/gen-git-tarball

Wow, written five years ago and basically the same thing that I suggest
(although you store pre-generated ./configure scripts in git).

Going into detail, you use 'gzip -9n' but I use git-archive defaults
which is the same as -n aka --no-name.  I agree adding -9 aka --best is
an improvement.  Gnulib's maint.mk also adds --rsyncable; would you agree
that this is also an improvement?  Thus what I'm arriving at is this:

git archive --prefix=inetutils-$(git describe)/ HEAD |
   gzip --no-name --best --rsyncable > inetutils-$(git describe)-src.tar.gz
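As a quick illustration of why --no-name matters for reproducibility here (plain GNU gzip; the payload is illustrative): without -n, gzip stores a timestamp in the header, so two otherwise identical runs differ, while with -n the output is deterministic:

```shell
# Demonstrate that gzip -9n produces bit-identical output for identical
# input, even when the two compressions happen at different times.
printf 'release contents\n' > payload
gzip -9n < payload > a.gz
sleep 1                     # make sure the wall clock moved between runs
gzip -9n < payload > b.gz
cmp a.gz b.gz && echo 'bit-for-bit identical'
```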

>> To reach our goals in the beginning of this post, this upstream tarball
>> has to be filtered to remove all pre-generated artifacts and vendored
>> code.  Use some mechanism, like the debian/copyright Files-Excluded
>> mechanism to remove them.  If you used a git-archive upstream tarball,
>> chances are higher that you won't have to do a lot of work especially
>> for pre-generated scripts.
>
> Why does it *have* to be filtered?  For the purposes of building, if
> you really want to nuke all of the pre-generated files, you can just
> move them out of the way at the beginning of the debian/rules run, and
> then move them back as part of "debian/rules clean".  Then you can use
> autoreconf -fi to your heart's content in debian/rules (modulo
> possibly breaking things if you insist on nuking aclocal.m4 and
> regenerating it without taking proper care, as discussed above).
>
> This also allows the *.orig.tar.gz to be the same as the upstream
> signed PGP tarball, which you've said is the ideal, no?

Right, there is no requirement for orig.tar.gz to be filtered.  But then
the outcome depends on upstream, and I don't think we can convince all
upstreams about these concerns.  Most upstreams prefer to ship
pre-generated and vendored files in their tarballs, and will continue to
do so.  Let's assume upstream doesn't ship minimized tarballs that are
free from vendored or pre-generated files.  That's the case for most
upstream tarballs in Debian today (including e2fsprogs, openssh,
coreutils).  Without filtering that tarball we won't fulfil the goals I
mentioned in the beginning of my post.  The downsides with not filtering
include (somewhat repeating myself):

- Opens the door to bugs where pre-generated files are not re-generated
  even though they are used to build the package.  I think this is fairly
  common in Debian packages.  Making sure all pre-generated files are
  re-generated during build, or confirming that a file is not used at
  all, is tedious and fragile work, and work that has to be done for
  every release.  Are you certain that ./configure is re-generated?  If
  it were not present, you would notice.

- Auditing the pre-generated and vendored files for malicious content
  takes more time than not having to audit those files.  Especially if
  those files are not stored in upstream git.

- Pre-generated and vendored files trigger licensing concerns and
  require tedious work that doesn't improve the binary package
  deliverable.  Consider a file like texinfo.tex, for example: wouldn't
  it be better to strip it out of tarballs and not have to add it to
  debian/copyright?  If some code in a package, let's say getopt.c, is
  not used during the build of the package, the license of that file
  doesn't have to be mentioned in debian/copyright, if I understand
  correctly:
  https://www.debian.org/doc/debian-policy/ch-archive.html#s-pkgcopyright
  If, a few releases later, that file starts to get used during
  compilation, the package maintainer will likely not notice.  If it
  was filtered out, the maintainer would notice.

The best case is when upstream ships a tarball consistent with what I
dream *.orig.tar.gz should be: free of vendored and pre-generated files.
Debian package maintainers can take action before this happens, to reach
these nice properties within Debian.  Maybe some upstreams will adapt.

>> There is one design of gnulib that is important to understand: gnulib is
>> a source-only library and is not versioned and has no release tarballs.
>> Its release artifact is the git repository containing all the commits.
>> Packages like coreutils, gzip, tar etc pin to one particular commit of
>> gnulib.
>
> Note that how we treat gnulib is a bit differently from how we treat
> other C shared libraries, where we claim that *all* libraries must be
> dynamically linked, and that include source code by reference is
> against Debian Policy, precisely because of the toil needed to update
> all of the binary packages should some security vulnerability get
> discovered in the library which is either linked statically or
> included by code duplication.

Re: De-vendoring gnulib in Debian packages

2024-05-12 Thread Theodore Ts'o
On Sat, May 11, 2024 at 04:09:23PM +0200, Simon Josefsson wrote:
>The current approach of running autoreconf -fi is based on a
>misunderstanding: autoreconf -fi is documented to not replace certain
>files with newer versions:
>https://lists.nongnu.org/archive/html/bug-gnulib/2024-04/msg00052.html

And the root cause of *this* is because historically, people put their
own custom autoconf macros in aclocal.m4, so if autoreconf -fi
overwrote aclocal.m4, things could break.  This also means that
programmtically always doing "rm -f aclocal.m4 ; aclocal --install"
will break some packages.

The best solution to this is to encourage people to put those autoconf
macros that they maintain manually, and that can't be supplied any other
way, into acinclude.m4, which is now included by default by autoconf in
addition to aclocal.m4.  Personally, I think the two names are confusing
and, if it weren't for historical reasons, perhaps should have been
swapped, but oh, well...

(For example, I have some custom local autoconf macros needed to
support MacOS in e2fsprogs's acinclude.m4.)

> 1) Use upstream's PGP signed git-archive tarball.

Here's how I do it in e2fsprogs which (a) makes the git-archive
tarball be bit-for-bit reproducible given a particular git commit ID,
and (b) minimizes the size of the tarball when stored using
pristine-tar:

https://github.com/tytso/e2fsprogs/blob/master/util/gen-git-tarball

> To reach our goals in the beginning of this post, this upstream tarball
> has to be filtered to remove all pre-generated artifacts and vendored
> code.  Use some mechanism, like the debian/copyright Files-Excluded
> mechanism to remove them.  If you used a git-archive upstream tarball,
> chances are higher that you won't have to do a lot of work especially
> for pre-generated scripts.

Why does it *have* to be filtered?  For the purposes of building, if
you really want to nuke all of the pre-generated files, you can just
move them out of the way at the beginning of the debian/rules run, and
then move them back as part of "debian/rules clean".  Then you can use
autoreconf -fi to your heart's content in debian/rules (modulo
possibly breaking things if you insist on nuking aclocal.m4 and
regenerating it without taking proper care, as discussed above).

This also allows the *.orig.tar.gz to be the same as the upstream
signed PGP tarball, which you've said is the ideal, no?

> There is one design of gnulib that is important to understand: gnulib is
> a source-only library and is not versioned and has no release tarballs.
> Its release artifact is the git repository containing all the commits.
> Packages like coreutils, gzip, tar etc pin to one particular commit of
> gnulib.

Note that how we treat gnulib is a bit differently from how we treat
other C shared libraries, where we claim that *all* libraries must be
dynamically linked, and that include source code by reference is
against Debian Policy, precisely because of the toil needed to update
all of the binary packages should some security vulnerability get
discovered in the library which is either linked statically or
included by code duplication.

And yet, we seem to have given a pass for gnulib, probably because it
would be too awkward to enforce that rule *everywhere*, so apparently
we've turned a blind eye.

I personally think the "everything must be dynamically linked" rule is
not really workable in real life, and should be an aspirational goal;
and the fact that we treat gnulib differently is a great proof point of
how the current Debian policy is not really doable in real life if it
were enforced strictly, everywhere, with no exceptions...

Certainly for languages like Rust, it *can't* be enforced, so again,
that's another place where that rule is not enforced consistently; if
it were, we wouldn't be able to ship Rust programs.

- Ted



Re: Running pybuild tests with search path for entry_points()

2024-05-11 Thread Sebastiaan Couwenberg

On 5/11/24 8:47 PM, John Paul Adrian Glaubitz wrote:

export PYTHONPATH = $(CURDIR)


Try setting it to the build_dir path:

 PYTHONPATH=$(shell pybuild --print build_dir --interpreter python3)
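In debian/rules form, the suggestion above would look roughly like this (a sketch; the dh addons shown are assumed from the pybuild context, not stated in the thread):

```make
# Sketch: export pybuild's build_dir so tests resolve entry_points()
# from the built package metadata rather than from $(CURDIR).
export PYTHONPATH = $(shell pybuild --print build_dir --interpreter python3)

%:
	dh $@ --with python3 --buildsystem=pybuild
```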

Kind Regards,

Bas

--
 GPG Key ID: 4096R/6750F10AE88D4AF1
Fingerprint: 8182 DE41 7056 408D 6146  50D1 6750 F10A E88D 4AF1



Re: De-vendoring gnulib in Debian packages

2024-05-11 Thread Paul Eggert

On 2024-05-11 07:09, Simon Josefsson via Gnulib discussion list wrote:

I would assume that (some stripped down
version of) git is a requirement to do any useful work on any platform
these days, so maybe it isn't a problem


Yes, my impression also is that Git has migrated into the realm of 
cc/gcc in that everybody has it, so it can depend indirectly on a 
possibly earlier version of itself.


Although it is worrisome that our collective trusted computing base
keeps growing, let's face it: if there's a security bug in Git, we're
all in big trouble anyway.




Re: De-vendoring gnulib in Debian packages

2024-05-11 Thread Simon Josefsson
Bruno Haible  writes:

> Simon Josefsson wrote:
>> Finally, while this is somewhat gnulib specific, I think the practice
>> goes beyond gnulib
>
> Yes, gnulib-tool for modules written in C is similar to
>
>   * 'npm install' for JavaScript source code packages [1],
>   * 'cargo fetch' for Rust source code packages [2],
>
> except that gnulib-tool is simpler: it fetches from a single source location
> only.
>
> How does Debian handle these kinds of source-code dependencies?

I don't know the details, but I believe those commands are turned into
local requests for source code, either vendored or previously packaged
in Debian.  No network access during builds.  The same goes for Go
packages, with which I have some experience, although for Go packages we
lose the strict versioning: if Go package X declares a dependency on
package Y version Z, then on Debian it may build against version Z+1 or
Z+2, which may in theory break and is not upstream's intended or
supported configuration.  We have a circular-dependency situation for
some core Go libraries in Debian right now because of this.

I think the fundamental shift that causes challenges for distributions
may be the move from package dependencies of the form "version >= X" to
dependencies of the form "version = X".  If there is a desire to support
that, some new workflow patterns are needed.  Some package maintainers
reject this approach and refuse to co-operate with those upstreams, but
I'm not sure that is a long-term winning strategy: it often just leads
to useful projects not being available through distributions, and users
suffer as a result.

/Simon




Re: De-vendoring gnulib in Debian packages

2024-05-11 Thread Bruno Haible
Simon Josefsson wrote:
> Finally, while this is somewhat gnulib specific, I think the practice
> goes beyond gnulib

Yes, gnulib-tool for modules written in C is similar to

  * 'npm install' for JavaScript source code packages [1],
  * 'cargo fetch' for Rust source code packages [2],

except that gnulib-tool is simpler: it fetches from a single source location
only.

How does Debian handle these kinds of source-code dependencies?

Bruno

[1] 
https://nodejs.org/en/learn/getting-started/an-introduction-to-the-npm-package-manager
[2] https://doc.rust-lang.org/cargo/commands/cargo-fetch.html





Re: How to create a custom Debian ISO

2024-05-11 Thread Hans
Am Samstag, 11. Mai 2024, 10:21:55 CEST schrieb Aditya Garg:
> Hello
> 
> I wanted to create a custom ISO of Debian, with the following
> customisations:
> 
> 1. I want to add a custom kernel that supports my Hardware.
> 2. I want to add my own Apt repo which hosts various software packages to
> support my hardware.
> 
> I am not able to get any good documentation for the same. Please help.

Hi Aditya,

maybe you want to take a look at bootcdwrite. I have had good experience 
with it. Using this, you can create a bootable live ISO with all 
your personal settings (including ~/home/* and users).

This ISO can be installed, too. Just boot it, and you can install it from the 
live system.

The ISO can be greater than 4.7 GB, so it can be installed on a USB stick.

For myself, I am using it for creating a Kali Linux image (with all my 
settings, modules, exploits, etc.). This ISO is about 30 GB and is an exact 
image of my installed system.

Doing so, I can boot it wherever I want and have everything available.

If you want to do the same, just a hint: if your installed system resides on 
encrypted devices, you have to take care of some special settings, otherwise it 
will not boot. Feel free to ask about it.

Hope this helps.

Oh yes, another way is to create a live system with a filesystem.squashfs. 
Then edit the filesystem.squashfs (it can be unpacked, edited and then 
repacked). This is a little bit fiddly, but very versatile.
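The unpack/edit/repack cycle can be sketched like this (assuming the
squashfs-tools package is installed; the image path and the edit itself are
illustrative):

```shell
# Unpack a live image's squashfs, change something, and repack it.
# Guarded: does nothing if squashfs-tools or the image is missing.
img=filesystem.squashfs
if command -v unsquashfs >/dev/null 2>&1 && [ -f "$img" ]; then
    unsquashfs -d squashfs-root "$img"            # unpack into squashfs-root/
    echo "customized" > squashfs-root/etc/motd    # ...edit as needed...
    mksquashfs squashfs-root "$img.new" -comp xz  # repack (xz keeps it small)
else
    echo "squashfs-tools not installed or $img missing; nothing to do"
fi
```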

Last but not least, I believe (but here I am not sure) you may build your own 
standard Debian installer ISO and put in your own package versions (if this is 
what you want, then preferably ask the installer crew - they know much better 
than me, because I have never used a self-built Debian installer ISO).

Have fun!

Best

Hans




Re: How to create a custom Debian ISO

2024-05-11 Thread Xingyou Chen

On 5/11/24 16:21, Aditya Garg wrote:

Hello

I wanted to create a custom ISO of Debian, with the following customisations:

1. I want to add a custom kernel that supports my Hardware.
2. I want to add my own Apt repo which hosts various software packages to 
support my hardware.

I am not able to get any good documentation for the same. Please help.


simple-cdd and the underlying debian-cd work fine; you can add an extra repo, 
and then extra packages to be included in the final ISO, and even custom 
files or build steps.


These are tiny utilities, with a one-page intro and in-source comments 
stating their usage, plus ultimately self-explaining code.
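A minimal simple-cdd run might look like this (a sketch; the profile name and
package names are hypothetical, and the flag spellings should be checked
against build-simple-cdd --help for your version):

```shell
# A simple-cdd profile lists the extra packages to put on the ISO.
profile=mykernel
mkdir -p profiles
printf 'my-custom-kernel\nmy-hardware-support\n' > "profiles/$profile.packages"
# Guarded: only attempt the (long) build if simple-cdd is installed.
if command -v build-simple-cdd >/dev/null 2>&1; then
    build-simple-cdd --dist bookworm --profiles "$profile"
else
    echo "simple-cdd not installed (apt install simple-cdd)"
fi
```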




Re: How to create a custom Debian ISO

2024-05-11 Thread Marvin Renich
* Aditya Garg  [240511 05:15]:
> Hello
> 
> I wanted to create a custom ISO of Debian, with the following customisations:
> 
> 1. I want to add a custom kernel that supports my Hardware.
> 2. I want to add my own Apt repo which hosts various software packages to 
> support my hardware.
> 
> I am not able to get any good documentation for the same. Please help.

[Redirecting to debian-user, dropping -project, M-F-T set to debian-user only]

First, please don't double-post the same message within a few minutes.
Give your message at least a half hour to show up before you decide it
wasn't received.

Second, neither debian-devel nor debian-project are appropriate lists
for this question.  You should use debian-u...@lists.debian.org or some
other user-oriented forum.  Also, posting a question to multiple lists
at once (called cross-posting) is considered rude in most situations.

To give a possible answer to your question, look at the Debian Live
project:  https://www.debian.org/devel/debian-live/

The package live-build from the Debian Live project might help you do
what you want.
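A minimal live-build session might look like this (a sketch; the directory
name, distribution and options are illustrative - see lb_config(1) for the
full option list):

```shell
# Configure and build a live image in a scratch directory.
# Guarded: only runs the actual build if live-build is installed.
workdir=my-live-image
mkdir -p "$workdir"
if command -v lb >/dev/null 2>&1; then
    ( cd "$workdir" &&
      lb config --distribution bookworm --archive-areas "main non-free-firmware" &&
      sudo lb build )   # result: live-image-*.hybrid.iso in $workdir
else
    echo "live-build not installed (apt install live-build)"
fi
```

A custom kernel and extra repositories can then be wired in via the
config/ tree that lb config creates.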

...Marvin



Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-11 Thread Bill Allombert
Le Mon, May 06, 2024 at 11:15:35AM +0100, Barak A. Pearlmutter a écrit :
> > We have two separate issues here:
> 
> > a/ /tmp-on-tmpfs
 
Note that /tmp-on-tmpfs and cleanup-tmp-at-boot are not equivalent.

With cleanup-tmp-at-boot, if your system crashes, you can still backup
/tmp before rebooting.

Cheers,
-- 
Bill. 

Imagine a large red swirl here.



Re: Any volunteers for lintian co-maintenance?

2024-05-10 Thread Bill Allombert
Le Fri, May 10, 2024 at 10:47:29PM +0500, Andrey Rakhmatullin a écrit :
> - The most paradoxical thing is the recently "discovered" combination of
>   "old lintian falsely reports a problem in certain packages", "lintian
>   runs as a part of the package acceptance process and some problems are
>   autorejects", "people are supposed to run lintian from sid for packages
>   in sid", "specifically *old* lintian runs as a part of the package
>   acceptance process" and "that lintian can't be upgraded because new one
>   is too slow". 

It would help if someone set up UDD to generate statistics about lintian tags
and lintian performance.
(How many packages report a specific tag, how much time lintian needs to
run, etc.)

This is something that was done on lintian.debian.org, but it is defunct.
(lintian.debian.org allowed comparing running times between versions
of lintian.)

Cheers,
-- 
Bill. 

Imagine a large red swirl here.



Re: Solving a file conflict between package "nq" / "fq"

2024-05-10 Thread Preuße , Hilmar

On 10.05.2024 14:53, Bill Allombert wrote:

Le Mon, May 06, 2024 at 11:09:14PM +0200, Preuße, Hilmar a écrit :


Hi Bill,

thanks for the answer!


during the preparation of a new version of package "nq" (via NMU) it was
found that there exists a file conflict with package "fq" (#1005961), which
was incorrectly solved in the past. For now I unarchived and reopened the
old issue. According to the policy:




As a first approximation, the oldest package wins, for the simple reason that
doing it the other way would break users' scripts, and it is not in the
interest of Debian to encourage upstreams to hijack each other's program
names.



Well, then: nq is the older package and has a (slightly) higher popcon.


After that the maintainers or the ctte could agree to operate a
transition to other names.



Is there anything, which needs to be done from maintainer side?

Hilmar
--
sigfault



OpenPGP_signature.asc
Description: OpenPGP digital signature


Re: Any volunteers for lintian co-maintenance?

2024-05-10 Thread Soren Stoutner
Niels,

On Friday, May 10, 2024 3:18:29 AM MST Niels Thykier wrote:
> Soren Stoutner:
> > I would like to respectfully disagree with some of the opinions expressed
> > in this email.
> Hi Soren
> 
> Not sure if we disagree all that much to be honest. :)

Yes, I think we do agree.

From a performance perspective, I see two big problems.

1.  Lintian runs after a potentially long build process.

2.  Lintian takes a long time to run itself.

You have done a really good job of describing point 1 above, as well as 
proposing ways to address it.  I endorse everything you have said.

For point 2, it seems the easiest way to make a significant difference would be 
if lintian could run multi-threaded.

My current development CPU has 8 physical cores hyper-threaded, which present 
to the OS as 16 logical cores.  Most of the build process is multi-threaded 
and uses all the cores to their maximum potential simultaneously.  But lintian 
is single-threaded, so it only uses one core and the other 15 sit idle.  There 
might be some lintian tests that depend on the output of other lintian tests, 
but I would imagine that most of them could be run in parallel with the 
results combined at the end.

I don’t know enough Perl to know how easy it would be to run lintian in a
multi-threaded manner, but if this were not a difficult change it would speed up
lintian runs dramatically.  In the case of qtwebengine-opensource-src on my
hardware, assuming that all cores could be efficiently utilized and there are no
other bottlenecks in RAM or disk access, it would drop lintian’s runtime from
about 30 minutes to about 2 minutes.
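The fan-out/combine pattern described here is generic enough to sketch in
plain shell (this is not a lintian feature; the check names are stand-ins,
and xargs -P does the parallel scheduling):

```shell
# Run four independent "checks" (stand-ins for real lintian checks) in
# parallel across up to 4 processes, then combine and sort the results.
results=$(printf '%s\n' check-a check-b check-c check-d |
          xargs -P 4 -I{} sh -c 'echo "{}: ok"' | sort)
printf '%s\n' "$results"
```

Whether lintian's own checks are independent enough for this is exactly the
open question raised above.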

> > First, I should say that I am painfully aware of how long it takes to run
> > lintian on large packages.  When working on qtwebengine-opensource-src it
> > takes my system (Ryzen 7 5700G) about 2 hours to build the package and
> > about half an hour to run lintian against it. I would be completely in
> > favor of any efforts that could be made in the direction of making lintian
> > more efficient, either within lintian itself or in other packages that
> > replicate some or all of lintain’s functionality in more efficient ways.
> > 
> > However, I personally find lintian to be one of the most helpful tools in
> > Debian packaging. When going through the application process I found
> > lintian to be a very useful tool in helping me learn how to produce
> > packages that conform to Debian’s standards.  The integration of lintian
> > into mentors.debian.net was very helpful to me when I first started
> > submitting packages to Debian, and it is still helpful to me when 
reviewing
> > other people’s packages that have been submitted to mentors.debian.net.
> 
> I agree that lintian has useful features as stated in my original email.
> Though not with a very strong emphasis, so I can see how you might have
> not have given that remark much thought.
> 
> After a bit more reflection, I feel lintian is currently working in
> three different areas (to simplify matters a lot).
> 
>   1) Support on Debian packaging files.
>  - You have a comma in `Architecture`, which is space separated
>  - The `foo` license in `d/copyright` is not defined
>  - The order of the `Files` stanzas are probably wrong.
>  - The `Files` stanza in `d/copyright` reference `foo` but that file
>is not in the unpacked source tree.
> 
>  => This should *not* require an assembled package to get these
> results and should provide (near) instant feedback directly
> in your editor. This area should be designed around interactivity
> and low latency as a consequence.
> 
>   2) Checking of upstream source.
>  - Missing source checks
>  - Source files with known questionable licenses
>  - Here are some dependencies that might need to be packaged.
>  - The upstream build system seems to be `waf` so you should be
>aware of this stance in Debian on `waf`, etc.
>  - Maybe: "Advice for how to approach this kind of package".
>(like "This seems like a python package; consider looking at $TOOL
>for an initial debianization. The python packaging team might be
>relevant for you if you are a new maintainer, etc.)
> 
>  => This should *not* require an assembled package to get these
> results. However, it will take some time to chew through all
> of this. It would be a "before initial packaging" and maybe
> on major upstream releases (or NEW checks).  It will be a batch
> process but maybe with support for interactivity.
> 
> 
>   3) Checking of assembled artifacts.
>  - Does the package place the systemd service in the right place?
>  - There is a trigger for shared libraries but no shared libraries.
>(etc.)
> 
>  => This (by definition) is for assembled packages. It will be a
> batch process.
> 
> 
> Part 1) is something I feel would belong in a tool that provides on-line
> / in-editor support 

Re: Any volunteers for lintian co-maintenance?

2024-05-10 Thread Nilesh Patra
Hi Andrius,

On Fri, May 10, 2024 at 05:58:24PM +0300, Andrius Merkys wrote:
> Do you mean bugs on bugs.d.o, or are there other issues?

As you may have seen in the other emails, there are performance issues. Other
than that, there are 2 open RC bugs right now (fixed in salsa but not uploaded,
as I don't make uploads for lintian). A pile of MRs has been pending review
at many points in time.

> I personally feel motivated to implement new features in lintian or go after
> low hanging fruits, but I am somewhat driven away by the need to understand
> lintian's internals. Is there a documentation on the data/control flow, or
> class diagrams? This would help me a lot.

Not that I know of; I suppose Axel/Bastian may be able to answer this.
 
Best,
Nilesh


signature.asc
Description: PGP signature


Re: Any volunteers for lintian co-maintenance?

2024-05-10 Thread Andrey Rakhmatullin
My 1.83 RUB:

lintian is one of those things that are very important and useful when you
know how to use them, which quirks to apply and which parts to ignore, and
without that knowledge are maybe useful, maybe useless, maybe harmful, and
nobody will tell you that knowledge unless you ask directly. It's also a
mandatory part of the infra and workflows, yet it's mostly unmaintained,
somewhat bitrotten and in part a victim of unfortunate decisions of
previous maintainers. This is a very weird and paradoxical state which
also in large part reflects the state of Debian as a whole (luckily, only
in part, not completely). 

Random examples:
- The most paradoxical thing is the recently "discovered" combination of
  "old lintian falsely reports a problem in certain packages", "lintian
  runs as a part of the package acceptance process and some problems are
  autorejects", "people are supposed to run lintian from sid for packages
  in sid", "specifically *old* lintian runs as a part of the package
  acceptance process" and "that lintian can't be upgraded because new one
  is too slow". 
- To get full lintian output you need to run it against binary .changes,
  not against a .deb, a .dsc or a source .changes. And you should run it
  with a bunch of args enabling lower-severity tags, because some of
  those are useful. Newer people don't know that even if they know about
  lintian. Those that don't know will see lintian output when they upload
  their package to mentors, and which subset they will see depends on
  which .changes they upload.
- lintian tags have descriptions (it's still unclear to me how obvious that
  is). The most straightforward ways to read them are googling them if
  you run lintian locally and clicking links if you look at e.g. mentors. 
  But lintian.debian.org is dead. There are also lintian -i and
  lintian-explain-tags but it's unclear how to learn about them, at least
  without reading all of lintian(1).
- It's impossible to know beforehand which tags you need to address now,
  which you should address now or some time in the future, which are
  irrelevant and which must not be followed because they are wrong (in
  general or are false positives). Severity is also often not correlated
  with this. My go-to advice for sponsored uploads is "fix whatever your
  sponsor asks you to fix" and I won't publish my advice for direct
  uploads which I follow myself.
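Spelled out, the "full output" invocation and the help tools mentioned above
look roughly like this (the .changes filename is hypothetical; the tag passed
to lintian-explain-tags is just one example):

```shell
# Run lintian against the binary .changes with lower-severity tags enabled:
# -E experimental tags, -I informational tags, -L +pedantic to widen severity.
changes=../mypkg_1.0-1_amd64.changes
if command -v lintian >/dev/null 2>&1; then
    lintian -E -I -L +pedantic "$changes"
    # Read the description of a tag without digging through lintian(1):
    lintian-explain-tags package-contains-vcs-control-dir
else
    echo "lintian not installed; would run: lintian -E -I -L +pedantic $changes"
fi
```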

As a bottom line, it's clearly not good enough for the role it currently
plays and is becoming worse instead of becoming better, but we don't have
a replacement and it needs a lot of man-hours to go back on track. 

-- 
WBR, wRAR


signature.asc
Description: PGP signature


Re: Any volunteers for lintian co-maintenance?

2024-05-10 Thread Andrius Merkys

Hello,

On 2024-05-10 17:35, Nilesh Patra wrote:

On Fri, May 10, 2024 at 04:06:15PM +0200, Andreas Tille wrote:
Lintian is important for me. For the past few months, I have been reviewing and
merging MRs and pushing small fixes of my own. I am not a proficient perl
programmer and hence I am not the best person to be doing so. But then, nobody
else was doing it and I decided to do at least a little bit.


Lintian is important to me likewise. It taught me a lot when I was 
learning to package, and today it is still an indispensable tool for me, 
helping to avoid rookie mistakes, typos and other mistakes.



If someone would like to dedicatedly contribute sometime there, it'd be really
great. The package is not in a very good shape right now.


Do you mean bugs on bugs.d.o, or are there other issues?

I personally feel motivated to implement new features in lintian or go 
after low hanging fruits, but I am somewhat driven away by the need to 
understand lintian's internals. Is there a documentation on the 
data/control flow, or class diagrams? This would help me a lot.


Best,
Andrius



Re: Any volunteers for lintian co-maintenance?

2024-05-10 Thread Nilesh Patra
On Fri, May 10, 2024 at 04:06:15PM +0200, Andreas Tille wrote:
> > If lintian is important to you, I strongly recommend that you do put *some*
> > of your volunteer time into it.
> 
> +1
> for Soren and everybody else reading this.

Lintian is important for me. For the past few months, I have been reviewing and
merging MRs and pushing small fixes of my own. I am not a proficient perl
programmer and hence I am not the best person to be doing so. But then, nobody
else was doing it and I decided to do at least a little bit.

If someone would like to dedicatedly contribute sometime there, it'd be really
great. The package is not in a very good shape right now.

Best,
Nilesh


signature.asc
Description: PGP signature


Re: Bug#963101: cozy-audiobook-player: "Request For Package"

2024-05-10 Thread Manuel Traut
Hi,

I am currently working on this package [0].

I would need a sponsor to review and upload the package.

Thanks
Manuel

[0] https://salsa.debian.org/manut/cozy



Re: Any volunteers for lintian co-maintenance?

2024-05-10 Thread Andreas Tille
Hi,

Am Fri, May 10, 2024 at 12:18:29PM +0200 schrieb Niels Thykier:
> Soren Stoutner:
> > I would like to respectfully disagree with some of the opinions expressed 
> > in this email.
> 
> Hi Soren
> 
> Not sure if we disagree all that much to be honest. :)

I think we all agree that we need some policy checking tool and lintian
is the only available tool (at least after linda was removed in 2008 for
good).
 
> > However, I personally find lintian to be one of the most helpful tools in 
> > Debian packaging.
> > When going through the application process I found lintian to be a very 
> > useful tool in
> > helping me learn how to produce packages that conform to Debian’s 
> > standards.  The
> > integration of lintian into mentors.debian.net was very helpful to me when 
> > I first started
> > submitting packages to Debian, and it is still helpful to me when reviewing 
> > other people’s
> > packages that have been submitted to mentors.debian.net.
> > 
> 
> I agree that lintian has useful features as stated in my original email.

From my point of view lintian has saved me from *lots* of uploads that would
not have been compliant with Debian policy / my own quality standards.  So I
run lintian by default on *every* package build process (and in Salsa
CI).

Admittedly, most of my packages are not *that* large (with a few
exceptions), so I was never frustrated about a slow lintian (on my
_multi-tasking_ Debian system ...)

> Though not with a very strong emphasis, so I can see how you might have not
> have given that remark much thought.
> 
> After a bit more reflection, I feel lintian is currently working in three
> different areas (to simplify matters a lot).
> 
>  1) Support on Debian packaging files.
> - You have a comma in `Architecture`, which is space separated
> - The `foo` license in `d/copyright` is not defined
> - The order of the `Files` stanzas are probably wrong.
> - The `Files` stanza in `d/copyright` reference `foo` but that file
>   is not in the unpacked source tree.
> 
> => This should *not* require an assembled package to get these
>results and should provide (near) instant feedback directly
>in your editor. This area should be designed around interactivity
>and low latency as a consequence.

ACK
 
>  2) Checking of upstream source.
> - Missing source checks
> - Source files with known questionable licenses
> - Here are some dependencies that might need to be packaged.
> - The upstream build system seems to be `waf` so you should be
>   aware of this stance in Debian on `waf`, etc.
> - Maybe: "Advice for how to approach this kind of package".
>   (like "This seems like a python package; consider looking at $TOOL
>   for an initial debianization. The python packaging team might be
>   relevant for you if you are a new maintainer, etc.)
> 
> => This should *not* require an assembled package to get these
>results. However, it will take some time to chew through all
>of this. It would be a "before initial packaging" and maybe
>on major upstream releases (or NEW checks).  It will be a batch
>process but maybe with support for interactivity.

ACK
 
>  3) Checking of assembled artifacts.
> - Does the package place the systemd service in the right place?
> - There is a trigger for shared libraries but no shared libraries.
>   (etc.)
> 
> => This (by definition) is for assembled packages. It will be a
>batch process.
> 
> 
> Part 1) is something I feel would belong in a tool that provides on-line /
> in-editor support (see my post script for details). This is partly why I
> expanded `debputy` into this field. You having a 2½ hour feedback loop
> here is crazy - the `acl2` one having 9+ hours is complete madness.

I confirm it would be a great enhancement to have some checker *before*
the actual build starts.  You mentioned `debputy` and I admit I
need to check it out in the near future.  If I imagine some
policy-checking debhelper-like tool that is fired up after the dh_clean
step, I'd be all for it and it really could save some time.

However, I'd consider it unfair to blame lintian for a missing feature it
was never written for.  If you see any chance that this could be
implemented - ideally by re-using the same rule set lintian uses, to keep
the maintenance burden of those rules lower - it would really be a great
enhancement.

> Part 2) is ideally something you would run before attempting to package a
> new upstream source tree. Many of these things have a high impact on whether
> you want to continue with the packaging (oh, I need to package 20
> dependencies first. It has non-free content, etc.).

Re: Solving a file conflict between package "nq" / "fq"

2024-05-10 Thread Bill Allombert
Le Mon, May 06, 2024 at 11:09:14PM +0200, Preuße, Hilmar a écrit :
> Hi all,
> 
> during the preparation of a new version of package "nq" (via NMU) it was
> found that there exists a file conflict with package "fq" (#1005961), which
> was incorrectly solved in the past. For now I unarchived and reopened the
> old issue. According to the policy:
> 
> "Two different packages must not install programs with different
> functionality but with the same filenames. (...) If this case happens, one
> of the programs must be renamed. The maintainers should report this to the
> debian-devel mailing list and try to find a consensus about which program
> will have to be renamed. (...)"
> 
> Hence I contact this list. Is there a formal process to generate decisions /
> consensus? Please note that I'm not the maintainer of "nq" and I'm not in
> the position to rename binaries to solve file conflicts.

As a first approximation, the oldest package wins, for the simple reason that
doing it the other way would break users' scripts, and it is not in the
interest of Debian to encourage upstreams to hijack each other's program
names.

After that the maintainers or the ctte could agree to operate a
transition to other names.

Cheers,
-- 
Bill. 

Imagine a large red swirl here.



Re: new upstream version fails older tests of rdepends packages

2024-05-10 Thread Bill Allombert
Le Wed, May 08, 2024 at 08:41:47PM +0200, Paul Gevers a écrit :
> Hi,
> 
> On 08-05-2024 6:06 p.m., Bill Allombert wrote:
> > Agreed, but gap does not actually break anything, it is just the tests
> > in testing that are broken. So I can do that but that seems a bit
> > artificial.
> 
> Aha, that wasn't at all clear to me. If you don't want to do the artificial
> thing (which is fine, except now you depend on members of the release team),
> I'll manually schedule the tests. Maybe tomorrow.

Thanks a lot, this fixed the issue, all packages have migrated to testing now. 
I still think this is a much better outcome than a new upload with
spurious Breaks:.

Cheers,
-- 
Bill. 

Imagine a large red swirl here.



Re: Any volunteers for lintian co-maintenance?

2024-05-10 Thread Niels Thykier

Soren Stoutner:

I would like to respectfully disagree with some of the opinions expressed in 
this email.



Hi Soren

Not sure if we disagree all that much to be honest. :)


First, I should say that I am painfully aware of how long it takes to run
lintian on large packages.  When working on qtwebengine-opensource-src it takes
my system (Ryzen 7 5700G) about 2 hours to build the package and about half an
hour to run lintian against it.  I would be completely in favor of any efforts
that could be made in the direction of making lintian more efficient, either
within lintian itself or in other packages that replicate some or all of
lintian’s functionality in more efficient ways.

However, I personally find lintian to be one of the most helpful tools in
Debian packaging.  When going through the application process I found lintian
to be a very useful tool in helping me learn how to produce packages that
conform to Debian’s standards.  The integration of lintian into
mentors.debian.net was very helpful to me when I first started submitting
packages to Debian, and it is still helpful to me when reviewing other
people’s packages that have been submitted to mentors.debian.net.



I agree that lintian has useful features as stated in my original email. 
Though not with a very strong emphasis, so I can see how you might 
not have given that remark much thought.


After a bit more reflection, I feel lintian is currently working in 
three different areas (to simplify matters a lot).


 1) Support on Debian packaging files.
- You have a comma in `Architecture`, which is space separated
- The `foo` license in `d/copyright` is not defined
- The order of the `Files` stanzas are probably wrong.
- The `Files` stanza in `d/copyright` reference `foo` but that file
  is not in the unpacked source tree.

=> This should *not* require an assembled package to get these
   results and should provide (near) instant feedback directly
   in your editor. This area should be designed around interactivity
   and low latency as a consequence.

 2) Checking of upstream source.
- Missing source checks
- Source files with known questionable licenses
- Here are some dependencies that might need to be packaged.
- The upstream build system seems to be `waf` so you should be
  aware of this stance in Debian on `waf`, etc.
- Maybe: "Advice for how to approach this kind of package".
  (like "This seems like a python package; consider looking at $TOOL
  for an initial debianization. The python packaging team might be
  relevant for you if you are a new maintainer, etc.)

=> This should *not* require an assembled package to get these
   results. However, it will take some time to chew through all
   of this. It would be a "before initial packaging" and maybe
   on major upstream releases (or NEW checks).  It will be a batch
   process but maybe with support for interactivity.


 3) Checking of assembled artifacts.
- Does the package place the systemd service in the right place?
- There is a trigger for shared libraries but no shared libraries.
  (etc.)

=> This (by definition) is for assembled packages. It will be a
   batch process.


Part 1) is something I feel would belong in a tool that provides on-line 
/ in-editor support (see my post script for details). This is partly why I 
expanded `debputy` into this field. You having a 2½ hour feedback 
loop here is crazy - the `acl2` one having 9+ hours is complete madness.


Part 2) is ideally something you would run before attempting to package 
a new upstream source tree. Many of these things have a high impact on 
whether you want to continue with the packaging (oh, I need to package 
20 dependencies first. It has non-free content, etc.). The fact that you 
need to build a package only to discover that your package cannot be 
distributed seems backwards to me. I feel this workflow should work from 
the basis of:


  $ git clone $UPSTREAM source-dir # (or tar xf ...)
  $ check-upstream-code source-dir

Note: This is not an area I am going to tackle. But if I was going into 
it, that would be my "vision" for the starting point.


Part 3) is where I feel lintian still has an area to be in (which also 
matches its mission statement). It could also include a subset of the 
results from part 1+2 as a "all-in-one-inclusive" wrapping to simplify 
archive-wide QA or sponsoring checks. But as I see it, most 
(non-sponsor) users would ideally get their 1) and 2) feedback from the 
more specialized tools.


These are the ballparks I would split lintian into given infinite 
developer time and resources. Ideally not a lot "smaller" than this to 
avoid drowning people with the "Run these 1000 tools"-problem to avoid a 
repeat of `check-all-the-things`. This is also why I am not against 
lintian aggregating from the other areas. For some of its users (such as 
sponsors) it will be a useful 

Re: Open "NMU diff for 64-bit time_t transition" bugs

2024-05-10 Thread Sebastian Ramacher
Hi

On 2024-05-10 08:29:28 +0300, Andrius Merkys wrote:
> I care for tree packages which still have open "NMU diff for 64-bit time_t
> transition" bugs: libccp4, macromoleculebuilder and rdkit. All of them have
> NMU diffs applied in experimental, but not in unstable yet. What should I do
> about them - apply NMU diffs to unstable as well, or wait for someone
> performing the time_t64 transition to do that?

You can also check if the changes were applied in Ubuntu. For libccp4,
there was an upload with this changelog entry:

libccp4 (8.0.0-2ubuntu1) noble; urgency=medium

  * Rename libccp4c0 package as although it is not affected by the
time_t transition to 64-bits, it is affected by the LFS migration
which is implied by the time_t changes.
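One way to do that check is rmadison from the devscripts package (a sketch;
it queries the archive databases over the network, so the output depends on
the current archive state):

```shell
# Compare the versions of a package in Ubuntu and Debian.
# Guarded: only runs if rmadison is installed.
pkg=libccp4
if command -v rmadison >/dev/null 2>&1; then
    rmadison -u ubuntu "$pkg"   # versions per Ubuntu release
    rmadison "$pkg"             # and the Debian side, for comparison
else
    echo "rmadison not installed (apt install devscripts)"
fi
```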

If in doubt, please also try to reach out to Steve Langasek and the
others that were driving the t64 transition.

Cheers
-- 
Sebastian Ramacher



Re: Any volunteers for lintian co-maintenance?

2024-05-10 Thread Hakan Bayındır
I also think that Lintian is one of the most important tools in the Debian 
packaging ecosystem. I'm not a Debian Developer, but have built packages 
for our Debian derivative distribution (Pardus, which I tech-led for 
some time). The first step was to get the package "Lintian-clean (TM)" 
before even testing it.


I would love to help to make Lintian faster, but unfortunately I don't 
know any Perl, so touching an advanced package like this will take a lot 
of time (learn Perl -> get somewhat proficient -> start hacking 
Lintian). I might be able to profile it though to understand its pain 
points, which I'll try to give a go.
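A possible starting point for such profiling is Devel::NYTProf (a sketch;
the .changes target is hypothetical, and the profiler must be installed
first):

```shell
# Profile a lintian run with Devel::NYTProf and render an HTML report.
# Guarded: only runs if both the profiler module and lintian are available.
profiler=Devel::NYTProf
if perl -M"$profiler" -e1 2>/dev/null && command -v lintian >/dev/null 2>&1; then
    perl -d:NYTProf "$(command -v lintian)" ../mypkg_1.0-1_amd64.changes
    nytprofhtml   # reads nytprof.out, writes an HTML report under nytprof/
else
    echo "install libdevel-nytprof-perl (and lintian) first"
fi
```

The per-subroutine timings in the report should show where the pain points
are without needing to be fluent in Perl first.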


Cheers,

H.

On 9.05.2024 ÖS 11:57, Soren Stoutner wrote:
I would like to respectfully disagree with some of the opinions 
expressed in this email.



First, I should say that I am painfully aware of how long it takes to 
run lintian on large packages.  When working on qtwebengine-opensource-src 
it takes my system (Ryzen 7 5700G) about 2 hours to build the package 
and about half an hour to run lintian against it.  I would be 
completely in favor of any efforts that could be made in the direction 
of making lintian more efficient, either within lintian itself or in 
other packages that replicate some or all of lintian’s functionality in 
more efficient ways.



However, I personally find lintian to be one of the most helpful tools 
in Debian packaging.  When going through the application process I found 
lintian to be a very useful tool in helping me learn how to produce 
packages that conform to Debian’s standards.  The integration of lintian 
into mentors.debian.net was very helpful to me when I first started 
submitting packages to Debian, and it is still helpful to me when 
reviewing other people’s packages that have been submitted to 
mentors.debian.net.



As I type this email I am building an update to qtwebengine-opensource-src. 
So far, lintian has caught two problems with this release that I 
would have otherwise missed.  I admit that I am fairly new as a Debian 
Developer, and perhaps as I gain greater experience I would get to the 
point where lintian never catches things I miss.  But I don’t personally 
expect that to ever happen, because there are too many corner cases or 
opportunities for typos that computers are much better at catching than 
humans.



I do understand that lintian is in need of a lot of work.  I personally 
have an open MR against the package that fixes a check that is wrong 
more often than it is right (with both false positives and false 
negatives).  The fix is relatively simple and makes the check 100% 
accurate as far as I can tell.  However, after over a year, it has yet 
to be reviewed.



https://salsa.debian.org/lintian/lintian/-/merge_requests/461 salsa.debian.org/lintian/lintian/-/merge_requests/461>



I must admit that I have been sorely tempted to get involved with 
maintaining lintian because I feel it is so important.  So far, I have 
resisted that temptation because I am already involved in a decade-long 
effort to clean up Qt WebEngine in Debian and get it to the point where 
it has proper security support.  I haven’t wanted to spread myself too 
thin and end up accomplishing nothing because I tried to do too much.  
But if lintian’s need increases or if my existing commitments decrease I 
would be happy to find myself involved with lintian maintenance.



Soren


On Thursday, May 9, 2024 12:27:49 PM MST Andreas Tille wrote:

 > Hi,

 >

 > this mail is a private response from Niels to my mail to the Debian Perl

 > team where I explicitly asked for people helping out with lintian.  So

 > far the answer from Niels is the only response.  Since he gave explicit

 > permission to quote him in public I'm doing so hereby.  Niels assumed

 > that his answer "will not help my case" - but well, I do not think that

 > hiding problems will help anybody else.

 >

 > At Tue, May 07, 2024 at 15:59:21 +0200 Andreas Tille wrote

 >

 > > Hi Perl folks,

 > > ...

 > > --> see full mail at

 > > https://lists.debian.org/debian-perl/2024/05/msg0.html

 > [ From here I simply quote Niels unchanged - I'll comment probably 
tomorrow in


 > detail ]

 >

 >

 > Hi Andreas

 >

 > You are welcome to quote me in public, though I feel it will not help 
your


 > cause. This reply is in private to you, so you can choose whether you 
want


 > to quote me.

 >

 >

 > I agree with the sentiment that important Debian tools would ideally be

 > co-maintained. However, in the passing years, I have started to feel a

 > disconnect with lintian, its direction and what I would like to see. I no

 > longer use lintian and I am fundamentally not interested in picking up

 > lintian anymore - neither as a user nor as a contributor. I have even

 > uninstalled it from my machines. For now, I still "allow" it in my 
salsa-ci


 > pipeline but my patience with it is thin.

 >

 >

 > For me, lintian fails in all roles it has. It is not a good tool for 

Re: Any volunteers for lintian co-maintenance?

2024-05-10 Thread Marc Haber
On Thu, 09 May 2024 13:57:28 -0700, Soren Stoutner 
wrote:
>However, I personally find lintian to be one of the most helpful tools in 
>Debian packaging. 

+1.

The fact that it doesn't perform well on large packages is bad, but that
doesn't make it less useful for smaller packages.

Greetings
Marc
-- 

Marc Haber |   " Questions are the | Mailadresse im Header
Rhein-Neckar, DE   | Beginning of Wisdom " | 
Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: *49 6224 1600402



Re: Any volunteers for lintian co-maintenance?

2024-05-09 Thread Lucas Nussbaum
On 09/05/24 at 13:57 -0700, Soren Stoutner wrote:
> First, I should say that I am painfully aware of how long it takes to run 
> lintian on large 
> packages.  When working on qtwebengine-opensource-src it takes my system 
> (Ryzen 7 
> 5700G) about 2 hours to build the package and about half an hour to run 
> lintian against it.  
> I would be completely in favor of any efforts that could be made in the 
> direction of making 
> lintian more efficient, either within lintian itself or in other packages 
> that replicate some or 
> all of lintian’s functionality in more efficient ways.

If someone wants to work on lintian performance: the runtimes for the
UDD lintian importer (behind https://udd.debian.org/lintian/ ) are
available in the lintian_logs table:

udd=> select distinct ts, source, version, duration from lintian_logs order by duration desc limit 30;
 ts                         | source                  | version                                   | duration
----------------------------+-------------------------+-------------------------------------------+----------
 2024-04-05 16:54:20.437828 | acl2                    | 8.5dfsg-5                                 | 32879
 2024-04-26 06:20:59.082471 | linux                   | 6.7.12-1                                  | 16472
 2024-02-29 10:39:52.6379   | gcc-14-cross-ports      | 4                                         | 14616
 2024-02-29 10:39:16.350521 | gcc-14-cross-ports      | 5                                         | 14580
 2024-02-29 10:35:17.939875 | gcc-11-cross-mipsen     | 6+c1+nmu1                                 | 14341
 2024-02-29 10:35:06.549735 | gcc-13-cross-mipsen     | 2+c1                                      | 14330
 2024-02-29 10:34:54.908736 | gcc-14-cross            | 4                                         | 14318
 2024-02-29 10:34:44.720364 | gcc-12-cross-mipsen     | 4+c1                                      | 14308
 2024-02-29 10:33:50.035058 | gcc-10-cross-mipsen     | 3+c6                                      | 14253
 2024-05-09 11:24:34.446854 | llvm-toolchain-17       | 1:17.0.6-12                               | 13086
 2024-02-29 10:04:42.241127 | gcc-14-cross            | 3                                         | 12505
 2024-05-03 23:10:27.416567 | libreoffice             | 4:24.2.3-1                                | 12238
 2024-02-29 09:59:52.604453 | gcc-9-cross-mipsen      | 4+c2                                      | 12216
 2024-05-07 01:51:54.054889 | llvm-toolchain-16       | 1:16.0.6-27                               | 11180
 2024-04-25 10:31:07.753175 | llvm-toolchain-snapshot | 1:19~++20240421021844+e095d978ba47-1~exp1 | 9881
 2024-05-05 04:30:01.133898 | llvm-toolchain-18       | 1:18.1.5-2                                | 9811
 2024-02-29 12:48:09.931447 | gcc-arm-none-eabi       | 15:13.2.rel1-2                            | 9773
 2024-02-29 13:22:32.331297 | gcc-10-cross            | 23                                        | 9118
 2024-05-06 22:16:07.781017 | llvm-toolchain-15       | 1:15.0.7-15                               | 8976
 2024-04-30 10:12:54.498582 | openblas                | 0.3.27+ds-2                               | 8787
 2024-04-04 10:04:55.49545  | gcc-14                  | 14-20240330-1                             | 8307
 2024-05-07 10:03:49.089649 | ghc                     | 9.6.5-1~exp1                              | 8246
 2024-05-02 10:03:49.545502 | gcc-14                  | 14-20240429-1                             | 8242
 2024-02-29 12:54:28.975384 | gcc-13-cross-ports      | 17                                        | 7753
 2024-04-14 21:54:48.554806 | ghc                     | 9.4.7-5                                   | 7702
 2024-02-29 14:38:08.333028 | gcc-13-cross            | 14                                        | 7321
 2024-02-29 15:22:27.15095  | gcc-10-cross-ports      | 24                                        | 7192
 2024-04-14 09:46:15.411926 | gcc-11                  | 11.4.0-9                                  | 7186
 2024-02-29 15:22:21.577515 | gcc-9-cross-ports       | 27                                        | 7156
 2024-05-06 09:45:44.77244  | llvm-toolchain-14       | 1:14.0.6-20                               | 7155
(30 rows)

That's the time for testing the source and all binary packages on all
architectures.

Lucas



Re: Any volunteers for lintian co-maintenance?

2024-05-09 Thread Soren Stoutner
I would like to respectfully disagree with some of the opinions expressed in 
this email.

First, I should say that I am painfully aware of how long it takes to run 
lintian on large 
packages.  When working on qtwebengine-opensource-src it takes my system (Ryzen 
7 
5700G) about 2 hours to build the package and about half an hour to run lintian 
against it.  
I would be completely in favor of any efforts that could be made in the 
direction of making 
lintian more efficient, either within lintian itself or in other packages that 
replicate some or 
all of lintian’s functionality in more efficient ways.

However, I personally find lintian to be one of the most helpful tools in 
Debian packaging.  
When going through the application process I found lintian to be a very useful 
tool in 
helping me learn how to produce packages that conform to Debian’s standards.  
The 
integration of lintian into mentors.debian.net was very helpful to me when I 
first started 
submitting packages to Debian, and it is still helpful to me when reviewing 
other people’s 
packages that have been submitted to mentors.debian.net.

As I type this email I am building an update to qtwebengine-opensource-src.  So 
far, lintian 
has caught two problems with this release that I would have otherwise missed.  
I admit that 
I am fairly new as a Debian Developer, and perhaps as I gain greater experience 
I would get 
to the point where lintian never catches things I miss.  But I don’t personally 
expect that to 
ever happen, because there are too many corner cases or opportunities for typos 
that 
computers are much better at catching than humans.

I do understand that lintian is in need of a lot of work.  I personally have an 
open MR 
against the package that fixes a check that is wrong more often than it is 
right (with both 
false positives and false negatives).  The fix is relatively simple and makes 
the check 100% 
accurate as far as I can tell.  However, after over a year, it has yet to be 
reviewed.

https://salsa.debian.org/lintian/lintian/-/merge_requests/461

I must admit that I have been sorely tempted to get involved with maintaining 
lintian 
because I feel it is so important.  So far, I have resisted that temptation 
because I am 
already involved in a decade-long effort to clean up Qt WebEngine in Debian and 
get it to 
the point where it has proper security support.  I haven’t wanted to spread 
myself too thin 
and end up accomplishing nothing because I tried to do too much.  But if 
lintian’s need 
increases or if my existing commitments decrease I would be happy to find 
myself involved 
with lintian maintenance.

Soren

On Thursday, May 9, 2024 12:27:49 PM MST Andreas Tille wrote:
> Hi,
> 
> this mail is a private response from Niels to my mail to the Debian Perl
> team where I explicitly asked for people helping out with lintian.  So
> far the answer from Niels is the only response.  Since he gave explicit
> permission to quote him in public I'm doing so hereby.  Niels assumed
> that his answer "will not help my case" - but well, I do not think that
> hiding problems will help anybody else.
> 
> At Tue, May 07, 2024 at 15:59:21 +0200 Andreas Tille wrote
> 
> > Hi Perl folks,
> > ...
> > --> see full mail at
> > https://lists.debian.org/debian-perl/2024/05/msg0.html
> [ From here I simply quote Niels unchanged - I'll comment probably tomorrow in
> detail ]
> 
> 



Re: Any volunteers for lintian co-maintenance?

2024-05-09 Thread Andreas Tille
Hi,

this mail is a private response from Niels to my mail to the Debian Perl
team where I explicitly asked for people helping out with lintian.  So
far the answer from Niels is the only response.  Since he gave explicit
permission to quote him in public I'm doing so hereby.  Niels assumed
that his answer "will not help my case" - but well, I do not think that
hiding problems will help anybody else.

At Tue, May 07, 2024 at 15:59:21 +0200 Andreas Tille wrote
> Hi Perl folks,
> ...
> --> see full mail at 
> https://lists.debian.org/debian-perl/2024/05/msg0.html
>
[ From here I simply quote Niels unchanged - I'll comment probably tomorrow in 
detail ]


Hi Andreas

You are welcome to quote me in public, though I feel it will not help your
cause. This reply is in private to you, so you can choose whether you want
to quote me.


I agree with the sentiment that important Debian tools would ideally be
co-maintained. However, in the passing years, I have started to feel a
disconnect with lintian, its direction and what I would like to see. I no
longer use lintian and I am fundamentally not interested in picking up
lintian anymore - neither as a user nor as a contributor. I have even
uninstalled it from my machines. For now, I still "allow" it in my salsa-ci
pipeline but my patience with it is thin.


For me, lintian fails in all roles it has. It is not a good tool for newbies
to get help, since it can only test build artifacts. As an example, your
feedback loop is a full package build followed by unpacking the package just
so lintian can tell you that you have a typo on line 4. That is a massive
waste of resources - notably developer time and mental bandwidth.

It also fails as an archive QA tool in my view since the FTP masters have
been unwilling to upgrade to any recent version of lintian. I think FTP
master's argument lies with the very poor performance in certain corner
cases that adversely affects larger packages (like linux). As a consequence,
people now get auto-rejects when uploading because lintian on the FTP master
server does not produce the same output as current lintian in stable or
newer.
  For the record, I support the FTP masters here since the performance was
quite horrible at some point (might be fixed, might not be) and that would
just block benign uploads. In fact, I would go so far as to say that the FTP
masters should remove lintian from their upload checks (partly because of
this, partly because only source packages are reliably checked which neuters
the original point of adding lintian to the upload queue).


The latter half (archive-wide QA + performance + trust) might be fixable
with a dedicated effort and then a lot of lobbying to restore people's trust
in lintian. But that is a lot of work, and it will not solve the former
(feedback cycles). The former requires a completely different mindset and
scope for the tooling.


To that end, I have decided to put my effort into `debputy` where I recently
added support for linting *with* quickfixes, reformatting and editor support
(the latter via LSP). I think that is a much better approach to half of the
issues that lintian emits and helps both newcomers and long term
contributors to be much more productive. Especially for the editor support
related parts, where people get instant feedback both on issues and the fix,
automatic reformatting on save and completion suggestions. None of which
lintian or wrap-and-sort are capable of.

If I am successful, then lintian can specialize its efforts into issues only
visible in packaged artifacts and thereby reduce its scope a bit. In that
sense, my work might be a (minor) help to the Lintian team on the assumption
they are willing to specialize more. But even if I am not successful with
`debputy`, I cannot imagine I would consider returning to lintian. It does
not scratch my itch and years of issues (some of which are still unfixed)
have made me not want to have anything to do with the tool.

Best of luck to Axel and anyone joining him to stop the bleeding. I do hope
they are successful, since lintian still has valuable features for Debian
as a whole that can be salvaged. But I am not going to be the "hero" that
salvages that mess. If I am going to do heroics in this area, then it will
be related to `debputy` with the aim to enable us to spend less mental
bandwidth on daily packaging work.

Best regards,
Niels

PS: In my view, the bleeding of lintian's quality started long before Axel
joined the lintian maintenance team and I do not fault Axel for being unable
to stop the bleeding. In my view, only a hero could have "managed" that at
the expense of their mental health.



Re: Tool to build Debian packages not requiring root in containers ?

2024-05-09 Thread Timo Röhling

Hi Charles,

* Charles Plessy  [2024-05-08 07:27]:
> I want to leverage our cluster to automate as much of the rebuilds as I
> can, but could not find the right tool.  I tried to run sbuild in a
> Singularity image and this failed.  However, I do not need the whole
> power of engines like sbuild, as none of the packages involved require
> root privileges to build.
Have you tried the unshare backend for sbuild? It uses Linux 
namespaces instead of full-blown root privileges, and works really 
great for my regular packaging work. I have not tried running it 
inside a virtualization container, though.
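
If it helps, the unshare mode can also be made the default in ~/.sbuildrc.
A sketch, assuming a recent sbuild that ships the unshare backend (the same
thing can be selected per invocation with --chroot-mode=unshare):

```perl
# ~/.sbuildrc -- select the unshare backend so builds run inside
# unprivileged Linux user namespaces instead of a root-owned schroot
$chroot_mode = 'unshare';
1;
```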



Cheers
Timo

--
⢀⣴⠾⠻⢶⣦⠀   ╭╮
⣾⠁⢠⠒⠀⣿⡁   │ Timo Röhling   │
⢿⡄⠘⠷⠚⠋⠀   │ 9B03 EBB9 8300 DF97 C2B1  23BF CC8C 6BDD 1403 F4CA │
⠈⠳⣄   ╰╯




Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-09 Thread Stéphane Blondon
On Tue, 7 May 2024 at 20:18,  wrote:

> Even after a reboot, I would be upset to lose the debug files that I've
> been accumulating for several days while trying to track down an
> intermittent problem with this stupid VPN...
>


At reboot, /tmp is automatically flushed. That has been the default
behaviour for years (at least on physical machines).
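
For reference, upstream systemd ships a tmpfiles.d snippet roughly along
these lines; the ages shown are the upstream defaults, and Debian's
packaging may override or omit them (a sketch, not the exact Debian file):

```
# /usr/lib/tmpfiles.d/tmp.conf (upstream defaults; illustrative)
# type  path      mode  user  group  age
q       /tmp      1777  root  root   10d
q       /var/tmp  1777  root  root   30d
```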

-- 
Stephane


Re: Tool to build Debian packages not requiring root in containers ?

2024-05-08 Thread Charles Plessy
On Wed, May 08, 2024 at 08:02:41AM -0700, Otto Kekäläinen wrote:
> 
> I read the docs on how Singularity is able to pull Docker images of Debian
> Sid and build on top of them, and run and exec just like Docker/Podman.
> Unfortunately it has its own Containerfile format (
> https://docs.sylabs.io/guides/3.5/user-guide/quick_start.html#singularity-definition-files)
> and the commands have their own syntax. I guess Debcraft could be extended
> to support it, but that would require at least one Singularity user as
> frequent contributor to test and develop Singularity-compatibility.
> 
> The entire code base is shell code. Perhaps you want to take a look if it
> looks hackable for you?

Hi Otto,

I looked at the code, and while it would be easy to replace the podman
commands to run containers, I wonder if there isn't a major roadblock:

The main use of Singularity containers is to provide static images for
software.  The default is that the image is read-only and has write
access to the host filesystems.  Thus, running apt upgrade in a
singularity container isn't something that is done usually.  It might
even be impossible, although I am not expert enough to make that
statement firmly.

Is there a chance debcraft can work from a static container provided by
the user?

I think the key problem is that I want to build Debian packages that need
no root access and that do not need to install dependencies requiring root
access, and I want to do that with user privileges only.

Have a nice day,

Charles

-- 
Charles Plessy Nagahama, Yomitan, Okinawa, Japan
Debian Med packaging team http://www.debian.org/devel/debian-med
Tooting from home  https://framapiaf.org/@charles_plessy
- You  do not have  my permission  to use  this email  to train  an AI -



Re: new upstream version fails older tests of rdepends packages

2024-05-08 Thread Paul Gevers

Hi,

On 08-05-2024 6:06 p.m., Bill Allombert wrote:

Agreed, but gap does not actually break anything, it is just the tests
in testing that are broken. So I can do that but that seems a bit artificial.


Aha, that wasn't at all clear to me. If you don't want to do the 
artificial thing (which is fine, except now you depend on members of the 
release team), I'll manually schedule the tests. Maybe tomorrow.


Paul




Avoiding /var/tmp for long-running compute (was: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default])

2024-05-08 Thread Russ Allbery
"Jonathan Dowland"  writes:

> Else-thread, Russ begs people to stop doing this. I agree people
> shouldn't! We should also work on education and promotion of the
> alternatives.

Also, helping people use better tools for managing workloads like this
that make their lives easier and have better semantics, thus improving
life for everyone.

I'm suggesting solutions that I don't have time to help implement, and of
course it will take a long time for better tools to filter into all those
clusters, so this doesn't address the immediate problem of this thread
(hence the subject change).  But based on my past experience with these
types of systems, I bet a lot of the patterns captured in software are
older ones.  Linux has a *lot* of facilities today that it didn't have, or
at least weren't widely used, five years ago.  It would be great to help
some of those improvements filter down, because they can make a lot of
these problems go away.

For example, take the case of scratch space for batch computing.  The
logical lifespan for temporary files for a batch computing job is the
lifetime of the job, whatever that may be.  (I know there are exceptions,
but here I'm just talking about defaults.)  Previously one would have to
build support into the batch job management system for creating and
managing those per-job temporary directories, and ensure the jobs support
TMPDIR or other environment variables to control where they store data,
and everyone was doing this independently.  (I've done a *lot* of this
kind of thing, once upon a time.)
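
As an illustrative sketch (mine, not from the thread): a job that keeps its
scratch files under the directory tempfile chooses will cooperate with any
scheduler that sets TMPDIR, because Python's tempfile module consults TMPDIR
(then TEMP, TMP) before falling back to /tmp. The file name and payload here
are placeholders.

```python
import os
import tempfile


def run_job() -> str:
    """Simulate a batch job that writes scratch data under TMPDIR.

    The scheduler can point TMPDIR at a per-job directory; everything
    the job creates inside it is removed when the context exits.
    """
    with tempfile.TemporaryDirectory(prefix="job-") as scratch:
        # All intermediate files live inside the per-job directory.
        with open(os.path.join(scratch, "intermediate.dat"), "wb") as f:
            f.write(b"partial results")
        return scratch  # returned only so callers can see where it was


if __name__ == "__main__":
    print(run_job())  # the directory is already gone by this point
```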

But now we have mount namespaces, and systemd has PrivateTmp that builds
on top of that.  So if the job is managed by an execution manager, it can
create per-job temporary directories and it may already support (as
systemd does) the semantics of deleting the contents of those directories
on job exit, and it bind-mounts those into the process space and the
process is none the wiser.  I think all of the desirable glue may not
fully be there (controlling what underlying file system is used for
PrivateTmp, ensuring they're also excluded from normal cleanup, etc.), but
this is very close to a much better way of handling this problem that
still exposes /tmp and /var/tmp to the job so that none of the
often-crufty scientific computing software has to change.
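
A minimal unit sketch of what that looks like under systemd (the unit and
command names are hypothetical; PrivateTmp= is the real directive):

```ini
# batch-job@.service -- hypothetical illustration of a per-job unit
[Unit]
Description=One batch job with private scratch space

[Service]
Type=oneshot
ExecStart=/usr/local/bin/run-job %i
# Give the job its own /tmp and /var/tmp, bind-mounted over the real
# ones; systemd deletes their contents when the service exits.
PrivateTmp=yes
```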

The new capabilities that Linux now has due to namespaces are marvellous
and solve a whole lot of problems that I didn't realize were even
solvable, and right now I suspect there are huge opportunities for
substantial improvements without a whole lot of effort by just plumbing
those facilities through to higher-level layers like batch systems.  Whole
classes of long-standing problems would just disappear, or at least be
far, far easier to manage.

Substantial, substantial caveat: I have been out of this world for a
while, and maybe most of this work has already been done?  That would be
amazing.  The best possible response to this post would be for someone to
tell me I'm five years behind and the batch systems have already picked up
this work and we can just point people at them.

-- 
Russ Allbery (r...@debian.org)  



Re: new upstream version fails older tests of rdepends packages

2024-05-08 Thread Bill Allombert
On Wed, May 08, 2024 at 03:21:12PM +, Graham Inggs wrote:
> Hi Bill
> 
> On Wed, 8 May 2024 at 13:58, Bill Allombert  wrote:
> > The problem is that all the debs in testing and sid are correct, it is the 
> > autopkgtest in
> > testing which are wrong (they are relying on undocumented behaviour).
> > They are fixed in sid.
> 
> I think an upload of gap, with Breaks on the versions of the gap-*
> packages that are wrong, should allow migration.

Agreed, but gap does not actually break anything, it is just the tests
in testing that are broken. So I can do that but that seems a bit artificial.

Cheers,
-- 
Bill. 

Imagine a large red swirl here. 



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-08 Thread Jonathan Dowland
On Mon May 6, 2024 at 5:01 PM BST, Luca Boccassi wrote:
> On Mon, 6 May 2024 at 16:51, Barak A. Pearlmutter 
> wrote:
> > For whatever reason, a lot of people who process large data use
> > /var/tmp/FOO/ as a place to store information that should not be
> > backed up, but also should not just disappear.
>
> Then such people, assuming they actually exist, can configure their
> custom systems accordingly upon reading the release notes before
> upgrading, as they would do anyway if installing on CentOS or any
> other major OS. Hence, not an issue either.

They actually exist. I've been one of them, I've worked with many of
them, it's an incredibly common pattern in academic computing at least,
and changing it in Debian should be a very carefully explored decision.

You've pointed out that changing the behaviour from the default, in
either direction, is trivial. The issue is not one of individual
preference but of what is default. The long-established status quo
is not to clean /var/tmp. There is serious risk here: to users' data
and correspondingly to Debian's reputation for stability, which
many of us have worked hard to maintain for a very long time.

If you think we need hard data to quantify this practice, then let's
work on a plan for how to gather that going forward, rather than
dismiss this outright because we haven't collected it.

Else-thread, Russ begs people to stop doing this. I agree people
shouldn't! We should also work on education and promotion of the
alternatives.

I'd like to hear some arguments *in favour* of making this change.
Alignment with systemd-upstream, reduced package maintenance burden
are two that I can think of, but perhaps I've missed more. These two,
IMHO, are significantly outweighed by the risks.

Please hold off making this change now and let this discussion continue.


-- 
Please do not CC me for listmail.

  Jonathan Dowland
✎j...@debian.org
   https://jmtd.net



Re: new upstream version fails older tests of rdepends packages

2024-05-08 Thread Graham Inggs
Hi Bill

On Wed, 8 May 2024 at 13:58, Bill Allombert  wrote:
> The problem is that all the debs in testing and sid are correct, it is the 
> autopkgtest in
> testing which are wrong (they are relying on undocumented behaviour).
> They are fixed in sid.

I think an upload of gap, with Breaks on the versions of the gap-*
packages that are wrong, should allow migration.

Regards
Graham



Re: Tool to build Debian packages not requiring root in containers ?

2024-05-08 Thread Otto Kekäläinen
Hi!


On Tue, 7 May 2024 at 23:01 Charles Plessy  wrote:

> On Tue, May 07, 2024 at 08:17:31PM -0700, Otto Kekäläinen wrote:
> >
> > Can you give me an example of a package you want to build and what is
> > the starting point, and I can tell you what command to issue to
> > https://salsa.debian.org/otto/debcraft to achieve it?
> >
> > It supports running Podman in user mode (=no root permissions needed),
>
> Hi Otto,
>
> it looks really great!
>
> Do you think you can make it work with Singularity/Apptainer instead of
> Podman?  Our cluster runs only singularity 3.5.2
> (https://docs.sylabs.io/guides/3.5/user-guide/).  Debian has version
> 4.1.2 in the singularity-container package.
>
> The conversion of a Docker container to the Singularity format is
> simple, and Singularity already mounts most of the local storage to make
> it visible and writable from within the container.
>

I read the docs on how Singularity is able to pull Docker images of Debian
Sid and build on top of them, and run and exec just like Docker/Podman.
Unfortunately it has its own Containerfile format (
https://docs.sylabs.io/guides/3.5/user-guide/quick_start.html#singularity-definition-files)
and the commands have their own syntax. I guess Debcraft could be extended
to support it, but that would require at least one Singularity user as
frequent contributor to test and develop Singularity-compatibility.

The entire code base is shell code. Perhaps you want to take a look if it
looks hackable for you?


Re: new upstream version fails older tests of rdepends packages

2024-05-08 Thread Bill Allombert
On Sat, May 04, 2024 at 02:32:22PM +0200, Paul Gevers wrote:
> Hi,
> 
> On 04-05-2024 11:39 a.m., Jerome BENOIT wrote:
> > What would be the best way to unblock the migration of gap and gap-io ?
> 
> If gap isn't going to change (which might be the easiest solution), then
> file bugs and fix those reverse dependencies. Those bugs are RC and in due
> time will cause autoremoval.

The problem is that all the debs in testing and sid are correct, it is the 
autopkgtest in
testing which are wrong (they are relying on undocumented behaviour).
They are fixed in sid.

Cheers,
-- 
Bill. 

Imagine a large red swirl here.



Re: Y2038-safe replacements for utmp/wtmp and lastlog

2024-05-08 Thread Jun MO

On Wed, 8 May 2024 at 09:21:35 +1000, Craig Small  wrote:

> I can only speak for w.  It currently prefers what it gets from
> systemd or elogind, effectively iterating over the sessions coming
> from sd_get_sessions() if sd_booted() returns true.
>
> If sd_booted() returns false, then it uses the old utmp/utmpx files
> for now. Besides the Y2038 issue, the utmp "API" is pretty awful with
> things like errors pretty much undetectable. There is also the
> problem about who (e.g. which process) should be writing to those
> files, as you have pointed out in your email.
>
> For now w/uptime will use utmp as a fallback, but I'll be happy if
> this gets updated to something better; it's a low-priority for me
> because systemd/elogind do what I need most of the time.

Thanks for explaining.

For last(1), my concern is that it would be helpful to keep the original
last(1) for backwards compatibility, so it can read old wtmp files. For
w(1), utmp only covers current sessions, so there is no need to keep
years-old utmp files. Unlike last(1), there is nothing like
`w -f /run/utmp'. Actually, one can run `last -f /run/utmp', and it
provides output similar to w(1)'s, except that it is missing things like
the process and CPU times for each user. And as you pointed out, w(1)
already prefers information from systemd/elogind over reading from utmp.

So I now think there is little need for w(1) to keep the ability to read
from utmp, and I am also happy for it to change to use something better.

Regards,
Jun MO



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-08 Thread Emanuele Rocca
Hi,

On 2024-05-07 09:43, Russ Allbery wrote:
> I understand your point, which is that this pattern is out there in the
> wild and Debian is in danger of breaking existing usage patterns by
> matching the defaults of other distributions.  This is a valid point, and
> I appreciate you making it.

The more general point being that if systems have certain properties,
whether by design or by accident, people tend to rely on them if these
properties are useful for whatever reasons.

In the specific case of /var/tmp in Debian, for a very long time now the
properties have been: (1) persistent, world-writable storage (2) outside
of /home (3) available by default on all systems without any
configuration. To many, these properties make for a good place where
transient-ish work can be done without the risk of losing it upon reboot
(or power loss, or similar). Not being in /home is an important one,
because for instance /home may be regularly backed up, or it may be on
an NFS share, or who knows what else, and you may not want that, for
whatever reason.

All of that being said, I do see the value in uniformity with other
distros, also because it surely makes things easier for maintainers.
And yes, https://xkcd.com/1172/.

It's just that changes are usually a cost/benefit tradeoff -- in the
xkcd a CPU is overheating, whereas in this case the problem to fix seems
a bit less obvious.



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-08 Thread Marc Haber
On Tue, 07 May 2024 22:29:30 +0100, Richard Lewis
 wrote:
>Holger Levsen  writes:
>> I'm a bit surprised how many people seem to really rely on data in /tmp
>> to survive for weeks or even months. I wonder if they backup /tmp?
>
>I use /tmp for things that fall somewhere between "needs a backup" and
>"unimportant, can be deleted whenever".

For me there is a difference between /tmp and /var/tmp. On all my
systems, /tmp has been a tmpfs for decades, /var/tmp being used for
data that is too large for tmpfs.
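
For readers who want the same split, a typical /etc/fstab line for a tmpfs
/tmp looks like this (the size cap is an arbitrary example, not a
recommendation):

```
# /etc/fstab -- mount /tmp as tmpfs, capped at 2 GiB (example value)
tmpfs  /tmp  tmpfs  nosuid,nodev,size=2G  0  0
```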

Even losing /tmp would probably affect some of my running programs
including the X11 session.

Greetings
Marc
-- 

Marc Haber |   " Questions are the | Mailadresse im Header
Rhein-Neckar, DE   | Beginning of Wisdom " | 
Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: *49 6224 1600402



Re: Tool to build Debian packages not requiring root in containers ?

2024-05-08 Thread Charles Plessy
On Tue, May 07, 2024 at 08:17:31PM -0700, Otto Kekäläinen wrote:
> 
> Can you give me an example of a package you want to build and what is
> the starting point, and I can tell you what command to issue to
> https://salsa.debian.org/otto/debcraft to achieve it?
> 
> It supports running Podman in user mode (=no root permissions needed),

Hi Otto,

it looks really great!

Do you think you can make it work with Singularity/Apptainer instead of
Podman?  Our cluster runs only singularity 3.5.2
(https://docs.sylabs.io/guides/3.5/user-guide/).  Debian has version
4.1.2 in the singularity-container package.

The conversion of a Docker container to the Singularity format is
simple, and Singularity already mounts most of the local storage,
making it visible and writable from within the container.
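For reference, the conversion Charles describes uses the documented `singularity build` (or `apptainer build`) form, which pulls a Docker/OCI image and writes a SIF file. The image and output names below are examples; the block only composes the command rather than executing it, since the real build needs the network:

```shell
# Sketch of converting a Docker/OCI image into a Singularity SIF image.
# 'singularity build OUT.sif docker://IMAGE' is the documented form;
# debian:sid and debian-sid.sif are example names.
img=docker://debian:sid
out=debian-sid.sif
line=$(printf 'singularity build %s %s' "$out" "$img")
echo "$line"
```

Running the printed command on a host with singularity/apptainer installed produces a SIF file that can then be executed with `singularity run` or `singularity exec`.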

The typical packages that I want to build are the r-bioc-* collection.
Together, they form a dependency graph about a dozen layers deep, which
makes transitions work-intensive.

With tools like debcraft I would like to prepare a set of updated
packages for which I know that the CI tests pass, and that can all be
uploaded together when we get the green light from the Release team
(and to rebuild all of them if, in the meantime, the contents of
unstable have changed significantly).

Have a nice day,

Charles

-- 
Charles Plessy Nagahama, Yomitan, Okinawa, Japan
Debian Med packaging team http://www.debian.org/devel/debian-med
Tooting from home  https://framapiaf.org/@charles_plessy
- You  do not have  my permission  to use  this email  to train  an AI -



Re: Tool to build Debian packages not requiring root in containers ?

2024-05-07 Thread Otto Kekäläinen
Hi!

On Tue, 7 May 2024 at 15:27, Charles Plessy  wrote:
..
> I want to leverage our cluster to automate as much of the rebuilds as I
> can, but could not find the right tool.  I tried to run sbuild in a
> Singularity image and this failed.  However, I do not need the whole
> power of engines like sbuild, as none of the packages involved require
> root privileges to build.
>
> Do you have a suggestion for a tool that can run in user mode in a
> container image with access to local storage on the host, and that,
> given a Debian source control file, will download the dependencies and
> build the package?

Can you give me an example of a package you want to build and what is
the starting point, and I can tell you what command to issue to
https://salsa.debian.org/otto/debcraft to achieve it?

It supports running Podman in user mode (= no root permissions needed),
it mounts a local directory (local storage), and it creates clean build
containers on the fly, similar to sbuild, but is much easier and faster
to use.

Example of building one of your packages by just pointing debcraft at
the source git repo:

$ debcraft build https://salsa.debian.org/med-team/altree.git
Building container 'debcraft-debian-sid' in
'/tmp/tmp.brCZRhn2lL/debcraft-container' for downloader use
mkdir: created directory '/tmp/tmp.brCZRhn2lL/debcraft-container'
STEP 1/10: FROM debian:sid
...
$ ls -1 debcraft-build-altree-1715137513.a8c999a+master
altree_1.3.2-2_amd64.build
altree_1.3.2-2_amd64.buildinfo
altree_1.3.2-2_amd64.changes
altree_1.3.2-2_amd64.deb
altree-dbgsym_1.3.2-2_amd64.deb
altree-examples_1.3.2-2_all.deb
control.log
filelist.log
lintian.log

The first build is a bit slow, as it needs to download all the
dependencies and create a container, but a second run of 'debcraft
build' inside the source directory will be very fast, as the container
cache is reused.



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Russ Allbery
Simon Richter  writes:
> On 5/8/24 07:05, Russ Allbery wrote:

>> It sounds like that is what kicked off this discussion, but moving /tmp
>> to tmpfs also usually makes programs that use /tmp run faster.  I
>> believe that was the original motivation for tmpfs back in the day.

> IIRC it started out as an implementation of POSIX SHM, and was later
> generalized.

I believe you're correct for Linux specifically but not in general for
UNIX.  For example, I'm fairly sure this is not the case on Solaris, which
was the first place I encountered tmpfs and where tmpfs /tmp was the
default starting in Solaris 2.1 in 1992.  tmpfs was present in SunOS in
1987, so I'm pretty sure it predates POSIX shared memory.

Linux was very, very late to the tmpfs world.

> When /var runs full, the problem is probably initrd building.

I'm not quite sure what to make of this statement.  On my systems, /var
contains all sorts of rather large things, such as PostgreSQL databases,
INN spool files, and mail spools.  I have filled up /var on many systems
over the years, and it's never been by building initrd images.

> Taking a quick look around all my machines, the accumulated cruft in
> /var/tmp is on the order of kilobytes -- mostly reportbug files, and a
> few from audacity -- and these machines have not been reinstalled in the
> last ten years.

Yes, I don't think many programs use it.  I think that's a good thing; the
specific semantics of /var/tmp are only useful in fairly narrow
situations, and overfilling it is fairly dangerous.

Back in the day, /var/tmp was the thing that you used if /tmp was too
small (because it was usually tmpfs).  For example, using sort -T /var/tmp
to sort large files is an old UNIX rune.  And, of course, students would
use it because they ran out of quota in their home directories and then
get upset when their files got deleted automatically, back in the days of
shared UNIX login clusters.
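The `sort -T` rune mentioned above looks like this in practice (the input here is tiny for illustration; for real workloads the point is that sort's intermediate run files land in /var/tmp instead of a small tmpfs-backed /tmp):

```shell
# Direct sort's temporary run files to /var/tmp instead of the
# default of $TMPDIR or /tmp, which may be a small tmpfs.
work=$(mktemp -d)
printf '%s\n' banana cherry apple > "$work/input.txt"
sort -T /var/tmp "$work/input.txt" > "$work/sorted.txt"
head -n1 "$work/sorted.txt"   # apple
```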

-- 
Russ Allbery (r...@debian.org)  



Re: Bug#966621: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Luca Boccassi
On Tue, 7 May 2024 at 17:33, Sam Hartman  wrote:
>
> > "Luca" == Luca Boccassi  writes:
>
> Luca> On Mon, 6 May 2024 at 15:42, Richard Lewis
> Luca>  wrote:
> >>
> >> Luca Boccassi  writes:
> >>
> >> > Hence, I am not really looking for philosophical discussions or
> >> lists > of personal preferences or hypotheticals, but for facts:
> >> what would > break where, and how to fix it?
>
> ssh-agent appears to default to creating a socket under /tmp.
> I think respecting $XDG_RUNTIME_DIR would be better.
>
> /etc/X11/Xsession.d/90x11-common_ssh-agent also doesn't override where
> the socket ends up.
> I definitely think for session scripts like that $XDG_RUNTIME_DIR would
> be better.
>
>
> gnome-keyring's ssh-agent handles this better, although last time I
> checked, it did not support pkcs11, so I could not use it with PIV
> cards.
> (Other parts of gnome-keyring do support pkcs11).

The ssh agent provided by gnupg also behaves correctly and creates the
socket in XDG_RUNTIME_DIR. I have filed a bug for openssh-client's
ssh-agent.
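Until that bug is fixed, a user can point SSH_AUTH_SOCK at a socket outside /tmp. `gpgconf --list-dirs agent-ssh-socket` is the documented way to find gpg-agent's ssh socket (already under /run/user/UID), and ssh-agent's documented `-a` option accepts an explicit socket path; the fallback path below is an example, not a standard location:

```shell
# Prefer gpg-agent's ssh socket, which lives under the runtime dir.
if command -v gpgconf >/dev/null 2>&1; then
    SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
else
    # Fallback sketch: compose an ssh-agent invocation with an
    # explicit socket outside /tmp (path is an example).
    SSH_AUTH_SOCK="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/ssh-agent.socket"
    echo "would run: ssh-agent -a $SSH_AUTH_SOCK"
fi
export SSH_AUTH_SOCK
echo "SSH_AUTH_SOCK=$SSH_AUTH_SOCK"
```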



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Simon Richter

Hi,

On 5/8/24 07:05, Russ Allbery wrote:

> It sounds like that is what kicked off this discussion, but moving /tmp to
> tmpfs also usually makes programs that use /tmp run faster.  I believe
> that was the original motivation for tmpfs back in the day.

IIRC it started out as an implementation of POSIX SHM, and was later
generalized. The connection to SHM is still there -- POSIX SHM only
works if a tmpfs is mounted anywhere in the system. Canonically, a tmpfs
is mounted on /dev/shm for that purpose, but if /tmp is a tmpfs, then
/dev/shm doesn't need to exist.

I agree that it makes a lot of things run faster (especially compiling,
which creates temporary files in /tmp), but it has also caused
situations that required pressing SysRq to resolve (also during
compiling).

> For /var/tmp, I think the primary motivation to garbage-collect those
> files is that filling up /var/tmp is often quite bad for the system.  It's
> frequently not on its own partition, but is shared with at least /var, and
> filling up /var can be very bad.  It can result in bounced mail, unstable
> services, and other serious problems.

When /var runs full, the problem is probably initrd building.

Taking a quick look around all my machines, the accumulated cruft in
/var/tmp is on the order of kilobytes -- mostly reportbug files, and a
few from audacity -- and these machines have not been reinstalled in the
last ten years.


   Simon



Re: Bug#966621: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Luca Boccassi
On Tue, 7 May 2024 at 22:57, Russ Allbery  wrote:
>
> Richard Lewis  writes:
> > Luca Boccassi  writes:
>
> >> what would break where, and how to fix it?
>
> > Another one for you to investigate: I believe apt source and 'apt-get
> > source' download and extract things into /tmp, as in the mmdebstrap
> > example mentioned by someone else, this will create "old" files that
> > could immediately be flagged for deletion causing surprises.
>
> > (People restoring from backups might also find this an issue)
>
> systemd-tmpfiles respects atime and ctime by default, not just mtime, so I
> think this would only be a problem on file systems that didn't support
> those attributes.  atime is often turned off, but I believe support for
> ctime is fairly universal among the likely file systems for /var/tmp, and
> I believe tmpfs supports all three.  (I'm not 100% sure, though, so please
> correct me if I'm wrong.)

Yes, atime/ctime are used too, so things that are really in the process
of being used are not really an issue.

I checked screen, and even in bookworm it uses /run/screen/ as you
said, so it's fine.

I checked tmux, and indeed it uses /tmp/tmux-UID/. That is a terrible
choice given that the path is predictable: if something manages to run
first, it can hijack the socket. But that is really a pre-existing
issue. I've filed a bug to note that tmux needs to start flocking its
file in /tmp/ while running, to avoid it being deleted while in use.
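The flock protection Luca describes can be sketched from the shell: hold an exclusive BSD lock (flock) on a sentinel file inside the /tmp work area for the lifetime of the job, so that an flock-aware cleaner skips the files while they are in use. The directory and lock-file names are examples:

```shell
# Create a scratch directory in /tmp and hold a BSD lock on a sentinel
# file while working, signalling "in use" to flock-aware cleanup tools.
workdir=$(mktemp -d /tmp/myjob.XXXXXX)   # example job directory
exec 9>"$workdir/.lock"                  # fd 9 stays open until exit
flock -n 9 || { echo "lock is already held"; exit 1; }
echo "working under $workdir"
# ... long-running work here; the lock drops when fd 9 closes ...
```

The lock is tied to the open file descriptor, so it is released automatically when the process exits, even on a crash.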



Re: Y2038-safe replacements for utmp/wtmp and lastlog

2024-05-07 Thread Craig Small
On Wed, 8 May 2024 at 09:03, Jun MO  wrote:

> 1) I hope there will still be the original
> w(1)/last(1)/lastb(1)/lastlog(1)/faillog(1) tools which can still read
> *old* format utmp/wtmp/lastlog in Debian at least for a while after
> the switch to Y2038-safe replacements.

I can only speak for w.  It currently prefers what it gets from systemd
or elogind, effectively iterating over the sessions coming from
sd_get_sessions() if sd_booted() returns true.

If sd_booted() returns false, then it uses the old utmp/utmpx files for
now. Besides the Y2038 issue, the utmp "API" is pretty awful, with
errors pretty much undetectable. There is also the problem of who
(i.e., which process) should be writing to those files, as you have
pointed out in your email.

For now w/uptime will use utmp as a fallback, but I'll be happy if this
gets updated to something better; it's a low priority for me because
systemd/elogind do what I need most of the time.
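The branch Craig describes can be mimicked from the shell: per the sd_booted(3) documentation, the check amounts to testing for /run/systemd/system. The `|| true` guards are there because loginctl can fail in minimal environments even when that directory exists:

```shell
# Same decision w makes: systemd is the running init iff
# /run/systemd/system exists (the documented sd_booted() condition).
if [ -d /run/systemd/system ]; then
    # roughly the session list sd_get_sessions() iterates over
    loginctl list-sessions --no-legend || true
else
    who || true    # classic utmp-based fallback
fi
```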

 - Craig


Re: Y2038-safe replacements for utmp/wtmp and lastlog

2024-05-07 Thread Jun MO

Dear Developers,

A few words from a Debian user. Please note that I have not come here
to blame or complain about the upstream or maintainer of the pam
package or the maintainer of the shadow package, nor to demand
anything. I have only come to present some of my hopes.

I often use the w(1)/last(1) commands, and sometimes lastb(1).
Several days ago, I noticed that records of logins from the console
(tty{1..6}) were missing from the output of last(1), while records from
ssh/tmux/lightdm were still there, and I started to suspect that some
recent change to my system had caused this. Today I tried to find out
what happened, and after several hours of fruitless effort (trying
different options of agetty/login, using strace/gdb on agetty/login,
and reading the source of the shadow package), I noticed the word
"pam_lastlog" in the source code. Finally I found that the problem is
that login(1) uses pam_lastlog.so to write /var/log/wtmp, but
pam_lastlog.so is no longer included in the libpam-modules package. (It
is somehow related to #1068229, but I had missed it.)

As I understand it, the reason we need to deal with the Y2038 issue is
that it may cause problems, some of them big, after the year 2038. But
should we therefore rush changes that confuse users or break things?
(Just a joke; again, I am not blaming anyone.) Are there issues with
utmp/wtmp/lastlog and w(1)/last(1)/lastlog(1) *currently*? Are there
security issues or big defects, or are they hard to maintain?

If not, I would prefer a slower, compatible and less disruptive
process.

More specifically, regarding some of the changes proposed in the wiki [1]:

1) I hope the original w(1)/last(1)/lastb(1)/lastlog(1)/faillog(1)
tools, which can read the *old*-format utmp/wtmp/lastlog files, will
remain in Debian at least for a while after the switch to Y2038-safe
replacements. Those tools can read old-format files without converting
or importing them into the new format. I have been keeping old wtmp
files for several years. Starting in 2016, my system with a proprietary
nvidia driver failed to resume from suspend-to-RAM, and playing a video
using the hardware accelerator would make the system unstable. Five
years later, still having the problem, and with some help from reading
kernel versions out of `last -f /var/log/wtmp.*', I finally found the
kernel commit that caused the problem. This shows that keeping those
tools installed can be of some help. Another point is that third-party
packages may still write old-format utmp/wtmp/lastlog, so it would be
good to keep those tools installed at least for a while. They could be
modified so that, when invoked, they print a message informing users
that times/dates may be incorrect after the year 2038, and suggesting
that users only use these tools to read old files and otherwise switch
to the new replacements. In any case, keeping those tools seems to do
little harm. I have also had a look at wtmpdb, from salsa [2]. From the
manpage and --help, it seems that the current version 0.11.0 of wtmpdb
does not support reading/importing/migrating from /var/log/wtmp, so the
suggestion in the wiki that /usr/bin/wtmpdb take over last(1) is not
feasible, as some users may still expect `last -f /var/log/wtmp.*' to
read old files. Even if a new version of wtmpdb can read /var/log/wtmp
without importing it into /var/lib/wtmpdb/wtmpdb.db, it would still be
good to have the choice of using the original last(1) to read
/var/log/wtmp. (Also see below.) It is similar with lastlog2: I see
that lastlog2 can already import from /var/log/lastlog, but from
usage() [3], it will always import into a lastlog2 database.
2) I hope I can choose whether to keep or delete the old
utmp/wtmp/lastlog files when switching to the Y2038-safe replacements.
By default, those old files could be deleted automatically, but before
extracting the new package or running the maintainer scripts there
could be a prompt telling users that those files will be deleted and
asking whether to continue; if not, dpkg should exit without deleting
them. Alternatively, the files would not be deleted automatically;
instead, a NEWS.Debian entry could be displayed, or a maintainer script
could print a message saying that the files can be safely deleted after
conversion. I think current dpkg already has the machinery to implement
the above, as I have seen similar things many times. I know that
purging a package normally deletes its logs (and configuration), but
wtmp/btmp/lastlog/faillog do not belong to any package, and many
programs read from and write to them. It also seems to me that deleting
logs during an upgrade is not so good; maybe leave that to the user to
decide. You may ask why I want to keep the old-format files instead of
converting them and using the new tools to read them. I cannot say
exactly why, but perhaps I am afraid that the conversion may not be
perfect, and I want to compare both outputs before deleting the old
ones. For example, the old format files may have become corrupted, and
the

Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Russ Allbery
Richard Lewis  writes:

> btw, I'm not trying to argue against the change, but I don't yet
> understand the rationale (which I'd like to see put into the
> release-notes): is there perhaps something more compelling than "other
> distributions and upstream already do this"?

It sounds like that is what kicked off this discussion, but moving /tmp to
tmpfs also usually makes programs that use /tmp run faster.  I believe
that was the original motivation for tmpfs back in the day.

For /var/tmp, I think the primary motivation to garbage-collect those
files is that filling up /var/tmp is often quite bad for the system.  It's
frequently not on its own partition, but is shared with at least /var, and
filling up /var can be very bad.  It can result in bounced mail, unstable
services, and other serious problems.

Most modern desktop systems now have large enough drives that this isn't
as much of a concern as it used to be, but VMs often still have quite
small / partitions and put /var/tmp on that partition.
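For reference, the mechanism under discussion is a tmpfiles.d(5) age policy; upstream systemd ships something along these lines in its tmp.conf (quoted from memory, so treat the exact lines as an approximation, though 10d/30d match the upstream defaults being discussed in this thread):

```
# /usr/lib/tmpfiles.d/tmp.conf (upstream systemd, approximate excerpt)
# Type  Path       Mode  UID   GID   Age
q       /tmp       1777  root  root  10d
q       /var/tmp   1777  root  root  30d
```

A distribution or admin overrides this by dropping a file with the same name into /etc/tmpfiles.d/, e.g. with the Age field set to `-` to disable cleanup.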

-- 
Russ Allbery (r...@debian.org)  



Re: Bug#966621: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Sven Mueller
On 07.05.2024 22:56, Richard Lewis wrote:

> Luca Boccassi writes:
>
>> what would break where, and how to fix it?
>
> Another one for you to investigate: I believe apt source and 'apt-get
> source' download and extract things into /tmp, as in the mmdebstrap
> example mentioned by someone else, this will create "old" files that
> could immediately be flagged for deletion causing surprises.
>
> (People restoring from backups might also find this an issue)

`apt download` and `apt source` download to your current working
directory. Same for apt-get.

I would not expect people to restore files to /tmp and expect that
restore to work across a reboot. And to be honest, I find the
expectation of any guarantee on files in /tmp or /var/tmp across a
reboot quite surprising. The directories are named "tmp" because they
are meant as temporary storage, which implies automatic deletion at
some point, IMHO.

Now, bad choices by various tools have been mentioned, so a cleaner for
these directories that runs outside a reboot has to be careful anyhow.
But during a reboot? I don't think that should be too much of a
problem.

Cheers,
Sven

Re: Bug#966621: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Russ Allbery
Richard Lewis  writes:
> Luca Boccassi  writes:

>> what would break where, and how to fix it?

> Another one for you to investigate: I believe apt source and 'apt-get
> source' download and extract things into /tmp, as in the mmdebstrap
> example mentioned by someone else, this will create "old" files that
> could immediately be flagged for deletion causing surprises.

> (People restoring from backups might also find this an issue)

systemd-tmpfiles respects atime and ctime by default, not just mtime, so I
think this would only be a problem on file systems that didn't support
those attributes.  atime is often turned off, but I believe support for
ctime is fairly universal among the likely file systems for /var/tmp, and
I believe tmpfs supports all three.  (I'm not 100% sure, though, so please
correct me if I'm wrong.)
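The three timestamps Russ refers to can be inspected directly with stat; an age-based cleaner that honors all three only considers a file stale once the newest of them exceeds the configured age, so touching the file resets its cleanup clock:

```shell
# Show a file's access, modification and status-change times.
f=$(mktemp)
stat -c 'atime=%X mtime=%Y ctime=%Z' "$f"
touch "$f"      # refreshes mtime (and therefore ctime) to now
stat -c 'atime=%X mtime=%Y ctime=%Z' "$f"
```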

-- 
Russ Allbery (r...@debian.org)  



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Richard Lewis
Holger Levsen  writes:

> I'm a bit surprised how many people seem to really rely on data in /tmp
> to survive for weeks or even months. I wonder if they backup /tmp?

I use /tmp for things that fall somewhere between "needs a backup" and
"unimportant, can be deleted whenever". I think all of the issues raised
(disappearing files from git checkouts, old files in unpacked
tarballs/debootstraps/downloads, autopkgtests, 3rd-party-software, bad
choices in tmux/ssh-agent/etc) fall into that spectrum: It's not that
the data is critical, but losing it creates more work -- what is the
reason for accepting that 'risk'?

btw, I'm not trying to argue against the change, but I don't yet
understand the rationale (which I'd like to see put into the release
notes): is there perhaps something more compelling than "other
distributions and upstream already do this"?



Re: Bug#966621: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Richard Lewis
Luca Boccassi  writes:

> what would
> break where, and how to fix it?

Another one for you to investigate: I believe apt source and 'apt-get
source' download and extract things into /tmp; as in the mmdebstrap
example mentioned by someone else, this will create "old" files that
could immediately be flagged for deletion, causing surprises.

(People restoring from backups might also find this an issue)



Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Andrey Rakhmatullin
On Tue, May 07, 2024 at 09:49:17PM +0200, Johannes Schauer Marin Rodrigues wrote:
> Quoting Andrey Rakhmatullin (2024-05-06 19:14:40)
> > On Mon, May 06, 2024 at 04:50:50PM +0100, Barak A. Pearlmutter wrote:
> > > > tmpfiles.d snippets can be defined to cleanup on a timer _anything_,
> > > 
> > > It's a question of what the *default* behaviour should be.
> > > 
> > > For whatever reason, a lot of people who process large data use
> > > /var/tmp/FOO/ as a place to store information that should not be
> > > backed up, but also should not just disappear.
> > To be honest I'm greatly surprised by this idea, and by the suggestion
> > that a lot of people do this; to me this is very similar to that half-joke
> > about people storing useful files in the Recycle Bin.
> 
> I'm doing exactly that. I use paper I have already printed on, either
> misprints or pages no longer useful to me, destined for the recycling
> bin, to write down things that are important to me. After I'm done with
> my work on this scrap paper, I decide what I want to keep and copy to
> permanent storage, and what I really want to throw away.
I actually meant the Windows feature :)


-- 
WBR, wRAR


signature.asc
Description: PGP signature


Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Johannes Schauer Marin Rodrigues
Quoting Andrey Rakhmatullin (2024-05-06 19:14:40)
> On Mon, May 06, 2024 at 04:50:50PM +0100, Barak A. Pearlmutter wrote:
> > > tmpfiles.d snippets can be defined to cleanup on a timer _anything_,
> > 
> > It's a question of what the *default* behaviour should be.
> > 
> > For whatever reason, a lot of people who process large data use
> > /var/tmp/FOO/ as a place to store information that should not be
> > backed up, but also should not just disappear.
> To be honest I'm greatly surprised by this idea, and by the suggestion
> that a lot of people do this; to me this is very similar to that half-joke
> about people storing useful files in the Recycle Bin.

I'm doing exactly that. I use paper I have already printed on, either
misprints or pages no longer useful to me, destined for the recycling
bin, to write down things that are important to me. After I'm done with
my work on this scrap paper, I decide what I want to keep and copy to
permanent storage, and what I really want to throw away.

I would not like it if (supposing I had a person cleaning up my stuff)
a cleaning person came to my desk, saw that there was obvious scrap
paper ultimately destined for the bin, and threw it away every once in
a while. I may be taking my notes on what is ultimately trash, but I
want to be the one to decide when to take the trash out.

Thank you for this great example! :)

cheers, josch



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Johannes Schauer Marin Rodrigues
Hi,

Quoting Holger Levsen (2024-05-07 17:22:48)
> On Tue, May 07, 2024 at 04:24:06PM +0300, Hakan Bayındır wrote:
> > Consider a long running task, which will take days or weeks (which is the
> > norm in simulation and science domains in general). System emitted a warning
> > after three days, that it'll delete my files in three days. My job won't be
> > finished, and I'll be losing three days of work unless I catch that warning.
> Then it will be high time you learn not to abuse /tmp that way and work in
> your (or your services) home/data directory.
> 
> Problem easily avoided. plus you don't need to make /tmp 20 TB because you
> have lots of data. ;)
> 
> I'm a bit surprised how many people seem to really rely on data in /tmp to
> survive for weeks or even months. I wonder if they backup /tmp?

I like using /tmp because it's a tmpfs, which makes some things faster.
Quite a few things I do not want stored long-term on my SSD, so I
resort to using /tmp rather than the directory I call ~/tmp inside my
$HOME.

This is also not only about data surviving for weeks and months.
Elsewhere in this thread I mentioned mmdebstrap as an application that
creates files in /tmp with modification times far in the past. The same
happens with other tools; for example, say I want a small scratch space
into which I wget some files:

$ wget https://www.debian.org/Pics/debian-logo-1024x576.png
$ stat -c %y debian-logo-1024x576.png
2020-12-17 10:59:08.0 +0100

Will this mean that debian-logo-1024x576.png might accidentally get
cleaned up unless I disable that mechanism? The problem is not limited
to people with a crazy large /tmp either. My system has 3.7 GB of RAM,
and having /tmp be a tmpfs (even though it's very small) is still
beneficial for me, because the maximum read speed from my SSD is
140 MB/s. My small RAM is much faster than that.
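The stale-mtime situation above can be reproduced and repaired locally. `--no-use-server-timestamps` is a documented wget option that avoids the problem at download time, and a plain `touch` repairs an already-downloaded file; the block below simulates the old timestamp with `touch -d` so it needs no network:

```shell
# Simulate a freshly downloaded file carrying a years-old server
# mtime (wget's default behavior), then reset it to "now" so an
# age-based cleaner treats it as fresh.
f=$(mktemp /tmp/demo.XXXXXX)
touch -d '2020-12-17 10:59:08' "$f"
stat -c %y "$f"     # shows the 2020 timestamp
touch "$f"          # reset mtime to the current time
# At download time, the same is achieved with:
#   wget --no-use-server-timestamps URL
```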

Thanks!

cheers, josch



Re: how to upgrade testing

2024-05-07 Thread Brad Rogers
On Tue, 7 May 2024 20:54:39 +0200
Jérémy Lal  wrote:

Hello Jérémy,

>could we have a hint when it's "safe" to upgrade testing ?

I'm not a Debian developer, so do what you will with what I say.

I've been upgrading testing daily for a couple of weeks, partly to cut
the number of packages updated at any one time (the highest was about
250 in one day).  I've had no insurmountable issues.  There have been
days when I had to take care upgrading, because there appeared to be a
need to remove huge numbers of essential packages.  Much of that could
be my 'fault', because I use repos other than the official Debian ones.
However, careful selection of packages allowed me to upgrade in a sane
manner.

Just keep an eye on what is being marked for removal (many will be
libraries being replaced by their t64 counterparts), "just in case".

-- 
 Regards  _   "Valid sig separator is {dash}{dash}{space}"
 / )  "The blindingly obvious is never immediately apparent"
/ _)rad   "Is it only me that has a working delete key?"
I'm spending all my money and it's going up my nose
Teenage Depression - Eddie & The Hot Rods




Re: how to upgrade testing

2024-05-07 Thread Andrey Rakhmatullin
On Tue, May 07, 2024 at 08:54:39PM +0200, Jérémy Lal wrote:
> could we have a hint when it's "safe" to upgrade testing ?
It was always safe...

> Currently I get for a full-upgrade:
> 2338 mis à jour, 362 nouvellement installés, 715 à enlever et 41 non mis à
> jour.
alias e='LC_ALL=C'
e apt full-upgrade

But also if apt wants to remove some things you need then you may need to
upgrade some packages manually or install some *t64 libs manually, just
like in sid in March.

-- 
WBR, wRAR




Re: Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Barak A. Pearlmutter
I guess sometimes when people discuss technical matters, good ideas pop up.

(Although I still think that its problematic interaction with lengthy
suspends makes the whole idea of auto-deletion based purely on
timestamps problematic. I can imagine more coherent mechanisms: ones
that do not count time the machine was suspended against the removal
clock, that check whether any files are open, that check whether any
process has its current directory inside a tree, and that treat trees
as a whole instead of deleting leaves piecemeal, deleting a tree only
once all of its contents are ripe, etc. That would introduce
considerable complexity, though. However, it is the sort of thing a
good sysadmin would do before manually removing stuff in /var/tmp/,
so ...)



Re: Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Josh Triplett
Barak A. Pearlmutter wrote:
> You know, that's a pretty good idea!
>
> Put a 00README-TMP.txt in /tmp/ and /var/tmp/ which briefly states the
> default deletion policy, the policy in place if it's not the default,
> and a pointer to info about altering it. "/tmp's contents are deleted
> at boot while /var/tmp is preserved across rebooting." Maybe in
> /var/tmp suggest /var/scratch/ or /var/cache/tmp or such as a place
> sysadmins might want to set up for not-backed-up but not-auto-deleted
> material.
>
> If the contents aren't dynamic, maybe they could be links to files in
> /usr/share/doc/systemd/.

This seems like a *great* idea. systemd-tmpfiles configuration can
easily create such a file, either with contents or as a symlink to a
documentation file in /usr/share/doc.
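A sketch of what that could look like in tmpfiles.d(5) syntax; both paths are hypothetical examples, not an agreed Debian location:

```
# Create /tmp/00README-TMP.txt as a symlink to packaged documentation
# (both paths are examples, not an agreed convention).
L /tmp/00README-TMP.txt - - - - /usr/share/doc/systemd/README-tmp.txt
```

The `L` type creates the symlink at boot if it does not exist, so the pointer reappears even after /tmp is cleared.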



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread rhys
It is not "abuse of /tmp" to put files there, even if they need to be there for 
a long time. That is an unnecessary characterization. 

Yes, /tmp gets backed up along with the rest of the system on every VM in my 
environment. 

Sometimes "temporary" CAN mean "weeks or even months." That's not something 
that needs to be determined in advance by someone else. 

This is keeping in mind that I, myself, said that I would be fine with /tmp as 
a tmpfs. I wouldn't cry at all if /tmp were cleaned out at boot time (which 
could be weeks or even months).

But a) at boot time, all processes have been restarted. It would be an 
exceptional process that needs to resume after a reboot using data in /tmp 
(though not impossible) and...

...perhaps more importantly, b) I wouldn't want to declare on behalf of all 
Debian users that doing so Is Wrong And You Should Not Do It. That strikes me 
as unnecessarily presumptuous. 

It's one thing to support a change that MY systems can cope with. It's quite 
another to declare that EVERYONE will now use these rules, particularly when it 
means deleting their files. 

Even after a reboot, I would be upset to lose the debug files that I've been 
accumulating for several days while trying to track down an intermittent 
problem with this stupid VPN...

Sent from my mobile device.


From: Holger Levsen 
Sent: Tuesday, May 7, 2024 10:22
To: debian-devel@lists.debian.org
Subject: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default 
[was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

Then it will be high time you learn not to abuse /tmp that way 

I'm a bit surprised how many people seem to really rely on data in /tmp
to survive for weeks or even months. I wonder if they backup /tmp?


--
cheers,
Holger

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁  holger@(debian|reproducible-builds|layer-acht).org
⢿⡄⠘⠷⠚⠋⠀  OpenPGP: B8BF54137B09D35CF026FE9D 091AB856069AAA1C
⠈⠳⣄

"When one man dies it's a tragedy. When thousands die it's statistics."
(Stalin commenting the worlds reaction on Covid 19.)

Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Marvin Renich
Early in this meta-thread it was suggested to separate /tmp-is-tmpfs
from cleanup-of-{,/var}/tmp.  I am really surprised that nobody has
suggested the obvious separation of new installs from upgrades.

Changing the local configuration for either feature is trivial either
way.  I think the proposed changes are reasonable for _new_
_installations_.  However, making configuration changes on upgrade that
may potentially cause significant problems, even if only for a small
number of users, and even if they are currently abusing tmp directories,
seems unnecessary to me.

The release notes can say "The defaults for new installations have
changed.  For upgrades, if it will not cause a disruption on your
system, you can get the new behavior by changing these configuration
files."

Personally, I believe (for upgrades) changing tmp-is-tmpfs will have
much less disruptive effect overall than the cleanup of /var/tmp, but I
don't see any reason to force either change on upgrade.

...Marvin



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Russ Allbery
Hakan Bayındır  writes:

> The applications users use create these temporary files without users'
> knowledge. They work in their own directories, but applications create
> other job-dependent state files in both /tmp and /var/tmp. These are
> different programs and I assure you they're not created there because
> the user (or we) configured something. These files live there during the
> lifetime of the job, and are cleaned up afterwards by the application.

Then someone should fix those applications, because that behavior will
result in user data loss if they're not fixed.  However, first one should
check whether the applications are just honoring TMPDIR or equivalent
variables, in which case TMPDIR on batch systems often should be set to a
user-specific or job-specific persistent directory for exactly this
reason.  That way you can use a user-specific cleanup strategy, such as
purging that directory when all of the user's jobs have finished.
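The TMPDIR redirection described above can be checked with a few lines of Python (illustrative only; `tempfile` honors TMPDIR by default, and the per-job directory name here is a stand-in):

```python
import os
import tempfile

# A batch scheduler can export a per-job scratch directory via TMPDIR;
# well-behaved applications (here, Python's tempfile module) follow it.
jobdir = tempfile.mkdtemp(prefix="job-scratch-")  # stand-in for a per-job dir
os.environ["TMPDIR"] = jobdir
tempfile.tempdir = None  # drop the cached default so TMPDIR is re-read
assert tempfile.gettempdir() == jobdir
print(tempfile.gettempdir())
```

Applications that bypass TMPDIR and hard-code /tmp or /var/tmp are the ones that need fixing.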

I understand your point, which is that this pattern is out there in the
wild and Debian is in danger of breaking existing usage patterns by
matching the defaults of other distributions.  This is a valid point, and
I appreciate you making it.

My replies are not intended to dispute that point, but to say that the
burden of addressing this buggy behavior should not rest entirely on
Debian.  What the combination of batch system and application is doing is
semantically incorrect and is dangerous, and it really should be fixed.
Even if Debian changes nothing, at some point someone will deploy workers
with a different base operating system and be very surprised when these
files are automatically deleted.

We were automatically cleaning /tmp and /var/tmp on commercial UNIX
systems in 1995 and fixing broken applications that didn't honor TMPDIR.
This is not a new problem.  Nor is having /var/tmp fill up and cause all
sorts of system problems because someone turned off /var/tmp cleaning
while trying to work around broken applications.

-- 
Russ Allbery (r...@debian.org)  



Re: Bug#966621: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Sam Hartman
> "Luca" == Luca Boccassi  writes:

Luca> On Mon, 6 May 2024 at 15:42, Richard Lewis
Luca>  wrote:
>> 
>> Luca Boccassi  writes:
>> 
>> > Hence, I am not really looking for philosophical discussions or
>> > lists of personal preferences or hypotheticals, but for facts:
>> > what would break where, and how to fix it?

ssh-agent appears to default to creating a socket under /tmp.
I think respecting $XDG_RUNTIME_DIR would be better.

/etc/X11/Xsession.d/90x11-common_ssh-agent also doesn't override where
the socket ends up.
I definitely think for session scripts like that $XDG_RUNTIME_DIR would
be better.
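A session script could steer the socket using ssh-agent's real `-a bind_address` option; as a sketch (the fallback behaviour and socket name below are assumptions, not Debian's actual Xsession code):

```python
import os

# Build an ssh-agent invocation that binds its socket under
# $XDG_RUNTIME_DIR instead of letting the agent default to /tmp.
def agent_command(env):
    runtime_dir = env.get("XDG_RUNTIME_DIR")
    if runtime_dir:
        # -a lets us choose the socket path explicitly
        return ["ssh-agent", "-a", os.path.join(runtime_dir, "ssh-agent.socket")]
    return ["ssh-agent"]  # old behaviour: socket under /tmp

print(agent_command({"XDG_RUNTIME_DIR": "/run/user/1000"}))
# → ['ssh-agent', '-a', '/run/user/1000/ssh-agent.socket']
```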


gnome-keyring's ssh-agent handles this better, although last time I
checked, it did not support pkcs11, so I could not use it with PIV
cards.
(Other parts of gnome-keyring do support pkcs11).



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Hakan Bayındır



Sent from my iPhone

> On 7 May 2024, at 18:39, Holger Levsen  wrote:
> 
> On Tue, May 07, 2024 at 04:24:06PM +0300, Hakan Bayındır wrote:
>> Consider a long running task, which will take days or weeks (which is the
>> norm in simulation and science domains in general). System emitted a warning
>> after three days, that it'll delete my files in three days. My job won't be
>> finished, and I'll be losing three days of work unless I catch that warning.
> 
> Then it will be high time you learn not to abuse /tmp that way and
> work in your (or your services) home/data directory.
> 
> Problem easily avoided. plus you don't need to make /tmp 20 TB because you
> have lots of data. ;)
> 
> I'm a bit surprised how many people seem to really rely on data in /tmp
> to survive for weeks or even months. I wonder if they backup /tmp?
"Me" is figurative here. Neither I, nor my code, nor our users abuse these 
folders. The applications they use create these files without users' knowledge. 

And yes, these applications rely on the data they save in /tmp during the job. 
Again, let me repeat: these are not users' files, but applications' internal 
data which they automatically create.

And sometimes these /tmp folders are put on high-speed internal NVMe RAIDs to 
allow multiple GPUs to work together with lower latency, for weeks. 
> -- 
> cheers,
>Holger
> 



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Hakan Bayındır



> On 7 May 2024, at 18:57, Russ Allbery  wrote:
> 
> Hakan Bayındır  writes:
>> Dear Russ,
> 
>>> If you are running a long-running task that produces data that you
>>> care about, make a directory for it to use, whether in your home
>>> directory, /opt, /srv, whatever.
> 
>> Sorry but, clusters, batch systems and other automated systems don't
>> work that way.
> 
> Yours might not, but I spent 20 years maintaining clusters and batch
> systems and I assure you that's how mine worked.
> 
That’s nice. We’re in it for the same duration. 
>> That's not an extension of the home directory in any way. After users
>> submit their jobs to the cluster, they neither have access to the
>> execution node, nor can they pick and choose where to put their files.
> 
>> These files may stay there up to a couple of weeks, and deleting
>> everything periodically will probably corrupt the jobs of these users
>> somehow.
> 
> Using /var/tmp for this purpose is not a good design decision.
> Directories are free; they can make a new one and point the files of batch
> jobs there.  They don't have to overload a directory that historically has
> different semantics and is often periodically cleared.  I get that this
> may not be your design or something you have control over, so telling you
> this doesn't directly help, but the point still stands.
> 
> Again, obviously the people configuring that cluster can configure it
> however they want, including overriding the /var/tmp cleanup policy.  But
> they're playing with fire by training users to use /var/tmp, and it's
> going to result in someone getting their data deleted at some point,
> regardless of what Debian does.
> 
You still assume that we direct users' home directories to /var/tmp or /tmp. 
This is not true; users work in their own home folders, on a different storage 
system. Possibly I didn't make myself clear enough. 

The applications users use create these temporary files without users' 
knowledge. They work in their own directories, but applications create other 
job-dependent state files in both /tmp and /var/tmp. These are different 
programs and I assure you they're not created there because the user (or we) 
configured something. These files live there during the lifetime of the job, 
and are cleaned up afterwards by the application. 
> -- 
> Russ Allbery (r...@debian.org)  
> 



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Russ Allbery
Hakan Bayındır  writes:
> Dear Russ,

>> If you are running a long-running task that produces data that you
>> care about, make a directory for it to use, whether in your home
>> directory, /opt, /srv, whatever.

> Sorry but, clusters, batch systems and other automated systems don't
> work that way.

Yours might not, but I spent 20 years maintaining clusters and batch
systems and I assure you that's how mine worked.

> That's not an extension of the home directory in any way. After users
> submit their jobs to the cluster, they neither have access to the
> execution node, nor can they pick and choose where to put their files.

> These files may stay there up to a couple of weeks, and deleting
> everything periodically will probably corrupt the jobs of these users
> somehow.

Using /var/tmp for this purpose is not a good design decision.
Directories are free; they can make a new one and point the files of batch
jobs there.  They don't have to overload a directory that historically has
different semantics and is often periodically cleared.  I get that this
may not be your design or something you have control over, so telling you
this doesn't directly help, but the point still stands.

Again, obviously the people configuring that cluster can configure it
however they want, including overriding the /var/tmp cleanup policy.  But
they're playing with fire by training users to use /var/tmp, and it's
going to result in someone getting their data deleted at some point,
regardless of what Debian does.
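For an admin who wants to keep the old retention behaviour locally, a tmpfiles.d override is enough; per tmpfiles.d(5), a file in /etc with the same name shadows the distribution default, and "-" in the age column disables age-based cleanup (a sketch; check the packaged file name on your system):

```
# /etc/tmpfiles.d/tmp.conf -- shadows the distribution's tmp.conf
# "d" (re)creates the directory; "-" in the age column means: never clean by age
d /tmp     1777 root root -
d /var/tmp 1777 root root -
```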

-- 
Russ Allbery (r...@debian.org)  



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Holger Levsen
On Tue, May 07, 2024 at 04:24:06PM +0300, Hakan Bayındır wrote:
> Consider a long running task, which will take days or weeks (which is the
> norm in simulation and science domains in general). System emitted a warning
> after three days, that it'll delete my files in three days. My job won't be
> finished, and I'll be losing three days of work unless I catch that warning.

Then it will be high time you learn not to abuse /tmp that way and
work in your (or your services) home/data directory.

Problem easily avoided. plus you don't need to make /tmp 20 TB because you
have lots of data. ;)

I'm a bit surprised how many people seem to really rely on data in /tmp
to survive for weeks or even months. I wonder if they backup /tmp?


-- 
cheers,
Holger





Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Hakan Bayındır

Dear Russ,

It's not *me* using /var/tmp for my own temporary files, it's the 
applications other people use. I just logged in to one of the nodes we have, 
and there were job-dependent files created by a particular high-end 
scientific application (which is developed by another prominent 
company). This is neither in my nor the users' control. It's the 
application they use, and I bet it has no setting for that.


> If you are running a long-running task that produces data that you
> care about, make a directory for it to use, whether in your home
> directory, /opt, /srv, whatever.

Sorry but, clusters, batch systems and other automated systems don't 
work that way.


> Not as an extension of people's home directory.

That's not an extension of the home directory in any way. After users 
submit their jobs to the cluster, they neither have access to the 
execution node, nor can they pick and choose where to put their files.


These files may stay there up to a couple of weeks, and deleting 
everything periodically will probably corrupt the jobs of these users 
somehow.


I for one understand that all the folders in a UNIX system have historic 
reasons and customs behind them. I prefer to stick to the traditions and 
specifications that come with them; however, when you run a big system with 
tons of users who use tons of applications, I can't expect every 
software developer under the sun to know, understand and respect those 
conventions.


I just wanted to highlight a very prominent scenario from my vantage 
point, because it's the domain I'm working in.


> Your system is your system, so of course you can configure /var/tmp
> however you want and no one is going to stop you, but a lot of people
> on this thread are describing habits that are going to lose their data
> if they use a different distribution or even a differently-configured
> Debian distribution with tmpreaper installed.

Again, what I'm not describing is a *habit of mine*, but how many of the 
systems I interact with work, and there's no way to change that. I'm 
just pointing out how the systems we work with behave. We don't 
configure them that way. Heck, some of the applications our users use 
have no configuration file whatsoever.


I'm all for progress and a better, self-healing system, but I'm very 
against breaking things while doing that.


Cheers,

H.

On 7.05.2024 ÖS 5:32, Russ Allbery wrote:

Hakan Bayındır  writes:


Consider a long running task, which will take days or weeks (which is
the norm in simulation and science domains in general). System emitted a
warning after three days, that it'll delete my files in three days. My
job won't be finished, and I'll be losing three days of work unless I
catch that warning.


I have to admit that I'm a little surprised at the number of people who
are apparently using /var/tmp for things that are clearly not temporary
files in the traditional UNIX sense.  Clearly this bit of folk knowledge
is not as widespread as I thought, so we have to figure out how to deal
with that, but periodically deleting files out of /var/tmp has been common
(not universal, but common) UNIX practice for at least thirty years.

Whatever we do with /var/tmp retention, I beg people to stop using
/var/tmp for data you're keeping for longer than a few days and care about
losing.  That's not what it's for, and you *will* be bitten by this
someday, somewhere, because even with existing Debian configuration many
people run tmpreaper or similar programs.  If you are running a
long-running task that produces data that you care about, make a directory
for it to use, whether in your home directory, /opt, /srv, whatever.

/var/tmp's primary purpose historically was to support things like
temporary recovery files that needed to survive a system crash, but which
were still expected to be *temporary* in that one would then either use
the recovery file or expect it to be deleted.  Not as an extension of
people's home directory.

Your system is your system, so of course you can configure /var/tmp
however you want and no one is going to stop you, but a lot of people on
this thread are describing habits that are going to lose their data if
they use a different distribution or even a differently-configured Debian
distribution with tmpreaper installed.







Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Luca Boccassi
On Tue, 7 May 2024 at 15:53, Sam Hartman  wrote:
>
> > "Johannes" == Johannes Schauer Marin Rodrigues writes:
> >> > > If [files can be deleted automatically while mmdebstrap is using
> >> > > them], how should applications guard against that from
> >> > > happening?
> >> >
> >> > As documented in tmpfiles.d(5), if mmdebstrap takes out an exclusive
> >> > flock(2) lock on its chroot's root directory, systemd-tmpfiles should
> >> > fail to take out its own lock on the directory during cleanup, and
> >> > respond to that by treating the directory as "in use" and skipping
> >> > it.
> >>
> >> That also works, but only as long as mmdebstrap is actually
> >> running, and as far as I understand it is not a long-running service,
> >> so I am not sure if it works for this use case
>
> Note that according to the man page, ctime is used as well as mtime.
> So for roots that are actually temporary, I don't think much needs to be
> done.
> It won't matter that the mtime might be old because the ctime should be
> consistent with when the root is unpacked.
>
> I do wish there were a way to specify for /var/tmp that directories
> under /var/tmp should be deleted in their entirety or entirely left
> alone.
> I realize we'd have a big debate about whether that was a good default,
> but I'd find it useful for my systems at least.

This is a reasonable RFE, and it has already been proposed some days
ago (in the right place, upstream):
https://github.com/systemd/systemd/issues/32674



Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Sam Hartman
> "Johannes" == Johannes Schauer Marin Rodrigues writes:
>> > > If [files can be deleted automatically while mmdebstrap is using
>> > > them], how should applications guard against that from
>> > > happening?
>> >
>> > As documented in tmpfiles.d(5), if mmdebstrap takes out an exclusive
>> > flock(2) lock on its chroot's root directory, systemd-tmpfiles should
>> > fail to take out its own lock on the directory during cleanup, and
>> > respond to that by treating the directory as "in use" and skipping it.
>> 
>> That also works, but only as long as mmdebstrap is actually
>> running, and as far as I understand it is not a long-running service,
>> so I am not sure if it works for this use case

Note that according to the man page, ctime is used as well as mtime.
So for roots that are actually temporary, I don't think much needs to be
done.
It won't matter that the mtime might be old because the ctime should be
consistent with when the root is unpacked.

I do wish there were a way to specify for /var/tmp that directories
under /var/tmp should be deleted in their entirety or entirely left
alone.
I realize we'd have a big debate about whether that was a good default,
but I'd find it useful for my systems at least.
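The flock convention from tmpfiles.d(5) discussed above can be sketched like this (illustrative Python, not mmdebstrap's actual code): systemd-tmpfiles tries a non-blocking BSD lock on each directory before aging it and skips the directory if the lock is already held.

```python
import fcntl
import os

def hold_directory(path):
    """Take an exclusive flock on a directory; released on close or exit."""
    fd = os.open(path, os.O_RDONLY | os.O_DIRECTORY)
    fcntl.flock(fd, fcntl.LOCK_EX)
    return fd

if __name__ == "__main__":
    fd = hold_directory("/tmp")  # e.g. the chroot's root directory
    # ... long-running work happens here, protected from age-based cleanup ...
    os.close(fd)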



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Andrey Rakhmatullin
On Tue, May 07, 2024 at 04:24:06PM +0300, Hakan Bayındır wrote:
> On the other hand, if we need to change the configuration 99% of the time,
[citation needed]




-- 
WBR, wRAR




Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Russ Allbery
Hakan Bayındır  writes:

> Consider a long running task, which will take days or weeks (which is
> the norm in simulation and science domains in general). System emitted a
> warning after three days, that it'll delete my files in three days. My
> job won't be finished, and I'll be losing three days of work unless I
> catch that warning.

I have to admit that I'm a little surprised at the number of people who
are apparently using /var/tmp for things that are clearly not temporary
files in the traditional UNIX sense.  Clearly this bit of folk knowledge
is not as widespread as I thought, so we have to figure out how to deal
with that, but periodically deleting files out of /var/tmp has been common
(not universal, but common) UNIX practice for at least thirty years.

Whatever we do with /var/tmp retention, I beg people to stop using
/var/tmp for data you're keeping for longer than a few days and care about
losing.  That's not what it's for, and you *will* be bitten by this
someday, somewhere, because even with existing Debian configuration many
people run tmpreaper or similar programs.  If you are running a
long-running task that produces data that you care about, make a directory
for it to use, whether in your home directory, /opt, /srv, whatever.

/var/tmp's primary purpose historically was to support things like
temporary recovery files that needed to survive a system crash, but which
were still expected to be *temporary* in that one would then either use
the recovery file or expect it to be deleted.  Not as an extension of
people's home directory.

Your system is your system, so of course you can configure /var/tmp
however you want and no one is going to stop you, but a lot of people on
this thread are describing habits that are going to lose their data if
they use a different distribution or even a differently-configured Debian
distribution with tmpreaper installed.

-- 
Russ Allbery (r...@debian.org)  



Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Barak A. Pearlmutter
> ...3) I would put a file in any auto-cleaned space named "1-AUTOCLEAN.txt" 
> that contains some verbiage explaining that things in this directory will be 
> wiped based on rules set in (wherever).

You know, that's a pretty good idea!

Put a 00README-TMP.txt in /tmp/ and /var/tmp/ which briefly states the
default deletion policy, the policy in place if it's not the default,
and a pointer to info about altering it. "/tmp's contents are deleted
at boot while /var/tmp is preserved across rebooting." Maybe in
/var/tmp suggest /var/scratch/ or /var/cache/tmp or such as a place
sysadmins might want to set up for not-backed-up but not-auto-deleted
material.

If the contents aren't dynamic, maybe they could be links to files in
/usr/share/doc/systemd/.
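If packaged, such notes could even be recreated at boot by tmpfiles.d itself, using "L" (symlink) lines; the target file names below are hypothetical:

```
# Hypothetical tmpfiles.d fragment: recreate the policy notes at boot.
L /tmp/00README-TMP.txt     - - - - /usr/share/doc/systemd/README-tmp.txt
L /var/tmp/00README-TMP.txt - - - - /usr/share/doc/systemd/README-var-tmp.txt
```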



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Alexandru Mihail

> Consider a long running task, which will take days or weeks (which is
> the norm in simulation and science domains in general). System
> emitted a 
> warning after three days, that it'll delete my files in three days.
> My 
> job won't be finished, and I'll be losing three days of work unless I
> catch that warning.
> 
> Now consider these tasks are run on (dark) servers, where users'
> daemons 
> login to run the tasks but users do not. How can the user know? What
> can 
> they do? Same can be said for long running daemons like mail servers,
> CI 
> runners and such.
> 
> One may argue that we can change the configuration, which is true.
> 
You're making a strong argument here, indeed. I personally manage a
horde of bookworm VMs which I really don't want to have to watch for
minute changes that break the system in subtle ways like this.
Putting /tmp in RAM also won't work, as discussed here. Apart from
SBCs, most other Debian use cases would not benefit from it. There are
plenty of people happily running Debian on cloud virtual machines
with 1 GB of RAM, running clamav, mail servers, apache, etc. Let's
not eat up that RAM further if we really don't have to. I know because
I am such a user and certainly not the only one. I also don't know how
this would impact people who run a boatload of containers with Debian.

> On the other hand, if we need to change the configuration 99% of the 
> time, why are we making the change to a worse one in the first place?
> 
There's been quite some debate here which is good. This sets us apart
from corporate-run Linux in that there's technical democracy in
decisions impacting users. Maybe listing and weighing, calmly, the pros
and cons of this decision and how bad/good will the impact be on
programs and users could help drive a decision.
For instance, adopting this behavior:
- aligns us with upstream (neutral in my opinion)
- mainly prevents clutter in /var/tmp by misbehaving applications
(users filling up their drive is their fault; they can still dd of=/
anyway; we shouldn't put training wheels on our OS) (good)
- might surprise users who have long treated /var/tmp as a scratch
space and might cause frustration if their files randomly disappear.
Yes, they shouldn't be storing files there anyway, but deleting them
without them explicitly setting that mechanism up or hitting Y on a
big prompt isn't helpful. (regression)
- might require updates to older applications which wrongly use
/var/tmp as discussed above. (neutral, that wasn't a sturdy mechanism
anyway)
Thanks,
Alexandru




Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread rhys
The /tmp/ as tmpfs discussion is funny to me because while we've been kicking 
around the idea of whether or not to clean /tmp/, having /tmp/ as a tmpfs makes 
that whole argument moot. It all goes away at boot time! Problem solved! :D

Honestly, I see this one as a much easier topic, assuming that no one is 
talking about changing existing systems. (I haven't seen anyone say that.)

So for new systems, /tmp/ as a tmpfs strikes me as a legitimate option, and the 
partition layout is something that any good admin pays close attention to on 
any new system, particularly a new distribution or even distro version. (That 
is, even going from 12.1 to 12.2, I'm going to be on the lookout for changes in 
the installer.)

Whether you want /tmp as a tmpfs is a decision that's going to be made at the 
same time as whether or not /home should be on a separate partition. The admin 
is going to do whatever makes the most sense for this system. 

To me, it's all about the display. I want to see what partition will be mounted 
as root, what partition will be mounted as /home, which will be swap (if any), 
and so on. But I don't need to see /proc and /sys. Those aren't optional. 

So if /tmp is not part of the root partition, it doesn't matter if it's a 
separate partition or a tmpfs. It should just appear in the screen that 
displays the filesystem layout, and then the admin can decide whether or not 
that's a good idea. 

I have no opinion on whether or not it should be the default. If /tmp/ as tmpfs 
becomes the default, I would probably only override that on certain low-memory 
systems that I run and just leave it on most others. I've seen it done before 
and it seemed to work fine in some cases and not in others. 

As long as it's somewhere that I can SEE it in the installer, I'd be happy. 
That's definitely a thing the admin can change later on with few consequences. 



From: Hakan Bayındır 
Sent: Tuesday, May 7, 2024 05:45
To: 966...@bugs.debian.org; debian-devel@lists.debian.org
Cc: Carsten Leonhardt; Luca Boccassi; Peter Pentchev
Subject: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default 
[was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

Similarly, I've been following the thread for a couple of days now, wondering 
about its implications. 

When I consider server scenarios, pushing /tmp to RAM looks highly undesirable 
from my perspective. All the servers I manage use all of their RAM, and using 
any unused space as a disk cache is far more desirable than a /tmp mount. When 
servers are virtualized, RAM is at a premium, and a disk cache is far more 
useful than /tmp in RAM. 

The other scenario I think of is HPC, where applications use all the RAM 
available, squeezing the hardware dry. Again, /tmp in RAM is very 
undesirable, because /tmp/$USER is also heavily used, and an OOM event 
creates a lose-lose situation where you either delete runtime files or lose 
the executing job, which results in job failure in any case. 

On the other hand, I personally use my desktop computer as a development 
workstation and use tons of RAM, both for software and for VMs. Again, /tmp 
in RAM would be inferior to my current setup. 

The only case where /tmp in RAM is useful is single-board computers, where 
/tmp is lightly utilized and maximizing SD/eMMC life is important. These 
systems even mount /var/log as a tmpfs and sync on boot/reboot/shutdown, 
reducing flash wear. 

Deleting /var/tmp has the same problems, since long-running tasks on the 
servers might need a file only once a month, yet it can be crucial for 
functions of the software. 

I can’t see any scenario where these two are useful in typical or niche 
installations of Debian. 

FWIW, RedHat family doesn’t mount /tmp as a tmpfs on its default installation. 

Cheers, 

H. 



Re: Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Alexandru Mihail
Maybe putting the cleanup task for /var/tmp on a longer timer and
warning users ahead of time of impending deletion (maybe 3 days before,
2 days, etc.) would help with unsuspecting users' files getting
deleted. A log entry could also be emitted. I could see a gentle
warning on ssh login (minimal, one or two lines) and a desktop
notification (for desktop-only users who never see the terminal) being
helpful. A smarter implementation could perhaps warn only if the
dirs/files that are going to be deleted are not systemd-generated random
items. This does not fix issues with applications depending on stuff
being there long term; then again, nothing's perfect in software.




Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread rhys
I'm always in favor of logging system changes. 

Notification at run time is really tricky. No one ever logs into several of my 
Debian servers. Other systems have interactive GUI or CLI users, some of whom 
are admins and others not. 

I don't know if login notices are terribly reliable as no one may ever see 
them. I see nothing wrong with having them, but I wouldn't consider them to be 
a good *primary* notification.

Personally, I would resort to email, as it's most likely that a good admin has 
at least set up root mail to go somewhere appropriate. 

Whether or not root should get copied on notifications sent to other users 
strikes me as a security question, though if the data to be deleted is in a 
shared scratch space like /tmp/ then perhaps that's not a concern (and the 
potential for lost data might override any such concerns anyway, plus the fact 
that root is, well, root in the first place). I would argue that the role of 
"root as helpful admin" should prevail in this case and root should get copied. 

I, for one, don't want a lot of email, but once the data is gone, it's GONE. 
I'd rather be notified and have to deal with it than not be told at all. 


From: Alexandru Mihail 
Sent: Tuesday, May 7, 2024 07:59
To: debian-devel@lists.debian.org
Subject: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default 
[was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

Maybe putting the cleanup task for /var/tmp on a longer timer and warning users 
ahead of time of impending deletion (maybe 3 days before, 2 days, etc) would 
help with files of unsuspecting users getting deleted. A log entry could also 
be emitted. I could see a gentle warning on ssh login (minimal, one or two 
lines) and desktop notification (for desktop only users who never see the 
terminal) be helpful. A smarter implementation could perhaps only warn if 
dirs/files that are going to be deleted are not systemd generated random items. 
This does not fix issues with applications depending on stuff being there long 
term; yet again nothing's perfect in software 
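The warn-before-delete pass suggested above could be sketched roughly as follows. This is only an illustration, not how systemd-tmpfiles actually decides what to remove (the real cleaner also considers atime/ctime and honours exclusion rules); the function name and the 30-day threshold are invented for the example.

```python
import os
import time

def stale_entries(root, max_age_days):
    """Return paths under root whose mtime is older than max_age_days.

    Simplified sketch: only mtime is checked, and symlinks are not
    followed (os.lstat), to avoid touching files outside the tree.
    """
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.lstat(path).st_mtime < cutoff:
                    stale.append(path)
            except FileNotFoundError:
                pass  # raced with a concurrent deletion; skip it
    return stale

# Warn (rather than delete) about month-old files:
# for path in stale_entries("/var/tmp", 30):
#     print("warning: %s is scheduled for cleanup" % path)
```

A warning pass like this could feed a log entry, a login notice, or a mail to root, as discussed elsewhere in the thread.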


Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Hakan Bayındır
Consider a long running task, which will take days or weeks (which is 
the norm in simulation and science domains in general). System emitted a 
warning after three days, that it'll delete my files in three days. My 
job won't be finished, and I'll be losing three days of work unless I 
catch that warning.


Now consider these tasks are run on (dark) servers, where users' daemons 
login to run the tasks but users do not. How can the user know? What can 
they do? Same can be said for long running daemons like mail servers, CI 
runners and such.


One may argue that we can change the configuration, which is true.

On the other hand, if we need to change the configuration 99% of the 
time, why are we making the change to a worse one in the first place?


On 7.05.2024 ÖS 3:59, Alexandru Mihail wrote:

Maybe putting the cleanup task for /var/tmp on a longer timer and warning users 
ahead of time of impending deletion (maybe 3 days before, 2 days, etc) would 
help with files of unsuspecting users getting deleted. A log entry could also 
be emitted. I could see a gentle warning on ssh login (minimal, one or two 
lines) and desktop notification (for desktop only users who never see the 
terminal) be helpful. A smarter implementation could perhaps only warn if 
dirs/files that are going to be deleted are not systemd generated random items. 
This does not fix issues with applications depending on stuff being there long 
term; yet again nothing's perfect in software






Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread rhys
Perhaps one's *intentions* are good, but when you're making decisions for 
someone else, it is often necessary to take a step back and look at the bigger 
picture of what one is doing. 

I do not consider myself to be immune to the effects of this thought process. I 
have kids. They remind me constantly that I'm just as subject to the "I know 
better" mindset as anyone else. 

But you said the best thing to do is "keep an open mind." What Luca has 
literally said multiple times is "I've already made my decision and I'm only 
looking for comments that support the technical aspects of that decision."

That squashes discussion. Only one person has made such comments, and that 
person was not me. 

On a more procedural note...

I have repeatedly voiced my opinion that automatically deleting files is a bad 
idea and I stand by that opinion for reasons already stated. 

However, recognizing that I don't always get my way, I've thought about how one 
might "do it anyway" while still addressing my underlying concerns and here's 
what I came up with:

The two biggest things that are underneath are a) applications that don't clean 
up after themselves and b) a change in how the system behaves vs. what the 
users expect. 

To address a) in a coherent way, if I had infinite resources, I would create a 
package similar to popularity-contest that is obvious and optional and reports 
back what commonly appears in various scratch spaces. That is how I would 
gather a wide range of data on what packages don't behave well. D-i can allow 
the admin to opt in or not on the same screen as the one that asks about 
popularity-contest. (Those that opt in or out will likely do so for the same 
reasons, after all.)

As for b) the underlying problem is the change in *expected behavior* of the 
system. The real problem has nothing to do with whether or not it's technically 
a good idea. It's a shift in expectation with potentially disastrous 
consequences. Deleted files are often just gone. 

So to mitigate that, I would 1) only implement it on new installs (we'll come 
back to this), 2) mention it at least twice in the various d-i screens, and...

...3) I would put a file in any auto-cleaned space named "1-AUTOCLEAN.txt" that 
contains some verbiage explaining that things in this directory will be wiped 
based on rules set in (wherever). 

This is how files like /etc/resolv.conf read when they are controlled by other 
processes. They just have text that tells you, "Don't change this directly. 
Your changes will be overwritten. Make your changes in (canonical place)."

Last but not least, I would go ahead and deploy the packages that automatically 
clean tmp spaces even to existing systems, but their default configuration 
would be disabled. The only thing that would enable them would be a) 
debian-installer (optionally, possibly as default), or b) admins who have heard 
about this and decide it's a good idea. 

Some will opt in, some won't. But the packages and their default configs could 
be pushed out safely. No one (even me) would have a reasonable complaint about 
such an arrangement. 

That would allow the expectation to shift over time while significantly 
reducing the number of surprised users who get their data deleted. 

In the end, this is all for people to use. So to me, it's much more an issue of 
making sure the people know what they're using and how to use it than anything 
else. If it were just a technical issue, this would be a much shorter 
conversation. ;)

--J

Sent from my mobile device.


From: "Barak A. Pearlmutter" 
Sent: Tuesday, May 7, 2024 07:18
To: r...@neoquasar.org
Cc: Luca Boccassi; debian-devel@lists.debian.org
Subject: Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default 
[was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

Rhys, I think you're being unfair. We have a *technical* disagreement 
here. But our hearts are all in the same place: Luca, myself, and all 
the other DDs discussing this, all want what's best for our users, we 
all want to build the best OS possible, and are all discussing the 
issue in good faith. 

There is an unavoidable tension, and we're hashing it out. Upstream 
has fielded a default behaviour which requires adjustment of a variety 
of other programs and workflows. Basically, anything that stores stuff 
in /tmp or /var/tmp needs to be made might-be-deleted-aware. There are 
mechanisms for dealing with this, but they're pretty complicated, and 
differ wildly for different file lifetimes etc. Other distributions 
have adopted that default, and rather than using exposed mechanisms 
for avoiding unexpected deletion, are just telling people not to count 
on files in /var/tmp/ surviving a reboot if the computer is shut down 
more than a month, or whatever. What should Debian do? You can make 
arguments both ways, and we are. Generally we follow upstream unless 
there's a compelling reason not to. [...]

Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Hakan Bayındır
Similarly, I’m following the thread for a couple of days now, and wondering 
about its implications.

When I consider server scenarios, pushing /tmp to RAM looks highly undesirable 
from my perspective. All the servers I manage use their whole RAMs and using 
the unused space as a disk cache is far more desirable than a /tmp mount. When 
servers are virtualized, RAM is at premium, and a disk cache is way more usable 
rather than /tmp in the RAM.

The other scenario I think is HPC, where applications use all the RAM 
available, squeezing the hardware dry. Again, /tmp in the RAM is very 
undesirable, because /tmp/$USER is also heavily used and an OOM situation 
creates a lose-lose situation where you either delete runtime files, or lose 
the executing job, which results in job failure in any case.

On the other hand, I personally use my desktop computer as a development 
workstation and use tons of RAM either with software or with VMs. Again a /tmp 
in RAM is an inferior scenario to my current setup.

The only useful case for /tmp in RAM is single-board computers, where /tmp 
is both lightly utilized and maximizing SD/eMMC life is important. These 
systems even mount /var/log to a tmpfs and sync on boot/reboot/shutdown, 
reducing flash wear.

Deleting /var/tmp has the same problems since long running tasks on the servers 
might need a file once in a month, but it can be crucial for functions of the 
software.

I can’t see any scenario where these two are useful in typical or niche 
installations of Debian.

FWIW, RedHat family doesn’t mount /tmp as a tmpfs on its default installation.

Cheers,

H.

> On 7 May 2024, at 12:42, Peter Pentchev  wrote:
> 
> On Tue, May 07, 2024 at 10:38:14AM +0200, Carsten Leonhardt wrote:
>> Luca Boccassi  writes:
>> 
>>> Defaults are defaults, they are trivially and fully overridable where
>>> needed if needed. Especially container and VM managers these days can
>>> super trivially override them via SMBIOS Type11 strings or
>>> Credentials, ephemerally and without changing the guest image at all.
>> 
>> That argument goes both ways and I prefer safe defaults. What
>> you/upstream propose are unsafe defaults, as was shown by several
>> comments in this thread. Whoever wants the unsafe defaults of deleting
>> old files and risking OOM situations can then "trivially and fully
>> override" the safe defaults.
> 
> So I've been wondering for a couple of days now, following this thread...
> ...would it be a good idea to make this a debconf prompt, high priority,
> default "yes", so that it is activated on new automatically installed
> systems, but people who upgrade their current Debian installations can
> choose to keep the old behavior?
> 
> I do realize that more debconf prompts are not always desirable, and
> such decisions must be taken on a case-by-case basis, so... yeah.
> 
> G'luck,
> Peter
> 
> -- 
> Peter Pentchev  r...@ringlet.net r...@debian.org pe...@morpheusly.com
> PGP key:http://people.FreeBSD.org/~roam/roam.key.asc
> Key fingerprint 2EE7 A7A5 17FC 124C F115  C354 651E EFB0 2527 DF13



Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Simon McVittie
On Tue, 07 May 2024 at 07:34:54 -0500, r...@neoquasar.org wrote:
> possibly convince those applications to use their own
> scratch space such as /tmp/<appname>/ that is more easily identifiable

This would be a denial of service at best, and a privilege escalation
vulnerability at worst. To be safe, it would have to be more like
/tmp/<appname>.XXXXXX where the XXXXXX is replaced by a random string
by mkstemp() or similar.

(For example my system currently has /var/tmp/flatpak-cache-5X58M2/ which
is fine, but using /var/tmp/flatpak-cache/ would be wrong.)
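The unpredictable-suffix pattern smcv describes is available directly from the standard library; a minimal Python sketch (the "flatpak-cache-" prefix is just an example, and the helper name is invented here):

```python
import tempfile

def private_scratch_dir(prefix="flatpak-cache-", parent="/var/tmp"):
    # mkdtemp() creates the directory with mode 0700 and a random,
    # unguessable suffix, so another user cannot predict, pre-create,
    # or symlink-attack the path -- unlike a fixed name such as
    # /var/tmp/flatpak-cache/.
    return tempfile.mkdtemp(prefix=prefix, dir=parent)
```

The C equivalent is mkdtemp(3) with a template ending in XXXXXX; the point in both cases is that the kernel-visible creation is atomic and the final name is not known in advance.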

smcv



Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Alexandru Mihail
Maybe putting the cleanup task for /var/tmp on a longer timer and warning users 
ahead of time of impending deletion (maybe 3 days before, 2 days, etc) would 
help with files of unsuspecting users getting deleted. A log entry could also 
be emitted. I could see a gentle warning on ssh login (minimal, one or two 
lines) and desktop notification (for desktop only users who never see the 
terminal) be helpful. A smarter implementation could perhaps only warn if 
dirs/files that are going to be deleted are not systemd generated random items. 
This does not fix issues with applications depending on stuff being there long 
term; yet again nothing's perfect in software


Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread rhys
This, in my opinion, is the correct view. 

If the users/admins of a system are putting files somewhere, those are their 
files and therefore their responsibility. It is not up to anyone else to claim 
they know better and clean up after them. 

If the files are abandoned by applications that don't clean up after 
themselves, the applications should be updated to clean up properly, since only 
those applications understand the full context of what is or isn't still "in 
use" or needed (and possibly convince those applications to use their own 
scratch space such as /tmp/<appname>/ that is more easily identifiable). 
Checking the documentation of those applications for descriptions of what ends 
up in [/var]/tmp/ would also be useful, since that informs the admin's decision 
on how to deal with possibly abandoned temp files. 

Making the applications behave properly and trusting the system owners to run 
their systems as they see fit will always be the better choice. 

Otherwise, all that will happen is that over time, another scratch space that 
does not automatically get reaped will appear, users and apps will migrate to 
that space, and we will all be back where we started. (No, they won't just 
"change the defaults" because that's not a stable process. One admin may allow 
that while another doesn't.)

--J

Sent from my mobile device.


From: Philip Hands 
Sent: Tuesday, May 7, 2024 06:31

This makes me wonder what it is that we're expecting to need to delete.

Is this a symptom of sloppy applications that fail to clear up the
debris they create in /var/tmp?  If so, is that not a bug in that
application?

I'd suggest that rather than clearing up after the sloppy behaviour of
buggy applications, we instead leave it visible, in the hope that it can
then be fixed.

Of course, that's obviously not worked in some (many?) cases, so where
we know of problematic packages, could we not add per-package tmpfiles.d
files that name the specific paths that those packages are known to
litter the system with, with appropriate deletion timeouts chosen by the
Maintainer?

That ought to achieve the benefit you're looking for, without hiding
symptoms of future problems with other packages, and without
inconveniencing anyone that's using /var/tmp as scratch space.

Cheers, Phil.
--
Philip Hands -- https://hands.com/~phil

Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Barak A. Pearlmutter
Rhys, I think you're being unfair. We have a *technical* disagreement
here. But our hearts are all in the same place: Luca, myself, and all
the other DDs discussing this, all want what's best for our users, we
all want to build the best OS possible, and are all discussing the
issue in good faith.

There is an unavoidable tension, and we're hashing it out. Upstream
has fielded a default behaviour which requires adjustment of a variety
of other programs and workflows. Basically, anything that stores stuff
in /tmp or /var/tmp needs to be made might-be-deleted-aware. There are
mechanisms for dealing with this, but they're pretty complicated, and
differ wildly for different file lifetimes etc. Other distributions
have adopted that default, and rather than using exposed mechanisms
for avoiding unexpected deletion, are just telling people not to count
on files in /var/tmp/ surviving a reboot if the computer is shut down
more than a month, or whatever. What should Debian do? You can make
arguments both ways, and we are. Generally we follow upstream unless
there's a compelling reason not to. You can suggest various strategies
for making things reliable despite following upstream. You can discuss
why maybe upstream should not be followed in this case. This is
precisely the kind of discussion that leads to good decisions, with
everyone keeping an open mind and sharing information and ideas.



Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Philip Hands
Luca Boccassi  writes:

> On Mon, 6 May 2024 at 11:33, Barak A. Pearlmutter  wrote:
>>
>> > We have two separate issues here:
>>
>> > a/ /tmp-on-tmpfs
>> > b/ time based clean-up of /tmp and /var/tmp
>>
>> > I think it makes sense to discuss/handle those separately.
>>
>> Agreed.
>>
>> I also don't see any issue with a/, at worst people will be annoyed
>> with it for some reason and can then change it back.
>>
>> > Regarding b/: ...
>>
>> > The tmpfiles rule tmp.conf as shipped by systemd upstream ...
>> > Files that are older then 10 days or 30 days are automatically cleaned up.
>>
>> This seems like a rather dangerous thing to spring on people.
>>
>> First of all, time can be pretty fluid on user machines.
>
> Then upon reading the release notes, on such a machine, one can simply do:
>
> touch /etc/tmpfiles.d/tmp.conf
>
> And they get no automated cleanups. This stuff is designed to be
> trivially overridable, both by end-users and image builders. What I am
> looking for is, what packages need bugs/MRs filed to deal with this
> change, if any.

Isn't this change (as presented) effectively about masking bugs?

We've had people suggesting that implementing this will surprise them
and disrupt their existing use of /var/tmp as scratch storage, and I've
got a lot of sympathy with that, so I'm guessing that the people that
are expected to benefit from this are not those that remember systems
where the main distinction between /tmp and /var/tmp was that /tmp got
emptied at boot time, whereas /var/tmp did not.

That makes me assume that those that would be most likely to benefit from 
such a change will mainly be users that are never going to type "/tmp" 
(with or without a preceding "/var"), and are therefore not going to 
have any idea what is being deleted for them, but will be happy never 
to get their disk filled.

This makes me wonder what it is that we're expecting to need to delete.

Is this a symptom of sloppy applications that fail to clear up the
debris they create in /var/tmp?  If so, is that not a bug in that
application?

I'd suggest that rather than clearing up after the sloppy behaviour of
buggy applications, we instead leave it visible, in the hope that it can
then be fixed.

Of course, that's obviously not worked in some (many?) cases, so where
we know of problematic packages, could we not add per-package tmpfiles.d
files that name the specific paths that those packages are known to
litter the system with, with appropriate deletion timeouts chosen by the
Maintainer?

That ought to achieve the benefit you're looking for, without hiding
symptoms of future problems with other packages, and without
inconveniencing anyone that's using /var/tmp as scratch space.

Cheers, Phil.
-- 
Philip Hands -- https://hands.com/~phil




Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Peter Pentchev
On Tue, May 07, 2024 at 10:38:14AM +0200, Carsten Leonhardt wrote:
> Luca Boccassi  writes:
> 
> > Defaults are defaults, they are trivially and fully overridable where
> > needed if needed. Especially container and VM managers these days can
> > super trivially override them via SMBIOS Type11 strings or
> > Credentials, ephemerally and without changing the guest image at all.
> 
> That argument goes both ways and I prefer safe defaults. What
> you/upstream propose are unsafe defaults, as was shown by several
> comments in this thread. Whoever wants the unsafe defaults of deleting
> old files and risking OOM situations can then "trivially and fully
> override" the safe defaults.

So I've been wondering for a couple of days now, following this thread...
...would it be a good idea to make this a debconf prompt, high priority,
default "yes", so that it is activated on new automatically installed
systems, but people who upgrade their current Debian installations can
choose to keep the old behavior?

I do realize that more debconf prompts are not always desirable, and
such decisions must be taken on a case-by-case basis, so... yeah.

G'luck,
Peter

-- 
Peter Pentchev  r...@ringlet.net r...@debian.org pe...@morpheusly.com
PGP key:http://people.FreeBSD.org/~roam/roam.key.asc
Key fingerprint 2EE7 A7A5 17FC 124C F115  C354 651E EFB0 2527 DF13




Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-07 Thread Carsten Leonhardt
Luca Boccassi  writes:

> Defaults are defaults, they are trivially and fully overridable where
> needed if needed. Especially container and VM managers these days can
> super trivially override them via SMBIOS Type11 strings or
> Credentials, ephemerally and without changing the guest image at all.

That argument goes both ways and I prefer safe defaults. What
you/upstream propose are unsafe defaults, as was shown by several
comments in this thread. Whoever wants the unsafe defaults of deleting
old files and risking OOM situations can then "trivially and fully
override" the safe defaults.



Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-06 Thread rhys
You're now at the stage where you're not just MISSING the point of what people 
are trying to tell you, you're actively IGNORING it. 

Automatically deleting files is a bad idea. Those files aren't yours. You don't 
know why they are there. Leave them alone. 

--J

Sent from my mobile device.


From: Luca Boccassi 
Sent: Monday, May 6, 2024 08:20
To: Barak A. Pearlmutter
Cc: debian-devel@lists.debian.org
Subject: Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default 
[was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

On Mon, 6 May 2024 at 13:42, Barak A. Pearlmutter  wrote: 
> 
> > Then upon reading the release notes, on such a machine, one can simply do: 
> > 
> > touch /etc/tmpfiles.d/tmp.conf 
> > 
> > And they get no automated cleanups. 
> 
> This also disables on-boot cleaning of /tmp/. 

Yes, as it's going to be a tmpfs, it is no longer needed. Trivial 
to maintain if one wants to do so, though. 

> The root issue here is that deleting not-read-in-a-while 
> but-maybe-stat'ed-recently-by-make-that-doesn't-count files from 
> /var/tmp/ by default, particularly when the system didn't used to, 
> violates the principle of least surprise. 

Which is what release notes are for; if everything was always the same 
we wouldn't spend time putting those together.

> There's an old debugging story 

While personal anecdotes and stories can be interesting and amusing in 
many circumstances, I am not really looking for those at this very 
moment. What I am looking for right now is packages or internal 
infrastructure that need 
an update to cope with these two changes before I upload them, so if 
you know of any please do let me know and I'll happily look into it 
and at least file a bug, if not a MR. Thanks. 



Re: Bug#966621: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-06 Thread Russ Allbery
Luca Boccassi  writes:
> Richard Lewis  wrote:

>> - tmux stores sockets in /tmp/tmux-$UID
>> - I think screen might use /tmp/screens

>> I suppose if you detached for a long time you might find yourself
>> unable to reattach.

>> I think you can change the location of these.

> And those are useful only as long as screen/tmux are still running,
> right (I don't really use either that much)? If so, a flock is the right
> solution for these

Also, using /tmp as a path for those sockets was always a questionable
decision.  I believe current versions of screen use /run/screen, which is
a more reasonable location.  Using a per-user directory would be even
better, although I think screen intentionally supports shared screens
between users (which is a somewhat terrifying feature from a security
standpoint, but that's a different argument).

-- 
Russ Allbery (r...@debian.org)  



Re: Re: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-06 Thread Johannes Schauer Marin Rodrigues
Hi,

Quoting Luca Boccassi (2024-05-07 00:09:51)
> To be more specific, as per documentation:
> 
> https://www.freedesktop.org/software/systemd/man/latest/tmpfiles.d.html
> 
> 'x' lines can be used to override cleanup rules, and support globbing,
> so something like:
> 
> x /tmp/mmdebstrap.*

thank you for being patient with me. I saw the man page but I also tried using
codesearch to look for other packages doing the same thing already and was
unable to find one. This made me doubt whether I had understood this correctly.
For example, tmpfiles.d(5) makes it look like there are several minuses
required after the path but apparently those are optional. I've never written
such files before, so your input is useful for me, thank you!
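For readers with the same doubt about the field layout: per tmpfiles.d(5), a line is "Type Path Mode User Group Age Argument", and trailing fields that don't apply may be written as "-" or simply omitted, so the two forms below should be equivalent (the file name is just an example):

```
# /etc/tmpfiles.d/mmdebstrap.conf
# Type  Path               Mode  User  Group  Age  Argument
x       /tmp/mmdebstrap.*  -     -     -      -    -

# Equivalent short form, with the optional trailing fields omitted:
x       /tmp/mmdebstrap.*
```

'x' lines exclude matching paths (globs are allowed) from age-based cleanup without creating anything.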

> > And just to confirm (I read this elsewhere in this thread): if my
> > /etc/fstab has an entry for /tmp (with a tmpfs) does this automatically
> > mean that no cleanup will happen or do i still have to put something into
> > /etc to disable the periodic cleanup?
> That's something different, fstab is about whether /tmp is a tmpfs or not,
> cleanups still happen regardless of the filesystem type.

I shall put the following into my local /etc/tmpfiles.d/josch.conf to disable
the cleanup of /tmp completely, then:

x /tmp

Thanks!

cheers, josch



Re: Bug#966621: Make /tmp/ a tmpfs and cleanup /var/tmp/ on a timer by default [was: Re: systemd: tmpfiles.d not cleaning /var/tmp by default]

2024-05-06 Thread Luca Boccassi
On Mon, 6 May 2024 at 23:03, Richard Lewis
 wrote:
>
> Luca Boccassi  writes:
>
> > Hence, I am not really looking for philosophical discussions or lists
> > of personal preferences or hypotheticals, but for facts: what would
> > break where, and how to fix it?
>
> - tmux stores sockets in /tmp/tmux-$UID
> - I think screen might use /tmp/screens
>
> I suppose if you detached for a long time you might find yourself unable
> to reattach.
>
> I think you can change the location of these.

And those are useful only as long as screen/tmux are still running,
right (I don't really use either that much)? If so, a flock is the
right solution for these
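For reference, the flock approach mentioned here could look roughly like the sketch below. Per the tmpfiles.d documentation, systemd-tmpfiles skips files and directories on which a BSD flock is held during age-based cleanup (and for a directory, the hierarchy below it); the helper name and the mmdebstrap-style usage in the comment are illustrative, not taken from any existing tool.

```python
import fcntl
import os

def hold_tmpdir(path):
    """Take a shared BSD flock on a temp directory.

    The lock is held for as long as the returned file descriptor stays
    open, which (per tmpfiles.d(5)) exempts the directory from
    systemd-tmpfiles age-based cleanup while the program is running.
    """
    fd = os.open(path, os.O_RDONLY | os.O_DIRECTORY | os.O_CLOEXEC)
    fcntl.flock(fd, fcntl.LOCK_SH | fcntl.LOCK_NB)
    return fd  # keep it open; os.close(fd) releases the lock

# Hypothetical usage for a long-lived working directory:
# workdir = tempfile.mkdtemp(prefix="mmdebstrap.", dir="/tmp")
# fd = hold_tmpdir(workdir)
# ... long-running work ...
# os.close(fd)
```

A shared lock suffices for the "don't clean me" case; an exclusive lock would additionally serialize access between cooperating processes.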


