Re: Transparency into private keys of Debian

2024-02-05 Thread Philipp Kern

On 2024-02-05 08:58, Simon Josefsson wrote:

What would be involved is to 1) during signing of artifacts, also sign
and upload into Sigstore/Sigsum, and 2) during verification in the
f-droid app, also verify that the signature has been committed to the
Sigstore/Sigsum logs.  Both projects have clients written in Go which
should work on Android, but the rest of the details are sketchy to me.
I'm happy to continue discussing and help with the design if you are
interested, to understand what the limitations of your environments are
and how to resolve them.
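
For illustration, steps 1) and 2) could look roughly like this with
Sigstore's cosign CLI (a sketch only - the flags reflect current cosign
usage, and the file names and identity values are placeholders):

  # 1) at release time: sign and upload to the transparency log,
  #    keeping a bundle with the certificate and the log entry
  cosign sign-blob --bundle app.apk.bundle app.apk

  # 2) at verification time: check the signature and its inclusion
  #    in the log, pinning the expected signer identity
  cosign verify-blob --bundle app.apk.bundle \
    --certificate-identity releases@example.org \
    --certificate-oidc-issuer https://accounts.example.org \
    app.apk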


One weirdness with the release keys we use is that they are technically 
able to sign independently, but practically only ever appear next to the 
archive signing key. That gives us an escape hatch to update the key set 
should we ever manage to lose the archive key, I guess.


Obviously you'd need an efficient way to check against a transparency 
log - just tracking the signatures is not sufficient if your worry is 
that someone MITMs you with a malicious signature. I'll note that 
Firefox also still does not implement Certificate Transparency checks. 
(Which I find quite surprising and makes it less secure in my book.)


Kind regards
Philipp Kern



Re: Proposal for how to deal with Go/Rust/etc security bugs

2024-01-25 Thread Philipp Kern

On 2024-01-25 17:10, Russ Allbery wrote:

Rebuilding a bunch of software after a security fix is not a completely
intractable problem that we have no idea how to even approach.  It's just
CPU cycles and good metadata plus ensuring that our software can be
rebuilt, something that we already promise.  Some aspects of making this
work will doubtless be *annoying*, but it doesn't seem outside of our
capabilities as a project.


One worry I'd have is that we have - in the past - been very 
conservative in what we rebuilt. We have not rebuilt the archive 
pre-release either (maybe we should!). So you are suddenly rebuilding 
binaries with a new toolchain and updated inputs - and while it's easy 
to review source diffs, there are always unknowns about what happens 
to the binary as a result. From an (ex-)release perspective that would 
be my main worry.


I agree that the rebuilds themselves are a matter of programming, 
assuming they succeed and you don't need to do them in stages to resolve 
dependencies. Presumably we'd then need a pipeline to publish partial 
security updates as the leaves of the tree complete (or enough of a 
heads-up). If you have stages because intermediate builds incorporate 
bits of other packages and re-export them into build environments 
(unlikely?) or if you need to shepherd a lot of failed builds and try to 
debug what happened, then it becomes a lot more toilsome.


Kind regards
Philipp Kern



Re: Bug#1059618: ITP: ssh3 -- faster and rich secure shell using HTTP/3

2023-12-29 Thread Philipp Kern

On 29.12.23 11:30, Simon Josefsson wrote:

Package: wnpp
Severity: wishlist
X-Debbugs-Cc: debian-devel@lists.debian.org, debian...@lists.debian.org

* Package name: ssh3
   Version : 0.1.4
   Upstream Contact: François Michel
* URL : https://github.com/francoismichel/ssh3
* License : Apache-2.0
   Programming Lang: Go
   Description : faster and rich secure shell using HTTP/3

SSH3 is a complete revisit of the SSH protocol, mapping its semantics on
top of the HTTP mechanisms. In a nutshell, SSH3 uses QUIC+TLS1.3 for
secure channel establishment and the HTTP Authorization mechanisms for
user authentication. Among others, SSH3 allows the following
improvements:


I feel like SSH3 is an unfortunate name. The program claims "SSH3 stands 
for the concatenation of SSH and H3." - well sure, but you're also 
reusing the name of an existing protocol and bumping its version. ssh-h3?


Both the paper and the project are very new - so there should not be 
that many things referring to it yet.


Kind regards
Philipp Kern



Bug#1058704: ITP: nsncd -- Name service non-caching daemon

2023-12-14 Thread Philipp Kern
Package: wnpp
Severity: wishlist
Owner: Philipp Kern 
X-Debbugs-Cc: debian-devel@lists.debian.org, pk...@debian.org

* Package name: nsncd
  Version : 1.4.1 (plus patches[1])
* URL : https://github.com/twosigma/nsncd
* License : Apache 2.0
  Programming Lang: Rust
  Description : Name service non-caching daemon

 nsncd implements the NSCD (name-service caching daemon) protocol to
 provide out-of-process NSS lookups but does not implement caching.

 It is designed to provide high-performance NSS lookups for programs
 that are not using the system libc, while providing semantics as if
 NSCD were not being used.

This is useful in environments in which you are mixing binaries from
several sources. One such environment is Nix, where binaries will
be linked to a (hermetic) glibc from the Nix store. By dropping the
need to cache, nsncd is a lot simpler than nscd - its only purpose
is to decouple your binaries from the NSS modules you have configured,
which will continue to run under the system glibc.
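
A rough sketch of how this fits together (paths are the standard glibc
ones; the nsswitch.conf contents are just an example):

  # /etc/nsswitch.conf on the host, evaluated by nsncd via the
  # system glibc:
  passwd: files systemd
  group:  files systemd
  hosts:  files dns

  # A Nix-linked binary never loads those NSS modules itself; it only
  # speaks the nscd wire protocol over /var/run/nscd/socket, and nsncd
  # answers on the other end.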

I'm going to maintain the package for the time being. If the Rust team
also wants to maintain Rust leaf software, I'm also happy to collaborate
there.

Kind regards
Philipp Kern

[1] The Nix people also added support for host resolution in
https://github.com/nix-community/nsncd/.



Re: Misc Developer News (#59)

2023-11-22 Thread Philipp Kern

Hi Donald,

On 2023-11-22 03:17, Donald Norwood wrote:

The news is collected on https://wiki.debian.org/DeveloperNews
Please contribute short news about your work/plans/subproject.


Thanks for posting this, but it looks like you crafted this email as a 
reply, which means it got automatically redirected to d-d and not posted 
to d-d-a.


Kind regards
Philipp Kern



Re: /usr/-only image

2023-09-11 Thread Philipp Kern

On 2023-09-10 22:42, Russ Allbery wrote:
So far as I know, no one has ever made a detailed, concrete proposal for
what the implications of this would be for Debian, what the transition
plan would look like, and how to address the various issues that will
arise.  Moving configuration files out of /etc, in particular, is
something I feel confident saying that we do not have any sort of project
consensus on, and is not something Debian as a project (as opposed to
individuals within the project) is currently planning on working on.


I don't think doing away with /etc entirely is going to be feasible for 
a while. However, another interesting question is what a minimal /etc 
looks like and what could be generated on demand or regenerated from 
some staging ground in /usr and /var (e.g. the debconf database).


If you squint, that's what NixOS is doing. But our situation is pretty 
messy: there are multiple conflicting systems to put configuration in, 
plus the question of how we merge files when updates are installed. 
There would need to be some deeper primitives to make this happen.
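
systemd's tmpfiles.d factory mechanism is one such primitive that
already exists - a minimal sketch (file name illustrative):

  # /usr/lib/tmpfiles.d/example.conf
  # 'C' copies /usr/share/factory/etc/issue to /etc/issue at boot
  # if the file does not exist yet.
  C /etc/issue - - - -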


Kind regards
Philipp Kern



Re: systemd-analyze security as a release goal

2023-07-05 Thread Philipp Kern

On 2023-07-05 09:36, Russell Coker wrote:

On Monday, 3 July 2023 22:37:35 AEST Russell Coker wrote:

https://wiki.debian.org/ReleaseGoals/SystemdAnalyzeSecurity


People have asked how hard it is to create policy for daemons.  For an
individual to create them it's a moderate amount of work, 1-2 hours per
daemon, which is a lot considering the dozens of daemons that people use.
But for a group of people it's not a big deal, it's almost nothing
compared to the scale of Debian development work.  The work that I've
done writing SE Linux policy for daemons is significantly greater than
what I'd like the collective of DDs to do in this regard.


My fear here would be that you are not in control of what your 
dependencies are doing. This is especially true if you think of NIS and 
PAM, where libraries are dlopen()ed and can spawn arbitrary helper 
binaries. I remember openssh installing a syscall filter for its auth 
binary, which then failed with certain PAM modules (see also your 
allow_ypbind example). So we should also not be too limiting when 
sandboxing daemons.
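
For illustration, these are the kinds of knobs in question - a sketch of
a hardening drop-in; whether each directive is safe depends on exactly
what the daemon and its PAM/NSS stack end up doing:

  # /etc/systemd/system/somedaemon.service.d/hardening.conf
  [Service]
  NoNewPrivileges=yes
  ProtectSystem=strict
  ProtectHome=yes
  PrivateTmp=yes
  RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
  SystemCallFilter=@system-service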


Kind regards
Philipp Kern



Re: proposal: dhcpcd-base as standard DHCP client starting with Trixie

2023-06-22 Thread Philipp Kern

On 22.06.23 16:03, Marco d'Itri wrote:

On Jun 22, Martin-Éric Racine  wrote:

The point has always been to ship some ifupdown-supported DHCP client
by default. This can be done either by keeping the default client's
priority to important or by making ifupdown depend on one.  I prefer
the latter.

It would be totally unacceptable to make ifupdown Depend on a DHCP
client, because this would bloat most server installations.
But I would be happy with a Recommends. It's not like the people whose
server needs a DHCP client do not know that they need to install one.


TBH time is too short to manually provision IP addresses on servers.

Kind regards
Philipp Kern



Re: proposal: dhcpcd-base as standard DHCP client starting with Trixie

2023-06-19 Thread Philipp Kern

On 2023-06-19 14:37, Luca Boccassi wrote:

The advantage of doing that is that it's what Ubuntu does IIRC, so
there will be extra pooling of resources to maintain those
setups, and the road should already be paved for it.


I am not sure if I have seen this play out in practice[1]. 
Ubuntu^WCanonical has been doing its own development in this space as 
well with netplan. Ubuntu will continue to do its own fixes to glue 
things together.


Kind regards
Philipp Kern

[1] With notable exceptions like doko maintaining the toolchain - and 
I'm sure I'm not crediting everyone. But that's also explicit package 
maintainership.




Re: DEB_BUILD_OPTIONS=nowerror

2023-02-28 Thread Philipp Kern

On 28.02.23 20:34, Steve Langasek wrote:

This is conceptually interesting to me.  In practice, I don't see us using
this in Ubuntu.  We have per-architecture differences from Debian (ppc64el
building with -O3 by default; riscv64 being a release architecture where it
isn't in Debian) that make it interesting to pick up on per-architecture
build failures caught by -Werror and not without.  But it's not practical to
do CI -Werror builds; when we do out-of-archive rebuilds for all
architectures, it's a significant commitment of resources and each rebuild
takes about a month to complete (on the slowest archs).  And to be able to
effectively analyze build results to identify Werror-related failures with
high signal would require two parallel builds, one with and one without the
flag, built against the same baseline.


That you are so resource constrained here surprises me a little. I can 
see that for Debian, but I'm surprised that Ubuntu is affected as well. 
Especially as you'd think that this could also be done within 
virtualization - the evaluation here is mostly around running the 
compiler and checking its errors, not so much about running tests 
accurately on real hardware.


Kind regards
Philipp Kern



Re: Populating non-free firmware?

2022-12-25 Thread Philipp Kern

On 25.12.22 00:31, Andrew M.A. Cater wrote:

The problem is a logistics one: the archives need to be split up, there
needs to be a transition plan, maybe the easiest way is to do NMU uploads


This is what (archive) overrides were invented for. However, with the 
split, maybe it'd be good for dak to temporarily export that component 
into both its own and non-free proper. That'd decouple the migration on 
the user side.


Kind regards
Philipp Kern



Re: ppc64el porterbox replacement: plummer.d.o -> platti.d.o

2022-10-24 Thread Philipp Kern

Hi,

On 24.10.22 06:40, Johannes Schauer Marin Rodrigues wrote:

Quoting David Bremner (2022-10-22 18:16:12)

Aurelien Jarno  writes:

We lost access to the Power9 machine hosted at Unicamp, which was
hosting the ppc64el porterbox called plummer.d.o. A new porterbox called
platti.d.o has been setup as a replacement.


It would be nifty if someone (TM) would update where
ppc64el-porterbox.debian.net points to.


this is the first time I hear about $arch-porterbox.debian.net. This is super
cool! When was that announced and who maintains it? Why is it only on
debian.net and not on debian.org?

I always use https://db.debian.org/machines.cgi to obtain the mapping from
debian architecture to porterbox machine. Maybe that website could inform me
that $arch-porterbox.debian.net also exists?

Whoever maintains this mapping (thank you!!) should also add it to
https://wiki.debian.org/DebianNetDomains and then it would be easy to figure
out whom to trigger once an update is necessary. :)


Looks like it's Jakub:


pkern@master ~ % ldapsearch -x 'dnsZoneEntry=ppc64el-porterbox*' uid
# extended LDIF
#
# LDAPv3
# base  (default) with scope subtree
# filter: dnsZoneEntry=ppc64el-porterbox*
# requesting: uid 
#


# jwilk, users, debian.org
dn: uid=jwilk,ou=users,dc=debian,dc=org
uid: jwilk


Kind regards
Philipp Kern



Re: epoch for tss2 package

2022-10-21 Thread Philipp Kern

On 21.10.22 20:06, Johannes Schauer Marin Rodrigues wrote:

Is that really the correct statement? Even after excluding all virtual packages
with a single provider, there are literally thousands of source packages
whose first alternative is a virtual package. Is this "policy" documented
somewhere? Because if it is, then either it should change or the archive has to
be changed to match that policy.


Hm, good point. I thought it was documented in a much stronger way than 
it actually is. In [1] it says:



To specify which of a set of real packages should be the default to satisfy a 
particular dependency on a virtual package, list the real package as an 
alternative before the virtual one.


Which does not say that you have to, only what you should do if you want 
to specify a real package.
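
A common instance of that pattern in the archive:

  # real package first, virtual package as the alternative
  Depends: default-mta | mail-transport-agent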



In any case, this was not my original question. Andreas presented a way to use
a transitional package to rename a package which will work fine I guess except
that we have to carry an empty package for a release and that empty package has
to be cleaned up at some point, for example by deborphan.

My original question was why using a virtual package for the same purpose is a
bad idea. The wiki page https://wiki.debian.org/RenamingPackages lists reasons
against it that are incorrect. So why is it really a bad idea?

Is there any reason not to delete the first reason (the sbuild one) completely?

And either I misunderstand the second reason or I implemented my POC
incorrectly or apt (as of 2022-10-21) is perfectly capable of replacing the old
with the new one.

Can somebody shed some light on this?


Yeah, I think that line should be deleted as it might never have been 
true. I checked the first code in git from 2004 and it doesn't reject 
virtual packages either. It does pick a winner manually in the resolver 
and it looks random (or rather in "apt showpkg" order). But it's not 
like it didn't work.


Kind regards
Philipp Kern

[1] 
https://www.debian.org/doc/debian-policy/ch-relationships.html#virtual-packages-provides




Re: epoch for tss2 package

2022-10-20 Thread Philipp Kern

On 20.10.22 13:40, Johannes Schauer Marin Rodrigues wrote:

Quoting Andreas Henriksson (2022-10-20 12:13:24)

Cannot be used for packages that are used in build dependencies, as several
build tools (like sbuild) do not support virtual packages in those
dependencies by design, to guarantee deterministic builds.

Wait what? If sbuild doesn't support virtual packages I'd like to hear about
that. Can I just remove this reason from the wiki page? It is obviously wrong.
If it is not, please file a bug against sbuild.


The correct statement here is that you ought to pick a default choice 
first[1] before a virtual alternative. We don't want to leave it up to 
the resolver to pick an arbitrary available build-dependency. So this is 
more of a policy question than a technical one. Behavior for 
experimental might currently differ due to a different resolver choice 
that's more flexible by design - to get newer versions from experimental 
if necessary.


Kind regards
Philipp Kern

[1] This might require an overall agreement across Debian at times. But 
that seems to be more relevant for dependencies than build-dependencies.




Re: Sunsetting sso.debian.org

2022-10-17 Thread Philipp Kern

On 17.10.22 17:29, Sam Hartman wrote:

That
* Gives us a second source of sso
* still leaves tracker wanting to consume client certs
* As far as I can tell keycloak can consume but not produce client certs
* Even if it can produce client certs we have all the usability
challenges of client certs


But is there a technical reason for tracker.d.o to do client certs in 
the first place? It's easy for a first-party d.o service running on DSA 
machines to enable OpenID Connect-based SSO against Salsa. And that only 
requires minor changes to the code to get the username from the slightly 
different HTTP header.


If there are API clients talking to it, it might be slightly more 
involved to set up - but it's not like other people haven't had to deal 
with getting OIDC tokens for various APIs before. :)
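
A sketch of what the web server side could look like with
mod_auth_openidc (client ID, claim, and URLs are illustrative):

  OIDCProviderMetadataURL https://salsa.debian.org/.well-known/openid-configuration
  OIDCClientID tracker
  OIDCClientSecret redacted-from-salsa-application-settings
  OIDCRedirectURI https://tracker.debian.org/oidc-callback
  OIDCRemoteUserClaim preferred_username

  <Location />
      AuthType openid-connect
      Require valid-user
      # the application then reads the username from REMOTE_USER
  </Location>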


Kind regards
Philipp Kern



Re: adduser default for sgid home directories

2022-07-25 Thread Philipp Kern

On 25.07.22 08:46, Bjørn Mork wrote:

Matt Barry  writes:

- why has a change been made


I think this is explained in excruciating detail.  The short version
(from NEWS):

"mode 0700 provides both the most secure, unsurprising default"

[...]

And the claim that this is "most unsurprising" (less surprising?) is
obviously false. "No change" is always less surprising than any change,
whatever the rationale is.


It can also be unsurprising from an end-user's perspective - for someone 
new to the system. So that line of argument does not really hold.


Kind regards
Philipp Kern



Re: RFC: Switch default from netkit-telnet(d) to inetutils-telnet(d)

2022-07-19 Thread Philipp Kern

On 19.07.22 16:05, Michael Stone wrote:

On Sun, Jul 17, 2022 at 01:49:53AM -0400, Timothy M Butterworth wrote:
Telnet is old, insecure and should not be used any more. What is the point 
of packaging a Telnet daemon when everyone should be using SSH. Telnet 
Client I can see because a person may need to connect to a router or 
switch that is still using telnet or hasn't had SSH Certificates 
generated yet.

I personally use telnet to connect to systems whose ssh implementations 
are old enough that they are no longer interoperable with current ssh. 
Every system will eventually become an old system, and telnet has a much 
better record of working over the long term than does ssh. Security 
concerns have a place in determining defaults, but not in banning 
software that other people find useful in a context that might not 
matter to you.


I found the client-side very tolerant of ancient server-side 
implementations when the right kinds of switches are passed to it (e.g. 
KexAlgorithms and HostKeyAlgorithms). I have yet to be unable to 
actually connect to a target - even if it means fiddling increasingly 
with flags.
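
For example, something along these lines for a server that only speaks
legacy algorithms (option names from current OpenSSH; host name is a
placeholder):

  ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 \
      -oHostKeyAlgorithms=+ssh-rsa \
      -oPubkeyAcceptedAlgorithms=+ssh-rsa user@ancient-host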


Kind regards
Philipp Kern



Re: popularity-contest: support for XB-Popcon-Reports: no

2022-05-04 Thread Philipp Kern

Hi,

On 2022-05-04 18:21, Bill Allombert wrote:
I plan to add support for 'XB-Popcon-Reports: no' to popularity-contest.

This allows building packages with private names that will not be
reported to popcon, by adding 'XB-Popcon-Reports: no' to debian/control.

This must not be used by packages in the Debian archive; however, it can
be used by package generators that create packages with randomly
generated names (package names that include a 128-bit UUID, for example)
or by organizations that generate packages for internal use
whose names include the organization name.

The rationale is that the only info really leaked is the package name,
so it only makes sense to hide a package if every system that has it
installed is also hiding it, so it is better to make it a property
of the package than of the system.


I like the idea, especially for organizations that know what they are 
doing[1]. I just fear that it won't actually solve your denylisting 
problem at hand. People will keep not specifying it. Couldn't popcon 
just accept reports only for packages that are in the archive somehow?
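
For reference, the proposal boils down to a single field in the
package's control stanza - the XB- prefix makes dpkg copy it into the
binary package's control file (package name illustrative):

  Package: example-internal-tool
  Architecture: all
  XB-Popcon-Reports: no
  Description: internal tool that opts out of popcon reporting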


Kind regards
Philipp Kern

[1] Although most might disable popcon anyway.



Re: partman, growlight, discoverable partitions, and fun

2021-09-26 Thread Philipp Kern

On 26.09.21 10:50, Adam Borowski wrote:

My biggest worry personally (aside from the realpolitik of
getting this change through) regards the automated partitioning
language available through the preseed system. Trying to emulate
this bug-for-bug is a non-starter, I think, both from a
technical and quality-of-life standpoint. If the emulation can't
be perfectly accurate, I don't think it ought be attempted for
such a critical, delicate procedure.

I personally think that preseed is nasty enough that users who do automation
on a scale that would make learning it worthwhile already have a better way to
do such automation.  For me, d-i is for manual installs, scripted stuff
wants a partitioner + glorified debootstrap.


As someone who has tried in the past to get partitioning right using 
preseeding across a wider variety of disk shapes, I think almost 
anything would be an improvement. FAI's setup-storage is obviously 
better. And good riddance to the existing system, a shell script horror 
story that defies sensible debugging. :)
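
For reference, this is the kind of preseed language in question
(standard d-i keys, heavily abbreviated):

  d-i partman-auto/method string lvm
  d-i partman-auto/choose_recipe select atomic
  d-i partman-partitioning/confirm_write_new_label boolean true
  d-i partman/choose_partition select finish
  d-i partman/confirm boolean true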


Kind regards
Philipp Kern



Bug#993488: maybe reason for wontfix?

2021-09-03 Thread Philipp Kern

On 2021-09-03 14:23, Tomas Pospisek wrote:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=993488#16 contains a
"wontfix + close" but no rationale. Which leaves the original reporter
with a large "?" I guess.

I am guessing that the reason for the "wontfix" is "that's just how
Unix works unfortunately" aka "that's a Unix design bug"? Is my guess
correct?

One other question - any idea on a way forward here? I would guess
that this behaviour (changing group membership won't change group
membership of running processes) is rooted somewhere quite low in the
stack, maybe in the kernel itself (or in POSIX :-#)? So if the
original reporter wanted to go ahead and pursue getting that problem
fixed, would he need to talk to the kernel mailing list, or do you
have an idea where he could go?


Processes in *nix inherit permissions. That's inherent to the design. If 
you want more guarantees, you need to move from discretionary access 
control (based on the identity at the time of process (tree) creation) 
to mandatory access control (e.g. SELinux).
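
A quick illustration of the inheritance (the group name is just an
example):

  $ id -nG                      # shell started before the change
  user cdrom
  $ sudo usermod -aG render "$USER"
  $ id -nG                      # running process keeps its old set
  user cdrom
  $ newgrp render               # a new process (or fresh login) picks
                                # up the new membership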


Kind regards
Philipp Kern



Bug#992692: general: Use https for {deb,security}.debian.org by default

2021-09-03 Thread Philipp Kern

Hi,

On 03.09.21 13:11, Simon Richter wrote:
[Revocation mechanism]

If we don't have one, shouldn't we worry more about that given the
widespread use of TLS?
We have a big hammer, shipping a new ca-certificates package. If we want 
something that only affects apt, but not other packages, that mechanism 
doesn't exist yet.


I think that's an interesting point, not just for revocation. There are 
forces pushing for more agility, switching out roots of trust more 
frequently. On very old releases you usually had the archive signing key 
of the next release on disk, so you could still move to the next 
release. With TLS you sort of risk not having the right authority's 
certificate on disk to make that happen. And of course we need to track 
what the authorities our frontends use are doing (e.g. [1] around how to 
deal with old Android devices).


But then I'm not sure how much we need to care about ancient releases 
that are out of security support. We would need to commit to updating 
the certificate bundle regularly, though.


To your other point: I don't think managing trust in individual CAs 
will scale. We cannot really anticipate which CAs we are going to use in 
the future.


Kind regards
Philipp Kern

[1] https://letsencrypt.org/2020/12/21/extending-android-compatibility.html



Re: Q: Use https for {deb,security}.debian.org by default

2021-08-21 Thread Philipp Kern

On 20.08.21 21:11, Russ Allbery wrote:

The way I would put it is that the security benefit of using TLS for apt
updates is primarily that it makes certain classes of attempts to mess
with the update channel more noisy and more likely to produce immediate
errors.


One thing of note is that it introduces a time dependency on the client. 
Now we seem to gravitate towards a world where you'd also fail DNS 
resolution if your time is wrong (because you cannot get at the 
DNS-over-TLS/HTTPS server), so this is probably accepted as not making 
things worse overall. I guess we could have some (somewhat insecure) 
defense in depth if we wanted to, but maybe the world just agreed that 
you need to get your clock roughly correct. ;-)


Kind regards
Philipp Kern



Re: Q: Use https for {deb,security}.debian.org by default

2021-08-19 Thread Philipp Kern

On 19.08.21 21:48, Paul Gevers wrote:

On 19-08-2021 21:46, Simon Richter wrote:

For the most part, users would configure https if they are behind a
corporate firewall that disallows http, or modifies data in-flight so
signature verification fails; everyone else is better off using plain http.

Except for the security archive, where https can prevent a
man-in-the-middle from serving you outdated information and thus deprive
you from updates.


Only for a week, until Valid-Until expires. Note that the denial of 
service works equally well against HTTPS, it's just noisier.
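
For context, the mechanism in question - Release file metadata plus the
apt option that enforces it (dates illustrative):

  # from the security archive's Release file:
  Date: Sat, 21 Aug 2021 10:00:00 UTC
  Valid-Until: Sat, 28 Aug 2021 10:00:00 UTC

  # apt configuration (the default where Valid-Until is present):
  Acquire::Check-Valid-Until "true";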


Kind regards
Philipp Kern



Re: Debian package manager privilege escalation attack

2021-08-12 Thread Philipp Kern

On 2021-08-12 17:56, Marc Haber wrote:

On Thu, 12 Aug 2021 13:44:24 +0200, Philipp Kern 
wrote:

On 2021-08-12 12:23, Polyna-Maude Racicot-Summerside wrote:

Now if people start doing stuff they don't master then it's not
privilege escalation but much more something like another manifestation
of human stupidity. And no number of articles will be sufficient to make
people change.

[...]

This is only an article made to get people onto a website and see
publicity or whatever goal the author set. There's nothing genuine in
there.


I think it's less about human stupidity than about all the knowledge you
need to acquire (and retain) to securely administer a system. It is not
easy. The concern expressed here is pretty much common knowledge among
sysadmins of ye olde times.


I think the essence of the article is that on some apt/dpkg-using
distributions, a "normal" user gets sudo rights to do apt only (I have
never seen that on Debian, do we do this in some corner case?) and is
able to escalate to root from that trivially, even without doctoring up
a malicious package - just by shelling out from dpkg's conffile prompt
to a full root shell.


You know that granting sudo access to apt without a wrapper is a bad 
idea. I know that it is a bad idea. That was my point - plus that 
wanting to hand out some privilege to install packages is a very common 
trope in multi-user settings.
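
The setup under discussion is the classic sudoers line (username
illustrative) - the escape hatch being that dpkg's conffile prompt
offers "Z: start a shell to examine the situation", which then runs as
root:

  alice ALL=(root) NOPASSWD: /usr/bin/apt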


Kind regards
Philipp Kern



Re: Debian package manager privilege escalation attack

2021-08-12 Thread Philipp Kern

On 2021-08-12 12:23, Polyna-Maude Racicot-Summerside wrote:

Now if people start doing stuff they don't master then it's not
privilege escalation but much more something like another manifestation
of human stupidity. And no number of articles will be sufficient to make
people change.

[...]

This is only an article made to get people onto a website and see
publicity or whatever goal the author set. There's nothing genuine in
there.


I think it's less about human stupidity than about all the knowledge you 
need to acquire (and retain) to securely administer a system. It is not 
easy. The concern expressed here is pretty much common knowledge among 
sysadmins of ye olde times. Of course you can abuse this, and yes it got 
easier recently. The boundary that sudo provides is very blurry, hard to 
understand and full of footguns. People need to come up with better 
boundaries - or in this case they might already exist. Basically you 
need to be able to validate the request and execute it in a secure 
environment. In basically every shared environment people come up with 
some way to allow package installation, but it's not easy to find the 
right instructions on how to do this properly on Debian[1]. I'm not 
aware of a well-trodden path for maintaining a system where users do not 
need root. Throw in some reluctance to deal with "newfangled things" (to 
establish new, maybe controversial boundaries) and you end up with 
everyone fending for themselves.


Now of course there's value in people having this knowledge and 
companies should recognize this value. But from communication and 
awareness we learn, no?


Kind regards
Philipp Kern

[1] E.g. thinking of https://debian-handbook.info/browse/stable/



Re: Debian package manager privilege escalation attack

2021-08-12 Thread Philipp Kern

On 2021-08-12 08:32, Vincent Bernat wrote:

❦ 12 August 2021 10:39 +05, Andrey Rahmatullin:

I just ran across this article
https://blog.ikuamike.io/posts/2021/package_managers_privesc/ I tested
the attacks on Debian 11 and they work successfully giving me a root
shell prompt.

I don't think calling this "privilege escalation" or "attack" is correct.
The premise of the post is "the user should not be a root/admin user but
has been assigned sudo permissions to run the package manager" and one
doesn't really need a long article to prove that it's not secure.


I think the article is interesting nonetheless. Some people may think
that granting sudo on apt is OK. In the past, I think "apt install
./something.deb" was not possible.


I think the actual solution here is PackageKit. My understanding is that 
it does not let you do this when you grant the package-install 
permission to users. And it even lets you do flexible policies through 
polkit.


And sure, that still allows users to install packages from any 
configured source which might include packages with vulnerabilities or 
intended privilege escalation. But that feels like a different, more 
general problem.
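
A sketch of such a polkit policy (the action ID is PackageKit's; the
group name is illustrative):

  // /etc/polkit-1/rules.d/50-package-install.rules
  polkit.addRule(function(action, subject) {
      if (action.id == "org.freedesktop.packagekit.package-install" &&
          subject.isInGroup("pkg-installers")) {
          return polkit.Result.YES;
      }
  });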


Kind regards
Philipp Kern



Re: automatic NEW processing [was Re: Need help with Multi-Arch in systemd]

2021-07-14 Thread Philipp Kern

On 14.07.21 13:47, Michael Biebl wrote:

Am 14.07.21 um 12:59 schrieb Simon McVittie:

Would it be feasible for dak to have a list of binary package name
regexes mapped to a source package and a section/priority, and
auto-accept packages from the given source package that match the regex,
assigning the given section/priority, without manual action? That would
let the ftp team pre-approve src:systemd to ship
/^libsystemd-shared-[0-9]+$/ in libs/optional, for example.

It seems like this would also be good for src:linux, where ABI breaks
are often tied to security fixes that should enter the archive ASAP.


If something fully automated like this were implemented, I would have 
far fewer concerns with this option.


As it stands today, NEW processing is simply too unpredictable. It can 
range from taking a few hours or days to several months.


And yet it should not dictate technical solutions. We basically see the 
same thing with nvidia-graphics-drivers that break your running 
applications when the libraries are upgraded and you don't reboot. 
Arguably the proper solution is to version them with the full 
major/minor version. But I can see how that's a total hassle with NEW 
processing, both for the maintainer and for the FTP team.


I do recall that the FTP masters would've been generally open to having 
such an auto-approver (but maybe I'm wrong), but that no one has stepped 
up yet to code it.


Kind regards
Philipp Kern



Re: Thanks and Decision making working group (was Re: General Resolution: Statement regarding Richard Stallman's readmission to the FSF board result)

2021-04-20 Thread Philipp Kern

On 2021-04-20 12:44, Adrian Bunk wrote:

A single person being able to block consensus of basically everyone else
feels like opening up the process to unconstructive behavior.


A single person whom we trust to upload anything to our archive.[1]

If the person thinks there is something left that should be discussed
then there is no consensus, and if a DD is just trying to sabotage
random things in Debian then GR discussion periods are not my biggest
worry.


I mean, the fact that we have unilateral access to the archive is kinda 
bad as well. But there's much less cost to disrupting a vote process and 
being obstructionist than there is to uploading a compromised 
package. :)


Kind regards
Philipp Kern



Re: Thanks and Decision making working group (was Re: General Resolution: Statement regarding Richard Stallman's readmission to the FSF board result)

2021-04-20 Thread Philipp Kern

On 2021-04-20 10:59, Adrian Bunk wrote:

I would suggest to replace the option of shortening the discussion
period with the possibility of early calling for a vote after a week
that can be vetoed by any developer within 24 hours. This would ensure
that shorter discussion periods would only happen when there is
consensus that nothing is left to be discussed.


But K developers could have stopped this, right? (Per 4.2.2.3.) Now the 
constitution feels quite heavyweight on that ("sponsoring a 
resolution"), but I'd be surprised if the DPL would not have taken back 
the decision in that case (4.2.2.5).


A single person being able to block consensus of basically everyone else 
feels like opening up the process to unconstructive behavior.


Kind regards
Philipp Kern



Re: Possible DEP proposal - contribution preferences

2021-02-02 Thread Philipp Kern

On 2021-02-02 16:48, Adrian Bunk wrote:

A debhelper compat bump is a breaking change that must not be done
without the maintainer verifying that it didn't introduce any
regression.

This is the whole point of compat levels.

Unfortunately there is a lack of recognition of how many bugs are
introduced by blindly updating debhelper compat levels - staying
at a deprecated compat level is better than a not properly tested
compat bump.


To be fair: you can verify statically whether the compat bump 
introduced any changes (by building twice and comparing the results).
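
A minimal sketch of that check, assuming the package builds
reproducibly (package name illustrative):

  dpkg-buildpackage -b -us -uc            # at the old compat level
  mv ../example_1.0_amd64.deb /tmp/before.deb
  # bump debhelper-compat in debian/control, then:
  dpkg-buildpackage -b -us -uc
  diffoscope /tmp/before.deb ../example_1.0_amd64.deb
  # no output means the bump changed nothing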


Kind regards
Philipp Kern



Re: Making Debian available

2021-01-24 Thread Philipp Kern
On 24.01.21 17:08, John Scott wrote:
> Changing the firmware on an EEPROM is far less practical for the user or 
> manufacturer (they're on similar footing), and if it's not electronically 
> erasable, it's merely an object that can't be practically changed of which 
> you'd need to make a new one anyway.

LVFS is a thing now (kudos to Richard Hughes) and firmware updates can
nowadays be pretty seamless, even on Linux. So I don't think I agree
that EEPROM updates are far less practical. And I think I'd still prefer
if the kernel pushes the (driver-)appropriate firmware to the device as
it sees fit rather than having explicit EEPROM update cycles independent
from driver updates.
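
The fwupd flow, for reference:

  fwupdmgr refresh        # fetch update metadata from the LVFS
  fwupdmgr get-updates    # list devices with pending firmware
  fwupdmgr update         # download and apply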

> 1. Unlike with SSD firmware, there are wireless cards that use libre firmware 
> and some are still manufactured and quite easy to attain. The goalpost for 
> free software moves with what has been achieved.

I guess to make your point stronger you could also have linked to those
products that work with libre firmware. A brief search[1] then finds
two a/b/g/n cards from Atheros that are not available through normal
retail channels anymore, because they are 8 to 10 years old (at least)
and do not support contemporary wifi standards. And the same search
turns up that it took many years from the point where the free firmware
existed (2013) until it got uploaded to Debian (2017) and released
(2019). I think its existence
is super interesting from a research point of view. But I don't think it
makes a strong case for availability of libre firmware for wifi cards.
Especially if you care about spectral efficiency, i.e. using a shared
medium efficiently.

Kind regards
Philipp Kern

[1]
https://libreplanet.org/wiki/LinuxLibre:Devices_that_require_non-free_firmware



Re: Making Debian available

2021-01-17 Thread Philipp Kern
On 17.01.21 14:19, Marc Haber wrote:
> On Sun, 17 Jan 2021 18:16:04 +0500, Andrey Rahmatullin
>  wrote:
>> On Sun, Jan 17, 2021 at 11:33:28AM +0100, Marc Haber wrote:
>>> My workaround is to plug in a network cable for installation. But
>>> alas, I have up to now been able to avoid hardware without built-in
>>> Ethernet. I guess that many USB Ethernet interfaces will work out of
>>> the box without non-free, right?
>> I think people on Reddit usually recommend USB-tethering the cell phone.
> That works for IP?

I'm not sure what the question here is. You get NAT. You even get NAT to
your WiFi - i.e. you can use it as a glorified USB WiFi device (at least
with Android). I have successfully either fixed or installed Debian
through a cell phone in the past because there was no other way at hand.

Kind regards
Philipp Kern



Re: Making Debian available

2021-01-17 Thread Philipp Kern
On 17.01.21 11:51, Marc Haber wrote:
> On Sat, 16 Jan 2021 20:20:56 +0100, Andreas Tille 
> wrote:
>> IMHO the fact that people claim that
>> "Ubuntu is easy to use but Debian is not" is to quite some amount based
>> on this kind of experience of users who do not know that kind of basics
>> and are not able to fix a rudimentary system afterwards.
> 
> Absolutely. The Installation Experience is one of the first contacts
> with the distribution for most people¹, and since we all know that the
> first five seconds decide whether it's gonna be love or hate, we
> should not be trying THIS hard to be a failure in this very important
> part of the relationship our product is building with the user.

It's not very relevant today anymore, but my first Linux distributions
(including Debian) were actually store-bought. If we had needed to rely
on firmware back then (which we did not), and had it not been
included in the box, the user would have been pretty much out of luck.
Especially on the networking side. I guess the end result is the
equivalent of shipping a separate driver disk with all the non-free bits. :(

The FSF also kinda muddied the waters with its stance on it being ok to
have soft-updatable firmware in EEPROMs but insisting that it is not ok
to load firmware on demand post-boot. At the same time efforts like SOF
which try to offer open firmware are interesting. But then we still end
up with the firmware in non-free, of course, as it needs to be signed
for the most common DSPs - and cannot be rebuilt reproducibly. I guess
we are not the target here either but instead it's for vendors basing
their firmware on one common architecture. So even when we get close, we
don't seem to get all the way. :(

Kind regards
Philipp Kern



Re: Making Debian available, non-free promotor

2021-01-16 Thread Philipp Kern
On 15.01.21 13:42, Ansgar wrote:
> On Tue, 2021-01-12 at 19:30 +0100, Geert Stappers wrote:
>> Ah, yes I also wonder how much the world will improve
>> if non-free would be split in non-free and non-free-firmware.
>> Currently is non free firmware a hugh promoter of non-free in
>> /etc/apt/sources.list
> 
> I proposed moving non-free firmware to a new non-free-firmware some
> time ago[1], but then it seemed like there was no consensus on this
> which I though we had before.  Some people wanted non-free/firmware
> instead (different name), wanted packages to start appearing in
> multiple components (non-free and non-free[/-]firmware), wanted
> additional categories (e.g. non-free/doc for Free Software Foundation
> stuff), wanted non-free drivers as well, wanted major changes how
> components work (which might imply incompatible changes for mirroring
> tools and so on), ...

I idly wonder if we could call it firmware and call it a day. I tried to
propose that a bunch of times and was not successful either (e.g. it was
unclear to me if that needed a GR).
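
i.e. a sources.list line along these lines (component name as mused
above, suite illustrative):

  deb http://deb.debian.org/debian bullseye main firmware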

I guess better non-free filtering would not be a bad idea, though. For
the buildd network it is also still an unsolved question how to allow
build-depending on a (small, allowlisted) subset of non-free.

Kind regards
Philipp Kern



Re: Package dependency versions and consistency

2020-12-30 Thread Philipp Kern
On 29.12.20 23:39, Josh Triplett wrote:
> API is not ABI, and in many ecosystems (including but not limited to
> Rust), a library is more than just a set of symbol names pointing to
> compiled machine code. For instance, libraries can include generics,
> compile-time code generation, constant functions evaluated at
> compile-time, code run by build scripts, macros, and a variety of other
> things that get processed at build time and can't simply get
> pre-compiled to a symbol in a shared library. It may be possible, in the
> future, to carefully construct something like a shared library using a
> subset ABI, but doing so would have substantial limitations, and would
> not be a general-purpose solution for every library. It *might* be a
> viable solution for specific libraries pre-identified as being
> particularly likely to require updates (e.g. crypto libraries).

Interestingly enough, there is also growing discontent in the C++
community around ABI stability holding them back (e.g. [1], but that's
far from the only such opinion).

I would have liked to make the ability to binNMU more accessible
(similar to the give-back self-service); however, I'm now somewhat
convinced that we need no-change source-only uploads, preferably
performed centrally by dak. And you need to be able to supply build
ordering constraints.

I do wonder if delta downloads of .debs would really help, though. Even
though individual builds might be reproducible the same is not
necessarily true across independent uploads. So at least for final
applications, with optimizations like LTO, it seems not that useful. For
the lot of small helper packages in between it might help, but then the
delay is often just grabbing the file off the mirror's disk, and a delta
scheme might only make that worse.

Kind regards
Philipp Kern

[1] https://cor3ntin.github.io/posts/abi/



Re: Architecture: all binNMUs (was: deduplicating jquery/)

2020-12-06 Thread Philipp Kern
On 06.12.20 01:08, Paul Wise wrote:
> On Sat, Dec 5, 2020 at 12:21 PM Matthias Klose wrote:
> 
>> Maybe there is more. But there's no progress, or intent to fix every tool to be
>> aware of binNMUs.  Maybe it's better to rethink how sourceful no-change
>> no-maintainer uploads could be done without introducing the above issues?
> 
> `dch --rebuild` already exists, so this would just need support in
> wanna-build/sbuild for generating such uploads and support in dak for
> accepting sourceful uploads from wanna-build/buildds rather than
> maintainers.

Given the whole source code trust story it'd be better if dak were to do
it by itself rather than relying on an external service to do it.

(Or we make it culturally allowed to do it using client-side tooling, as
long as it is a no-change-but-debian/changelog upload.)

Kind regards
Philipp Kern






Re: Allowed to build-depend a pkg in main on a pkg in non-free?

2020-09-30 Thread Philipp Kern
On 30.09.20 22:45, Josh Triplett wrote:
> [Accidentally sent this early before it was finished.]
> 
> Roland Fehrenbacher wrote:
>>>>>>> "S" == Sven Joachim  writes:
>> S> In addition, the packages in *main*
>> S> 
>> S> * must not require or recommend a package outside of *main* for
>> S>   compilation or execution (thus, the package must not declare a "Pre-
>> S>   Depends", "Depends", "Recommends", "Build-Depends", "Build-Depends-
>> S>   Indep", or "Build-Depends-Arch" relationship on a non-*main* package
>> S>   unless that package is only listed as a non-default alternative for
>> S>   a package in *main*),
>>
>> Hmm, what I intend to do conforms to the first sentence of the paragraph
>> (the packages to go into main do not require or recommend a package
>> outside of *main* for compilation or execution),
> 
> The *source package* would, though, and that isn't allowed either.
> 
> One specific rationale for this: it must be possible for someone who
> *only* uses main to download the source, install the build dependencies,
> and successfully build the package themselves. Doing *that* must not
> require anything outside of main.

Somewhat ironically, not depending on anything but main also applies to
non-free and contrib. (At least when you want the package to be built by
the official builders.)

Kind regards
Philipp Kern



Re: [External] Re: Lenovo discount portal update (and a few other things)

2020-09-04 Thread Philipp Kern
On 04.09.20 11:23, Philip Rinn wrote:
>> Why do we need to make this 100% accurate in the first place? Everyone
>> who got access to a debian.org email address has been an OSS contributor
>> of sorts. Which leaves those who opted out of the email address entirely
>> (rather than not using it) - but they are free to reactivate it. It
>> feels like just checking for @debian.org is good enough, IMO.
> 
> Well, DMs don't have debian.org email addresses.

Sure, but I'd expect that state to be temporary, no?

Kind regards
Philipp Kern



Re: [External] Re: Lenovo discount portal update (and a few other things)

2020-09-04 Thread Philipp Kern
On 04.09.20 03:39, Paul Wise wrote:
> On Thu, 2020-09-03 at 15:18 -0400, Mark Pearson wrote: 
> 
>> For DSA - I'm assuming all role addresses have members behind it with 
>> debian addresses? "Please don't register on the portal with role 
>> addresses" would seem a sensible guideline to me.
> 
> I just took a look at the aliases repo and most of them are solely
> Debian members but some have folks who are not yet Debian members and
> at least one has no Debian members on it.
> 
>> If there is a group missing that it makes sense to add we can look at 
>> that - let me know. Using the debian.org email as a filter seemed like a 
>> neat and simple solution when I discussed it with Jonathan originally.
>> I'd rather avoid having to manage lists of individual email addresses. 
>> That's a real pain and IMO will only break in the long term.
>> Open to other suggestions if what we have implemented doesn't work but 
>> it has to be balanced with the amount of effort involved.
> 
> If you are able to regularly automatically load and process a file,
> there is one containing a list of Debian Maintainers, including an
> email address that they use in their Debian work. IIRC this list is
> regularly pruned by Debian when folks stop contributing. Probably
> updating your copy of it daily would be regular enough.

Why do we need to make this 100% accurate in the first place? Everyone
who got access to a debian.org email address has been an OSS contributor
of sorts. Which leaves those who opted out of the email address entirely
(rather than not using it) - but they are free to reactivate it. It
feels like just checking for @debian.org is good enough, IMO.

Kind regards
Philipp Kern





Re: The "which -s" flag

2020-08-31 Thread Philipp Kern
On 31.08.20 22:19, Fabrice BAUZAC-STEHLY wrote:
> Simon McVittie writes:
> 
>> which(1) is non-standardized, which is likely to be part of the reason
>> why Debian has its own implementation not shared with other Linux
>> distributions. Some other Linux distributions, for example Fedora and
>> Arch Linux, use GNU Which <https://savannah.gnu.org/projects/which/> and
>> that doesn't seem to support the -s option either (but does support a lot
>> of other non-standard options). Adding a -s option to the debianutils
>> implementation of `which` will help your scripts to run on Debian and
>> (eventually) Debian derivatives like Ubuntu, but won't help your scripts
>> to run on Fedora or Arch.
> 
> Is there a reason why Debian doesn't use GNU Which?  Sounds like
> unnecessary maintenance to me...

Well, it looks like GNU which was last updated in 2015 (both tarball and
CVS) and despite GNU redirecting to a github.io page it doesn't look
like there is any more up-to-date repository of it either. So I'm not
sure if maintenance is a great argument here. Although I will note that
Archlinux does not actually patch it.

Kind regards
Philipp Kern



Bug#968507: O: icon-naming-utils -- script for maintaining backwards compatibility of Tango Project

2020-08-16 Thread Philipp Kern
Package: wnpp
Severity: normal

I intend to orphan the icon-naming-utils package.

Last upstream release was 11 years ago. There is effectively no churn in
this package. It is also a required build dependency for a bunch of icon
themes:

# Broken Build-Depends:
extra-xdg-menus: icon-naming-utils
gnome-icon-theme: icon-naming-utils (>= 0.8.7)
human-icon-theme/non-free: icon-naming-utils (>= 0.8.1)
mate-icon-theme: icon-naming-utils (>= 0.8.7)
mate-themes: icon-naming-utils
metatheme-gilouche: icon-naming-utils
moblin-icon-theme: icon-naming-utils
sugar-artwork: icon-naming-utils
suru-icon-theme: icon-naming-utils
tangerine-icon-theme/non-free: icon-naming-utils (>= 0.7.1)
tango-icon-theme: icon-naming-utils (>= 0.8.90)

The package description is:
 Tango is a project to create a new cross-desktop and cross-platform icon
 theme, using a standard style guide, and the new Icon Naming Specification.
 This package contains the perl script for maintaining backwards
 compatibility.



Re: Salsa update: no more "-guest" and more

2020-04-28 Thread Philipp Kern

On 2020-04-28 18:48, Jeremy Stanley wrote:

On 2020-04-28 14:32:55 +0200 (+0200), Bernd Zeimetz wrote:

On 2020-04-28 14:27, PICCA Frederic-Emmanuel wrote:
> Is it possible to use its ssh key in order to have access to
> the salsa api? I mean instead of the token, which looks to me
> quite fragile compared to ssh via a gpg card and the gpg agent.

The api works with a token - and without 2fa. That will not change
if you enforce 2fa.

If you use ssh, you can create a separate account for the ssh key and
give it very special permissions, if you need it for automatic
pushes or similar things.


So to summarize, 2FA in Salsa may protect against someone losing
control of their WebUI credentials, but does nothing to secure
against theft of API keys they've generated, nor of an SSH key
persisted/forwarded in an agent or left lying around unencrypted (or
easily guessed because someone made unfortunate choices when
patching a random number generator).


Hopefully adding those requires reauthenticating with 2FA even in an 
open session.



Before adding security controls, it's a good idea to assess your
threat model. Is it the same as projects which experienced high
profile compromises like the Linux kernel archive or Matrix, where
the attackers leveraged stolen SSH keys to gain a foothold? What is
Salsa hosting which would be sensitive if altered? Source code. And
how are those alterations normally applied? Git over SSH. (Granted,
there's discussion of using its WebUI to authenticate sessions for
other project systems, so that does potentially change the risks
involved.)


While that's true, that's also an "it needs to provide perfect 
security" argument. While I'd also like to see 2FA gain proper support 
for authenticated key use including touch (FIDO/U2F support landed in 
OpenSSH), it also solves a different problem. The problem here is 
phishing. And unfortunately even the most technically adept users can be 
phished when they let their guard down.



I agree that having 2FA support in Salsa is great, but providing it
for those who want to rely on it for their accounts is different
from unilaterally forcing it on all users even if they find it a
significant additional inconvenience for little actual benefit.
Thankfully, it sounds like the Salsa admins plan to keep use of 2FA
voluntary.


It's a risk assessment, and one that depends on the population it needs 
to support. I 
think one should encourage people to set up 2FA and if necessary send 
out some hardware if there's an undue hardship. And then eventually make 
it mandatory. I fully understand that this is currently infeasible, but 
if Salsa is going to be the primary development platform we eventually 
need to trust, it should probably go into the direction of having a 2FA 
requirement as an ultimate goal.


Or we decide not to trust it because of its exposure and everyone else 
needs to work around that. I know ftp-master, DSA and other service 
owners have to do this today for good reasons. That pushes the cost 
elsewhere of course. On the other hand it's not the worst idea to 
require signatures on all commits instead.


Kind regards
Philipp Kern



Re: Salsa update: no more "-guest" and more

2020-04-28 Thread Philipp Kern

On 2020-04-28 05:08, Wookey wrote:

On 2020-04-26 14:07 +0200, Bernd Zeimetz wrote:

Hi,

Google Authenticator is a software-based authenticator by Google that
implements two-step verification services using the Time-based One-time
Password Algorithm (TOTP; specified in RFC 6238) and the HMAC-based
One-time Password algorithm (HOTP; specified in RFC 4226), for
authenticating users of software applications.

There are even cli tools that do the same stuff. I'd guess there is at
least one in Debian.


yes oathtool.

But this is still a PITA for sites where it is required, like
microsoft and google. I don't want to have to do this for Debian stuff
too. (run this auth program, then have a menu to say which site I
am making the number for so it knows which token to use, then paste
the resulting magic number into a webform). Are you proposing
something less tiresome than this?

I would much prefer to continue to be trusted not to have a shit
password and take reasonable care in using it. Or that PAKE thing
sounded like it might work quite well and the site didn't have to keep
the whole password list. But my experience of 2FA so far has been
deeply irksome, so I resent it being enforced, unless there is some
way of not having to go through that rigmarole every time (the above
sites generally only make me do it every two weeks - if it was every
time I'd explode). But if every site started doing this it would be
truly awful - one has hundreds of logins these days.

Debian is one place that has a reasonably competent userbase - I
remain unconvinced that we need to change things.
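
For reference, the CLI flow being described is essentially (the secret
is a placeholder):

  oathtool --totp --base32 "$TOTP_SECRET"   # prints the current code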


It's kinda weird, given that the solution exists with FIDO 2FA tokens 
using WebAuthn. (Like the current generation of YubiKeys, but there are 
of course others.) GitLab supports that.


I mean I don't want to suggest that buying hardware is required, but 
that's literally what they were designed for. Automatically dealing with 
origin information sanely and then a touch signs you in. OTPs are as 
phishable as passwords.


Kind regards
Philipp Kern



Re: FTP Team -- call for volunteers

2020-03-15 Thread Philipp Kern

On 2020-03-15 18:25, Theodore Y. Ts'o wrote:

The bigger thing which we might be able to do is to require minimal
review if the source package is already in the distribution, but the
main reason why it is in the ftp-master tar pit is because upstream
has bumped the major version number of a shared library, and so there
is a new binary package triggering what appears to be de novo review
by the ftp master team.  I understand there is a super-abundance of
caution which seems to drive all ftp-master team decisions, but
perhaps this could be eased, in the interest of reducing a wait time
that is, in some cases, greater than a year?


It also drives technical decisions. A much cleaner way from a deployment 
perspective would be to version kernel packages (and another pet peeve, 
nvidia packages) for every upload. That way updates and rollbacks can be 
managed more cleanly. (E.g. the old kernel remaining in the boot menu, 
just like Ubuntu, which bumps the version with every upload these days.)


Now we could also fix that using a whitelist approach. But I have not 
seen much openness to tackling this part of NEW review and I am unsure 
why. From the public NEW tooling (I don't know dak's side) it pretty 
clearly does not look like a de novo review, as the diff to the archive 
is highlighted. Maybe another way would be to split the queue using a 
weighting function. But I am not aware of public documentation on how 
the review process is organized currently. Is there any?


(I'm happy to look at potential whitelisting code, but I think the last 
time someone tried, a big refactoring and the introduction of tests were 
required of them prior to the contribution - which is a high bar, on top 
of first getting dak to run properly for development purposes.)


Kind regards
Philipp Kern



Re: Master-Slave terminology Re: [Piuparts-devel] piuparts.d.o stalled?

2020-02-13 Thread Philipp Kern

On 2020-02-13 09:14, Timo Weingärtner wrote:

Hallo Ulrike,

12.02.20 17:46 Ulrike Uhlig:

On 12.02.20 17:01, Nicolas Dandrimont wrote:
> In any case, since DSA had to restart everything at UBC, the piuparts
> slave got restarted as well and it's churning through the backlog.
> Unfortunately it looks like restarting the slave just eats its logs.

I'd like to attract your attention to this very fine document:

https://tools.ietf.org/id/draft-knodel-terminology-00.html#rfc.section.1.1

Quoting from there: "Master-slave is an oppressive metaphor that will
and should never become fully detached from history."

As an alternative:
"Several options are suggested here and should be chosen based on the
pairing that is most clear in context:

Primary-secondary
Leader-follower
Active-standby
Primary-replica
Writer-reader
Coordinator-worker
Parent-helper
"


I don't think giving slaves new labels helps them in any way; they will 
still be slaves.

Or do you intend to actually liberate them?
If yes: how? which liberties are they supposed to gain?
If no: then you're actually helping the slave owners hide their 
wrongdoings.


Regardless of whether you buy the premise of racism in language, the 
alternatives suggested are actually quite instructive: You can use names 
that are actually more descriptive and do not invoke bad memories, i.e. 
it's a constructive proposal. The same is true for blacklist vs. 
whitelist as mentioned in there, for which allowlist and rejectlist are 
terms that actually describe what is happening in most contexts.


Of course communities also build up some slang to see who is "in" the 
group and who is "out". But it actually makes things more accessible to 
others if you describe things as they are.


Kind regards
Philipp Kern



Re: Heads up: persistent journal has been enabled in systemd

2020-02-05 Thread Philipp Kern

On 2020-02-05 08:12, Michael Biebl wrote:

journald has matured over the years and journalctl has become more
sophisticated in dealing with broken journal files, so I would be very
much interested in your experience with more recent systemd versions.


Not quite the same issue, but I will note that journalctl's (and also 
systemctl status') performance reading journal files is still pretty 
awful on spinning rust[1]. At times this makes me go to text logs 
instead because slicing the files using tail and grep is much, much 
faster.


Kind regards
Philipp Kern

[1] I think this is pretty much 
https://github.com/systemd/systemd/issues/2460




Re: migration from cron.daily to systemd timers

2020-01-08 Thread Philipp Kern

On 2020-01-08 14:27, Daniel Leidert wrote:

And what is the benefit of this change: Getting rid of cron?

The very simple thing is: CRON=1 enables a cron job. It does *not* say: 
"Please enable something different as long as it achieves the same." 
There is nothing wrong with the cron job and it works perfectly fine. So 
I don't want to have it replaced by something less transparent.

Why do you resist the appropriate behavior of raising the question 
whether the user wants you to replace cron with systemd?


I don't think yelling in this way is helpful. "Run by cron" used to be 
the way of saying that something is a periodic job, as cron was the only 
means of accomplishing this. It turns out it was a misnomer already, 
given that such jobs run from cron.daily, which might be executed by 
anacron on some systems and by cron on others.


I think there needs to be a sensible choice for *periodic jobs* that we 
should document as the default unless there is a reason to use something 
else. It does not need to be cron, though.


Kind regards
Philipp Kern



Re: Be nice to your fellow Debian colleagues

2020-01-01 Thread Philipp Kern
> On 1/1/20 9:46 pm, Martin Steigerwald wrote:
>> I agree with Andrew that at least some of the options in the GR
>> were not about diversity or inclusion, but about exclusion and the
>> opposite of diversity. I pointed it out *clearly* before hand, but
>> that was all I could do.
> 
> Yes, but it's much more than that.  The diversity in decisions
> relating to Debian's future need to be able to be influenced by the
> people and for the people -- not by the political classes.  In this
> case, the political classes are the DDs that have absolute privilege here.
> 
> IOW, the GR process itself is severely flawed and it cannot, in its
> current state provide what is needed for Debian from the eyes of all
> reasonable stakeholders, it is very limited to a small group of Debian
> users known collectively as DDs .. the current "gods" of Debian whom
> have ultimate power to do good or do bad with or for the project.
> [Such] Power corrupts and absolute power corrupts absolutely.

So Debian is a volunteer project. We build an OS - together. Those who
do the work get the say. And as far as our Social Contract goes, either
there is a trust of the user base that we consider the needs of our
users or they will go elsewhere. I think as Devuan has shown, they don't
actually do the latter. And if they do, more power to them if that
serves their need better. This is Free Software after all. If there is a
greater need and valuable *actual new software* (like - from hearsay -
elogind and opensysusers) as the output, someone who is intrigued by
that can package and integrate it. Telling others to completely stifle
any kind of progress because of almost religious[1] opposition is not
acceptable.

In every decision there will be people who feel misrepresented. Such is
democracy. In fact the outcome was not the absolutist Proposal F but the
slightly more inclusive Proposal B. I think it is fair to assume that
the world can move on and that we settle on a default that serves as the
baseline for others to work against. Clarity was missing and I would
expect that now the actual assumptions will be clarified in policy, as
Russ already started in [2]. So there should be something to work
against for alternative systems.

Of course we can go and argue that only a small subset voted and ask
whether those people are the most active in the project. But I don't
think that this is a particularly useful distinction. For all we know,
the others did not care enough to vote (or were unable to for technical
reasons) and were thus ok with any outcome. Also, we welcome people to
join the project if they contribute in whatever way. And that comes with
a vote.

Kind regards
Philipp Kern

[1] I looked for a more neutral word here, but failed to find one.
Please give me the benefit of the doubt, given that I am a non-native
speaker.
[2] https://lists.debian.org/debian-policy/2019/12/msg00025.html



Re: Building Debian source packages reproducibly

2019-10-29 Thread Philipp Kern

On 2019-10-29 08:32, Tobias Frost wrote:

On Mon, Oct 28, 2019 at 05:53:00PM +, Ian Jackson wrote:

(...)


For example, you would not be able to do this:
   git clone salsa:something
   cd something
   make some straightforward change
   git tag    # } [1]
   git push   # }
Instead you would have to download the .origs and so on, and wait
while your machine crunched about unpacking and repacking tarballs,
applying patches, etc.



I'm missing an "and then I test my package to ensure it still works 
before upload" step…

I wonder how someone should test their packages when they do
not build it locally.
And if they do (as they should), the advantages you line
out are simply not there.



More abstractly we do not do that for binNMUs either. My main worry here 
is that we are designing a solution which still precludes sourceful 
no-change NMUs, which would actually be the correct solution for 
consistent versioning across all architectures. Ubuntu exclusively does 
those and I still struggle to see how we would build such a service in 
without facing exactly the same concerns as tag2upload. Maybe if dak 
itself would do it?


Kind regards
Philipp Kern



Re: Debian and our frenemies of containers and userland repos

2019-10-07 Thread Philipp Kern

On 2019-10-07 13:43, Johannes Schauer wrote:

Quoting Philipp Kern (2019-10-07 13:21:36)

On 10/7/2019 1:17 PM, Shengjing Zhu wrote:
> On Mon, Oct 7, 2019 at 6:29 PM Simon McVittie  wrote:
>> On Mon, 07 Oct 2019 at 07:22:53 +0200, Johannes Schauer wrote:
>>> Specifically, currently autopkgtest is limited to providing a read-only 
layer
>>> for certain backends and its upstream has no intention of widening the 
scope of
>>> the software [1]. This means that to upgrade an autopkgtest backend, the 
user
>>> has to apply backend-specific actions
>> I think "re-bootstrap, don't upgrade" is an equally good principle
> Why not have a repository for it, like dockerhub. So this becomes
> "pull latest build env", which saves lots of time("re-bootstrap" is still
> slow nowadays).
In that case it'd probably be better to make bootstrapping faster 
rather

than trusting random binaries on the internet. (Unless we grow an
"assemble an image from debs" service on, say, ftp-master.)


creating a working sbuild chroot takes 10 seconds on my system:

$ time mmdebstrap --variant=essential unstable debian-unstable.tar
[...]
mmdebstrap [...]   8.35s user 1.73s system 99% cpu 10.166 total

Do we need to spend engineering effort to become faster than that?


I suppose that also depends on deb caching/pipe bandwidth? But yes, I 
find that totally fine.


Downloading "random binary from the internet" is less of a problem if we
can create images which are bit-by-bit identical to checksums that we
can verify through a trusted service. This is also already provided by
mmdebstrap:

$ SOURCE_DATE_EPOCH=1570448177 mmdebstrap --variant=essential
unstable - | sha256sum
[...]
f40a3d2e9e168c3ec6270de1be79c522ce9f2381021c25072353bb3b5e1703d6  -
$ SOURCE_DATE_EPOCH=1570448177 mmdebstrap --variant=essential
unstable - | sha256sum
[...]
f40a3d2e9e168c3ec6270de1be79c522ce9f2381021c25072353bb3b5e1703d6  -


I think that's required, but not sufficient. That still depends on 
someone actually verifying this fact and publishing their proofs. 
Otherwise you need to do it yourself, or you risk a malicious binary 
being served only to your build process and not when you check 
interactively[0].


Lastly: yes, maybe "re-bootstrap, don't upgrade" is an equally good
principle. It has the advantage of not accumulating cruft. The
sbuild-createchroot command could gain an option which allows one to
replace an existing chroot. Currently, it refuses to work on already
existing chroots.


At work we always regenerate, except when testing multiple times locally 
in quick succession. Assuming that this is not totally wasteful (e.g. by 
caching the bootstrap debs locally), it seems like a good way to get 
predictable local build environments.


Kind regards and thanks
Philipp Kern

[0] It seems to be a standard technique these days to serve exploits 
only when the caller looks like the target environment. I.e. if you 
check the script you pipe into bash from a browser it looks fine; if you 
curl it into bash[1], it looks different.

[1] Yes, it seems like even that case can be identified.



Re: Debian and our frenemies of containers and userland repos

2019-10-07 Thread Philipp Kern
On 10/7/2019 1:17 PM, Shengjing Zhu wrote:
> On Mon, Oct 7, 2019 at 6:29 PM Simon McVittie  wrote:
>>
>> On Mon, 07 Oct 2019 at 07:22:53 +0200, Johannes Schauer wrote:
>>> Specifically, currently autopkgtest is limited to providing a read-only 
>>> layer
>>> for certain backends and its upstream has no intention of widening the 
>>> scope of
>>> the software [1]. This means that to upgrade an autopkgtest backend, the 
>>> user
>>> has to apply backend-specific actions
>>
>> I think "re-bootstrap, don't upgrade" is an equally good principle
> 
> Why not have a repository for it, like dockerhub. So this becomes
> "pull latest build env", which saves lots of time("re-bootstrap" is
> still slow nowadays).

In that case it'd probably be better to make bootstrapping faster rather
than trusting random binaries on the internet. (Unless we grow an
"assemble an image from debs" service on, say, ftp-master.)

Kind regards
Philipp Kern



Re: Mozilla Firefox DoH to CloudFlare by default (for US users)?

2019-09-28 Thread Philipp Kern
On 9/27/2019 12:23 PM, Florian Weimer wrote:
[...]
> So currently DoH is strictly worse.
> 
> Furthermore, you don't have a paid contract with Cloudflare, but you
> usually have one with the ISP that runs the recursive DNS resolver.
> 
> If you look at 
> 
>   <https://developers.cloudflare.com/1.1.1.1/commitment-to-privacy/>
> 
> you will see that the data is shared with APNIC for “research”:
> 
> | Under the terms of a cooperative agreement, APNIC will have limited
> | access to query the transaction data for the purpose of conducting
> | research related to the operation of the DNS system.
> 
> And:
> 
> | Specifically, APNIC will be permitted to access query names, query
> | types, resolver location
> 
> <https://developers.cloudflare.com/1.1.1.1/commitment-to-privacy/privacy-policy/privacy-policy/>
> 
> Typically, APNIC will only see a subset of the queries if you use your
> ISP's DNS resolver (or run your own recursive resolver).
> 
> Cloudflare only promises to “never sell your data”.  That doesn't
> exclude sharing it for free with interested parties.
It is probably worth pointing out that Firefox's use of Cloudflare's DoH
endpoint is governed by a different policy outlined here:

https://developers.cloudflare.com/1.1.1.1/commitment-to-privacy/privacy-policy/firefox/

Per that policy, other third parties can only get the data with
Mozilla's written permission. And APNIC (or any other third party) is
not mentioned.

Kind regards
Philipp Kern



Re: Git Packaging: Native source formats

2019-08-29 Thread Philipp Kern
On 8/29/2019 8:32 PM, Andrej Shadura wrote:
>> So `3.0 (native)' is not strictly better than 1.0.  dpkg-source
>> refuses to work in the situation where I am saying (and you seem to be
>> agreeing) that it shouldn't even print a warning ...
> 
> I have to disagree with you but I consider this strictly an
> improvement. Allowing native packages with non-native versions
> significantly increases complexity of code handling Debian source
> packages. Not even all Debian tools support this case; arguably it
> should not be supported at all as often leads to malformed packages
> being uploaded to the archive.

While this may be true on some level, it is also important to be able to
build packages from checked-out source trees (say, git repositories)
without an original source present.

For instance at work we check in whole Debian packages as-is (including
their non-native version) to fork and then modify them. Changing the
versioning scheme is pretty disruptive there. For people unfamiliar with
Debian the diff is already represented in the VCS and there is no
technical need to have this conveyed in the intermediate source package
representation that is only needed to feed the build to the build system.

Of course one workaround is to always build from the build tree and to
always specify -b/-B and never build a source package at all.
Unfortunately the various defaults in Debian's toolchain don't make that
as easy as it should be. Some can be addressed through wrapper scripts,
but then it's odd to anyone familiar with Debian.

Obviously I'm not bound to that format being "3.0 (native)" but some
"3.0 (dumb)" that just tars up the whole tree without caring about the
version scheme would then be nice to have as a replacement. ;-)
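For reference, the binary-only workaround looks roughly like this (no 
.orig tarball needed, at the price of never producing a source package):

$ cd something
# -b: binary-only build, so dpkg-source never looks for an .orig.tar;
# -us -uc: skip signing for a local test build.
$ dpkg-buildpackage -b -us -uc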

Kind regards
Philipp Kern



Re: Please stop hating on sysvinit (was Re: do packages depend on lexical order or {daily,weekly,monthly} cron jobs?)

2019-08-20 Thread Philipp Kern
On 8/20/2019 8:02 AM, Bjørn Mork wrote:
> Bernd Zeimetz  writes:
>> On 8/11/19 12:01 PM, Adam Borowski wrote:
>>>   restart|force-reload)
>>> log_daemon_msg "Restarting $DESC"
>>> do_stop
>>> sleep 1
>>> do_start
>>> log_end_msg $?
>>> ;;
>>>
>>> Yes, this particular case might fail on a pathologically loaded box or with 
>>> a
>>> very slow chip, but I don't know of a way to ask firmware if something is
>>> still winding down, thus what we ship is probably still sanest.
>>
>> yes, for firmware this makes sense, but I've also seen various sysv
>> scripts where people tried to guess the time a service needs to
>> shutdown, so some random sleep was added instead of handling it in a
>> sane way. This issues are luckily fixed forever with systemd - it just
>> knows, whats going on.
> 
> LOL! Please try to be serious.
> 
> https://www.google.com/search?q=%22stop+job+is+running%22

I'd wager that most of this is due to unit files autogenerated by the
sysv generator as well as daemon options not being sufficiently tightly
specced out in native unit files. After all, you do want to give daemons
some time to stop. But at least with systemd you know when the process
has exited.

Also I mostly saw this taking a long time around deactivation of devices
(swap, crypto). (Although I question why you'd disable swap at shutdown,
given the cost of getting everything back into RAM, but alas.)

Kind regards
Philipp Kern



Re: Building GTK programs without installing systemd-sysv?

2019-08-14 Thread Philipp Kern
On 8/14/2019 2:24 PM, Simon Richter wrote:
> The long term question remains though -- I dimly remember that we once had
> the same discussion about a library pulling in rpcbind, and that made a lot
> of people very unhappy at the time.

As Holger said: Then use a chroot. With policy-rc.d you can deny service
startup, which is also what the builders do.
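For the record, denying startup inside the chroot is a two-liner (exit 
code 101 means "action forbidden" in the policy-rc.d interface):

# Inside the chroot:
$ printf '#!/bin/sh\nexit 101\n' > /usr/sbin/policy-rc.d
$ chmod +x /usr/sbin/policy-rc.d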

Kind regards
Philipp Kern



Re: Building GTK programs without installing systemd-sysv?

2019-08-14 Thread Philipp Kern
On 8/14/2019 12:40 PM, Simon Richter wrote:
> I have a few users who do test builds of kicad on my server, so I'd like to
> provide the necessary build dependencies, but since a few days, the
> dependency chain
> 
> libwxgtk3.0-gtk3-dev
>   libwxgtk3.0-gtk3-0v5
> libgtk-3-0
>   libgtk-3-common
> dconf-gsettings-backend
>   dconf-service
> dbus-user-session
>   libpam-systemd
> systemd-sysv
> 
> stops gtk upgrades from happening because I have pinned systemd-sysv to
> -100 to avoid repeating the last unfortunate incident where I had to drive
> to the colo facility.

You want dbus-x11 instead of dbus-user-session then, I think.

Kind regards
Philipp Kern



Re: Please stop hating on sysvinit (was Re: do packages depend on lexical order or {daily,weekly,monthly} cron jobs?)

2019-08-08 Thread Philipp Kern

On 2019-08-08 14:43, Holger Levsen wrote:

On Thu, Aug 08, 2019 at 02:35:13PM +0200, Ondřej Surý wrote:

And there’s the problem.  If we keep with sysvinit as a baseline of
features provided by the init, we end up with just every init script
having something like this: [...]


it seems several people in this thread have missed the fact, that
sysvinit in Debian is maintained well again, eg there have been 17
uploads of it so far in 2019, see
https://tracker.debian.org/pkg/sysvinit/news/

(so I think the above fixes could all be made in one central place.)


A lot of the conflict between sysvinit and systemd was about philosophy. 
So the question boils down to what kind of feature development sysvinit 
*in Debian* is willing to do. If the answer is "we really want the shell 
scripts as they have been since the beginning of time - and that is the 
scope of sysvinit" (which would not be true either, I know), then we 
cannot have that discussion either.


That's also to some degree why I think a solution to this problem is for 
the init diversity folks to figure out and we should not block on that. 
And that seems fine given the scope they have set for themselves.


Kind regards
Philipp Kern



Re: Please stop hating on sysvinit (was Re: do packages depend on lexical order or {daily,weekly,monthly} cron jobs?)

2019-08-08 Thread Philipp Kern

On 2019-08-08 13:47, Ondřej Surý wrote:

Please stop hating on sysvinit


So, just to clarify…  so, it’s ok to hate systemd, but it’s not ok to
hate sysvinit (spaghetti of shell scripts)?


I don't think that's a constructive line of argument. At the same time 
it's not a race to the bottom in terms of features. I think our baseline 
should be thinking in terms of the features of the default we have.


I don't have a great answer about the added maintenance cost that 
sysvinit support puts on maintainers, which leads them to reject certain 
changes. I would like to say that I appreciate the work but personally 
do not care; however, I have learned the hard way that just keeping 
things working is a ton of work. And if you don't pay that cost, stuff 
keeps rotting. Some of that could be addressed with better integration 
testing. But at the same time it does not answer the question of who 
pays the cost of rebasing changes, especially as more upstream packages 
providing base services build around reliable systemd services. I feel 
like the answer is temporarily throwing out those parts if needed, or 
accepting that they are broken until they get fixed, but that might not 
be universally accepted, I guess.


Kind regards
Philipp Kern



Re: do packages depend on lexical order or {daily,weekly,monthly} cron jobs?

2019-08-07 Thread Philipp Kern

On 2019-08-07 18:51, Jeremy Stanley wrote:

On 2019-08-07 10:19:00 +0200 (+0200), Marc Haber wrote:

On Mon, 05 Aug 2019 22:29:41 +0200, Philipp Kern wrote:

[...]

> I'd still expect a Cloud/Compute provider to offer default
> images in any case that could be preconfigured appropriately.

I am one of those customers who almost never uses the images
offered by the hoster whereever this is possible. Guilty as
charged, your honor.

[...]

As soon as you start depending on multiple providers to spread out
the risk for your workloads, it almost becomes a necessity to bring
your own images if you want to be able to build consistent systems
across disparate providers. Even if they haven't unnecessarily
tampered with official distro images themselves, there's no
guarantee that the Debian images they offer are for the same point
releases/snapshot dates and so on.


Yeah, and that's fair. But as you professionalize that arrangement and 
mature the offering, you also end up finding all these kinds of fixes 
yourself to optimize your infrastructure. Probably around the same time 
where you are thinking about your own Debian image release pipeline.


Although I have to say that I find it annoying that every sysadmin out 
there needs to learn these things (and also the whole "what are all the 
bits you need to remove from an image" from a different thread) the hard 
way. Yes, it's a way to make us all individually more valuable, but 
maintaining a set of best practices on how to run a Debian desktop, 
laptop, or server would be great.


Kind regards
Philipp Kern



Re: do packages depend on lexical order or {daily,weekly,monthly} cron jobs?

2019-08-06 Thread Philipp Kern

On 2019-08-06 13:43, Bill Allombert wrote:

On Mon, Aug 05, 2019 at 10:29:41PM +0200, Philipp Kern wrote:

And finally, the load spikes: Upthread it was mentioned that
RandomizedDelaySec exists. Generally this should be sufficient to even
out such effects. I understand that there is a case where you run a lot
of unrelated VMs that you cannot control. In other cases, like laptops
and desktops, it is very likely much more efficient to generate the load
spike and complete the task as fast as possible in order to return to
the low-power state of (effectively) waiting for input.


This assumes the system has enough RAM and other resources to complete
all the jobs in parallel.


We are talking about cron.*. I feel like that's more of a scheduling 
problem. If you run a constrained machine, I'm sure you are already 
doing customization to accommodate the shortage, and you would not 
want the background cron jobs to kill your production workload?


Apart from that, systemd actually lets you limit resource usage and 
declare what to prioritize if you go the route of separate timers, even 
on a per-script basis.
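A sketch of what that could look like for a single job (unit name and 
limits invented for illustration):

# /etc/systemd/system/mycleanup.service
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/mycleanup
# Deprioritize against the production workload.
Nice=19
IOSchedulingClass=idle
CPUQuota=50%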


Kind regards
Philipp Kern



Re: do packages depend on lexical order or {daily,weekly,monthly} cron jobs?

2019-08-05 Thread Philipp Kern

On 2019-08-05 17:34, Ian Jackson wrote:

With current code the options are:

A. Things run in series but with concatenated output and no individual
   status.

B. Things run in parallel, giving load spikes and possible concurrency
   bugs; vs.

I can see few people who would choose (B).

People who don't care much about paying attention to broken cron
stuff, or people who wouldn't know how to fix it, are better served by
(A).  It provides a better experience.

Knowledgeable people will not have too much trouble interpreting
combined output, and maybe have external monitoring arrangements
anyway.  Conversely, heisenbugs and load spikes are still undesirable.
So they should also choose (A).

IOW reliability and proper operation is more important than separated
logging and status reporting.


If we are in agreement that concurrency must happen with proper locking 
and not depend on accidental linearization, then identifying those 
concurrency bugs is actually a worthwhile goal in order to achieve 
reliability, is it not? I thought you would be the first to acknowledge 
that bugs are worth fixing rather than sweeping them under the rug. We 
already identified that parallelism between the various stages is 
undesirable. With a systemd timer you can declare conflicts as well as a 
linearization if so needed.


I also question the "knowledgeable people will not have too much 
trouble". Export state as granularly as possible and there is no 
guesswork required. I have no doubt that my co-workers can do this. But 
I want their life to be as easy as possible.


Similarly I wonder what the external monitoring should be apart from 
injecting fake jobs around every run-parts unit in this case. Replacing 
run-parts with something monitoring-aware? Then why not take the tool 
that already exists (systemd)?


And finally, the load spikes: Upthread it was mentioned that 
RandomizedDelaySec exists. Generally this should be sufficient to even 
out such effects. I understand that there is a case where you run a lot 
of unrelated VMs that you cannot control. In other cases, like laptops 
and desktops, it is very likely much more efficient to generate the load 
spike and complete the task as fast as possible in order to return to 
the low-power state of (effectively) waiting for input. I suspect that 
there is a conflict between the two that could be dealt with by 
encouraging liberal use of DefaultTimerAccuracySec on the system-level. 
I understand that Debian inherently does not distinguish between the two 
cases. I'd still expect a Cloud/Compute provider to offer default images 
in any case that could be preconfigured appropriately.
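In timer terms the two knobs look like this (a sketch; the numbers are 
arbitrary):

# /etc/systemd/system/mycleanup.timer
[Timer]
OnCalendar=daily
# Fleet of VMs: spread runs over up to an hour to avoid global spikes.
RandomizedDelaySec=1h
# Laptop/desktop: allow coalescing with other wakeups instead.
AccuracySec=15min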


I apologize that I think of this in terms of systemd primitives. But the 
tool was written for a reason and a lot of thought went into it.


Kind regards
Philipp Kern



Re: do packages depend on lexical order or {daily,weekly,monthly} cron jobs?

2019-07-29 Thread Philipp Kern

On 2019-07-28 15:07, Ian Jackson wrote:

Marc Haber writes ("Re: do packages depend on lexical order or
{daily,weekly,monthly} cron jobs?"):

On Sat, 27 Jul 2019 19:02:16 +0100, Ian Jackson
>I worry about additional concurrency.  Unlike ordering bugs,
>concurrency bugs are hard to find by testing.  So running these
>scripts in parallel is risky.
>
>And, I think running cron.fooly scripts in parallel is a bad idea.
>The objective is to run them "at some point", not to get it done as
>soon as possible.  Running them in sequence will save electricity,
>may save wear on components, and will reduce overall impact on other
>uses of the same system.

I fully agree with that. However, moving away from "in sequence" thing
would greatly ease the migration to systemd timers, making it easier
to get away without crond on many systems.


Why can't systemd run cron.fooly as one big timer job rather than one
timer job for each script ?


Of course it could. But it's better for the administrator to export 
state as granularly as possible. That way the exit codes are exported into 
global systemd state and you can see exactly what is failing rather than 
a generic "something in cron.$foo failed, good luck finding the right 
logs". systemd also gives you the logs per timer unit, rather than in 
bulk, so the error is trivially visible rather than filtering a long log 
for what went wrong.


Think of this as an alert condition: I want to know if, say, 
popularity-contest failed and treat that with lower urgency than, say, 
debsums. One is a goodie and one is evidence of potential disk 
corruption. If the "cron.daily" script failed, I don't know whether it 
is urgent until I have crawled the logs.
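The granular state is then directly queryable, e.g. (a sketch; the 
service unit name is assumed for illustration):

# Which periodic jobs failed, and which unit exactly?
$ systemctl list-units --state=failed
# The logs of just that one job, instead of grepping a combined log:
$ journalctl -u popularity-contest.service --since=yesterday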



Obviously, I don't think it is a good idea to break this for
non-systemd users because of difficulties making it work properly
with systemd.  Perhaps I have misunderstood you ?


To be honest, that's something that the compatibility/init diversity 
folks then need to figure out.


Kind regards
Philipp Kern



Re: file(1) now with seccomp support enabled

2019-07-28 Thread Philipp Kern

On 2019-07-27 10:01, Vincent Bernat wrote:
Just a quick note: seccomp filters may need adaptations from one libc to
another (and from one kernel to another as the libc may adapt to the
current kernel). For example, with the introduction of the "openat"
syscall, the libc has started to use it for "open()" and the new
syscall has to be whitelisted. On the other hand, if you start
implementing seccomp filters late, you may have whitelisted only the
"openat" syscall while an older libc (or a current libc running on
older kernels) will invoke the "open" syscall.
"open" syscall.

I am upstream for a project using seccomp since a long time and I have
never been comfortable to enable it in Debian for this reason. However,
they enable it in Gentoo and I get the occasional patches to update the
whitelist (I am not doing anything fancy).


But technically it should be possible to test this in an autopkgtest, 
no? I don't think perfect has to be the enemy of good here, as long as 
we can detect breakage and remediate it afterwards?


Technically you cannot use any non-vendored libraries when enabling 
seccomp if you reason about it this way. Practically it mostly works 
except sometimes when the filters need to be adjusted. And as you can 
see Gentoo deals with that just fine and we could accept some breakage 
in unstable too, as long as the migration of the breaking library is 
stopped until the fix for the dependencies is in.
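Such a test could be as small as this sketch (file names per the 
autopkgtest layout; the test name is made up):

# debian/tests/control
Tests: seccomp-smoke
# "@" depends on the binary packages built from this source.
Depends: @

# debian/tests/seccomp-smoke
#!/bin/sh
set -e
# If the seccomp filter misses a syscall the current libc uses, this
# dies with SIGSYS instead of printing a file type.
file /bin/ls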


Kind regards
Philipp Kern



Re: file(1) now with seccomp support enabled

2019-07-27 Thread Philipp Kern

On 2019-07-27 03:55, Christoph Biedl wrote:

Vincas Dargis wrote...


On 2019-07-26 18:59, Christoph Biedl wrote:
> > tl;dr: The file program in unstable is now built with seccomp support
> > enabled, expect breakage in some rather uncommon use cases.

Interesting, what are these uncommon use cases? Maybe we could confine 
it

with AppArmor instead, since we have it enabled by default?


LD_PRELOAD ruins your day. From the kernel's point of view there is no
difference between a syscall coming from the actual application and one
coming from the code hooked into it. And while the syscalls done by the
first (i.e. file) are more or less known, the latter requires
examination of each and every implementation and whitelisting
everything. Eventually fakeroot-tcp wishes to open sockets, something
I certainly would not want to whitelist.


FWIW the same is true when you just link against libraries and they 
change their behavior. That makes things pretty brittle.


That being said: It feels like if you face this situation, you could 
also fork off a binary with a clean environment (i.e. without 
LD_PRELOAD) and minimal dependencies and only protect that with seccomp. 
Of course you lose the integration point of LD_PRELOAD that others might 
want to use if you do that, in which case I guess one could offer a flag 
to skip that fork.


In terms of prior art SSH also forks off an unprivileged worker to 
handle network authentication in preauth and only seccomps that one 
rather than its main process. But it's also not doing the environment 
cleanup AFAICS.


Kind regards and thanks for making all of us more secure! :)
Philipp Kern



Re: Dropping Release and Release.gpg support from APT

2019-07-15 Thread Philipp Kern

On 2019-07-09 20:53, Julian Andres Klode wrote:

we currently have code dealing with falling back from InRelease
to Release{,.gpg} and it's all a bit much IMO. Now that buster
has been released with an InRelease file, the time has IMO come for
us to drop support for the old stuff from APT!

Timeline suggestion
---
now      add a warning to apt 1.9.x for repositories w/o InRelease,
         but Release{,.gpg}
Aug/Sep  turn the warning into an error, overridable with an option (?)
Q1 2020  remove the code

My idea being that we give this a cycle in the Ubuntu 18.10 stable
release before we drop it, so people are ready for it.

Why remove it?
--
* It's annoying UX to have repositories with Release files and the
  "Ign" lines
* Handling the fallback from InRelease to Release{,.gpg} involves some
  abstractions and logic, and the less logic we have in security-relevant
  file fetching, the better


One thing worth noting in case we drop support for generating the files: 
It looks like choose-mirror (no bug found) and net-retriever (bug in 
[1]) in d-i still use Release and not InRelease. Found by investigating 
annoying file races internally that would have been solved by 
InRelease...


Kind regards
Philipp Kern

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=926035



Re: Dropping Release and Release.gpg support from APT

2019-07-10 Thread Philipp Kern

On 2019-07-10 10:04, Julian Andres Klode wrote:

On Wed, Jul 10, 2019 at 10:35:25AM +0800, Paul Wise wrote:

On Wed, Jul 10, 2019 at 2:53 AM Julian Andres Klode wrote:

> Timeline suggestion
> ---
> now add a warning to apt 1.9.x for repositories w/o InRelease, but 
Release{,.gpg}
> Aug/Sep turn the warning into an error, overridable with an option (?)
> Q1 2020 remove the code

[...]

We do need them to ship InRelease files. I just filed an issue for OBS
to do that. Given how long we had InRelease file, and how confusing it
is to not provide InRelease files (not to mention that it doubles the
traffic for no-change cases), I'm surprised they aren't using InRelease
files yet.


Given the timeline, shouldn't we also get oldstable to ship an InRelease 
file?


Kind regards
Philipp Kern



Re: mandatory source uploads

2019-07-08 Thread Philipp Kern

On 2019-07-08 09:14, Thomas Goirand wrote:

On 7/8/19 12:34 AM, Scott Kitterman wrote:
As long as your build-depends are properly versioned, why can't you just
upload all the source and let wanna-build sort it out?

[...]

This means that I have to baby-sit the Debian archive and upload
everything in the correct order, waiting for the previous upload to be
accepted and online.


Not really. wanna-build will only schedule the build if the build 
dependency is satisfiable. So if you encode everything correctly there 
(which might be hard, cf. circular build dependencies), wanna-build 
will schedule in the correct order by itself.
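Encoding the order is then just a matter of versioned Build-Depends in 
debian/control, e.g. (package names hypothetical):

Source: python-foo
# The versioned dependency keeps wanna-build from scheduling this
# build before the new python3-bar has been accepted and built.
Build-Depends: debhelper-compat (= 12), python3-bar (>= 2.0~)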



BTW, one very important thing: are the buildds configured to use
incoming at least? If so, that probably could be bearable.


And of course buildds use incoming and so does wanna-build.

Kind regards
Philipp Kern



Re: mandatory source uploads

2019-07-08 Thread Philipp Kern

On 2019-07-08 00:34, Scott Kitterman wrote:

On Sunday, July 7, 2019 6:30:58 PM EDT Thomas Goirand wrote:

On 7/7/19 3:16 PM, Holger Levsen wrote:
> Hi,
>
> On Sun, Jul 07, 2019 at 02:47:00AM +0100, Jonathan Wiltshire wrote:
>> Shortly before the end of the 6th July, we released Debian 10, "buster".
>
> *yay* *yay* & *yay*!
>
>> No binary maintainer uploads for bullseye
>> =
>>
>> The release of buster also means the bullseye release cycle is about to
>> begin. From now on, we will no longer allow binaries uploaded by
>> maintainers to migrate to testing. This means that you will need to do
>> source-only uploads if you want them to reach bullseye.
>>
>>   Q: I already did a binary upload, do I need to do a new (source-only)
>>   upload? A: Yes (preferably with other changes, not just a version
>>   bump).
>>
>>   Q: I needed to do a binary upload because my upload went to the NEW
>>   queue,
>>
>>  do I need to do a new (source-only) upload for it to reach bullseye?
>>
>>   A: Yes. We also suggest going through NEW in experimental instead of
>>   unstable>>
>>  where possible, to avoid disruption in unstable.
>
> whh, that's *totally* awesome news! loving it.

I don't. I don't fee happy of this at all, and here's why.

I have 150 OpenStack packages waiting in Experimental, built for the
OpenStack Stein release. OF COURSE, they all are inter-dependent, and 
to
build a given package, you probably need the latest version of another 
one.


So, instead of preparing them all (build them all for Unstable and
upload at once, using sbuild and --extra-package when needed), it means
that I'll have to build them one by one, upload, wait for the next dak
run to have a new package in, then go to the next. With the number of
packages, this probably can take 3 weeks to a month instead of a single
week like I planned.

Also, the result, it's *less nice* for Sid/Bullseye users, because the
transition will be super long if I do this way.

The other alternative is to build all like I planned, upload all to
Unstable, then rebuild all again, and do a 2nd upload (source only this
time). There, I'm also losing a lot of time for no valid technical
reason, which isn't nice at all either. I feel like I'm going to be
doing all of this during all of debcamp / debconf, which isn't fun at
all; I had planned other stuff to do there.

Advice on what's the best way would be welcome.

I also very much would prefer if this wasn't announced just like this,
without giving any amount of time to prepare for the thing and discuss
it. That's not the first time the release team has done it this way, and
that's really not the best way to do things. (If I missed the
discussion, then IMO it wasn't advertised enough, which has the same
effect.)

I very much salute the source-only enforcement, but I really don't think
this was thought through completely.


As long as your build-depends are properly versioned, why can't you just
upload all the source and let wanna-build sort it out?


That'd assume that there are no circular dependencies. I take it that 
they are all arch:all? (Because otherwise wanna-build would already need 
to figure it out for you to build on other architectures.)


Kind regards
Philipp Kern



Re: Content Rating System in Debian

2019-06-25 Thread Philipp Kern

On 2019-06-25 09:31, Philip Hands wrote:

Russ Allbery:
It sounds like a whole ton of work to get a useful amount of coverage
(not to mention bothering upstreams with questionnaires that I suspect
many of them would find irritating -- I certainly would with my upstream
hat on), and I'm not clear on the benefit.  Do you have some reason to
believe that this is a common request by users of Debian?  If so, could
you share with us why you believe that?

I'm discussing a CRS inspired by Google Play.

Do Google Play not pay IARC for this?


App developers are generally forced to self-rate their apps, otherwise 
they disappear from the store.


Kind regards
Philipp Kern



Re: ZFS in Buster

2019-06-07 Thread Philipp Kern
On 6/6/2019 8:09 PM, Aron Xu wrote:
> Key interest in the thread is getting some insights about how to deal
> with the awkward situation that affects ZFS experience dramatically -
> Linux will remove those symbols in LTS kernel release, although
> in-kernel symbols are never promised to be stable. I've been in touch
> with ZoL upstream to listen to their plans and wishes, so that we
> (Debian ZoL people) can take actions that serve best for our users and
> community.

I will note that in terms of prior art Debian has in the past always
prioritized freeness over performance. Whenever there are binary blobs
involved to improve performance, we opted not to ship them unless they
could be reproduced using free software and have their source included.

Of course in that case people were still free to install the binary
blobs from non-free, assuming that the blob was in fact distributable.
This would not be the case here. But crippling the performance would be
indeed an option, even though this would make Debian much less relevant
for ZFS deployments and people would just go and use Ubuntu instead.

Still, crippling performance would still provide a lever and motivation
for upstream to come up with a way to disable the FPU on every supported
architecture one by one (if only on the most important one), even if
it's brittle and hacky. I personally wonder why a kernel which provides
a module interface does not provide a way to save FPU state, but alas,
they made their decision.

In the great scheme of things doing things the slow way has forced
certain progress to happen in the past when it was painful enough. The
question I wonder about is whether we are relevant enough here to push
the needle, or whether essentially all other distributions (say Ubuntu)
will dare not to follow upstream here and carry a patch forever.

Kind regards
Philipp Kern



Re: @debian.org mail

2019-06-07 Thread Philipp Kern
On 6/6/2019 12:49 PM, Bjørn Mork wrote:
> Daniel Lange  writes:
> 
>> We have more people registered for DebConf ("the Debian Developers'
>> conference") with @gmail.com than @debian.org addresses.
> 
> You can't fix @gmail.com.  It is deliberately broken for commercial
> reasons, and that won't stop with SPF and DKIM.  Anti-spam is just the
> current selling excuse for moving users to a closed, commercially
> controlled, messaging service.
> 
> Document that @gmail.com doesn't work and ask anyone subscribed with
> such an address to resubscribe using an Internet email service.
> 
> You might want to make a press announcement out of it, to prevent other
> service providers from making the same mistake Google has made.

It does not only affect @gmail.com but all other email hosted by Google,
too. And you cannot see that from just the domain name. Thus I have
already given up on trying to mail to destinations other than
@debian.org with my @debian.org account.

So yes, you can proclaim that, but it still makes the @debian.org email
address increasingly useless. The requirement essentially boils down to
using DKIM if you want your emails delivered. There already have been
some suggestions in this thread.

Kind regards
Philipp Kern



Re: Realizing Good Ideas with Debian Money

2019-06-01 Thread Philipp Kern
On 5/31/2019 11:04 PM, Luca Filipozzi wrote:
> Before you ask: an insecure hypervisor is an insecure buildd.

Are we then looking more closely at AMD-based machines, given that those
had fewer problems with speculative-execution attacks?

Kind regards
Philipp Kern



Re: Preferred git branch structure when upstream moves from tarballs to git

2019-05-02 Thread Philipp Kern
On 5/2/2019 7:11 AM, Ben Finney wrote:
> Conversely, I think it does a disservice to downstream users to mix in
> Debian packaging changes with upstream changes. The separation is useful
> and much easier to maintain when the repositories are separate.

To be honest the greatest disservice is that we cannot standardize on a
single way so that a downstream can just pull all of them and build from
them. Instead you need humans analyzing how every single one of them works.

Kind regards
Philipp Kern



Re: is Wayland/Weston mature enough to be the default desktop choice in Buster?

2019-04-22 Thread Philipp Kern
On 4/6/2019 11:41 PM, Guillem Jover wrote:
> Sure, and I've tried libinput with X.Org and for me it's the same subpar
> experience as on Wayland. The difference is that with X.Org I can install
> the synaptics driver.

I think it'd be worthwhile to try and articulate a bug report for
libinput as to what is sub par about it - apart from being subtly
different from synaptics. In particular I have found them to be very
responsive. Some touchpads do need quirks, and there is documentation on
how to record and replay touchpad input[1] for debugging purposes.

Both Linux (with libinput) and Windows (with "Precision" drivers) now go
the way of processing the events in software only rather than having
proprietary processing in the touchpad. Once the kinks are ironed out
that should actually yield a better, more consistent experience.

Kind regards and thanks
Philipp Kern

[1] https://wayland.freedesktop.org/libinput/doc/latest/tools.html



Re: Hurd-i386 and kfreebsd-{i386,amd64} removal

2019-04-13 Thread Philipp Kern
On 4/13/2019 12:49 PM, Aurelien Jarno wrote:
> The process to inject all packages to debian-ports is to get all the
> deb, udeb and buildinfo files from the archives (main and debug) and
> associate them with the .changes files that are hosted on coccia. We'll
> also need to fetch all the associated GPG keys used to sign the changes
> files. Then we can inject that in the debian-ports archive.
I'm curious how the GPG bit works given that there is no guarantee that
the signature can be validated at any other point in time than ingestion
on ftp-master - especially considering the rotation/expiry of subkeys
and buildd keys. In this case the files already come from a trusted
source and should be ingested as-is, I guess? (Not that I particularly
like the fact that it's only a point in time validation.)

Kind regards
Philipp Kern



Re: FYI/RFC: early-rng-init-tools

2019-02-24 Thread Philipp Kern
On 2/24/2019 8:52 PM, Thorsten Glaser wrote:
> In buster/sid, I noticed a massive delay booting up my laptop
> and some virtual machines, which was reduced by hitting the
> Shift and Ctrl keys multiple times randomly during boot; a
> message “random: crng init done” would appear, and boot would
> continue.
> 
> This is a well-known problem, and there are several bugs about
> this; new in buster/sid compared to stretch is that it also
> blocks urandom reads (I was first hit in the tomcat init script
> from this). This is especially noticeable if you use a sysvinit
> non-parallel boot, but I’m sure it also affects all others.

FTR this is supposedly fixed on the main architectures featuring an RNG
in the CPU by linux 4.19.20-1, which enabled RANDOM_TRUST_CPU. Which Ben
announced on this list[1] earlier this month.

Kind regards
Philipp Kern

[1] https://lists.debian.org/debian-devel/2019/02/msg00170.html



Re: Namespace for system users

2019-02-10 Thread Philipp Kern
Hi,

On 2/9/2019 6:02 PM, Sean Whitton wrote:
> On Sat 09 Feb 2019 at 01:51PM +01, Guillem Jover wrote:
> 
>> To that effect I sent a patch to adduser to allow these in #521883,
>> but it seems that's stuck. :/
>>
>>> How do others deal with this problem? Could someone think of a viable
>>> approach on how to approach this from a policy side?
>>
>> Unfortunately, last time it looked like there was some push back, due
>> to there not being a clear winner in "current" practice at the time
>> AFAIR. I think a way forward would be to get that adduser patch merged,
>> then keep promoting the underscore usage, and possibly try to switch
>> existing users to use that.
> 
> ISTM to me we have a consensus, at least, that new packages with system
> users should use the underscore prefix convention.  There isn't a
> consensus on what to do about old packages, but Policy can be written in
> such a way to refer only to new packages with system users.

that sounds great to me. I think we should finally come up with a
solution and flesh out how to grandfather in the old packages, while
nudging them to adopt a new scheme if possible. Marco's approach is
ultimately correct in that maintainers of packages with existing system
users should evaluate if something can be done - but it might well be
that it is pretty much impossible to fix for some of the packages. And
that's fine.

I do wonder if it would be possible to solve some of the rename cases
with some sort of dpkg-maintscript-helper so that not everyone needs to
figure this out on their own, but I fear that this could easily rathole
into an overly generic solution that supports all cases - which would
not be useful.
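A minimal sketch of what such a postinst-time rename could look like,
assuming the daemon is stopped at that point (user name and version
invented for illustration):

# postinst snippet, only on upgrades from before the rename.
if [ "$1" = configure ] && dpkg --compare-versions "$2" lt-nl 2.0-1; then
    if getent passwd foo >/dev/null && ! getent passwd _foo >/dev/null; then
        usermod --login _foo foo
        groupmod --new-name _foo foo || true
    fi
fi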

I did a small evaluation of the set of existing users created by
packages in sid and put it at [0]. It's a large list of ~300 users to
exclude, even when skipping the ones with dashes and underscores in
them. It'd be great to stop the bleeding here, though.

It's a bit sad that the policy bug #248809 did not go anywhere with the
last update happening in 2008. And obviously the list is now much larger
than the list compiled by Vincent back then. Is that the bug in which we
should continue this discussion for the policy change?

> Ideally the adduser change would happen before we wrote this down in
> Policy, but since the adduser behaviour is easy to workaround (IIRC), it
> would not be required for it to happen first.

The former maintainer of the package seems to have been sympathetic to
the patch in [1], too.

Kind regards and thanks
Philipp Kern

[0] https://people.debian.org/~pkern/permanent/userlist.txt -- Obviously
this still contains some variables at the top that would need manual
analysis. I also ignored all of OpenStack which seems to have its own
way of shipping a shell library in every postinst script that calls adduser.
[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=521883#38



Namespace for system users

2019-02-09 Thread Philipp Kern

Hi,

at work we have a large fleet of Debian machines, but also more than 
200k user accounts with no reuse and somewhat painful rename 
experiences. Obviously an increasing number of accounts leads to a much 
increased risk of collisions with system users as created by Debian 
packages.


Of course it is easy to precompile a basic list to ban users from taking 
names like postfix, bind, or sshd. But it will never be exhaustive, 
packages are still free to come up with random names and users are free 
to install them and see things break.


Some core packages recently adding system users resorted to names like 
systemd-$daemon and _apt, which both address my concerns - as you can 
come up with simple rules like "no user may include [-_] in their 
username". On the other hand I know that Debian-* was painful and 
annoying for exim, but I suspect mostly because of the length of the 
username and tools dealing poorly with >8 character usernames. I think 
FreeBSD (among others?) picked the underscore at the front of the 
username. Intuitively that feels like a somewhat clean proposal that is 
also friendly to derivatives.
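In practice that would boil down to something like the following
(hypothetical service "food"; it needs a name policy that permits the
leading underscore, cf. the adduser patch in #521883):

# Create a dedicated system user and group for the daemon.
$ adduser --system --group --home /var/lib/food --no-create-home _food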


How do others deal with this problem? Could someone think of a viable 
approach on how to approach this from a policy side?


Kind regards and thanks
Philipp Kern



Accepted icon-naming-utils 0.8.90-4 (source) into unstable

2018-12-27 Thread Philipp Kern
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Format: 1.8
Date: Thu, 27 Dec 2018 15:59:52 +0100
Source: icon-naming-utils
Binary: icon-naming-utils
Architecture: source
Version: 0.8.90-4
Distribution: unstable
Urgency: medium
Maintainer: Philipp Kern 
Changed-By: Philipp Kern 
Description:
 icon-naming-utils - script for maintaining backwards compatibility of Tango 
Project
Closes: 575303 916618
Changes:
 icon-naming-utils (0.8.90-4) unstable; urgency=medium
 .
   * Mark the package as multi-arch: foreign. (Closes: #916618)
   * Stop using cdbs and quilt. Switch to dh and source format 3.0.
   * Link to tango.freedesktop.org as the package's homepage.
 (Closes: #575303)
Checksums-Sha1:
 8b550d2c5f6328be0c0196a36693abe996a02ae8 1533 icon-naming-utils_0.8.90-4.dsc
 8e6c021bb5967e35e2cc9e0e6607cc6d18a7138b 4924 
icon-naming-utils_0.8.90-4.debian.tar.xz
 86493393f45295f39079f46b99ac3ba72addc5c7 5177 
icon-naming-utils_0.8.90-4_amd64.buildinfo
Checksums-Sha256:
 dffe135f38b3905d6b62c6649319c2f8bbca8f23e7923641974d870d25c7d9e0 1533 
icon-naming-utils_0.8.90-4.dsc
 1fac18b756995b2bf3bc4a3b1c44a0c22c11e2782f338bb7aaf11495f9911c12 4924 
icon-naming-utils_0.8.90-4.debian.tar.xz
 59d40e1ac0399a4bb3bdb0eaa59f5ac15d778c82d50543c1e9bdc35f5d09265e 5177 
icon-naming-utils_0.8.90-4_amd64.buildinfo
Files:
 bd65c84384d10cf7a37144969930205d 1533 x11 optional 
icon-naming-utils_0.8.90-4.dsc
 fe934f9a5ed5741a0f33e3428382d533 4924 x11 optional 
icon-naming-utils_0.8.90-4.debian.tar.xz
 cd03f8e02cfdf3acfbd3d5fbcc8adab3 5177 x11 optional 
icon-naming-utils_0.8.90-4_amd64.buildinfo

-----BEGIN PGP SIGNATURE-----

iQFFBAEBCgAvFiEEPzuChCNsw7gPxr3/RG4lRTXQVuwFAlwk7EARHHBrZXJuQGRl
Ymlhbi5vcmcACgkQRG4lRTXQVuz2LQgAukKGQkklry+ylsVziDTnbfoZUhfYP04F
BzRgKmO+1V6L6uoPHUgTUlQtbAZLAFeoGZ8A9jR9x7QqyPBUg5ZQQxKxcPVOQptF
gU37MYLKcg628KK9RMdq4AWsGR7m00mewiTX0aNoMF6SzotrmBwAq21zvBp/dpC/
f9ZZ1VrjLEFpSNC1R29Wb9hNp11Rk+CFnoqRRfcdaYY7438GNl9zG/lbsaGDsnZF
U0gG+fxo5Cs71cchjpIquW86D6OGwKxxrwNghGlKErWiySX5oOiDtolm513ythWJ
4o/3TVGZdgJFmPtZvdjPqVQDA2dNvLNgVRJ7tcXi8Az4tCErdnlvHA==
=91nn
-----END PGP SIGNATURE-----



Re: Proposal: Repository for fast-paced package backports

2018-12-26 Thread Philipp Kern

On 26/12/2018 19:31, Dominik George wrote:

   - The package must not be in testing, and care must be taken for the
 package not to migrate to testing.

So what would a user of testing do? Will there be a $codename-volatile[1]
suite for testing users? Or would they directly install unstable with no
other pre-release staging ground? (Which seems like a bad idea.)


The testing distribution and this concept contradict each other. The
testing distribution is the least supported stage in the Debian release
cycle, and if used, shall only be used for testing the next Debian
release. Especially, there are no timely security updates in testing,
neither by the security nor by the maintainer. So using it for anything
other than testing the next Debian release in a laboratory is a bad
idea.

This proposal, however, is about providing fast-paced software as an
exception on an otherwise stable system.

So if anyone wants to test fast-paced software within a Debian testing
system, they should use the package from unstable, yes. (Having testing
on a system without the sid sources is close to impossible outside the
freeze anyway.)


Of course you can use testing without having sid available; that's the 
point of testing[1]. I think there's a misunderstanding about testing 
lurking here, too. And backports' requirement for the package to be in 
testing is also there to get a sane upgrade path from stable to 
next-stable. So you have to support your thing alongside testing, too - 
at least during the freeze, but optimally shortly after the release has 
been cut.



Similarly what are the constraints you set for upgrading, if any? How far
back will upgrades work and how current do users need to keep their system
in order to still be able to upgrade? For one, I think you will need to set
expectations here towards the maintainers if a package is never included in
a stable release, as they get very muddy otherwise. Plus you need to set
expectations for the users as the next package (maybe not gitlab) might come
up with requirements that upgrades need to go through every version on the
way to, say, update the database properly. And that's hardly supportable
unless everyone knows to update quickly.


That certainly is something maintainers need to care about. (I consider
a software not supporting skipping versions on upgrade a bug, anyway.)

But most importantly, this is nothing that is specific to the -volatile
proposal - it is the same for regular backports. So discussing this
general backporting issue should not be mixed with the discussion of
this proposal.


For backports the general supportability assumption is that you provide 
a sane upgrade path from stable to the backports and from the backport 
to the next stable (optimally the same package). Once you take the 
presence of the stable package out of the mix, it becomes weird. How 
long do you need to preserve compatibility code? How does an agile 
package that does not fit Debian's release cycle cater to these 
requirements?


Just discounting that on the grounds of "that's normal for backporting" 
when it's unique to your proposal is not quite satisfactory to me.


Kind regards
Philipp Kern

[1] You can make the argument that there's a problem with security 
updates. But that's why the urgency system exists and maintainers can 
declare that a certain package needs to migrate with minimal waiting 
time. And most of the time (not always) the exploits start to 
materialize later.




Re: Proposal: Repository for fast-paced package backports

2018-12-26 Thread Philipp Kern

On 25/12/2018 21:46, Dominik George wrote:

Requirements for a package to go into stable-volatile
=====================================================

The new volatile proposal is not intended to ease life for package
maintainers who want to bypass the migration and QA requirements of the
regular stable lifecycle, so special need must be taken to ensure only
packages that need it go into volatile. I want to summarise the
requirements like so:

  - The package must be maintained in unstable, like every other package.
  - The package must not be in testing, and care must be taken for the
package not to migrate to testing.
So what would a user of testing do? Will there be a 
$codename-volatile[1] suite for testing users? Or would they directly 
install unstable with no other pre-release staging ground? (Which seems 
like a bad idea.)


Similarly what are the constraints you set for upgrading, if any? How 
far back will upgrades work and how current do users need to keep their 
system in order to still be able to upgrade? For one, I think you will 
need to set expectations here towards the maintainers if a package is 
never included in a stable release, as they get very muddy otherwise. 
Plus you need to set expectations for the users, as the next package 
(maybe not gitlab) might come with the requirement that upgrades go 
through every version on the way in order to, say, update the database 
properly. And that's hardly supportable unless everyone knows to update 
quickly.


Kind regards
Philipp Kern

[1] I would like to re-register my objection to that name for the same 
reason Holger stated: it is confusing to reuse an older name (which, by 
the way, started outside of Debian, too, and was then merged in) with a 
new concept.




Re: Conflict over /usr/bin/dune

2018-12-18 Thread Philipp Kern
On 18.12.2018 18:48, Ian Jackson wrote:
> But overall I think this, plus the history of the ocaml program's
> name, does demonstrate that the ocaml program's claim to the overall
> software name `dune', and the command name `dune' is incredibly weak.
> 
> I just checked and `odune' seems to be available.  For a build tool a
> reasonably short name is justified.  The `o' prefix is often used with
> ocaml and though there is of course a risk of clashes with both
> individual programs and with some suites like the old OpenStep stuff,
> it seems that `/usr/bin/odune', odune(1) et al, are not taken.

But then again it's a build tool that actually needs to be called by its
name on the console (just like the node mess). whitedune is a GUI
program that could have any name as long as it's obvious from the
desktop metadata; in fact, its webpage disappeared and it hasn't seen
a new upstream version since 2011. And the C++ library doesn't seem to
have a CLI name claim at all.

I suppose it's mostly because we package all free software on the
planet that we become an arbiter of names. But we should try not to be
one if we can avoid it.

Kind regards
Philipp Kern



Re: Bug#915050: (gitlab) Re: Bug#915050: Keep out of testing

2018-12-18 Thread Philipp Kern

On 2018-12-18 18:40, Pirate Praveen wrote:

On 12/18/18 8:41 PM, Holger Levsen wrote:

On Tue, Dec 18, 2018 at 08:38:39PM +0530, Pirate Praveen wrote:
But if that is not possible, volatile as a separate archive is also 
fine.


instead of volatile we need PPAs.


I think a redefined volatile is the best option for sharing work. But
PPA approach is best in case of conflicts.

I'm leaning towards volatile and hence I proposed it. If you feel
strongly about PPAs, please propose and drive it. Either option will
work for me.


Just for the record - I know you say "a redefined volatile" - and as a 
former volatile team member: This would in no way have been suitable for 
volatile either. Just like with backports, the assumption is that the 
packages are up to Debian's quality standards and that we make the 
result available to users of stable earlier. Volatile's mission 
statement was keeping software like virus scanners actually useful 
while doing minimal changes to stable, and that was the main reason for 
folding it into the regular release as well as creating -updates to 
have a timely push mechanism.[1]


In the Ubuntu PPA case you get free rein over what's in that archive 
and what you backport as part of offering the package. Obviously this 
might conflict with the existing set. But the same is true for a 
centralized volatile archive if you need to backport a large set of 
build dependencies to make the whole thing work in the first place. And 
I'm sure you wouldn't just go for a gitlab source package with bundled 
build dependencies.


Now the policy question of who can ship what in a way that looks 
semi-officially as being part of Debian is tricky. I personally find the 
notion that testing should just be the staging ground for the next 
release to be unfortunate but at the same time know how we ended up with 
it. Maybe there's a place for packages that cannot usefully be supported 
for a stable release and hence need to live in parallel. But then again 
I wonder if the answer wouldn't be an archive where the input is built 
for all suites and where the dependencies are bundled - if only because 
you'd track upstream closely and would through that (hopefully) pull in 
necessary security updates.


Kind regards
Philipp Kern

[1] And to some degree I am unhappy with the backports team's antagonistic 
view on volatile here. Stuff like gitlab would have been rejected in the 
same way it would've been in backports. The useful bits live on, it 
wasn't abandoned to cause more work for backports. At the same time 
backports can serve as a replacement of a subset of use cases too, where 
its rules fit just fine.




Re: Should libpam-elogind Provide libpam-systemd ?

2018-11-05 Thread Philipp Kern
Hi Simon,

On 03.11.2018 12:11, Simon McVittie wrote:
> However, note that if you want multiple parallel dbus-daemons per uid,
> in particular one per X11 display, then dbus-user-session is not for you,
> and you should continue to use dbus-x11 or some third party implementation
> of the dbus-session-bus virtual package instead.

we learned this the hard way and then people are confused about
systemd's session management not coming up. Do you know if any idea was
already floated somewhere on how to make this work? I.e. have multiple
systemd user instances per user?

In our case a remote desktop is spawned in parallel to the regular one,
in the background. That makes for all sorts of weird behavior (what if
you want to have different settings in one of your two sessions?) and
dbus-user-session is only a part of it. But it's something users seem to
slowly expect to be working...

Kind regards and thanks
Philipp Kern



Accepted choose-mirror 2.95 (source) into unstable

2018-10-23 Thread Philipp Kern
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Format: 1.8
Date: Tue, 23 Oct 2018 14:15:45 +0200
Source: choose-mirror
Binary: choose-mirror choose-mirror-bin
Architecture: source
Version: 2.95
Distribution: unstable
Urgency: medium
Maintainer: Debian Install System Team 
Changed-By: Philipp Kern 
Description:
 choose-mirror - Choose mirror to install from (menu item) (udeb)
 choose-mirror-bin - Choose mirror to install from (program) (udeb)
Changes:
 choose-mirror (2.95) unstable; urgency=medium
 .
   * Team upload
 .
   [ Philipp Kern ]
   * Update Mirrors.masterlist.
   * Bump maximum suite length from 32 to 128. If you try to install
 against some sort of snapshotted suite, you easily end up with longer
 strings and the additional memory use is very minimal.
 .
   [ Updated translations ]
   * Marathi (mr.po) by Nayan Nakhare
   * Latvian: punctuation fixes
Checksums-Sha1:
 5f265b473b2440661e2227d51c0123b1de9435d3 1546 choose-mirror_2.95.dsc
 a9b66f33fb4fec6452633245636353e070522429 187140 choose-mirror_2.95.tar.xz
 df492ab6198e9b37e90e3cce173bddd944717a38 5799 choose-mirror_2.95_amd64.buildinfo
Checksums-Sha256:
 43994c842dfe430d14c34b40d042f53fb3fa2cc3fc0138aeb83ecdeaa58cf690 1546 choose-mirror_2.95.dsc
 cfa6742dc9930c137da8606c5e88c1a259e5c2f2ea2db78f230449f3c35a47a4 187140 choose-mirror_2.95.tar.xz
 85807e107b273fcb978d9415912885378fcb3a8a0778c317f93beafd56cab445 5799 choose-mirror_2.95_amd64.buildinfo
Files:
 595c2580b3cdc2b459b3c180e360d35b 1546 debian-installer optional choose-mirror_2.95.dsc
 d1dfae3f5a186e0194c8e784a043aa7a 187140 debian-installer optional choose-mirror_2.95.tar.xz
 c6b21744c49538ea341f00172b1894e8 5799 debian-installer optional choose-mirror_2.95_amd64.buildinfo

-----BEGIN PGP SIGNATURE-----

iQFFBAEBCgAvFiEEPzuChCNsw7gPxr3/RG4lRTXQVuwFAlvPE4oRHHBrZXJuQGRl
Ymlhbi5vcmcACgkQRG4lRTXQVuznOwgAgRXfJB7G1VXgX0bz/jrq11VNMVqGAdyz
k4as6po+PW8/DGSi5K7+0HXSdmYSCUMRTqvgbrMZ31b4IDskcq8ohX3rmIDd51gB
tO4xokgOm3X60054FhiUI6mfhLqgsds7TArqPXNXw89lTmutd2i+0mgY881oaj7q
C07T6x5OOz+jgIcJiSuDdlvJoyV+H6MlYGsoWBMMNu/a2qG6RY8Nsfz1dbUDilLk
7FrMGxFMxC0w0i94gos8ML317dQL3PN/KuGLCnX9KipLNUqwvhKGYAgKKrnGxfk8
VW21euW78JYn6i+8+ehZsrUe1JwrQdUFW1OCc+gxPkstz1+MhL+D4w==
=Cv29
-----END PGP SIGNATURE-----



Re: Debian Buster release to partially drop non-systemd support

2018-10-19 Thread Philipp Kern

On 2018-10-19 08:39, Narcis Garcia wrote:

On 18/10/18 22:07, Bernd Zeimetz wrote:
For my packages I can state that I do not have a single machine which 
is not using systemd - and to be honest - I won't waste my time in
writing/debugging initscripts.

Most people want to use a GNU operating system.
You particularly seem to only need a Systemd operating system.


So what you want is https://www.gnu.org/software/shepherd/?

Kind regards
Philipp Kern



Re: Debian Buster release to partially drop non-systemd support

2018-10-17 Thread Philipp Kern
I suppose one answer would be a cron generator. Given that policy
specifies naming schemes for /etc/cron.{hourly,daily,weekly,monthly,d},
there could probably be a mapping from filename to timer. But the cron
language itself does not contain identifiers, so there's still the
question of what to do if you encounter multiple lines and how you'd map
them. Init scripts with their 1:1 mapping and metadata headers were much
easier to handle. Plus we'd need a way to tell cron not to execute those
cronjobs in case systemd is running, which I guess means adding systemd
guards to /etc/crontab (a conffile). And you'd of course still have cron
wake up to execute the shell statement.
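
To make that concrete, a generator could turn /etc/cron.daily/foo into
a unit pair along these lines (a sketch with made-up unit names, not an
existing generator):

  # cron-daily-foo.service (hypothetical generator output)
  [Unit]
  Description=Generated from /etc/cron.daily/foo

  [Service]
  Type=oneshot
  ExecStart=/etc/cron.daily/foo

  # cron-daily-foo.timer (hypothetical generator output)
  [Timer]
  OnCalendar=daily
  Persistent=true

That only works because the filename provides an identifier to hang the
unit name on; for /etc/crontab lines there is no such handle.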

But it's not like timer units are forbidden. Just like my
introductory statement of "but if you use a different system not
considered an init system, you are fine", there's nothing in policy
mandating periodic jobs to work in a particular way. It just talks about
what to do if you do ship a cronjob.

Kind regards
Philipp Kern



Re: Debian Buster release to partially drop non-systemd support

2018-10-16 Thread Philipp Kern

On 2018-10-16 14:36, Ian Jackson wrote:

Philipp Kern writes ("Re: Debian Buster release to partially drop
non-systemd support"):

Could someone reiterate what the current state of init diversity
is supposed to be? Is it assumed to be best effort, with every maintainer
required to ship an init script next to the systemd unit that is
actually used by default[1]?

I think describing that as `effort' is rather much.


I don't understand. If I submit a merge request to the maintainer, it's 
on me to test that what I submit actually works. So if I add stuff for a 
completely different init system I have to test it. The question is: Is 
the package buggy if it ships a systemd unit but no init script? It 
seems to be the case. Note that there are a *lot* of useful options in 
a systemd unit that would need emulation to make them work properly 
with sysvinit.
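
To give a flavor, consider this made-up service snippet - every
directive below is real systemd syntax, but the unit itself is
hypothetical:

  [Service]
  Type=notify
  DynamicUser=yes
  ProtectSystem=strict
  PrivateTmp=yes
  Restart=on-failure
  WatchdogSec=30

An init script would have to approximate each of these by hand
(dedicated users, mount tricks, a respawn supervisor, watchdog
handling), if it can approximate them at all.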



Can we rely on sysvinit users to report the
bugs with the scripts or how intensively do they need to be tested?

You should rely on users to report bugs.


Okay. In this case I contributed to the package of someone else and 
don't want to make it buggy.



Similarly, are maintainers allowed to ship timer units in lieu of
cronjobs? As an example I invested some time in
prometheus-node-exporter[2] to run textfile collectors of monitoring
data (SMART, apt) in the background. Would I have been required by
policy to make sure that all of this also works on a system with
sysvinit?

Obviously it would be better to make it work with cron.  Ideally it
would go into cron.daily which I assume works with systemd too.


It'd need to run much more often (every 15 minutes). So cron.daily 
wouldn't fit. For the sake of the argument it'd need to be a shell 
script that checks multiple conditions (see [1]). And we currently don't 
have timer/cron deduplication, unfortunately. That means it'd also need 
to disable itself on systemd systems (but of course cron would still 
invoke the script periodically). Similarly - as a more general remark - 
having it as a cronjob doesn't let you monitor it in quite the same way.
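
As a sketch of what the cron side would have to look like if it
disabled itself (the collector path is made up; the /run/systemd/system
test is the documented sd_booted(3) check):

  # /etc/cron.d/prometheus-node-exporter (hypothetical)
  # Skip when systemd is the running init; the timer unit handles it then.
  */15 * * * * root [ -d /run/systemd/system ] || /usr/share/prometheus-node-exporter/run-collectors

And even then cron wakes up every 15 minutes just to decide to do
nothing.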



But if you do just the systemd thing, I think if someone sends you a
patch to make it work with cron I think you should accept and carry
that patch.


In this case that might be feasible, because if it breaks, that user is 
hopefully going to monitor it anyway - it's a monitoring thing, after 
all. But there is a cost to carrying such things (such as cron 
confusingly invoking something whose output isn't used at all because 
it's going to be short-circuited at startup).


Kind regards
Philipp Kern

[1] 
https://salsa.debian.org/go-team/packages/prometheus-node-exporter/merge_requests/1/diffs#229e10b19f8b27233d2301c8bb553b6bdd8e5b1a




Re: Debian Buster release to partially drop non-systemd support

2018-10-16 Thread Philipp Kern

On 2018-10-16 13:27, Matthew Vernon wrote:

So:
http://www.chiark.greenend.org.uk/mailman/listinfo/debian-init-diversity

It's a standard mailman list with a public archive. I'm hoping people
interested in init system diversity in Debian can use it as a place to
co-ordinate. I don't want it to be used to slag off $init_system or
$distribution_or_derivative.


Could someone reiterate what the current state of init diversity 
is supposed to be? Is it assumed to be best effort, with every 
maintainer required to ship an init script next to the systemd unit 
that is actually used by default[1]? Can we rely on sysvinit users to 
report the bugs with the scripts, or how intensively do they need to 
be tested?


Similarly, are maintainers allowed to ship timer units in lieu of 
cronjobs? As an example I invested some time in 
prometheus-node-exporter[2] to run textfile collectors of monitoring 
data (SMART, apt) in the background. Would I have been required by 
policy to make sure that all of this also works on a system with 
sysvinit? Note that this includes the usage of file presence checks and 
OnBootSec, so I suppose that'd mean both anacron and cron as well as an 
actual shell script that checks for the preconditions. Would anacron and 
cron need to be depended upon in that case, or could they even just be 
recommended? Both would not be needed on a default Debian system 
that ships with systemd.
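
For reference, the systemd side is roughly this shape (a hypothetical
sketch, not the exact packaged units):

  # smartmon.timer (hypothetical)
  [Timer]
  OnBootSec=5min
  OnUnitActiveSec=15min

  [Install]
  WantedBy=timers.target

  # smartmon.service (hypothetical)
  [Unit]
  ConditionPathExists=/usr/sbin/smartctl

  [Service]
  Type=oneshot
  ExecStart=/usr/share/prometheus-node-exporter/smartmon.sh

Emulating OnBootSec and the condition check without systemd is exactly
where the anacron-plus-wrapper-script dance would come in.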


Kind regards and thanks
Philipp Kern

[1]
"Alternative init implementations must support running SysV init scripts 
as described at System run levels and init.d scripts for compatibility."

[https://www.debian.org/doc/debian-policy/ch-opersys.html#alternate-init-systems]
[2] 
https://packages.qa.debian.org/p/prometheus-node-exporter/news/20181015T165248Z.html




Re: Limiting the power of packages

2018-10-04 Thread Philipp Kern
On 04.10.2018 13:17, Enrico Weigelt, metux IT consult wrote:
>> (Note that I'm not saying Microsoft or Google are doing something
>> nefarious here: 
> 
> But I do think that. If they really wanted to do that in a reasonably
> secure and safe way (assuming they're not completely incompetent),
> they'd split off the sources.list part from the actual package (there're
> many good ways to do that), and added proper pinning to reduce the
> attack surface.
> 
> And they would have talked to the Distros about a proper process of
> bringing Skype into Distro repos.
> 
> OTOH, considering the tons of other bugs and design flaws, I'm not
> really sure whether they're nefarious or incompetent, maybe a mix of
> both ...

It's not like Debian provides a way that nicely integrates with the
system except by what they are doing. Yes, one could ship a pin limiting
the repository to specific packages, but from the vendor's point of view
there's no threat
here: They know what they are going to ship. And from a vendor point of
view you actually want to have the agility to rotate the GPG key in use,
to switch to a different hosting place, and to ship more packages as
required. So it's just that your and the vendor's assumptions mismatch.
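
For reference, such a pin would look roughly like this - package and
origin names are illustrative, not any vendor's actual setup:

  # /etc/apt/preferences.d/vendor (illustrative)
  Package: *
  Pin: origin "repo.skype.com"
  Pin-Priority: -1

  Package: skypeforlinux
  Pin: origin "repo.skype.com"
  Pin-Priority: 500

The specific stanza takes precedence over the general one, so only the
named package is installable from that origin - and precisely that
rigidity is what a vendor does not want.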

Ultimately what most users want is something that is kept up-to-date. At
the point where they decided that they want (or need) to use a vendor's
software, it's not really our business anymore to tell them off. You
yield full control to the vendor at that point, just like Debian has
full control of your system.

If we had a sensible way to report back binary provenance, we could at
least call out when a vendor did something nefarious. (Like serving a
trojan to a specific user.) But we don't.

And to the point of nefarious vs. incompetence: The truth is that most
companies employ software engineers to do the packaging. Apart from
Linux being a small market for most of this software, it is also
something they are not necessarily familiar with or would need to hire
some kind of specialist for. I understand that you are in that business.
But at the same time most programmers assume that it's just a small
matter of programming and it can't be that hard to integrate with
another system. They can't really anticipate the bugs. But we at least
need to hold them accountable for listening to feedback.

> That way, the vendors could just pick some minimal base system (maybe
> alpine or devuan based) [...]

That's also where you lost me, FWIW.

Kind regards
Philipp Kern



Re: Re-evaluating architecture inclusion in unstable/experimental

2018-10-04 Thread Philipp Kern
On 03.10.2018 18:01, John Paul Adrian Glaubitz wrote:
>> For s390x I can say that the port was driven without any commercial
>> interest on both Aurelien's and my side
> The question is though: Is there quantifiable amount of users that is
> running Debian on such big iron instead of one of the Linux enterprise
> distributions on the market? If the argument is about maintenance burden,
> then does it justify to support Debian on s390x when the number of users
> is small? And, if yes, why does that not apply to ppc64, for example?
> (I would mention sparc64 here as well, but there is actually a valid
>  blocker which is the lack of supply of new hardware for DSA).

I cannot speak to ppc64. ppc64el is useful as I'm told POWER can be
competitive with Intel/AMD-based servers. But I don't know how many users
would run Debian.

For s390x, IBM does not publicly admit that there are people running
Debian, but there are a few. Almost all of them turn popcon off - most
of the VMs can't talk to the internet. Of course I don't know if the
availability of Ubuntu significantly changed that. They were able to
invest much more time into polishing the port and most people just want
some kind of Debian derivative. Historically the base system has been
very well maintained by IBM, though. So the effort to keep it running
has been relatively small. This recently changed somewhat, given that
the primary focus is on enterprise distributions, in that stuff like
Javascript interpreters don't work well. Essentially it boils down to
server workloads that companies need to run, so as Docker and Go became
popular, IBM implemented support for them. The same happened for V8 as
used by Node. OpenJDK 9 finally comes with a JIT, so you don't have to
use IBM Java anymore.

And to IBM's credit, they even contributed some bits back to d-i.
Although some of those still await testing and merging. The Ubuntu
changes did not flow back / were not mergable as-is into Debian.

It's always a tradeoff between how much work is necessary to keep the
port alive and how many people use it. As long as the port keeps itself
working, that's sort of fine in my experience. Once you need to shepherd
a lot of things that all break (like the MIPS ports historically had to,
even working around broken CPUs) or need to deal with 2 GB virtual
address space or don't have modern languages like Go or Rust around, it
quickly approaches the point where it's not worth it anymore.

Kind regards
Philipp Kern



Re: Re-evaluating architecture inclusion in unstable/experimental

2018-10-03 Thread Philipp Kern
On 29.09.2018 00:30, John Paul Adrian Glaubitz wrote:
> On 9/28/18 11:26 PM, Adam D. Barratt wrote:
>> On Fri, 2018-09-28 at 14:16 +0200, John Paul Adrian Glaubitz wrote:
>>> So, it's not always a purely technical decision whether a port
>>> remains a release architecture. It's also often highly political and
>>> somehow also influenced by commercial entities.
>> Please don't make implications like that unless you can back them up.
> Well, I cannot prove it. But when I see that we have ports as release
> architectures with hardware where atomics in hardware don't even work
> correctly and the virtual address space is limited to 2 GiB per process
> while on the other hand perfectly healthy and maintained ports like
> powerpc and ppc64 which have actually a measurable userbase and interest
> in the community are axed or barred from being a release architecture,
> then I have my doubts that those decisions aren't also driven by
> commercial interests or politics.

Please excuse my ignorance, but which architecture do we still have with
2 GiB address space? The main point of removing s390 was that this was
unsustainable.

> I have seen IBM people on multiple occasions in various upstream
> projects trying to remove code for older POWER targets because
> they insisted anything below POWER8 is not supported anymore. In
> some cases like Golang with success [1].

Yeah, IBM behavior has been incredibly frustrating here on the System z
side, too. Essentially they end up actively removing support for
anything they don't support anymore.

To some degree I understand this behavior: It's a real relief to not
need to support something old and crufty when you're the engineer on the
other side having to do that. Even when such support is contributed,
someone needs to keep it working and they won't keep old hardware around
for that.

But it has very awkward implications on the people that still have that
hardware for one reason or another and don't actually rely on a support
contract.

For s390x I can say that the port was driven without any commercial
interest on both Aurelien's and my side.

Kind regards
Philipp Kern



Accepted python-gbulb 0.6.1-0.1 (source) into unstable

2018-09-18 Thread Philipp Kern
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Format: 1.8
Date: Tue, 18 Sep 2018 14:27:08 +0200
Source: python-gbulb
Binary: python3-gbulb python-gbulb-doc
Architecture: source
Version: 0.6.1-0.1
Distribution: unstable
Urgency: medium
Maintainer: Konstantinos Margaritis 
Changed-By: Philipp Kern 
Description:
 python-gbulb-doc - PEP 3156 event loop based on GLib (common documentation)
 python3-gbulb - PEP 3156 event loop based on GLib (Python 3)
Closes: 895726 904388
Changes:
 python-gbulb (0.6.1-0.1) unstable; urgency=medium
 .
   * Non-maintainer upload
   * New upstream release (Closes: #904388)
   * Add dependency on python3-gi (Closes: #895726)
Checksums-Sha1:
 5a20fea91ff1f26a4445e02fb8fa14af353daa5d 1610 python-gbulb_0.6.1-0.1.dsc
 c58fbde124ddc691f95a8927582de590a2650c3b 20651 python-gbulb_0.6.1.orig.tar.gz
 5e4183ec710c045d7499b703be7df18b34aba1f0 2264 python-gbulb_0.6.1-0.1.debian.tar.xz
 867d800c137440b17ecf49a84ad43381e8fc29b5 7485 python-gbulb_0.6.1-0.1_amd64.buildinfo
Checksums-Sha256:
 d6dfa35feb895a21ed7bbc35d14c638471c94d136e6a856ee1fa9e6f02b32769 1610 python-gbulb_0.6.1-0.1.dsc
 ab9dbde5d92a2b4f13c7acc9afc7235081a5c999d6807b049e2d8c2ef26c03a9 20651 python-gbulb_0.6.1.orig.tar.gz
 df5dc006e79ece3836ffed35ab86b872c1e429e5b1389b81e731337e1394b679 2264 python-gbulb_0.6.1-0.1.debian.tar.xz
 6d315f29c8c1d8298aa07f5c9aa84f8f667a216a8dee5aae810b9676df848640 7485 python-gbulb_0.6.1-0.1_amd64.buildinfo
Files:
 a31bcf54868ffe65acbda10e6dcb1097 1610 python optional python-gbulb_0.6.1-0.1.dsc
 aaf38d5da6d80ee10c46f41ca68ea675 20651 python optional python-gbulb_0.6.1.orig.tar.gz
 c3ea15b6eaba4c4e0ff33b7f9e621807 2264 python optional python-gbulb_0.6.1-0.1.debian.tar.xz
 7da9ffc118d20f5e817520c822f47878 7485 python optional python-gbulb_0.6.1-0.1_amd64.buildinfo

-----BEGIN PGP SIGNATURE-----

iQFFBAEBCgAvFiEEPzuChCNsw7gPxr3/RG4lRTXQVuwFAlug8pERHHBrZXJuQGRl
Ymlhbi5vcmcACgkQRG4lRTXQVuy/+QgAxW5FzW+2zJFhSawP1MRpwoM8VbOSeoCg
QqAOaSwRh1r3F41Ni0aaolws07M4sUQKSwBn83uEIA1JrnyKgTyEOuV+T+7pp/ev
gLFUpevavjsZh6xs1hhpmAcGLRgj3LiajIvOYMoWseCxSCIVkmWTSWC8Zd5Nxs93
XM9pH2bCJCg1/7k4+Gny6rOHTkt7cNLJc4eptTIKUm0n6PGZSwaT/22HDs9VCax2
Ql96cq6GURm5hBRD0bT1xT2K7RBTo3sWGdLd0yeSF4nA2BCJOGYBQ+XR6TtIgHDb
j/wDfgSpfe7Fd8SCj13QWVedYxk0JoJqdTCKuJapxseVnBifjdYHjQ==
=r6xg
-----END PGP SIGNATURE-----



Accepted partman-auto-lvm 70 (source) into unstable

2018-08-21 Thread Philipp Kern
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Format: 1.8
Date: Tue, 21 Aug 2018 17:50:31 +0200
Source: partman-auto-lvm
Binary: partman-auto-lvm
Architecture: source
Version: 70
Distribution: unstable
Urgency: medium
Maintainer: Debian Install System Team 
Changed-By: Philipp Kern 
Description:
 partman-auto-lvm - Automatically partition storage devices using LVM (udeb)
Closes: 515607 904184
Changes:
 partman-auto-lvm (70) unstable; urgency=medium
 .
   [ Sven Mueller ]
   * Add ability to limit space used within LVM VG. Originally by
 Colin Watson  (Closes: #515607, #904184)
   * Use LC_ALL=C when asking vgs for information
Checksums-Sha1:
 e7f378cd09cd3253f498f9fb045c0cbe57cb9438 1393 partman-auto-lvm_70.dsc
 8104ab8f4519418e0bef84ba878eda45f65f62d2 115788 partman-auto-lvm_70.tar.xz
 3c73c4de66cf3154f1409396ce0c13df0c4d161b 5079 partman-auto-lvm_70_amd64.buildinfo
Checksums-Sha256:
 cc46c650c0af36c353c8419ecde9e241afa0de016e3c9786ef5bc0e77bd75789 1393 partman-auto-lvm_70.dsc
 23b555ee58a8e6bd04837fa9165c07c73f443566e7243d78b1f559014aaf87b1 115788 partman-auto-lvm_70.tar.xz
 5ade8daa6789944dc32128a22d1773ac96f32cf92fcd6d6481745c9c9ddb0d82 5079 partman-auto-lvm_70_amd64.buildinfo
Files:
 520d27756187e092e11bac24745968df 1393 debian-installer optional partman-auto-lvm_70.dsc
 95fb547ef0b31ece7774d27fa203a6f1 115788 debian-installer optional partman-auto-lvm_70.tar.xz
 d917fb29f3916dccb6bfa9e0bd1b9ce3 5079 debian-installer optional partman-auto-lvm_70_amd64.buildinfo

-----BEGIN PGP SIGNATURE-----

iQFFBAEBCgAvFiEEPzuChCNsw7gPxr3/RG4lRTXQVuwFAlt8OFkRHHBrZXJuQGRl
Ymlhbi5vcmcACgkQRG4lRTXQVuzXuQf/QsOhaXGG/tp7NmhZQ/gh/T1BWKahrDEz
De8aUqotRw2ylnqkb3iEGD4YGpWD8XvkXoXD76/S3TGVTQ9coBwmCzw+6eNOgaM2
aXI/VHz49XSld7oqEpAxoKOI2LbcauUyY1JUu7E2DDrz2CFcJPMaqHE6mWITEWQ5
nRrmnziYd+OGWcL/O5AMkhe+pV4jQKTBogo9D7ulRVJd+2r521Ki/z0s3g1V6TAV
2Y9wPGAyUEjTQnfQdNp489owg/+fAUIO/4MnlOYeT4lY/gyhgIfJ8RJKS31F1m3V
CBGfEu5gN7fMyjtMELiuNYxfhev8Tmnwwx7QsE6dK0hDWPlAcXArtg==
=pgS2
-----END PGP SIGNATURE-----



Accepted partman-crypto 99 (source) into unstable

2018-07-31 Thread Philipp Kern
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Format: 1.8
Date: Tue, 31 Jul 2018 12:14:13 +0200
Source: partman-crypto
Binary: partman-crypto partman-crypto-dm
Architecture: source
Version: 99
Distribution: unstable
Urgency: medium
Maintainer: Debian Install System Team 
Changed-By: Philipp Kern 
Description:
 partman-crypto - Add to partman support for block device encryption (udeb)
 partman-crypto-dm - Add to partman support for dm-crypt encryption (udeb)
Closes: 902912
Changes:
 partman-crypto (99) unstable; urgency=medium
 .
   [ Cyril Brulebois ]
   * Update Vcs-{Browser,Git} to point to salsa (alioth's replacement).
 .
   [ Michael Schaller ]
   * Set discard option on LUKS containers. (Closes: #902912)
Checksums-Sha1:
 c0244addb1c41cf29172f15647cca8fd99cf06a6 1489 partman-crypto_99.dsc
 18418f258c2ce8391d9450a8707ffe4e5125be1e 265360 partman-crypto_99.tar.xz
 cfbfff03c6482cbf0ed842e31d2d0360a294b578 5334 partman-crypto_99_amd64.buildinfo
Checksums-Sha256:
 e05966ca3a40ba4a52bbe2de698704846fb011bd1bb34ec8fefc8b93fd3652f5 1489 partman-crypto_99.dsc
 0fd014e835a7018a51c9c62e64dbb517318445a05c4bbce308766b317118b67f 265360 partman-crypto_99.tar.xz
 0bb4936c314fb200870d0404295a31371c208f66861f9e5d6389d2542aa4b331 5334 partman-crypto_99_amd64.buildinfo
Files:
 3fe214b9f5f28ea69bfb7be24a3e82b4 1489 debian-installer optional partman-crypto_99.dsc
 4b2fa0d92716e3302a420fc61ee47009 265360 debian-installer optional partman-crypto_99.tar.xz
 8956e12dc74849b4af1ef6ac91484da8 5334 debian-installer optional partman-crypto_99_amd64.buildinfo

-----BEGIN PGP SIGNATURE-----

iQFFBAEBCgAvFiEEPzuChCNsw7gPxr3/RG4lRTXQVuwFAltgQHQRHHBrZXJuQGRl
Ymlhbi5vcmcACgkQRG4lRTXQVuxm3ggAthrekZpV6zVABgCGa8rRVjNIHUCdZY0g
8cLEVrSLAY1UEBJhJUpnnfPC+AmFsG9f+e5a2FONSVXp0BgF9R44OMsR9jU8LGmC
p5WbL+UQvfbXaWORyECloCA6tDd4EKehYVQnCcJkb98vkAi8vm4J5MMk8/KBbpGb
J1TbLLeG1xx+twIVqa8q9IBzBF3Rginn4tl4gvkQXEbyotraDF58rFJ2voN3BM+I
o1FVtozN2pPVXtSltJZueilQK+L0k1rMPG7HpiKtq6YelsOT6yK0Qv14G1dwXk/5
Ee74QHRgL6UsSP92kz5aN4p1d2fOk0ZtGlWTFv+HGNL6cuAkLnB8/Q==
=H1FW
-----END PGP SIGNATURE-----



Re: Collaborative decision making with Loomio

2018-07-27 Thread Philipp Kern

On 2018-07-27 10:46, Joerg Jaspert wrote:

On 15111 March 1977, Dmitry Smirnov wrote:
Loomio [1] is a powerful tool to organize decision making. We have a 
long standing RFP [2] for Loomio and I hope that with help of Ruby team 
it can be packaged without too much effort.

It is not the tool here that is the problem. It is the people involved
that are. No matter being on a mailing list or loomio or
whatevermagictool, as long as people are unwilling to accept the other
sides and willing to find consensus, you can throw tools around as much
as you want. You wont get anywhere.

Or simply put:

You can't solve a social problem using technical means.


Yup, but to note: Consensus-driven arguments can only go so far and 
scale poorly with more people. Ultimately contentious issues need to be 
voted on, otherwise you can be held hostage by a single person or small 
vocal group.


Of course in Debian that mechanism is called GR or appealing to the 
tech-ctte and letting them vote (and then maybe another GR, hah).


Kind regards
Philipp Kern



Re: Bug#903977: ITP: sbws -- Simple Bandwidth Scanner

2018-07-20 Thread Philipp Kern
On 18.07.2018 20:38, ju xor wrote:
> Philipp Kern:
>> On 2018-07-18 18:24, ju xor wrote:
>>> Philipp Kern:
>>>> Should this live in some kind of tor-* namespace?
>>> no
>> Without any rationale? :(
> i'm not sure what you mean, but in case it helps, here some arguments
> why sbws package is not called something like tor-sbws:
> 
> - upstream is not using "tor-*" in the name
> - i don't think there's a Debian policy to name packages as "tor-*" [0]

Of course there isn't. But if the package is incredibly specialized, it
might make sense to do that anyhow. Debian is not bound to reuse the
upstream name, although in many cases it makes sense (first and foremost
when scripts are concerned, but there are plenty of other reasons).

> - AFAICT, the only package in Debian that is named as "tor-*" is
> "tor-geoipbd", and that's a package on which "tor" itself depends on.
> - "tor" itself does not depends on sbws, though sbws makes use of "tor"
> - python3-stem is a library to control tor on which sbws depends, and
> it's not called "tor-*"

I guess I was mostly concerned about the global namespace rather than a
library-specific one.

> - nyx, is a tor monitor, and is not called "tor-*"

Fair. Although, to note, it used to be called tor-arm according to the
package's description. And it feels like the possible target audience of
sbws is even smaller than that of nyx. That said: Maybe include the
target audience (i.e. who is going to have an interest in running this
package) somewhere in your description. If this is of interest to all
relay operators rather than just the authorities, that's probably relevant.

> - there're several packages called "onion*", which is not "tor-*"

Well, tor-* was a proposal to disambiguate a short name. I don't
particularly care what the prefix would be.

Kind regards
Philipp Kern



Re: Bug#904019: ITP: libxcrypt -- Extended crypt library for DES, MD5, Blowfish and others

2018-07-20 Thread Philipp Kern
On 20.07.2018 10:18, Marco d'Itri wrote:
> On Jul 20, Philipp Kern  wrote:
>> Make sure that glibc splits out libcrypt into its own package, have libc6
>> depend on it and then provide libcrypt1? (Because it's really providing
>> libcrypt's ABI from another package.) Versioning might be tricky, though.
> At some point glibc will just stop building libcrypt, I am looking for 
> an interim solution that will not require coordination with the glibc 
> maintainers.

I think it's odd to say "here, I'm packaging up a replacement for your
library, but I'm not going to coordinate with you" when we are preparing
a (somewhat) coherent distribution, so I don't think that option should
be discarded. (Unless you have a reasonable worry that your experiment
will fail and hence don't want to bother people, I guess.)

Kind regards
Philipp Kern



Re: Bug#904019: ITP: libxcrypt -- Extended crypt library for DES, MD5, Blowfish and others

2018-07-20 Thread Philipp Kern

On 2018-07-20 02:18, Marco d'Itri wrote:

On Jul 18, Marco d'Itri  wrote:


Some day it may replace crypt(3), currently provided by glibc:
https://fedoraproject.org/wiki/Changes/Replace_glibc_libcrypt_with_libxcrypt

I tried creating a package which would divert libc's libcrypt, but it
appears to be much harder than I thought.

Installing it would look like:

1) libcrypt1.preinst diverts glibc's libcrypt.so.1
2) dpkg does things
3) dpkg installs libxcrypt's libcrypt.so.1
4) dpkg does more things
5) libcrypt1.postinst runs/triggers ldconfig

And this means that perl (a libcrypt dependency) would be broken 
between 1 and 5 (or maybe 1 and 3): is this ever going to work?

But even if this worked correctly, glibc installs a libcrypt-N.NN.so,
whose exact name I expect changes among different releases.

Is there any way to implement all this safely?


Make sure that glibc splits out libcrypt into its own package, have 
libc6 depend on it and then provide libcrypt1? (Because it's really 
providing libcrypt's ABI from another package.) Versioning might be 
tricky, though.
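
To sketch what I mean, with made-up version numbers (this is just the
standard file-move dance, not a worked-out plan):

  # hypothetical debian/control excerpt on the glibc side
  Package: libcrypt1
  Architecture: any
  Multi-Arch: same
  Depends: ${misc:Depends}
  Breaks: libc6 (<< 2.28)
  Replaces: libc6 (<< 2.28)
  Description: crypt shared library, split out of libc6

libc6 would then depend on libcrypt1, and a later libxcrypt upload
could take over building that binary package - with Breaks/Replaces
dpkg hands the file over cleanly and no diversion is needed. But that
obviously requires coordination with the glibc side.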


Kind regards
Philipp Kern



Re: Bug#903977: ITP: sbws -- Simple Bandwidth Scanner

2018-07-18 Thread Philipp Kern

On 2018-07-18 18:24, ju xor wrote:

Philipp Kern:

Should this live in some kind of tor-* namespace?

no


Without any rationale? :(

Kind regards
Philipp Kern


