Bug#976907: golang-github-boltdb-bolt: FTBFS on ppc64el (arch:all-only src pkg): dh_auto_test: error: cd obj-powerpc64le-linux-gnu && go test -vet=off -v -p 160 -short github.com/boltdb/bolt github.co

2021-01-11 Thread El boulangero
> Can simply replacing the dependency in all of them with bbolt work?

I don't know myself, but some upstream maintainers think it's not a trivial change. For
example, see:

https://github.com/hashicorp/raft-boltdb/pull/19#issuecomment-703732437

In short: hashicorp-raft-boltdb wants to make sure there's no issue before
making the change. This change would impact reverse build deps of
hashicorp-raft-boltdb, like nomad or consul.

I think it's better to downgrade the severity here (as was done in
coreos-bbolt, see https://bugs.debian.org/976926).


Bug#979546: docker.io: version in Bullseye does not support "rootless mode", makes privilege escalation trivial

2021-01-08 Thread El boulangero
On Sat, Jan 9, 2021 at 2:00 AM Chris Mitchell  wrote:

> On Fri, 8 Jan 2021 11:38:59 +0700
> El boulangero  wrote:
>
> > Hi Chris,
> >
> > I believe what you refer to is a well-known issue with docker. I have
> > this reference from Apr. 2015:
> > https://fosterelli.co/privilege-escalation-via-docker.html
> >
> > This is how docker works. The easiest mitigation is NOT to add a
> > user to the docker group. This way, you will always invoke docker
> > with 'sudo docker', and then it's explicit that you're running
> > something as root. Explicit is better than implicit, maybe; at least
> > it's no longer "accidental".
>
> This makes some sense. Given that it's apparently well-known that
> allowing a user to run Docker essentially gives them unrestricted root
> access, I'm rather surprised that no warning was presented at any point
> in this process.
>
> > Second thing, as you noted, docker can access a directory on the host
> > only if you share it with '--volume' or '--mount' or similar. So it's
> > not really accidental if a process in the container then writes to the
> > host directory: it's something that you authorized explicitly. And the
> > access is root access because the container runs as root, as you
> > correctly noted.
>
> Ah, okay. This, I think, is where I fundamentally misunderstood the
> situation. I was picturing the "containerized app" as a single entity,
> presenting an all-or-nothing choice between "accept that anything you
> run in a Docker container has root access to your whole filesystem" or
> "don't use Docker". If Docker is providing meaningful enforcement and
> limiting the access of the "contained" app to only the directory(ies)
> you share with it in the container config (though not subject to the
> host system's *permissions*) that's a very different proposition.
>
> Trusting *myself* not to abuse Docker's privilege-escalation abilities
> (on a system where I already have root), checking carefully what paths
> are shared via the container configs, and making sure that the path
> containing those configs is never shared... That's within the realm of
> reasonable expectations.
>
> > If you download and run a containerized app as root, and share your
> > /home with the container, then you'd better trust this app 100%.
>
> To be clear, I did not knowingly or explicitly do any of these things
> except "download and run a containerized app". I downloaded the app as
> a regular user, used my sudo powers to add said user to the docker group
> (because all the "getting started" instructions just say that you need
> to be in the docker group to use Docker) and ran, as a regular user,
> "docker-compose -d up". Now that I know to go looking for it, I see
> that the "volumes" directive appears in one of the config files. While
> I acknowledge that the responsibility for understanding the tools I use
> ultimately falls to me, nothing in this sequence jumps out to me as
> "you are granting unrestricted root access!" or "you'd better trust
> this app 100%".
>

I've never used docker-compose, only the docker command itself, so I've
always known more or less what was going on :)

docker-compose is a layer on top of docker, so it hides things away from
the user (in this case, the arguments for the docker command are in a
config file apparently). So things are even less explicit, for the sake of
simplicity.

I think the two important things are 1) *which user* is running in the
container (root or non-root), and 2) which volumes are shared. These are the
arguments that you would use most often with 'docker run', and also the
kind of things you'd want to check in a docker-compose file.
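
For what it's worth, a quick way to inspect a compose file before running
anything is to let docker-compose print the fully resolved configuration
(docker-compose config and grep are standard tools here; the file name is
just the conventional default):

  $ docker-compose config
  $ grep -nE 'user:|volumes:' docker-compose.yml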

There are many ways to run a container with docker, depending on the
use-case. If you want to run a container manually, as your own user, and
sharing only the current directory, you can do this:

sudo docker run -it --rm \
-u $(id -u):$(id -g) \
-v /etc/group:/etc/group:ro \
-v /etc/passwd:/etc/passwd:ro \
-v $(pwd):$(pwd) -w $(pwd) \
$YOUR_DOCKER_IMAGE

Note that the first two -v arguments are optional, but they ensure that your
user id is known inside the container. Try with and without: both work.

This is a command that I use often, but when you're getting started with
docker, it's unlikely you'll come up with it by yourself on the first try :)
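
If you end up using it a lot, you can wrap it in a small shell function, for
instance in your ~/.bashrc. Just a sketch, and the name 'drun' is made up:

  drun() {
      sudo docker run -it --rm \
          -u "$(id -u):$(id -g)" \
          -v /etc/group:/etc/group:ro \
          -v /etc/passwd:/etc/passwd:ro \
          -v "$(pwd):$(pwd)" -w "$(pwd)" \
          "$@"
  }

  # for example:
  drun debian bash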


>
> As a fairly experienced Debian user, I've been accustomed to add myself
> to all sorts of groups over the years, and the only one that has ever
> been presented as "this grants full root powers" is sudo, which then
> pops up a stern warning the first time you use it.

Bug#979546: docker.io: version in Bullseye does not support "rootless mode", makes privilege escalation trivial

2021-01-07 Thread El boulangero
Hi Chris,

I believe what you refer to is a well-known issue with docker. I have this
reference from Apr. 2015:
https://fosterelli.co/privilege-escalation-via-docker.html

This is how docker works. The easiest mitigation is NOT to add a user to
the docker group. This way, you will always invoke docker with 'sudo
docker', and then it's explicit that you're running something as root.
Explicit is better than implicit, maybe; at least it's no longer "accidental".

Second thing, as you noted, docker can access a directory on the host only
if you share it with '--volume' or '--mount' or similar. So it's not really
accidental if a process in the container then writes to the host directory:
it's something that you authorized explicitly. And the access is root access
because the container runs as root, as you correctly noted.
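
To make that concrete, here is a tiny illustration (assuming the 'debian'
image is available locally; /tmp/demo is just an example path):

  $ mkdir /tmp/demo
  $ sudo docker run --rm -v /tmp/demo:/data debian touch /data/hello
  $ ls -l /tmp/demo/hello   # the file is owned by root:root on the host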

If you download and run a containerized app as root, and share your /home
with the container, then you'd better trust this app 100%.

> a search for "docker" on backports.debian.org returned no results

Indeed, docker sits on top of 100+ dependencies, so backporting it would mean
backporting all those dependencies, plus maybe the Go compiler and the Go
standard library. It would be such a huge amount of work that it's not
realistic.

Since Go is a statically compiled language, there's little value in
backporting anyway. You can just try your luck and install the docker.io
package from Debian unstable on your Debian stable system. It might work.
Maybe some bugs would be lurking here and there in the dark, maybe not; I
just don't know.
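
One possible way to do that, sketched under the assumption that you are
comfortable mixing suites: add an unstable source with a low pin priority,
so that only the packages you explicitly ask for come from unstable.

  # /etc/apt/sources.list.d/unstable.list
  deb http://deb.debian.org/debian unstable main

  # /etc/apt/preferences.d/99-unstable
  Package: *
  Pin: release a=unstable
  Pin-Priority: 100

  $ sudo apt update
  $ sudo apt install -t unstable docker.io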

As for rootless mode, it should indeed be supported in the 20.10 version. I
have never tested it myself. If I'm not mistaken, everything needed to run
rootless mode is present in the 20.10 package provided in Debian unstable.
You can give it a try :)
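
For the record, the rough sequence from the upstream rootless documentation
looks like the following. I haven't tested it, and I'm assuming here that the
Debian package ships the upstream dockerd-rootless-setuptool.sh helper and
that the uidmap package is installed:

  # kernel prerequisite (disabled by default on buster)
  $ sudo sysctl kernel.unprivileged_userns_clone=1

  # set up and start the rootless daemon as a regular user
  $ dockerd-rootless-setuptool.sh install
  $ systemctl --user start docker

  # point the client at the rootless socket and test
  $ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
  $ docker run --rm -it debian echo 'hello rootless'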

All the best,

  Arnaud






On Fri, Jan 8, 2021 at 10:48 AM Chris  wrote:

> Package: docker.io
> Version: 18.09.1+dfsg1-7.1+deb10u2
> Severity: critical
> Tags: security
> Justification: root security hole
>
> Dear Maintainer,
>
> Unless I'm missing something, any program running in a Docker container
> using the Docker version currently available in Debian stable has a
> trivial-to-exploit path to full, persistent, root privilege escalation.
>
> Docker v18 only works when it's SUID root. Processes running as root
> *inside* the container accessing the *host* filesystem do so as root *on
> the host system* unless they are internally configured to map to a
> regular user on the host system. (According to my rough and inexpert
> understanding of the situation.)
>
> I installed docker.io from the official Debian stable repos, added a
> user to group "docker" in order to be able to use it, downloaded and
> ran a containerized program, and noticed that the program in the
> container was creating files and directories with root ownership in my
> home directory on the host system.
>
> A quick search of the web turned up a tutorial showing how easy it is
> to exploit this situation:
> https://medium.com/@Affix/privilege-escallation-with-docker-56dc682a6e17
> ...as well as tutorials on how not to *accidentally* create root-owned
> files on the host system when setting up Docker containers, eg:
> https://vsupalov.com/docker-shared-permissions/
>
> I discovered that newer versions of Docker have a "rootless mode" that
> doesn't require the main Docker executable to run SUID root (though it
> does rely on a couple of narrow-scope SUID helper utilities), which
> should hopefully mitigate this situation to a considerable extent:
> https://docs.docker.com/engine/security/rootless
> This capability was introduced as experimental in v19.03 and "graduated
> from experimental" in v20.10. Unsurprisingly, it requires that
> unprivileged_userns_clone be enabled.
>
> The version of docker.io in the Buster repos is 18.09 at the time of
> this writing, and a search for "docker" on backports.debian.org returned
> no results. While I am aware of the controversy around
> unprivileged_userns_clone, I gather that it will be enabled by default
> in Bullseye (starting with kernel 5.10, I believe) because at this point
> it presents the lesser evil.
>
> Unless I'm gravely mistaken about the situation, I'd much rather enable
> that potentially-exploitable kernel feature and run Docker in "rootless
> mode" than continue running Docker in a configuration that's so easily
> exploitable there are tutorials on how to prevent accidentally creating
> files as root when using Docker containers as a regular user.
>
> Accidental. Root. Filesystem access. As a regular user.
>
> I propose that — as a minimum — backporting the version of Docker in
> Bullseye (currently v20.10) to Buster be treated as an urgent security
> priority, so that users at least have the option to install Docker from
> an official Debian source and use it in the less-dangerous "rootless
> mode".
>
> Further, Docker is widespread and gaining popularity fast, it's already
> in the Debian stable repositories, and 

Bug#977652: Fix in golang-goprotobuf 1.3.4

2021-01-05 Thread El boulangero
> if I just disable code regeneration, the diff with 1.3.4-2 is really minor.

Answering myself: I looked at that again, and the diff is not that small.
It's true that the only file impacted is plugin.pb.go (which is the file
that needs the fix), but the diff is not exactly minor.

Main differences:

-const _ = proto.ProtoPackageIsVersion3
+const _ = proto.ProtoPackageIsVersion2

-func (m *CodeGeneratorResponse_File) XXX_Unmarshal(b []byte) error {
+func (m *CodeGeneratorResponse_File) Unmarshal(b []byte) error {

-func (m *CodeGeneratorResponse_File) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *CodeGeneratorResponse_File) Marshal(b []byte, deterministic bool) ([]byte, error) {

I really have no idea if these changes are significant.

On Wed, Jan 6, 2021 at 11:45 AM El boulangero 
wrote:

> It can be fixed with regeneration, in an ugly way. I patch the generated
> file after it's been generated, I didn't find a better solution... It could
> be a bug upstream. However upstream did a major code refactoring to go to
> version 1.4, so there's no fix that can be cherry-picked from their git
> history.
>
> But that's not the point. My concern is that if I rebuild the package with
> code regeneration, the diff with the current package "golang-goprotobuf-dev
> 1.3.4-2" is much bigger, and I'm afraid that it breaks things. On the other
> hand, if I just disable code regeneration, the diff with 1.3.4-2 is really
> minor.
>
> So I thought that, given the timeline, it was better to make as little
> change as possible to this package.
>
> On Wed, Jan 6, 2021 at 11:32 AM Shengjing Zhu  wrote:
>
>> On Wed, Jan 06, 2021 at 10:16:16AM +0700, El boulangero wrote:
>> > Hello Go Team,
>> >
>> > in order to solve #977652, I would need to modify & rebuild the package
>> > golang-goprotobuf.
>> >
>> > The issue is that this package has many reverse build deps, as you might
>> > know already:
>> >
>> > $ build-rdeps golang-goprotobuf-dev
>> > ...
>> > Found a total of 218 reverse build-depend(s) for
>> golang-goprotobuf-dev.
>> >
>> > I did some work already, and it seems that the least invasive way to fix
>> > #977652 is simply to disable code regeneration and rebuild
>> > golang-goprotobuf. The diff in the binary package golang-goprotobuf-dev
>> > will then be very minor. I can post a diff if anyone is interested.
>> >
>> > My question is: is it OK to update this package now, or is it too risky,
>> > and should I wait for after the freeze then?
>>
>> I think minor fix is ok. But OTOH I think we want to keep regenerating
>> files.
>> Can it be fixed with regeneration?
>>
>>


Bug#977652: Fix in golang-goprotobuf 1.3.4

2021-01-05 Thread El boulangero
It can be fixed with regeneration, in an ugly way. I patch the generated
file after it's been generated, I didn't find a better solution... It could
be a bug upstream. However upstream did a major code refactoring to go to
version 1.4, so there's no fix that can be cherry-picked from their git
history.

But that's not the point. My concern is that if I rebuild the package with
code regeneration, the diff with the current package "golang-goprotobuf-dev
1.3.4-2" is much bigger, and I'm afraid that it breaks things. On the other
hand, if I just disable code regeneration, the diff with 1.3.4-2 is really
minor.

So I thought that, given the timeline, it was better to make as little
change as possible to this package.

On Wed, Jan 6, 2021 at 11:32 AM Shengjing Zhu  wrote:

> On Wed, Jan 06, 2021 at 10:16:16AM +0700, El boulangero wrote:
> > Hello Go Team,
> >
> > in order to solve #977652, I would need to modify & rebuild the package
> > golang-goprotobuf.
> >
> > The issue is that this package has many reverse build deps, as you might
> > know already:
> >
> > $ build-rdeps golang-goprotobuf-dev
> > ...
> > Found a total of 218 reverse build-depend(s) for
> golang-goprotobuf-dev.
> >
> > I did some work already, and it seems that the least invasive way to fix
> > #977652 is simply to disable code regeneration and rebuild
> > golang-goprotobuf. The diff in the binary package golang-goprotobuf-dev
> > will then be very minor. I can post a diff if anyone is interested.
> >
> > My question is: is it OK to update this package now, or is it too risky,
> > and should I wait for after the freeze then?
>
> I think minor fix is ok. But OTOH I think we want to keep regenerating
> files.
> Can it be fixed with regeneration?
>
>


Bug#977019: ITP: golang-golang-x-term -- Go terminal and console support

2020-12-15 Thread El boulangero
Yes it's packaged already and waiting in the NEW queue:
https://ftp-master.debian.org/new.html

Cheers,
  Arnaud

On Wed, Dec 16, 2020 at 12:10 AM Roger Shimizu 
wrote:

> On Thu, Dec 10, 2020 at 2:18 PM Arnaud Rebillout 
> wrote:
> >
> > * Package name: golang-golang-x-term
> >   Version : 0.0~git20201207.ee85cb9-1
> >   Upstream Author : Go
> > * URL : https://github.com/golang/term
> > 
> > Why packaging: this is a new build dependency of docker.io 20.10.0
>
> Seems this is also a new dependency for golang-go.crypto package.
> Have you packaged anything? or already near upload?
>
> Thanks!
> --
> Roger Shimizu, GMT +9 Tokyo
> PGP/GPG: 4096R/6C6ACD6417B3ACB1
>


Bug#943981: Proposal: Switch to cgroupv2 by default

2020-12-14 Thread El boulangero
Hi! Docker 20.10 is now in unstable.

Best,

  Arnaud

On Sat, Dec 12, 2020 at 4:18 AM Michael Biebl  wrote:

> On Fri, 27 Nov 2020 15:50:03 +0700 El boulangero 
> wrote:
> > Hello all, here comes news from the docker package.
>
> Awesome news.
> I see you have uploaded docker  20.10.0+dfsg1-1 to experimental.
>
> Once systemd 247.1-4 has migrated to testing (i.e. in a couple of
> days), I intend to flip the switch to cgroupv2
> It would probably be good, if this was accompanied with a corresponding
> upload of docker.io 20.x to unstable.
>
> Regards,
> Michael
>


Bug#918375: Info received (dockerd segfaults can be repeated)

2020-12-01 Thread El boulangero
Thanks for the details.

So I just uploaded a new version to experimental: 20.10.0~rc1+dfsg3-1. In
this version docker.io vendors the old go-radix, as suggested.

If someone can give it a try and confirm that indeed the bug is fixed, that
would be great. Thanks again.

  Arnaud

On Wed, Dec 2, 2020 at 12:48 AM Shengjing Zhu  wrote:

> Sadly I see it in my log too. So after searching a bit, I find this
>
> https://github.com/moby/libnetwork/pull/2581
>
> So it's indeed caused by golang-github-armon-go-radix-dev 1.0.0
>
> And docker maintainer has proposed a patch to go-radix,
> https://github.com/armon/go-radix/pull/14
> But reading from the issue, it seems docker just implemented in the wrong
> way.
>
> So I suggest vendoring the old go-radix...
>
> On Mon, Sep 16, 2019 at 1:53 PM Arnaud Rebillout
>  wrote:
> >
> >
> > From: Vincent Smeets 
> >
> > Using journalctl, I see the following error:
> >
> > panic: runtime error: invalid memory address or nil pointer dereference
> > [signal SIGSEGV: segmentation violation code=0x1 addr=0x0
> pc=0x564a9d3a2158]
> > goroutine 439 [running]:
> > github.com/armon/go-radix.recursiveWalk(0x0, 0xc4212bddb8, 0xc4212bdc00)
> > /build/docker.io-18.06.1+dfsg1/.gopath/src/github.com/armon/go-radix/radix.go:477 +0x28
> >
> >
> > Hans, do you also see the same logs in the journal? (trying to be sure
> it's the same issue)
> >
> > docker-ce builds against armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80, Jan 2015 [1]
> >
> > docker.io builds against armon/go-radix v1.0, Aug 2018 [2], as you can see with:
> >
> >   $ rmadison golang-github-armon-go-radix-dev
> >   golang-github-armon-go-radix-dev | 1.0.0-1 | stable   | all
> >   golang-github-armon-go-radix-dev | 1.0.0-1 | unstable | all
> >
> > That could be the issue. Now, I don't know if you hit a bug in go-radix
> > v1.0, or if you hit an incompatibility between docker and the version
> > v1.0 of go-radix.
>
>
> --
> Shengjing Zhu
>


Bug#974857: new upstream version 20.10.0-beta1

2020-11-30 Thread El boulangero
I just uploaded docker.io 20.10.0~rc1+dfsg2-3 to experimental. containerd
has been completely removed from this version. Now docker.io build-depends
on golang-github-containerd-containerd-dev, and at runtime it depends on
containerd.

There's still some work to be done on the package, mainly updating /
cleaning up the build depend tree. I'll work on that this week. Cheers.

  Arnaud

On Sat, Nov 28, 2020 at 12:35 AM Shengjing Zhu  wrote:

> On Fri, Nov 27, 2020 at 11:19 PM Shengjing Zhu  wrote:
> [...]
> >
> > > What do you think of that ? Would you mind looking at
> https://salsa.debian.org/docker-team/docker/-/commit/da70c7e, and tell me
> if that makes sense to apply these patches in containerd ? Or do you have a
> better idea ?
> > >
> >
> > I will try to backport.
>
> It's in experimental now.
>
> --
> Shengjing Zhu
>


Bug#975563: golang-k8s-sigs-structured-merge-diff: Please update to upstream version 4.0 or later

2020-11-28 Thread El boulangero
On Sat, Nov 28, 2020 at 11:35 PM Shengjing Zhu  wrote:

> Hi
>
> On Sun, Nov 29, 2020 at 12:06 AM El boulangero 
> wrote:
> >
> > This broke the build for containerd, and also for docker.io which
> embeds containerd:
> >
> >
> > https://buildd.debian.org/status/fetch.php?pkg=docker.io&arch=s390x&ver=20.10.0%7Erc1%2Bdfsg1-1&stamp=1606547204&raw=0
> >
> > /<>/.gopath/src/github.com/containerd/containerd/vendor/sigs.k8s.io/structured-merge-diff/v3/value (vendor tree)
> > /usr/lib/go-1.15/src/sigs.k8s.io/structured-merge-diff/v3/value (from $GOROOT)
> > /<>/.gopath/src/sigs.k8s.io/structured-merge-diff/v3/value (from $GOPATH)
> >
> > (the latest containerd builds were still against k8s-structured-merge-diff
> > v3, so they succeeded, only this docker.io build was late enough to pick
> > up the v4)
> >
> > By any chance, do you know if simply patching to use v4 instead of v3 in
> > the import path will work? Or is there another way to handle this?
> >
>
> The code in k8s-structured-merge-diff which containerd uses, is same
> in v3 and v4. So in containerd, I just relax the version,
>
> https://salsa.debian.org/go-team/packages/containerd/-/blob/debian/sid/debian/patches/0004-relax-structured-merge-diff-version.patch


Thanks for that!

I didn't realize that containerd 1.4.1 used structured-merge-diff v3, while
containerd 1.4.2, which was just released, uses v4. So basically, no problem.
Sorry for the noise!



>
> --
> Shengjing Zhu
>


Bug#975563: golang-k8s-sigs-structured-merge-diff: Please update to upstream version 4.0 or later

2020-11-28 Thread El boulangero
This broke the build for containerd, and also for docker.io which embeds
containerd:

https://buildd.debian.org/status/fetch.php?pkg=docker.io&arch=s390x&ver=20.10.0%7Erc1%2Bdfsg1-1&stamp=1606547204&raw=0

/<>/.gopath/src/github.com/containerd/containerd/vendor/sigs.k8s.io/structured-merge-diff/v3/value (vendor tree)
/usr/lib/go-1.15/src/sigs.k8s.io/structured-merge-diff/v3/value (from $GOROOT)
/<>/.gopath/src/sigs.k8s.io/structured-merge-diff/v3/value (from $GOPATH)

(the latest containerd builds were still against k8s-structured-merge-diff
v3, so they succeeded, only this docker.io build was late enough to pick up
the v4)

By any chance, do you know if simply patching to use v4 instead of v3 in
the import path will work? Or is there another way to handle this?
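
(For reference, the naive patch I have in mind is nothing more than rewriting
the import path in the vendored tree, along the lines of the untested sketch
below; whether the v4 API is actually compatible is exactly my question.)

  $ grep -rl 'sigs.k8s.io/structured-merge-diff/v3' . \
        | xargs sed -i 's|structured-merge-diff/v3|structured-merge-diff/v4|g'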

Cheers,

  Arnaud


Bug#943981: Proposal: Switch to cgroupv2 by default

2020-11-27 Thread El boulangero
Hello all, here comes news from the docker package.

I can confirm, if ever there was a need, that docker 19.03 does not work
with `systemd.unified_cgroup_hierarchy=true`.

  $ dpkg -l | grep docker
  ii  docker.io  19.03.13+dfsg3-2  amd64  Linux container runtime

  $ findmnt /sys/fs/cgroup
  TARGET SOURCE  FSTYPE  OPTIONS
  /sys/fs/cgroup cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate

  $ systemctl is-active docker
  active

  $ sudo docker run --rm -it debian
  docker: Error response from daemon: cgroups: cgroup mountpoint does not
exist: unknown.

Docker will support cgroupv2 in the upcoming `20.10` version. They released
a rc1 a few days ago:
- 

I packaged this version and I just uploaded it to experimental:
- 
- <https://buildd.debian.org/status/package.php?p=docker.io&suite=experimental>

I can confirm with a quick test that it seems to work:

  $ dpkg -l | grep docker
  ii  docker.io  20.10.0~rc1+dfsg1-1  amd64  Linux container runtime

  $ sudo docker run --rm -it debian echo 'hello world'
  hello world

Please note that the package I uploaded to experimental is a work in
progress. Please give it a try and report issues.

Whether Docker upstream will release a "stable" version in time for Debian
Bullseye is another question. Wait and see.

Cheers,

  Arnaud


Bug#974857: new upstream version 20.10.0-beta1

2020-11-27 Thread El boulangero
I just finished packaging 20.10.0~rc1. It's still a WIP, but good enough
for upload to experimental. The package should be available in experimental
soon. Here are some links:

- <https://salsa.debian.org/docker-team/docker/-/tree/experimental>
- <https://buildd.debian.org/status/package.php?p=docker.io&suite=experimental>

@Shengjing: For now the package still embeds containerd, as was the case
with docker.io 19.03.x. However it would be nice to remove this copy of
containerd from docker.io.

At the moment, here is the situation:
- Docker 20.10.0-rc1 "upstream" builds against containerd d4e7820, which is
somewhere on the MASTER branch after v1.4.0 (so it's NOT on the 1.4 branch).
- The package I prepared for Debian is built against containerd v1.4.1
though, as I want to use a stable version of containerd.
- In order to build against containerd v1.4.1, I needed to backport 3
patches from the master branch, see:
https://salsa.debian.org/docker-team/docker/-/commit/da70c7e
- These patches are needed to fix a FTBFS in moby/buildkit, which is a
(vendored) build depend of docker

So at the moment, if I want to COMPLETELY remove containerd from docker.io,
it means that either:
- containerd needs to backport these 3 patches, just so that docker.io can
build
- OR find a better solution (for example, revert patches in moby/buildkit
so that it can build against old containerd v1.4, but no, I tried, I don't
think it can work)

If we go for this solution, I think that:
- it can work for debian bullseye (ie. stable), because after it's released
nothing will change much
- however for debian unstable, I think it could be trouble. Containerd 1.5
will be released, then 1.6, and docker.io will lag behind and maybe require
more and more patching to build (unless containerd is very good at
preserving backward compatibility).

There is also another solution, a bit in the middle: docker.io keeps a
vendored copy of containerd (so that docker.io can patch its copy of
containerd as needed), but it uses this copy only at build time. At runtime,
docker can depend on the containerd package from Debian.

My proposal would be:
- try to completely remove containerd from docker.io for now, so that it's
nice and clean for debian stable
- then during maintenance in debian unstable, if it becomes too much of a
mess, revert to vendoring containerd in docker.io, but only as a build dep

What do you think of that? Would you mind looking at
https://salsa.debian.org/docker-team/docker/-/commit/da70c7e, and telling me
whether it makes sense to apply these patches in containerd? Or do you have a
better idea?

(( Of course, this assumes that there will be a docker v20.10 stable
release in time for Bullseye... ))

Cheers,

  Arnaud



On Thu, Nov 19, 2020 at 10:57 AM Shengjing Zhu  wrote:

> 20.10 is still in beta, so it shouldn't be candidate for bullseye. But if
> they release the stable version before bullseye freeze, maybe we should
> update.
>
> Regarding cgroupv2, systemd maintainer has proposed to switch to cgroupv2
> by default 1year ago. Please see #943981. It seems docker is the only
> blocker now.
>
> // send from my mobile device
>
> El boulangero  wrote on Thu, Nov 19, 2020 at 11:24:
>
>> Hi Shengjing,
>>
>> thanks for the message. I agree that we should start packaging docker
>> 20.10.x in experimental.
>>
>> Regarding docker 19.03.x: do you know if it will work at all in bullseye?
>> Right now it works for me, running Debian unstable. I guess it's because
>> both cgroup interfaces are available:
>>
>>   $ grep cgroup /proc/filesystems
>>   nodev cgroup
>>   nodev cgroup2
>>
>> Do you know if Bullseye will ship with both cgroup? Hence docker 19.03.x
>> will work?
>>
>> Personally I would be in favor of sticking to the Docker branch 19.03 for
>> bullseye, rather than shipping a beta that will then never be updated to
>> later point releases, due to Debian policy for the stable suite.
>>
>>
>>
>> On Sun, Nov 15, 2020 at 11:09 PM Shengjing Zhu  wrote:
>>
>>> Package: docker.io
>>> Version: 19.03.13+dfsg1-3
>>> Severity: wishlist
>>> X-Debbugs-Cc: z...@debian.org
>>>
>>> Hi,
>>>
>>> docker has released 20.10.0-beta1 for a while. Not sure the plan for
>>> stable
>>> release.
>>>
>>> But if we want 20.10 in bullseye, I suggest starting to package
>>> 20.10.0-beta1
>>> and upload to experimental. So people have time to test.
>>>
>>> A big improvement in 20.10 is supporting cgroupv2.
>>>
>>> https://github.com/docker/docker-ce/blob/master/VERSION
>>> https://github.com/moby/moby/releases/tag/v20.10.0-beta1
>>> https://github.com/docker/docker-ce/blob/master/CHANGELOG.md
>>>
>>


Bug#974857: new upstream version 20.10.0-beta1

2020-11-18 Thread El boulangero
Hi Shengjing,

thanks for the message. I agree that we should start packaging docker
20.10.x in experimental.

Regarding docker 19.03.x: do you know if it will work at all in bullseye?
Right now it works for me, running Debian unstable. I guess it's because
both cgroup interfaces are available:

  $ grep cgroup /proc/filesystems
  nodev cgroup
  nodev cgroup2
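
(Another quick check, to see which hierarchy is actually mounted on
/sys/fs/cgroup: the following prints 'tmpfs' on the legacy/hybrid layout and
'cgroup2fs' on a unified, cgroup-v2-only setup.)

  $ stat -fc %T /sys/fs/cgroup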

Do you know if Bullseye will ship with both cgroup? Hence docker 19.03.x
will work?

Personally I would be in favor of sticking to the Docker branch 19.03 for
bullseye, rather than shipping a beta that will then never be updated to
later point releases, due to Debian policy for the stable suite.



On Sun, Nov 15, 2020 at 11:09 PM Shengjing Zhu  wrote:

> Package: docker.io
> Version: 19.03.13+dfsg1-3
> Severity: wishlist
> X-Debbugs-Cc: z...@debian.org
>
> Hi,
>
> docker has released 20.10.0-beta1 for a while. Not sure the plan for stable
> release.
>
> But if we want 20.10 in bullseye, I suggest starting to package
> 20.10.0-beta1
> and upload to experimental. So people have time to test.
>
> A big improvement in 20.10 is supporting cgroupv2.
>
> https://github.com/docker/docker-ce/blob/master/VERSION
> https://github.com/moby/moby/releases/tag/v20.10.0-beta1
> https://github.com/docker/docker-ce/blob/master/CHANGELOG.md
>


Bug#971789: FTBFS: Could not determine section for ./.gopath/src/github.com/docker/cli/man/man1/docker-attach.1

2020-10-14 Thread El boulangero
I could solve the issue by patching spf13/cobra as suggested by Tianon. See
[1] for the patch. I just uploaded the package.

Since docker.io has to embed spf13/cobra, I could patch it there. But if
other packages in Debian have the same issue, then maybe this patch should
be applied to golang-github-spf13-cobra-dev.

Also, note that apparently the upstream bug is at
https://github.com/spf13/cobra/issues/1049



[1]:
https://salsa.debian.org/docker-team/docker/-/blob/master/debian/patches/cli-fix-spf13-cobra-man-docs.patch

On Tue, Oct 13, 2020 at 7:27 PM Sascha Steinbiss  wrote:

> Hi,
>
> has anyone taken any action here already? Some of my packages are
> affected by this as well.
>
> Cheers
> Sascha
>


Bug#970525: docker.io: Unable to start minikube/kubernetes containers: unable to find user 0: invalid argument

2020-09-21 Thread El boulangero
Then the issue must lie in this commit:
https://salsa.debian.org/docker-team/docker/-/commit/ad52cffa31359262a8e9d44daddf896c3e063dd2

The docker.io package didn't build anymore, due to runc `1.0.0~rc92` which
landed in debian unstable. Shengjing Zhu came up with the patch to fix
that, but it was not a straightforward patch. The issue could be in this
patch. Or maybe there's more work required to make docker.io 19.03.x work
with latest runc (ie. more patching is needed, not less, sorry :/).

Let me say it another way: when you install docker-ce from Docker's repo,
you also get the containerd.io package, which ships the runc binary. All of
these components are basically provided together by Docker, at versions that
were tested together. In Debian, on the other hand, these components
(containerd, runc) are packaged independently, and they are not the same
versions as the ones shipped by Docker. So sometimes we hit this kind of
issue with the Debian package.

And to be more precise: in Debian we actually bundle containerd within the
docker.io package, because nobody has the bandwidth to try to make docker
19.03.x build against / work with containerd 1.4.x. So we build the version
of containerd that is vendored in the docker source tree, and ship it in
the docker.io package. But runc is NOT bundled in; it is provided
independently by the runc package, ie. version `1.0.0~rc92`.
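
(To check which versions of these separate components are actually in use on
a given system, something like this works:)

  $ dpkg -l docker.io runc
  $ sudo docker version   # the Server section lists the containerd and runc versions in use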

I hope this clarifies a bit what the issue is here.

I'm CCing Shengjing in case he knows more about this issue. I will also try
to have a look on my side.

In the meantime, I guess you can downgrade to docker.io version
19.03.12+dfsg1-3 and maybe use `apt-mark hold` to prevent any further
upgrade.
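
Concretely, something along these lines, assuming the -3 version is still
available to apt (otherwise the .deb can be fetched from snapshot.debian.org):

  $ sudo apt install docker.io=19.03.12+dfsg1-3
  $ sudo apt-mark hold docker.io
  # later, to resume normal upgrades:
  $ sudo apt-mark unhold docker.io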

Cheers,

  Arnaud


On Tue, Sep 22, 2020 at 6:35 AM Tianon Gravi  wrote:

> On Mon, 21 Sep 2020 at 13:48,  wrote:
> > On Sun, Sep 20, 2020 at 09:58:45AM +0700, El boulangero wrote:
> > > Do you know what's special with the `tianon/true` image? On what
> OS/release
> > > is it based?
> >
> > It's an image that contains only a single binary that returns 0. That
> > binary uses no libraries, not even libc.
> >
> > It's intended as an extremely light-weight image for purposes that don't
> > need a whole OS. See, for example,
> >
> https://stackoverflow.com/questions/37120260/configure-docker-compose-override-to-ignore-hide-some-containers
> >
> > It seems that something changed between 19.03.12+dfsg1-3 and
> > 19.03.12+dfsg1-4 that is somehow or other assuming the container
> > contains more infrastructure. If you determine that the bug is upstream,
> > feel free to forward it to them (and, ideally, revert whatever patch was
> > added to 19.03.12+dfsg1-4 that caused the problem in the mean time to
> > avoid breaking other software on the system).
>
> I don't think this is an upstream bug -- I'm using their "docker-ce"
> package (version "5:19.03.12~3-0~debian-buster") on a host I've got,
> and here's the result of some tests there:
>
> $ docker run --rm tianon/true && echo ok
> ok
>
> $ docker run --rm --user 0:0 tianon/true && echo ok
> ok
>
> $ docker run --rm --user 1000:1000 tianon/true && echo ok
> ok
>
> ♥,
> - Tianon
>   4096R / B42F 6819 007F 00F8 8E36  4FD4 036A 9C25 BF35 7DD4
>


Bug#970525: docker.io: Unable to start minikube/kubernetes containers: unable to find user 0: invalid argument

2020-09-19 Thread El boulangero
Hi,

I can indeed reproduce the issue. Note that this doesn't happen with the
`debian` image, only with the image `tianon/true`.

$ sudo docker run --rm -it debian echo ok
ok

Do you know what's special with the `tianon/true` image? On what OS/release
is it based?

Additionally, did you try with the docker package provided by docker.com?
See https://docs.docker.com/engine/install/debian/ . If you hit the same
problem, then you should report the issue upstream. If you don't, then
maybe there's something to investigate in the way we build the package for
Debian.

Cheers,

  Arnaud





On Fri, Sep 18, 2020 at 11:03 PM  wrote:

> Here's a simpler test case:
>
>   $ sudo dpkg -i docker.io_19.03.12+dfsg1-4_amd64.deb
>   (Reading database ... 257350 files and directories currently installed.)
>   Preparing to unpack docker.io_19.03.12+dfsg1-4_amd64.deb ...
>   Unpacking docker.io (19.03.12+dfsg1-4) over (19.03.12+dfsg1-3) ...
>   Setting up docker.io (19.03.12+dfsg1-4) ...
>   insserv: Script sysstat has overlapping Default-Start and Default-Stop
> runlevels (2 3 4 5) and (2 3 4 5). This should be fixed.
>   Processing triggers for systemd (246.5-1) ...
>   Processing triggers for man-db (2.9.3-2) ...
>   $ sudo systemctl restart docker
>   $ sudo docker run tianon/true && echo "ok"
>   docker: Error response from daemon: unable to find user 0: invalid
> argument.
>   ERRO[] error waiting for container: context canceled
>
> versus
>
>   $ sudo dpkg -i docker.io_19.03.12+dfsg1-3_amd64.deb
>   dpkg: warning: downgrading docker.io from 19.03.12+dfsg1-4 to
> 19.03.12+dfsg1-3
>   (Reading database ... 257350 files and directories currently installed.)
>   Preparing to unpack docker.io_19.03.12+dfsg1-3_amd64.deb ...
>   Unpacking docker.io (19.03.12+dfsg1-3) over (19.03.12+dfsg1-4) ...
>   Setting up docker.io (19.03.12+dfsg1-3) ...
>   insserv: Script sysstat has overlapping Default-Start and Default-Stop
> runlevels (2 3 4 5) and (2 3 4 5). This should be fixed.
>   Processing triggers for systemd (246.5-1) ...
>   Processing triggers for man-db (2.9.3-2) ...
>   $ sudo systemctl restart docker
>   $ sudo docker run tianon/true && echo "ok"
>   ok
>


Bug#969227: FTBFS with new runc 1.0.0~rc92 and libcap2 2.43

2020-08-30 Thread El boulangero
The patch test--fix-against-libcap2-2.43.patch actually fails the build for
me, in a sid chroot with libcap 2.43.


=== RUN   TestTarUntarWithXattr
archive_unix_test.go:267: assertion failed: string
"/tmp/docker-test-untar-origin293876876/2 = cap_block_suspend+ep\n" does
not contain "cap_block_suspend=ep": untar should have kept the
'security.capability' xattr
archive_unix_test.go:267: assertion failed: string
"/tmp/docker-test-untar-origin293876876/2 = cap_block_suspend+ep\n" does
not contain "cap_block_suspend=ep": untar should have kept the
'security.capability' xattr


This is not very important anyway, as this patch applies to a test that
requires root, hence is skipped on buildd.

The patch fix-build-against-runc-rc92.patch indeed fixes the build. I'm going
to upload a new version of the docker.io package soon.

Thanks!

  Arnaud


On Sun, Aug 30, 2020 at 8:09 AM Dmitry Smirnov  wrote:

> On Sunday, 30 August 2020 3:01:34 AM AEST Shengjing Zhu wrote:
> > Please see the patches attached.
>
> Thank you very much!
>
>
> > BTW, is there any instruction to work with the docker.io git repo?
> > It seems `gbp buildpackage` or `gbp pq` are hard to use with it.
>
> Something like the following:
>
>   https://salsa.debian.org/onlyjob/notes/-/wikis/bp
>
> Start with "debian" directory, obtain and extract orig tarballs with
> 'origtargz' then build with your preferred method (e.g. pbuilder).
>
> MUT and complex Golang packages are ridiculously difficult to maintain
> with GBP...
>
> See also https://salsa.debian.org/onlyjob/notes/-/wikis/no-gbp
>
> --
> Best wishes,
>  Dmitry Smirnov.
>
> ---
>
> Truth — Something somehow discreditable to someone.
> -- H. L. Mencken, 1949
>
> ---
>
> A study on infectivity of asymptomatic SARS-CoV-2 carriers, concludes weak
> transmission. "The median contact time for patients was four days and that
> for family members was five days."
> -- https://pubmed.ncbi.nlm.nih.gov/32513410/
>


Bug#965123: docker.io: Please apply some upstream patches for podman 2.0

2020-07-17 Thread El boulangero
> The milestone page https://github.com/docker/cli/milestone/25?closed=1
> seems to indicate that docker is close to releasing a 20.10 version.

Actually there are two git repos to look at regarding the 20.10 docker
milestone, my bad.

For reference, here they are:
- https://github.com/moby/moby/milestone/76
- https://github.com/docker/cli/milestone/25


On Fri, Jul 17, 2020 at 1:08 PM El boulangero 
wrote:

> Hey Reinhard,
>
> glad that you could find a way without patching docker :)
>
> I'd prefer not to patch docker for other packages, especially when it's
> not trivial like this.
>
> The milestone page https://github.com/docker/cli/milestone/25?closed=1
> seems to indicate that docker is close to releasing a 20.10 version. That
> would be great news, and would solve this kind of problem.
>
> Best,
>
>   Arnaud
>
>
> On Fri, Jul 17, 2020 at 8:27 AM Reinhard Tartler 
> wrote:
>
>> On 7/16/20 10:06 AM, Reinhard Tartler wrote:
>>
>> > In order to get podman 2.0 to build, I had to backport some changes to
>> > the golang-github-docker-docker-dev package.
>>
>> Actually, never mind, I managed to patch podman 2.0 to not require these
>> backports:
>>
>> https://salsa.debian.org/debian/libpod/-/blob/experimental/debian/patches/old-docker-api.patch
>>
>> it's probably not ideal, but at least allows podman 2.0 to compile. I'll
>> let you decide
>> how to proceed with this bug.
>>
>> -rt
>>
>


Bug#942550: Move /usr/sbin/sendmail symlink to mstmp package

2020-07-17 Thread El boulangero
> msmtp-mta ships a minimal smtp daemon but it is (normally and on purpose)
> disabled by default so there is no daemon listening unless you enable it.

Ah indeed, that was not immediately clear to me. I just installed
msmtp-mta. The service is indeed disabled by default.

Thanks,

  Arnaud

On Fri, Jul 17, 2020 at 2:59 PM Emmanuel Bouthenot 
wrote:

> Hi,
>
> On Thu, Jul 16, 2020 at 09:36:17PM +0700, elboulang...@gmail.com wrote:
> > Is there anything wrong with the approach that Flavio suggests? It
> > also makes sense to me.
> msmtp-mta was created so that msmtp could be installed in parallel of a
> real MTA (see: #396527)
>
> > I'm considering to simply do `ln -sr /usr/bin/msmtp
> > /usr/sbin/sendmail` instead of installing msmtp-mta, because indeed, I
> > don't need a SMTP server, I just want to use msmtp as a drop-in
> > replacement for sendmail.
> That's the goal of msmtp-mta: provides sendmail compatibility without
> having to create symlinks manually.
>
> msmtp-mta ships a minimal smtp daemon but it is (normally and on purpose)
> disabled by default so there is no daemon listening unless you enable it.
>
> Regards,
>
> --
> Emmanuel Bouthenot
>   mail: kolter@{openics,debian}.orggpg: 4096R/0x929D42C3
>   xmpp: kol...@im.openics.org  irc: kolter@{freenode,oftc}
>


Bug#965123: docker.io: Please apply some upstream patches for podman 2.0

2020-07-17 Thread El boulangero
Hey Reinhard,

glad that you could find a way without patching docker :)

I'd prefer not to patch docker for other packages, especially when it's not
trivial like this.

The milestone page https://github.com/docker/cli/milestone/25?closed=1
seems to indicate that docker is close to releasing a 20.10 version. That
would be great news, and would solve this kind of problem.

Best,

  Arnaud


On Fri, Jul 17, 2020 at 8:27 AM Reinhard Tartler 
wrote:

> On 7/16/20 10:06 AM, Reinhard Tartler wrote:
>
> > In order to get podman 2.0 to build, I had to backport some changes to
> > the golang-github-docker-docker-dev package.
>
> Actually, never mind, I managed to patch podman 2.0 to not require these
> backports:
>
> https://salsa.debian.org/debian/libpod/-/blob/experimental/debian/patches/old-docker-api.patch
>
> it's probably not ideal, but at least allows podman 2.0 to compile. I'll
> let you decide
> how to proceed with this bug.
>
> -rt
>