Bug#1069507: marked as pending in dc3dd
Control: tag -1 pending Hello, Bug #1069507 in dc3dd reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/pkg-security-team/dc3dd/-/commit/c848b6a12af7414b95be2de78a70c5e900e23218 Fix FTBFS on armhf/armel (Closes: #1069507) This new FTBFS is due to the size of the time_t type going from 32-bit to 64-bit on the armel and armhf architectures, cf [1]. It seems that getdate.y didn't handle this case at the time it was bundled into the dc3dd source code, but after digging a bit into the gnulib git history, we find that it was fixed back in 2009, cf [2]. Without being familiar with the code of dc3dd, it's hard to be 100% sure that this is the exact right fix; however it doesn't break anything, so I opened a bug upstream to get confirmation that it's indeed the right fix to apply, cf [3]. [1]: https://wiki.debian.org/ReleaseGoals/64bit-time [2]: https://git.savannah.gnu.org/cgit/gnulib.git/commit/?id=a68c9ab3cfc8ac7cf2a709b0c1aa93229f8635e6 [3]: https://sourceforge.net/p/dc3dd/bugs/23/ (this message was generated automatically) -- Greetings https://bugs.debian.org/1069507
Bug#1066634: marked as pending in crack
Control: tag -1 pending Hello, Bug #1066634 in crack reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/pkg-security-team/crack/-/commit/38e36cc1453ffbbcfd6a644114ef9fbe823acb96 Fix implicit-function-declaration errors. (Closes: #1066634) (this message was generated automatically) -- Greetings https://bugs.debian.org/1066634
Bug#1061077: marked as pending in golang-github-gookit-color
Control: tag -1 pending Hello, Bug #1061077 in golang-github-gookit-color reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/go-team/packages/golang-github-gookit-color/-/commit/bc3a6e46f2c0e2eae4cc4ff8ac01ea25ac9f3f9a Set TERM for Ubuntu build infrastructure (Closes: #1061077) (this message was generated automatically) -- Greetings https://bugs.debian.org/1061077
Bug#1066597: marked as pending in stegsnow
Control: tag -1 pending Hello, Bug #1066597 in stegsnow reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/pkg-security-team/stegsnow/-/commit/ea64c733608c7a5ad22c6f74c1f575c527b560e9 Add missing include (Closes: #1066597) (this message was generated automatically) -- Greetings https://bugs.debian.org/1066597
Bug#1068037: marked as pending in mdk4
Control: tag -1 pending Hello, Bug #1068037 in mdk4 reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/pkg-security-team/mdk4/-/commit/c3cb41f86f68603dedef371145af1143ec4a6247 Add patch to fix implicit-function-declaration Closes: #1068037 (this message was generated automatically) -- Greetings https://bugs.debian.org/1068037
Bug#1066489: marked as pending in p0f
Control: tag -1 pending Hello, Bug #1066489 in p0f reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/pkg-security-team/p0f/-/commit/83df5c38d86652c0f4b7501a682ba705dd443d75 Add patch to fix build with -Werror=implicit-function-declaration Closes: #1066489 (this message was generated automatically) -- Greetings https://bugs.debian.org/1066489
Bug#1065969: marked as pending in ike-scan
Control: tag -1 pending Hello, Bug #1065969 in ike-scan reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/pkg-security-team/ike-scan/-/commit/175d75e70040c11bc202b64125f684aa73913152 Add patch to fix acinclude.m4 with -Werror=implicit-function-declaration Closes: #1065969 (this message was generated automatically) -- Greetings https://bugs.debian.org/1065969
Bug#1060459: scalpel: Scalpel not working on Apple Silicon
Hello, and thanks for reaching out! On Thu, 11 Jan 2024 13:44:03 -0600 "Golden G. Richard III" wrote: > I have placed updated source distros for Scalpel 1.60 as well as the > newer (and more powerful) Scalpel 2.02 on GitHub via > https://github.com/nolaforensix/scalpel-1.60 and > https://github.com/nolaforensix/scalpel-2.02. My recommendation is to > rebuild the 1.60 package from the updated source and also consider > adding 2.02. I have updated the package to the latest commit on https://github.com/nolaforensix/scalpel-1.60. I will consider packaging 2.02 when I have a bit of time, this week or next week hopefully. Best, -- Arnaud Rebillout / OffSec / Kali Linux Developer
Bug#1060116: marked as pending in wfuzz
Control: tag -1 pending Hello, Bug #1060116 in wfuzz reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/pkg-security-team/wfuzz/-/commit/e23b42572418d97649f8f097ae01f6f52f593df7 Build-Depends -= python3-future Closes: #1060116 (this message was generated automatically) -- Greetings https://bugs.debian.org/1060116
Bug#1054823: starlette: FTBFS: tests failed
The build failure is due to the library python3-httpx. This library depends on python3-rfc3986, which was upgraded recently [1] and is now causing the breakage. There's a discussion about this issue upstream [2]. If we bump src:httpx to version 0.24.0, the dependency on python3-rfc3986 goes away and the problem is fixed. Looking at src:httpx now: Debian has 0.23.3-1, we need at least 0.24, and upstream is at 0.25.2. $ build-rdeps python3-httpx Reverse Build-depends in main: -- aioxmlrpc asgi-csrf asgi-lifespan dnspython fastapi greenbone-feed-sync nala ormar pontos python-a2wsgi python-authlib python-cobra python-duckpy python-falcon python-gvm python-tiny-proxy python-truststore python-uvicorn sqlmodel starlette Since I'm not involved in Python packaging, I don't feel comfortable doing this upgrade. Best, Arnaud [1]: https://tracker.debian.org/pkg/python-rfc3986 [2]: https://github.com/encode/starlette/discussions/1879 On Fri, 27 Oct 2023 21:48:44 +0200 Lucas Nussbaum wrote: > Source: starlette > Version: 0.31.1-1 > Severity: serious > Justification: FTBFS > Tags: trixie sid ftbfs > User: lu...@debian.org > Usertags: ftbfs-20231027 ftbfs-trixie > > Hi, > > During a rebuild of all packages in sid, your package failed to build > on amd64. > > > Relevant part (hopefully): > > debian/rules build > > dh build --with python3 --buildsystem=pybuild > > dh_update_autotools_config -O--buildsystem=pybuild > > dh_autoreconf -O--buildsystem=pybuild > > dh_auto_configure -O--buildsystem=pybuild > > dh_auto_build -O--buildsystem=pybuild > > I: pybuild plugin_pyproject:110: Building wheel for python3.11 with "build" module > > I: pybuild base:310: python3.11 -m build --skip-dependency-check --no-isolation --wheel --outdir /<>/.pybuild/cpython3_3.11_starlette > > * Building wheel...
> > Successfully built starlette-0.31.1-py3-none-any.whl > > I: pybuild plugin_pyproject:122: Unpacking wheel built for python3.11 with "installer" module > > dh_auto_test -O--buildsystem=pybuild > > I: pybuild base:310: cd /<>/.pybuild/cpython3_3.11_starlette/build; python3.11 -m pytest tests > > = test session starts == > > platform linux -- Python 3.11.6, pytest-7.4.3, pluggy-1.3.0 > > rootdir: /<>/.pybuild/cpython3_3.11_starlette/build > > configfile: pyproject.toml > > plugins: anyio-3.7.0 > > collected 420 items > > > > tests/test__utils.py .. [ 1%] > > tests/test_applications.py .FF.FFF..F.FFF..F. [ 7%] > > tests/test_authentication.py .FF.FF [ 9%] > > tests/test_background.py [ 10%] > > tests/test_concurrency.py .F [ 10%] > > tests/test_config.py [ 11%] > > tests/test_convertors.py FFF [ 12%] > > tests/test_datastructures.py .. [ 17%] > > tests/test_endpoints.py FFF... [ 19%] > > tests/test_exceptions.py .FF.F [ 22%] > > tests/test_formparsers.py ..F [ 31%] > > tests/test_requests.py FFF...FFF.FF.F.FF. [ 41%] > > tests/test_responses.py FF. [ 48%] > > tests/test_routing.py FF..FF.FF.FF.F.F.FF..F.F.F..FFF..FFF.. [ 60%] > > .. [ 60%] > > tests/test_schemas.py .F [ 61%] > > tests/test_staticfiles.py FFF.FFF.FF.. [ 68%] > > tests/test_status.py .. [ 68%] > > tests/test_templates.py ..F.FF [ 71%] > > tests/test_testclient.py F.F..FF.FFF [ 76%] > > tests/test_websockets.py [ 84%] > > tests/middleware/test_base.py FF....FF..FF [ 89%] > > tests/middleware/test_cors.py FFF [ 93%] > > tests/middleware/test_errors.py .F [ 94%] > > tests/middleware/test_gzip.py F [ 95%] -- Arnaud Rebillout / OffSec / Kali Linux Developer
Bug#1036256: golang-github-pin-tftp: FTBFS in testing
On Mon, 18 Sep 2023 09:23:19 -0300 Thiago Andrade wrote: > I'm waiting for this upgrade. After I'll try to upgrade gobuster to > 3.6.0 version. Hello Thiago, the new package was uploaded and is now in unstable. However, after manually triggering the autopkgtests twice, I noticed that a test failed both times, on a different architecture each time (cf. the ppc64el and riscv64 logs at [1]). So it still looks like a test is flaky, or maybe there's a bug in the code. I pinged upstream to see if they can help [2]. Cheers, Arnaud -- [1] https://ci.debian.net/packages/g/golang-github-pin-tftp/ [2] https://github.com/pin/tftp/issues/87
Bug#1036256: golang-github-pin-tftp: FTBFS in testing
I tried to rebuild the package locally: works for me. Tried to run the autopkgtests in GitLab CI: worked [1]. It indeed hints at flaky tests... I noticed that in Debian we package 2.2.0, while upstream is at version 3.0.0, with significant changes. We have only two reverse build-deps: gobuster and ignition. 1) gobuster already uses pin-tftp 3.0 2) ignition seems to use an older version of pin-tftp, but I could rebuild it with pin-tftp 3.0 successfully So I'm going to upload version 3.0 of pin-tftp, and I'll check whether the builds and autopkgtests succeed with this new version. -- [1]: https://gitlab.com/arnaudr/golang-github-pin-tftp/-/jobs/5108371487 -- Arnaud Rebillout / OffSec / Kali Linux Developer
Bug#1029803: command-not-found breaks dist-upgrade bullseye → bookworm
We were also hit by this issue in Kali Linux; it broke apt for a few hours until I could fix it at the repository level. I think it would also have been better if the apt hook ended with "|| true", so that users still have a usable apt whenever cnf misbehaves. I.e. in the file https://salsa.debian.org/jak/command-not-found/-/blob/main/data/50command-not-found: "if /usr/bin/test -w /var/lib/command-not-found/ -a -e /usr/lib/cnf-update-db; then /usr/lib/cnf-update-db > /dev/null || true; fi"; Thanks! -- Arnaud Rebillout / Offensive Security / Kali Linux Developer
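A minimal sketch of the "|| true" guard suggested above, using a stand-in function in place of /usr/lib/cnf-update-db (the real hook is run from apt's configuration, not a script):

```shell
# Stand-in for a misbehaving cnf-update-db: always fails.
update_db() { return 1; }

# Without the guard, the updater's failure would propagate to apt.
# With "|| true", the hook exits 0 and apt stays usable.
if test -d /var/lib; then
    update_db > /dev/null || true
fi
echo "hook exit status: $?"
```

This prints an exit status of 0 even though the updater failed, which is exactly the behavior that would have kept apt usable.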
Bug#978830: about gtkhash ITA
Hello Sheng Wen, On Fri, 29 Jul 2022 09:08:20 +0800 xiao sheng wen (肖盛文) wrote: > Do you want ITA gtkhash package now? > This package had orphan now, see #1015845. > > If you don't want ITA it, I would do ITA. please feel free to ITA this package. I don't have much time to work on it myself. Don't forget to have a look at https://gitlab.com/kalilinux/packages/gtkhash and cherry-pick any commit you need from there! Thanks for reaching out, and thanks for taking care of gtkhash, Regards, -- Arnaud Rebillout / Offensive Security / Kali Linux Developer
Bug#993029: ranger: No preview for mp(e)g files (mime-type: image/x-tga) and fs saturation with .pam files
@l0f4r0 Can you still reproduce this issue? Is it still present in the latest upstream tagged version (i.e. v1.9.3 [1])? Is it still present in the latest version from the git master branch [2]? [1]: https://github.com/ranger/ranger/releases/tag/v1.9.3 [2]: https://github.com/ranger/ranger -- Arnaud Rebillout
Bug#978830: https://gitlab.com/kalilinux/packages/gtkhash
Dear maintainer, I had to update this package for Kali Linux. I updated it to the latest upstream version 1.4, and cherry-picked an upstream patch to fix this FTBFS. You can find the package at https://gitlab.com/kalilinux/packages/gtkhash. Please feel free to cherry-pick all the commits you need from there. Alternatively, if you're not willing to maintain this package anymore, I'm happy to take over its maintenance. Cheers, -- Arnaud Rebillout
Bug#975789: marked as pending in golang-github-xenolf-lego
Control: tag -1 pending Hello, Bug #975789 in golang-github-xenolf-lego reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/go-team/packages/golang-github-xenolf-lego/-/commit/35f13f91f7295148a5509cc58107f92713f728f3 Fix FTBFS against nrdcg-goinwx-dev 0.8 (Closes: #975789) (this message was generated automatically) -- Greetings https://bugs.debian.org/975789
Bug#975584: marked as pending in consul
Control: tag -1 pending Hello, Bug #975584 in consul reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/go-team/packages/consul/-/commit/9650adb23f7363d8e0d5a19522542e3be78b9f72 New upstream release (Closes: #975584) (this message was generated automatically) -- Greetings https://bugs.debian.org/975584
Bug#973084: marked as pending in docker.io
Control: tag -1 pending Hello, Bug #973084 in docker.io reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/docker-team/docker/-/commit/968a84186afa88241c56d501f40dfa97cf20dd8c Fix FTBFS due to go-md2man v2 (Closes: #973084) Import a bunch of upstream patches to handle the transition to go-md2man v2. In Debian at the moment, both golang-github-cpuguy83-go-md2man-dev and golang-github-cpuguy83-go-md2man-v2-dev are available; however, when it comes to the binary, only go-md2man v2 is available. Mixing both versions, i.e. building spf13/cobra against v1 and then using go-md2man v2 to generate the manpages, does not work (the generated man pages are wrong). So the most straightforward solution here is to patch and use go-md2man v2 throughout, both for the build and to generate the man pages. It would be a bit easier if we didn't have to vendor spf13/cobra, and I hope I can drop these vendored bits soon. (this message was generated automatically) -- Greetings https://bugs.debian.org/973084
Bug#971789: marked as pending in docker.io
Control: tag -1 pending Hello, Bug #971789 in docker.io reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/docker-team/docker/-/commit/5166c4c6e75ff4c1d1b8b8ebaa9ae367cc1ee272 Add patch to fix spf13/cobra (tianon) (Closes: #971789) (this message was generated automatically) -- Greetings https://bugs.debian.org/971789
Bug#969227: marked as pending in docker.io
Control: tag -1 pending Hello, Bug #969227 in docker.io reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/docker-team/docker/-/commit/ad52cffa31359262a8e9d44daddf896c3e063dd2 Fix build against runc 1.0.0~rc92 (zhsj) (Closes: #969227) (this message was generated automatically) -- Greetings https://bugs.debian.org/969227
Bug#958312: marked as pending in docker.io
Control: tag -1 pending Hello, Bug #958312 in docker.io reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/docker-team/docker/-/commit/a70a326c6123b4fb96d254ddf0985529ac80e8ad Add upstream patch to fix panic in libnetwork resolver (Closes: #958312) Signed-off-by: Arnaud Rebillout (this message was generated automatically) -- Greetings https://bugs.debian.org/958312
Bug#958312: Confirmation that the patch works
Thanks for the detailed bug report! I'm rebuilding a package to check that it's all good on my side. I'll probably upload a new revision tomorrow. On 4/20/20 8:33 PM, Patrick Georgi wrote: To follow up on my earlier statement that I'd test the patch: the deb sources + that patch don't show the segfault on my system anymore.
Bug#956502: docker: Error response from daemon: no status provided on response: unknown.
For what it's worth, I could rebuild containerd packages from the 1.2 series, starting from [1] and iterating with different containerd versions. Starting from containerd version 1.2.7, the containerd binary produced works. So it looks like we're looking for a change between 1.2.6 and 1.2.7. A notable change between those two versions is the update of the dependency containerd-ttrpc. [1]: https://salsa.debian.org/go-team/packages/containerd/-/commit/c9e2aa545934b326d7f81cb267369772ab51f3f7 On 4/13/20 10:58 AM, Arnaud Rebillout wrote: […]
Bug#956502: docker: Error response from daemon: no status provided on response: unknown.
On 4/12/20 7:49 PM, Chris Lamb wrote: severity 956502 serious thanks Hi, docker: Error response from daemon: no status provided on response: unknown. This, too, happens for me. Downgrading to 19.03.6+dfsg1-2 (from 19.03.6+dfsg1-3) restores all functionality. Marking as RC merely to prevent migration to bullseye (which still has 19.03.6+dfsg1-2). Thanks! So far I had some success by replacing the binary /usr/bin/docker-containerd with the one mentioned by upstream, either one of: - https://download.docker.com/linux/debian/dists/buster/pool/stable/amd64/containerd.io_1.2.10-3_amd64.deb - https://download.docker.com/linux/debian/dists/buster/pool/stable/amd64/containerd.io_1.2.10-2_amd64.deb While the older version available on their mirror shows the same issue that we're hitting: - https://download.docker.com/linux/debian/dists/buster/pool/stable/amd64/containerd.io_1.2.6-3_amd64.deb It's unfortunate that upstream does not provide a source package for containerd.io; they only provide the binary package AFAIK. ~~ For more understanding on this bug ~~ Note that Docker upstream uses two different versions of containerd: - one version of containerd is used to build docker (i.e. it's part of the dependency tree in the vendor/ directory). At the moment they use a commit that is on the master branch, somewhere between `v1.2.0` and `v1.3.0-beta.0`. - another version of containerd is used to build the containerd binary, which is then packaged as `containerd.io` and installed as a dependency of `docker.io`. This package is built from a tagged version, on the `release/1.2` branch. So it's kind of "stable". For Debian packaging, on the other hand, we use only the version mentioned in the vendor tree (so the commit from the master branch), for both purposes: building the docker package, and building the docker-containerd binary that is then installed along with docker. So here I would say that we need to find a patch in containerd, one that should be on the 1.2 branch. That's my best guess.
Bug#933002: docker.io: CVE-2019-13139
Dear Release Team, I'm new to the process of uploading to stable, and I need your guidance on this one. From the buster announce: * The bug you want to fix in stable must be fixed in unstable already (and not waiting in NEW or the delayed queue) My issue with this particular bug (#933002) is that, for now, docker.io doesn't build in unstable. It will take a while before it builds again, as there were changes in the dependency tree. On the other hand, fixing this bug in stable is just a matter of importing the patch from upstream and rebuilding the package. So how am I supposed to handle this? Waiting for docker.io to be fixed and built again in unstable would delay the fix in stable for weeks; I don't think that's a good option. Best regards, Arnaud
Bug#934962: syncthing: FTBFS with 'build cache is disabled by GOCACHE=off, but required as of Go 1.12'
On 8/17/19 1:02 PM, Bruno Kleinert wrote: […] cd _build/src/github.com/syncthing/syncthing && go run script/genassets.go gui lib/auto/gui.files.go build cache is disabled by GOCACHE=off, but required as of Go 1.12 make[1]: *** [debian/rules:54: override_dh_auto_configure] Error 1 make[1]: Leaving directory '/<>' make: *** [debian/rules:34: build] Error 2 dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2 […] You must set GOCACHE to a directory inside the build directory. This is needed since Go 1.12, as the error message hints. For an example see: https://salsa.debian.org/go-team/packages/golang-github-docker-docker-credential-helpers/commit/397ff4dd01216ede71468ccffd7b2d25078a5ff3 Cheers, Arnaud
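The fix sketched as shell, assuming a throwaway cache directory inside the build tree (the directory name is illustrative, not necessarily what syncthing's debian/rules uses):

```shell
# Since Go 1.12, GOCACHE=off is fatal: the build cache is mandatory.
# Point GOCACHE at a writable directory inside the build tree instead,
# so the cache is created fresh for each build and discarded with it.
export GOCACHE="$(pwd)/_build/go-cache"
mkdir -p "$GOCACHE"
echo "build cache at: $GOCACHE"
```

In debian/rules, the equivalent is an `export GOCACHE := $(CURDIR)/...` line (or setting the variable in the relevant dh_auto_* override) before any go invocation.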
Bug#933002: docker.io: CVE-2019-13139
On 8/13/19 12:35 PM, Salvatore Bonaccorso wrote: On Tue, Aug 13, 2019 at 11:31:41AM +0200, Arnaud Rebillout wrote: This is fixed in unstable. Thanks! Oh well, not fixed in unstable yet actually, as the package doesn't build anymore due to changes in the dependency tree... This one is marked as no-dsa. But if something is not yet marked it can as well mean we simply have not assessed it for buster or stretch. Feel free to CC the security team alias when unsure. For getting packages via a point release there are some steps outlined here: https://www.debian.org/doc/manuals/developers-reference/ch05.en.html#upload-stable When involving security some guidelines are given at https://www.debian.org/doc/manuals/developers-reference/ch05.en.html#s5.6.4 and https://www.debian.org/doc/manuals/developers-reference/ch05.en.html#bug-security Thanks for all the references! Arnaud
Bug#933002: docker.io: CVE-2019-13139
This is fixed in unstable. Question from a non-experienced DM: what's the procedure to get this into stable? It seems that I shouldn't file a bug to release.debian.org, and instead get in touch with the security team. What's the workflow? Should I file a bug against the pseudo-package security.debian.org? Or should I just follow up on this bug and CC security? Thanks! Arnaud
Bug#933002: marked as pending in docker.io
Control: tag -1 pending Hello, Bug #933002 in docker.io reported by you has been fixed in the Git repository and is awaiting an upload. You can see the commit message below and you can check the diff of the fix at: https://salsa.debian.org/docker-team/docker/commit/10054348c2e9dcb9a1c26689921adb9cb809f452 Add upstream patch for CVE-2019-13139 (Closes: #933002) Signed-off-by: Arnaud Rebillout (this message was generated automatically) -- Greetings https://bugs.debian.org/933002
Bug#929662: docker.io: CVE-2018-15664 - upstream backport of patch for 18.09
Hi, thanks for reaching out. I applied the patch, that's no problem. However, the new tests that were added make my machine hit the maximum number of processes. Right now I'm configured like this: $ ulimit -u 62688 I will bump this number, but I also want to look a bit more into what's happening and report it upstream, as I don't know whether this is expected behavior or not. You can check out the branch at https://salsa.debian.org/docker-team/docker/tree/arnaudr/cve-2018-15664 and try it yourself if you're curious. In the meantime, I reached out to the release team at #930293 to prepare for the next unblock. So things are in progress, no need for help on this particular issue, but in general, if you're interested in the docker package, help with the packaging is more than welcome :) Arnaud On 6/9/19 9:31 AM, Afif Elghraoui wrote: > Hello, > > Is any help needed on this? Upstream has a backport of the patch for the > 18.09 series (same as Unstable): > > https://github.com/docker/engine/pull/253 > > Hopefully it won't be too much work to incorporate it. > > thanks and regards > Afif >
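Checking and raising the per-user process limit mentioned above can be sketched as follows (the target number is illustrative; raising the limit beyond the hard limit needs privileges, so the attempt is guarded):

```shell
# Show the current soft limit on user processes (nproc).
ulimit -u
# Try to raise it for this shell session; ignore failure if the
# requested value exceeds the hard limit or we lack privileges.
ulimit -u 100000 2>/dev/null || true
# Show the (possibly raised) limit.
ulimit -u
```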
Bug#903635: docker.io: use of iptables-legacy is incompatible with nftables-based iptables
On 5/24/19 4:33 PM, Jonathan Dowland wrote: > Hi Arnaud - sorry I missed your messages until now. No problem :) > > On Fri, May 10, 2019 at 09:03:41AM +0700, Arnaud Rebillout wrote: >> As I mentioned above, there's a discussion with a work in progress to >> fix that upstream: https://github.com/docker/libnetwork/pull/2339 >> >> I don't think it will be ready in time for buster though. So I see two >> solutions going forward: >> >> - 1 Jonathan lower the severity of the bug so that it's not RC. > > I'd rather not do that, because I think RC is the right classification; > *but* I don't feel necessarily (given the circumstances) that docker.io > should be removed from Buster, so can I instead suggest we request that > this bug is ignored for Buster? I think we need to ask the release team > to do that (and tag accordingly) but I'll double-check the procedure. > >> - 2 I import the patch from github, even though it's work in progress. I >> will follow up and update the patch as soon as upstream release a proper >> fix, and it will be included in a point release of buster. > >> If I don't get any feedback from you Jonathan in the following days, >> I'll go for solution number 2 then. > > I bow to your judgement as maintainer as to whether the partial fix is > worth applying on its own. Will the patch in #2339 address the specific > issue of what happens on package install? The thing is, I don't know for sure. After reading the whole conversation, it seems that it does fix the particular bug reported here. But upstream also points out that it's just a partial solution; that's why the patch is sitting there without anyone really merging it. It's unclear whether an improved version of the patch will appear. The bug has been open for a long time already (1.5 years), and upstream doesn't seem to care much. Myself, I don't have a test setup to reproduce the bug and validate that the patch fixes it, and these days I can't afford the time to work on that.
That's why I'm also reluctant to blindly import this patch (even though, after looking at the diff itself, it looks rather trivial). Hence I think it would be safer to go for option 1 and request that the bug be ignored? Unless the reporter of the bug has the time and means to actually test the patch in #2339? For sure I will follow up on this during the buster lifecycle; hopefully upstream will fix it for real, and in any case I'll find the time at some point to properly test this patch. Regards, Arnaud
Bug#903635: docker.io: use of iptables-legacy is incompatible with nftables-based iptables
On 5/22/19 3:32 PM, Afif Elghraoui wrote: > You hadn't Cc'd Jonathan (but I am, now) and I doubt that he's > subscribed to this bug, so he probably never saw these messages. I'm > just checking in here as a concerned maintainer of a reverse-dependency > threatened with autoremoval. Hmm I'm a bit clumsy with the bugtracker, sorry, and thanks for following up :)
Bug#903635: docker.io: use of iptables-legacy is incompatible with nftables-based iptables
On Mon, 29 Apr 2019 07:46:22 +0700 Arnaud Rebillout wrote: > Actually this was fixed upstream lately, and the fix is in Debian > testing already. See > https://github.com/docker/libnetwork/pull/2339#issuecomment-487207550 > > There's still other iptables related bugs, the most outstanding being > #903635. If this bug could be solved, then users could just run docker > with `--iptables=false`. This is discussed upstream in the link above. > > In any case I will close this bug in the next changelog entry. > Hey, this message was intended for bug #921600, sorry for the confusion! So, let's get back on track: this very bug, #903635. As I mentioned above, there's a discussion with a work in progress to fix this upstream: https://github.com/docker/libnetwork/pull/2339 I don't think it will be ready in time for buster though. So I see two solutions going forward: - 1. Jonathan lowers the severity of the bug so that it's not RC. - 2. I import the patch from GitHub, even though it's a work in progress. I will follow up and update the patch as soon as upstream releases a proper fix, and it will be included in a point release of buster. If I don't get any feedback from you Jonathan in the following days, I'll go for solution number 2. Cheers
Bug#903635: docker.io: use of iptables-legacy is incompatible with nftables-based iptables
Actually this was fixed upstream lately, and the fix is in Debian testing already. See https://github.com/docker/libnetwork/pull/2339#issuecomment-487207550 There's still other iptables related bugs, the most outstanding being #903635. If this bug could be solved, then users could just run docker with `--iptables=false`. This is discussed upstream in the link above. In any case I will close this bug in the next changelog entry.
Bug#903635: This is RC; breaks unrelated software
Looks like a fix was proposed at: https://github.com/docker/libnetwork/pull/2339/files However, this fix hasn't received any feedback from upstream so far, and I'm not familiar with the docker codebase myself, so I'm a bit reluctant to import this patch. On the other hand, after a quick look, the patch seems pretty straightforward and harmless. Maybe someone else wants to have a look at this patch and give some feedback? On Wed, 24 Apr 2019 20:04:43 +0100 Jonathan Dowland wrote: > severity 903635 critical > thanks > > Justification: "makes unrelated software on the system (or the whole system) break" > > Installing docker.io changed my FORWARD chain policy to DROP, breaking > networking for unrelated virsh-based VMs that I had installed on the machine at > the time. This matches exactly the text for severity: serious. > > -- > > ⢀⣴⠾⠻⢶⣦⠀ > ⣾⠁⢠⠒⠀⣿⡁ Jonathan Dowland > ⢿⡄⠘⠷⠚⠋⠀ https://jmtd.net > ⠈⠳⣄ > >
Bug#923431: containerd: unsatisfiable build dependencies
On Thu, 28 Feb 2019 08:48:21 +0100 Ralf Treinen wrote:
> containerd cannot satisfy its build-dependencies in sid

The containerd package in sid is quite outdated. I tried to update it
to a decent version in 2018, but in the end I got stuck on circular
dependencies with the docker package, and I had to give up. That's, by
the way, one of the reasons why this package has been unmaintained for
a while: it's nearly impossible to build due to circular dependencies.

This being said, zhjs uploaded a new version to experimental a few days
ago. I have no idea how it builds or what its status is, but I suggest
you look at it if you want to work on the containerd package.

https://salsa.debian.org/go-team/packages/containerd

Regards,
Arnaud
Bug#921156: etcd: CVE-2018-1098 CVE-2018-1099
I looked into this a bit yesterday.

As mentioned in the upstream issue at
https://github.com/etcd-io/etcd/issues/9353, the fix was merged into
the master branch of etcd in March 2018, almost a year ago. The
conversation also mentions that this will be part of the next release,
v3.4. However v3.4 has not been released yet, and I don't think we want
to package a random commit from the master branch of etcd. So if we
want to solve this bug simply by updating the package, we'll have to
wait for v3.4 to be released.

The other alternative is to cherry-pick the patch. If I'm not mistaken,
the fix can be found in this PR:
https://github.com/etcd-io/etcd/pull/9372/files. It's not a trivial
patch, and it's unlikely that we can apply it without modification to
the etcd currently packaged in Debian. I personally can't do that, as I
know nothing about etcd anyway.

I don't know if someone feels up to the task, or has a better idea
about how to solve this.

Cheers,
Arnaud
Bug#920935: Bug #920935 in docker.io marked as pending
Control: tag -1 pending

Hello,

Bug #920935 in docker.io reported by you has been fixed in the Git
repository and is awaiting an upload. You can see the commit message
below and you can check the diff of the fix at:

https://salsa.debian.org/docker-team/docker/commit/ce14a47751a90a86ff02749ece5e1f8891dac1bc

    Install containerd-shim as docker-containerd-shim (Closes: #920935).

    The containerd package in unstable already installs containerd-shim,
    so we can't hijack this name.

    Signed-off-by: Arnaud Rebillout

(this message was generated automatically)

--
Greetings

https://bugs.debian.org/920935
Bug#908055: Processed: fixed 908055 in 17.12.1+dfsg-1
On Mon, 10 Sep 2018 08:18:56 +0200 Salvatore Bonaccorso wrote:
> Hi Dmitry,
>
> On Mon, Sep 10, 2018 at 09:23:59AM +1000, Dmitry Smirnov wrote:
> > On Thursday, 6 September 2018 2:19:24 PM AEST Salvatore Bonaccorso wrote:
> > > > fixed 908055 17.12.1+dfsg-1
> > >
> > > Is this the first version which was using the fixed
> > > golang-github-vbatts-tar-split?
> >
> > Yes it is. I've confirmed that by examining the build log.
>
> Perfect, thanks for confirming!
>
> Regards,
> Salvatore

If I understand correctly, this bug is fixed? Should we close it?
Bug#920935: docker.io: ships /usr/bin/containerd-shim, already provided by containerd
On Wed, 30 Jan 2019 18:23:14 +0100 Andreas Beckmann wrote:
>
> Here is a list of files that are known to be shared by both packages
> (according to the Contents file for sid/amd64, which may be
> slightly out of sync):
>
> /usr/bin/containerd-shim
>
> Cheers,
>
> Andreas
>
> PS: for more information about the detection of file overwrite errors
> of this kind see https://qa.debian.org/dose/file-overwrites.html

Is this a situation where we should add a Conflicts statement, like
`Conflicts: containerd`?
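For reference, declaring such a conflict in `debian/control` would look roughly like the sketch below (illustrative only; the bug was ultimately resolved by renaming the shipped binary instead, so this stanza was never needed):

```
Package: docker.io
Architecture: any
Conflicts: containerd
```

Note that Debian Policy generally discourages using Conflicts to paper over file overlaps between packages that should be co-installable; renaming or diverting the file is usually preferred.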
Bug#920597: Last docker.io update - not start
> On Wednesday, 30 January 2019 7:59:24 PM AEDT Dominique Dumont wrote:
>> But docker fails to start with:
>> $ docker run -ti alpine sh
>> docker: Error response from daemon: failed to start shim: exec:
>> "docker-containerd-shim": executable file not found in $PATH: unknown.

I can run `docker run -ti alpine sh` successfully with 18.09.1+dfsg1-4.
I don't have any `docker-containerd-shim` binary installed anywhere;
docker doesn't seem to need it and uses `containerd-shim` as expected:

$ ls -l /usr/bin/containerd-shim
-rwxr-xr-x 1 root root 5981528 Jan 28 06:16 /usr/bin/containerd-shim

Could it be a matter of `systemctl restart docker`, or something like
that? Or is there anything special in your setup that I should know
about, that could help me reproduce the issue?
Bug#919500: golang-github-grpc-ecosystem-grpc-gateway: build dependency on golang-google-genproto-dev must be bumped to (>= 0.0~git20190111.db91494)
Dear Go team,

I pushed a fix to Salsa; could you please review and upload these
changes?

Thanks!

Regards,
Arnaud
Bug#918502: golang-github-opencontainers-runtime-tools: autopkgtest needs update for new version of golang-github-hashicorp-go-multierror
I pushed some changes to Salsa:

* New upstream version 0.8.0+dfsg
* Update patches
* Add patch to build against hashicorp-multierror 1.0

Can someone upload this package? I'm just a DM and I don't have
permission for that. I CC Dmitry as he's the original author of the
package, but really, any DD from the Go team should feel free to tag a
release and upload.

For more context: there were very few changes upstream between 0.7 and
0.8, and no changes at all in the vendor tree, so the update was rather
trivial. Additionally, we don't actually use this package (yet) to
build any other package, so nothing can possibly break.

Thanks,
Arnaud
Bug#918502: golang-github-opencontainers-runtime-tools: autopkgtest needs update for new version of golang-github-hashicorp-go-multierror
Thanks for the report, I'm looking into it.

The FTBFS is not fixed by updating opencontainers-runtime-tools to the
latest version (0.8). Actually, the issue is that
opencontainers-runtime-tools is using an outdated vendored copy of
hashicorp-go-multierror, and it doesn't build out of the box against
the latest version of multierror. I'm cooking a little patch to make
that work.
Bug#907432: [pkg-go] Bug#907432: golang-github-cloudflare-redoctober: FTBFS (too many arguments in call to activation.Listeners)
On 09/14/2018 04:56 PM, Shengjing Zhu wrote:
> And for redoctober, it only has one line using go-systemd, it's
>
> activation.Listeners(true)
>
> From the go-systemd commit comment, they just removed the arg, and
> assume its default is true.

Oh, I didn't notice it was that trivial. Thanks.
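Given that go-systemd simply dropped the boolean argument (which now defaults to true), the one-line fix for redoctober described above would amount to something like this sketch (the file path and surrounding assignment are assumptions; only the `activation.Listeners` call is taken from the report):

```diff
--- a/server/server.go
+++ b/server/server.go
@@ ... @@
-	listeners, err := activation.Listeners(true)
+	listeners, err := activation.Listeners()
```

Since the removed argument's behavior is now the default, the call is semantically unchanged against go-systemd v17.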
Bug#907432: golang-github-cloudflare-redoctober: FTBFS (too many arguments in call to activation.Listeners)
The only reverse dependency for this package is golang-cfssl (also from
Cloudflare). Upstream never issued a release for redoctober. Hence we
could just consider redoctober part of golang-cfssl's codebase, keep it
embedded in golang-cfssl, and stop packaging it separately like we do
now. That would solve the FTBFS, since the part of redoctober that is
embedded in cfssl doesn't need go-systemd (which is causing the build
failure mentioned in this bug).

Otherwise we wait for upstream to update its embedded copy of
go-systemd, but I have no idea how long that will take. Their copy of
go-systemd is 2 years old, and the last commit on redoctober was 6
months ago, so I don't expect anything to happen anytime soon.

Cheers,
Arnaud
Bug#908055: docker.io: CVE-2017-14992
On 09/05/2018 10:22 PM, Shengjing Zhu wrote:
> Dear docker.io maintainer,
>
> I'm not sure why the Built-Using field in docker.io doesn't contain
> golang-github-vbatts-tar-split. Maybe dh-golang can't deal with the
> docker.io repo. Not sure whose bug it is...

Built-Using is supposed to reflect the build dependencies, isn't it?

A quick look at the docker.io package in experimental: there are around
130 build dependencies in debian/control, and only 33 packages in the
Built-Using field of the binary package... So there seems to be a
problem.

A quick look at consul, another "big" package, also shows surprising
numbers: around 50 build dependencies, and only ... 33 packages again
in Built-Using!
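The kind of counting done above can be sketched with standard tools. This example uses a made-up two-entry Built-Using stanza rather than the real docker.io one:

```shell
# Hypothetical Built-Using stanza, for illustration only
cat > control <<'EOF'
Built-Using: golang-1.10 (= 1.10.1-1), golang-github-vbatts-tar-split (= 0.10.2+ds1-1)
EOF

# Count the source packages listed in the Built-Using field
grep '^Built-Using:' control | sed 's/^Built-Using: *//' | tr ',' '\n' | wc -l
# → 2
```

On a system with the binary package installed, the real field can be read with e.g. `dpkg-query -W -f='${Built-Using}\n' docker.io`.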
Bug#906999: docker.io: FTBFS in buster/sid (too many arguments in call to activation.TLSListeners)
Hi Santiago,

I can't do that as I'm just a contributor, but I'm sure Dmitry, who
maintains the package, will take care of it soon.

Thanks for the feedback,
Arnaud

On 08/23/2018 07:06 PM, Santiago Vila wrote:
> On Thu, Aug 23, 2018 at 08:21:04AM +0700, Arnaud Rebillout wrote:
>> On 08/23/2018 06:14 AM, Santiago Vila wrote:
>>> [... snipped ...]
>>>
>>> # github.com/docker/docker/daemon/listeners
>>> src/github.com/docker/docker/daemon/listeners/listeners_linux.go:65:43: too many arguments in call to activation.TLSListeners
>>> 	have (bool, *tls.Config)
>>> 	want (*tls.Config)
>>> src/github.com/docker/docker/daemon/listeners/listeners_linux.go:67:40: too many arguments in call to activation.Listeners
>>> 	have (bool)
>>> 	want ()
>>> FAIL	github.com/docker/docker/cmd/dockerd [build failed]
>>>
>>> [...]
>>
>> This is due to the package golang-github-coreos-go-systemd: version
>> 17 breaks the docker build. Apparently v17 made it to testing on
>> August 19th, 4 days ago.
>>
>> We have docker 18.06 in experimental, which works with (actually,
>> requires) golang-github-coreos-go-systemd v17.
>
> IMHO, it would be better to upload 18.06 to unstable, as it fixes at
> least one RC bug (this one).
>
> Thanks.
Bug#906999: docker.io: FTBFS in buster/sid (too many arguments in call to activation.TLSListeners)
On 08/23/2018 06:14 AM, Santiago Vila wrote:
> [... snipped ...]
>
> # github.com/docker/docker/daemon/listeners
> src/github.com/docker/docker/daemon/listeners/listeners_linux.go:65:43: too many arguments in call to activation.TLSListeners
> 	have (bool, *tls.Config)
> 	want (*tls.Config)
> src/github.com/docker/docker/daemon/listeners/listeners_linux.go:67:40: too many arguments in call to activation.Listeners
> 	have (bool)
> 	want ()
> FAIL	github.com/docker/docker/cmd/dockerd [build failed]
>
> [...]

This is due to the package golang-github-coreos-go-systemd: version 17
breaks the docker build. Apparently v17 made it to testing on August
19th, 4 days ago.

We have docker 18.06 in experimental, which works with (actually,
requires) golang-github-coreos-go-systemd v17.

Regards,
Arnaud
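For reference, the compiler errors above show exactly what changed in go-systemd v17: the leading boolean argument was dropped from both `activation.Listeners` and `activation.TLSListeners`. A fix on the docker side would look roughly like this sketch (the variable names and the boolean values are assumptions; only the file, line numbers, and new signatures come from the build log):

```diff
--- a/daemon/listeners/listeners_linux.go
+++ b/daemon/listeners/listeners_linux.go
@@ ... @@
-		ls, err := activation.TLSListeners(false, tlsConfig)
+		ls, err := activation.TLSListeners(tlsConfig)
@@ ... @@
-		ls, err := activation.Listeners(false)
+		ls, err := activation.Listeners()
```

As noted in the thread, docker 18.06 in experimental already carries this change upstream, which is why uploading it to unstable resolves the FTBFS.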