Re: git-ubuntu 0.7.3 in edge/candidate/stable channels [Was Re: git-ubuntu 0.7.1 in edge/candidate/stable channels]
On Fri, Mar 2, 2018 at 2:00 PM, Nish Aravamudan wrote:
> On Thu, Mar 1, 2018 at 5:20 PM, Nish Aravamudan wrote:
>> On 28.02.2018 [21:33:38 -0800], Nish Aravamudan wrote:
>>> Hello all!
>>>
>>> I am happy to announce the release of git-ubuntu 0.7.1 to all channels
>>> today.
>>
>> I encountered some unfortunate issues with snapcraft that hit
>> xenial-updates just as I was building (LP: #1752481) and then realized
>> we needed to snap the archive keyrings in order to verify recent archive
>> files (LP: #1752656). I then made a typo fixing the latter, so we're now
>> at 0.7.3. This should build successfully in all channels shortly.
>>
>> Thanks to Sergio Schvezov for the snapcraft assist and Steve Langasek
>> for the keyring suggestion.
>
> Well, it didn't quite go as planned. I found another bug (so we're at
> 0.7.4) and also the snapcraft fix didn't resolve all the issues. So
> I'm temporarily using a PPA that reverts the xenial-updates of
> snapcraft back to 2.35.
>
> Kyle Fazzari is working on a proper fix, but if it's not able to get
> done soon, we might back out the snapcraft SRU in xenial-updates.

For completeness, we ended up backing out the snapcraft SRU via a
re-upload of 2.35 to xenial-updates. The snapcraft team is working on
fixes to the bugs found.

Meanwhile, a mass reimport ran last week and the existing whitelist plus
1% of main was reimported. This resulted in 905 successful imports and 5
failed imports. The latter 5 will presumably stay broken for a while, as
we focus on upping our phasing before resolving those issues (LP:
#1754898). I am currently working on an issue with our default
repository repointing script.

Additionally, roughly 150 manual imports remain in the ~usd-import-team
space, but those source packages were never added to the whitelist. They
need to be reimported as well (and added to the whitelist). I am waiting
to ensure that there are no pending MPs for those source packages before
reimporting.

Once that is done, we will start the 'keep-up' script again, which will
spend some time catching up the repositories based upon Launchpad
publishes since the mass reimport. Finally, we will look to bump our
phasing to 2% (of main), to gauge the disk usage as the phasing
increases.

Thanks for your patience as we continue to work through this process.

-Nish

--
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel
New Qt5 Uploader - Simon Quigley
Hello everyone,

Please congratulate Simon on his successful Ubuntu Qt5 Uploaders
application! Being part of the team, tsimonq2 now has upload rights to
Qt5 packages in the Ubuntu archives.

Cheers!

--
Łukasz 'sil2100' Zemczak
Foundations Team
lukasz.zemc...@canonical.com
www.canonical.com
Re: zstd compression for packages
On Mon, Mar 12, 2018 at 03:36:11PM +0100, Julian Andres Klode wrote:
> Acknowledged. I don't think we want to go ahead without dpkg upstream
> blessing anyway. On the APT side, we don't maintain Ubuntu-only branches,
> so if we get a go-ahead it would land in Debian immediately too.

Good.

> I had a quick look at Launchpad and I think it only needs a backport of
> the APT commits to an older branch (or an upgrade to bionic, but that
> sounds like more work :D) but I might be wrong.

We'll probably also need a dpkg backport (preferably in xenial-updates)
and some small changes to lib/lp/archiveuploader/. It's not hugely
difficult but will need a bit of work.

--
Colin Watson [cjwat...@ubuntu.com]
Re: zstd compression for packages
Hi Daniel,

On Mon, Mar 12, 2018 at 2:11 PM, Daniel Axtens wrote:
> Hi,
>
> I looked into compression algorithms a bit in a previous role, and to be
> honest I'm quite surprised to see zstd proposed for package storage. zstd,
> according to its own github repo, is "targeting real-time compression
> scenarios". It's not really designed to be run at its maximum compression
> level, it's designed to really quickly compress data coming off the wire -
> things like compressing log files being streamed to a central server, or I
> guess writing random data to btrfs where speed is absolutely an issue.
>
> Is speed of decompression a big user concern relative to file size? I admit
> that I am biased - as an Australian and with the crummy internet that my
> location entails, I'd save much more time if the file was 6% smaller and
> took 10% longer to decompress than the other way around.

Yes, decompression speed is a big issue in some cases. Consider
provisioning cloud/container instances, where plenty of packages need to
be installed right after the image boots, and saving seconds matters a
lot. The zstd format also allows parallel decompression, which can make
package installation even quicker in wall-clock time.

Internet connection speed increases by ~50% per year on average
(according to this study [3], which matches my experience), which works
out to more than 6% every two months.

> Did you consider Google's Brotli?

We did consider it, but it was less promising.

Cheers,
Balint

[3] http://xahlee.info/comp/bandwidth.html

> Regards,
> Daniel
>
> On Mon, Mar 12, 2018 at 9:58 PM, Julian Andres Klode wrote:
>>
>> On Mon, Mar 12, 2018 at 11:06:11AM +0100, Julian Andres Klode wrote:
>> > Hey folks,
>> >
>> > We had a coding day in Foundations last week and Balint and Julian added
>> > support for zstd compression to dpkg [1] and apt [2].
>> >
>> > [1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=892664
>> > [2] https://salsa.debian.org/apt-team/apt/merge_requests/8
>> >
>> > Zstd is a compression algorithm developed by Facebook that offers far
>> > higher decompression speeds than xz or even gzip (at roughly constant
>> > speed and memory usage across all levels), while offering 19 compression
>> > levels ranging from roughly comparable to gzip in size (but much faster)
>> > to 19, which is roughly comparable to xz -6:
>> >
>> > In our configuration, we run zstd at level 19. For bionic main amd64,
>> > this causes a size increase of about 6%, from roughly 5.6 to 5.9 GB.
>> > Installs speed up by about 10%, or, if eatmydata is involved, by up to
>> > 40% - user time generally by about 50%.
>> >
>> > Our implementations for apt and dpkg support multiple frames as used by
>> > pzstd, so packages can be compressed and decompressed in parallel
>> > eventually.
>>
>> More links:
>>
>> PPA: https://launchpad.net/~canonical-foundations/+archive/ubuntu/zstd-archive
>> APT merge request: https://salsa.debian.org/apt-team/apt/merge_requests/8
>> dpkg patches: https://bugs.debian.org/892664
>>
>> I'd also like to talk a bit more about libzstd itself: The package is
>> currently in universe, but btrfs recently gained support for zstd,
>> so we already have a copy in the kernel and we need to MIR it anyway
>> for btrfs-progs.
>>
>> --
>> debian developer - deb.li/jak | jak-linux.org - free software dev
>> ubuntu core developer i speak de, en

--
Balint Reczey
Ubuntu & Debian Developer
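[Editorial aside: the parallel, multi-frame decompression Balint mentions can be tried today with the pzstd tool shipped alongside zstd. A minimal sketch, assuming pzstd is installed; the file names are illustrative, not from the thread:]

```shell
# Generate some sample data (illustrative; any large file works).
head -c 10M /dev/zero > sample.bin

# Compress with 4 worker threads at level 19; pzstd splits the input into
# independent frames, which is what allows parallel decompression later.
pzstd -p 4 -19 -k sample.bin        # writes sample.bin.zst

# Decompress in parallel to stdout and verify the round trip.
pzstd -d -c -p 4 sample.bin.zst | cmp - sample.bin && echo "round trip OK"
```

Plain `zstd` decompresses a multi-frame file too; only the frame splitting done at compression time is pzstd-specific.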
Re: zstd compression for packages
On Mon, Mar 12, 2018 at 02:19:18PM +, Colin Watson wrote:
> On Mon, Mar 12, 2018 at 10:02:49AM -0400, Jeremy Bicha wrote:
> > On Mon, Mar 12, 2018 at 6:06 AM, Julian Andres Klode wrote:
> > > We are considering requesting a FFe for that - the features are not
> > > invasive, and it allows us to turn it on by default in 18.10.
> >
> > What does Debian's dpkg maintainer think?
>
> FWIW, I'd be quite reluctant to add support for this to Launchpad until
> it's landed in Debian dpkg/apt; a future incompatibility would be very
> painful to deal with.

Acknowledged. I don't think we want to go ahead without dpkg upstream
blessing anyway. On the APT side, we don't maintain Ubuntu-only branches,
so if we get a go-ahead it would land in Debian immediately too.

I had a quick look at Launchpad and I think it only needs a backport of
the APT commits to an older branch (or an upgrade to bionic, but that
sounds like more work :D) but I might be wrong.

I think the format is versioned and there might be new versions
eventually, so we might eventually have to take care to only generate
files in an old format, but xz has the same problem.

--
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer i speak de, en
Re: zstd compression for packages
On Mon, Mar 12, 2018 at 10:02:49AM -0400, Jeremy Bicha wrote:
> On Mon, Mar 12, 2018 at 6:06 AM, Julian Andres Klode wrote:
> > We are considering requesting a FFe for that - the features are not
> > invasive, and it allows us to turn it on by default in 18.10.
>
> What does Debian's dpkg maintainer think?

FWIW, I'd be quite reluctant to add support for this to Launchpad until
it's landed in Debian dpkg/apt; a future incompatibility would be very
painful to deal with.

--
Colin Watson [cjwat...@ubuntu.com]
Re: zstd compression for packages
On Mon, Mar 12, 2018 at 03:05:13PM +0100, Julian Andres Klode wrote:
> On Mon, Mar 12, 2018 at 01:49:42PM +, Robie Basak wrote:
> > On Mon, Mar 12, 2018 at 11:06:11AM +0100, Julian Andres Klode wrote:
> > > We are considering requesting a FFe for that - the features are not
> > > invasive, and it allows us to turn it on by default in 18.10.
> >
> > libzstd has only been stable in the archive since Artful. We had to SRU
> > fixes to Xenial because it was added to Debian (and outside
> > experimental) before the format was stable upstream.
> >
> > Of all the general uses of a new compression algorithm, I'd expect our
> > distribution archival case to be near the end of a develop/test/rollout
> > cycle. Are you sure we want to rely on it so completely by switching to
> > it by default in 18.10?
>
> So the goal is to have it in 20.04, which means we should ship it now, so
> we can do upgrades from 18.04 to it. Whether we change the default in
> 18.10 or not, I don't know, but:

Sure. I don't have any objection to making it available now for future
use (apart from the usual post-FF required care etc., which the release
team will decide upon).

I can understand why it may be a goal for 20.04, but I assume that's
subject to it having proven itself by then. So while it makes sense to
start using it by default in 18.10 to flush out any issues, that also
presupposes that it will prove itself. A tough call, I think, and not
one I have enough information to have an opinion on. I mention it only
to point out that the other side of the trade-off exists.

> IMO, better 18.10 than later. We should gain experience with it,
> and if it turns out to be problematic, we can switch the default back
> and do no-change rebuilds for 20.04 :)
>
> That said, if we have problems, I expect people using zstd in filesystems
> (btrfs) or backup tools (borg) to be worse off.

I think there are certain classes of possible problems for which we will
be worse off than the users in the use cases you point out. The
publication of our archives is somewhat more permanent, and we can't,
for example, restore from backup using a different compression to repair
our filesystem. It's providing an *automatic* and seamless upgrade path
for affected Ubuntu users that could prove difficult. In some other
cases where users have individually opted in, a seam isn't necessarily a
problem; but it can be for us.

Robie
Re: zstd compression for packages
On Mon, Mar 12, 2018 at 10:02:49AM -0400, Jeremy Bicha wrote:
> On Mon, Mar 12, 2018 at 6:06 AM, Julian Andres Klode wrote:
> > We are considering requesting a FFe for that - the features are not
> > invasive, and it allows us to turn it on by default in 18.10.
>
> What does Debian's dpkg maintainer think?

We are waiting to hear from him in https://bugs.debian.org/892664 - last
time we chatted on IRC, he was open to investigating zstd.

--
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer i speak de, en
Re: zstd compression for packages
On Mon, Mar 12, 2018 at 09:30:16AM -0400, Neal Gompa wrote:
> On Mon, Mar 12, 2018 at 9:11 AM, Daniel Axtens wrote:
> > Hi,
> >
> > I looked into compression algorithms a bit in a previous role, and to be
> > honest I'm quite surprised to see zstd proposed for package storage. zstd,
> > according to its own github repo, is "targeting real-time compression
> > scenarios". It's not really designed to be run at its maximum compression
> > level, it's designed to really quickly compress data coming off the wire -
> > things like compressing log files being streamed to a central server, or I
> > guess writing random data to btrfs where speed is absolutely an issue.
> >
> > Is speed of decompression a big user concern relative to file size? I admit
> > that I am biased - as an Australian and with the crummy internet that my
> > location entails, I'd save much more time if the file was 6% smaller and
> > took 10% longer to decompress than the other way around.
> >
> > Did you consider Google's Brotli?
>
> I can't speak for Julian's decision for zstd, but I can say that in
> the RPM world, we picked zstd because we wanted a better gzip.
> Compression and decompression times are rather long with xz, and the
> ultra-high-efficiency from xz is not as necessary as it used to be,
> with storage becoming much cheaper than it was nearly a decade ago
> when most distributions switched to LZMA/XZ payloads.

I want zstd -19 as an xz replacement due to its higher decompression
speed, and it also requires about 1/3 less memory when compressing,
which should be nice for _huge_ packages.

> I don't know for sure if Debian packaging allows this, but for RPM, we
> switch to xz payloads when the package is sufficiently large, in which
> case the compression/decompression speed isn't really going to matter
> (e.g. game data). So while most packages may not necessarily be using
> xz payloads, quite a few would. That said, we've been using xz for all
> packages for a few years now, and the main drag is the time it takes
> to wrap everything up to make a package.

We could. But I don't think it matters much.

> As for Google's Brotli, the average compression ratio isn't as high as
> zstd's, and it is markedly slower. With these factors in mind, the obvious
> choice was zstd.
>
> (As an aside, rpm in sid/buster and bionic doesn't have zstd support
> enabled... Is there something that can be done to make that happen?)

I'd open a wishlist bug in the Debian bug tracker if I were you. If we
were to introduce a delta, we'd have to maintain it...

--
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer i speak de, en
Re: zstd compression for packages
On Mon, Mar 12, 2018 at 01:49:42PM +, Robie Basak wrote:
> On Mon, Mar 12, 2018 at 11:06:11AM +0100, Julian Andres Klode wrote:
> > We are considering requesting a FFe for that - the features are not
> > invasive, and it allows us to turn it on by default in 18.10.
>
> libzstd has only been stable in the archive since Artful. We had to SRU
> fixes to Xenial because it was added to Debian (and outside
> experimental) before the format was stable upstream.
>
> Of all the general uses of a new compression algorithm, I'd expect our
> distribution archival case to be near the end of a develop/test/rollout
> cycle. Are you sure we want to rely on it so completely by switching to
> it by default in 18.10?

So the goal is to have it in 20.04, which means we should ship it now,
so we can do upgrades from 18.04 to it. Whether we change the default in
18.10 or not, I don't know, but:

IMO, better 18.10 than later. We should gain experience with it, and if
it turns out to be problematic, we can switch the default back and do
no-change rebuilds for 20.04 :)

That said, if we have problems, I expect people using zstd in
filesystems (btrfs) or backup tools (borg) to be worse off.

--
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer i speak de, en
Re: zstd compression for packages
On Mon, Mar 12, 2018 at 6:06 AM, Julian Andres Klode wrote:
> We are considering requesting a FFe for that - the features are not
> invasive, and it allows us to turn it on by default in 18.10.

What does Debian's dpkg maintainer think?

Thanks,
Jeremy Bicha
Re: [17.10] libssl-dev 1.0.2g is 1.0.0
Hello,

On 11 March 2018 at 09:05, Frank Rehberger wrote:
> Hi
>
> distribution : artful (ubuntu 17.10)
> package libssl-dev [1.0.2g]
>
> the package libssl-dev claims to be 1.0.2g, but it seems to be older
> header-version 1.0.0, as it lacks the constant
>
> ./crypto/x509/x509_vfy.h:# define X509_V_ERR_INVALID_CALL 65
>
> It seems libssl binary package is also 1.0.0

Ubuntu has patched openssl1.0 to retain ABI compatibility with 1.0.0 by
introducing stub functions, so software compiled against 1.0.0 does not
need to be recompiled and remains usable with newer Ubuntu releases that
ship the 1.0.2 series of OpenSSL. Thus the version numbers you see are
correct - a 1.0.2g release with the 1.0.0 ABI.

About the following defines:

    X509_V_ERR_INVALID_CALL 65
    X509_V_ERR_STORE_LOOKUP 66

They appear to have been introduced upstream in commit
5553a12735e11bc9aa28727afe721e7236788aab on the OpenSSL_1_0_2-stable
branch, which is shipped in:

    $ git tag --contains 5553a12735e11bc9aa28727afe721e7236788aab
    OpenSSL_1_0_2i
    OpenSSL_1_0_2j
    OpenSSL_1_0_2k
    OpenSSL_1_0_2l
    OpenSSL_1_0_2m
    OpenSSL_1_0_2n

1.0.2g pre-dates the above, and thus these defines are not available.
Bionic, to become 18.04 LTS, ships openssl1.0 1.0.2n and has the
above-mentioned defines.

W.r.t. security updates - Ubuntu does not bump upstream version numbers
to rectify security issues; instead, all security vulnerabilities are
patched as distro patches, and a USN (Ubuntu Security Notice) is issued
referencing the full package upload numbers and the matching CVEs these
fix. Please see https://usn.ubuntu.com/ for more details.

> ii  libssl-dev:amd64   1.0.2g-1ubuntu13.3  amd64  Secure Sockets Layer toolkit - development files
> ii  libssl-doc         1.0.2g-1ubuntu13.3  all    Secure Sockets Layer toolkit - development documentation
> ii  libssl1.0.0:amd64  1.0.2g-1ubuntu13.3  amd64  Secure Sockets Layer toolkit - shared libraries
>
> This could be a security issue, shipping a library 1.0.0 claiming to be
> 1.0.2g

--
Regards,
Dimitri.
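[Editorial aside: the header/ABI situation Dimitri describes is easy to check locally. A minimal sketch, assuming a Debian/Ubuntu system with libssl-dev installed; the header path matches Ubuntu's openssl packaging, and the constant is the one from Frank's report:]

```shell
# What does the installed libssl-dev claim to be?
dpkg-query -W -f='${Package} ${Version}\n' libssl-dev 2>/dev/null \
    || echo "libssl-dev not installed"

# Does the shipped header define the constant added upstream in 1.0.2i?
if grep -q 'X509_V_ERR_INVALID_CALL' /usr/include/openssl/x509_vfy.h 2>/dev/null
then
    echo "present: headers are 1.0.2i or later"
else
    echo "absent: pre-1.0.2i headers (e.g. 1.0.2g, as on artful)"
fi
```

On artful the second check reports the constant absent, consistent with 1.0.2g sources; on bionic (1.0.2n) it is present.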
Re: zstd compression for packages
On Mon, Mar 12, 2018 at 11:06:11AM +0100, Julian Andres Klode wrote:
> We are considering requesting a FFe for that - the features are not
> invasive, and it allows us to turn it on by default in 18.10.

libzstd has only been stable in the archive since Artful. We had to SRU
fixes to Xenial because it was added to Debian (and outside
experimental) before the format was stable upstream.

Of all the general uses of a new compression algorithm, I'd expect our
distribution archival case to be near the end of a develop/test/rollout
cycle. Are you sure we want to rely on it so completely by switching to
it by default in 18.10?

Robie
[17.10] libssl-dev 1.0.2g is 1.0.0
Hi

distribution: artful (ubuntu 17.10)
package: libssl-dev [1.0.2g]

the package libssl-dev claims to be 1.0.2g, but it seems to be the older
header-version 1.0.0, as it lacks the constant

    ./crypto/x509/x509_vfy.h:# define X509_V_ERR_INVALID_CALL 65

It seems the libssl binary package is also 1.0.0:

    ii  libssl-dev:amd64   1.0.2g-1ubuntu13.3  amd64  Secure Sockets Layer toolkit - development files
    ii  libssl-doc         1.0.2g-1ubuntu13.3  all    Secure Sockets Layer toolkit - development documentation
    ii  libssl1.0.0:amd64  1.0.2g-1ubuntu13.3  amd64  Secure Sockets Layer toolkit - shared libraries

This could be a security issue, shipping a library 1.0.0 claiming to be
1.0.2g
Re: zstd compression for packages
Hi,

I looked into compression algorithms a bit in a previous role, and to be
honest I'm quite surprised to see zstd proposed for package storage.
zstd, according to its own github repo, is "targeting real-time
compression scenarios". It's not really designed to be run at its
maximum compression level, it's designed to really quickly compress data
coming off the wire - things like compressing log files being streamed
to a central server, or I guess writing random data to btrfs where speed
is absolutely an issue.

Is speed of decompression a big user concern relative to file size? I
admit that I am biased - as an Australian and with the crummy internet
that my location entails, I'd save much more time if the file was 6%
smaller and took 10% longer to decompress than the other way around.

Did you consider Google's Brotli?

Regards,
Daniel

On Mon, Mar 12, 2018 at 9:58 PM, Julian Andres Klode
<julian.kl...@canonical.com> wrote:
> On Mon, Mar 12, 2018 at 11:06:11AM +0100, Julian Andres Klode wrote:
> > Hey folks,
> >
> > We had a coding day in Foundations last week and Balint and Julian added
> > support for zstd compression to dpkg [1] and apt [2].
> >
> > [1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=892664
> > [2] https://salsa.debian.org/apt-team/apt/merge_requests/8
> >
> > Zstd is a compression algorithm developed by Facebook that offers far
> > higher decompression speeds than xz or even gzip (at roughly constant
> > speed and memory usage across all levels), while offering 19 compression
> > levels ranging from roughly comparable to gzip in size (but much faster)
> > to 19, which is roughly comparable to xz -6:
> >
> > In our configuration, we run zstd at level 19. For bionic main amd64,
> > this causes a size increase of about 6%, from roughly 5.6 to 5.9 GB.
> > Installs speed up by about 10%, or, if eatmydata is involved, by up to
> > 40% - user time generally by about 50%.
> >
> > Our implementations for apt and dpkg support multiple frames as used by
> > pzstd, so packages can be compressed and decompressed in parallel
> > eventually.
>
> More links:
>
> PPA: https://launchpad.net/~canonical-foundations/+archive/ubuntu/zstd-archive
> APT merge request: https://salsa.debian.org/apt-team/apt/merge_requests/8
> dpkg patches: https://bugs.debian.org/892664
>
> I'd also like to talk a bit more about libzstd itself: The package is
> currently in universe, but btrfs recently gained support for zstd,
> so we already have a copy in the kernel and we need to MIR it anyway
> for btrfs-progs.
>
> --
> debian developer - deb.li/jak | jak-linux.org - free software dev
> ubuntu core developer i speak de, en
Re: zstd compression for packages
On Mon, Mar 12, 2018 at 11:06:11AM +0100, Julian Andres Klode wrote:
> Hey folks,
>
> We had a coding day in Foundations last week and Balint and Julian added
> support for zstd compression to dpkg [1] and apt [2].
>
> [1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=892664
> [2] https://salsa.debian.org/apt-team/apt/merge_requests/8
>
> Zstd is a compression algorithm developed by Facebook that offers far
> higher decompression speeds than xz or even gzip (at roughly constant
> speed and memory usage across all levels), while offering 19 compression
> levels ranging from roughly comparable to gzip in size (but much faster)
> to 19, which is roughly comparable to xz -6:
>
> In our configuration, we run zstd at level 19. For bionic main amd64,
> this causes a size increase of about 6%, from roughly 5.6 to 5.9 GB.
> Installs speed up by about 10%, or, if eatmydata is involved, by up to
> 40% - user time generally by about 50%.
>
> Our implementations for apt and dpkg support multiple frames as used by
> pzstd, so packages can be compressed and decompressed in parallel
> eventually.

More links:

PPA: https://launchpad.net/~canonical-foundations/+archive/ubuntu/zstd-archive
APT merge request: https://salsa.debian.org/apt-team/apt/merge_requests/8
dpkg patches: https://bugs.debian.org/892664

I'd also like to talk a bit more about libzstd itself: The package is
currently in universe, but btrfs recently gained support for zstd, so we
already have a copy in the kernel and we need to MIR it anyway for
btrfs-progs.

--
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer i speak de, en
zstd compression for packages
Hey folks,

We had a coding day in Foundations last week and Balint and Julian added
support for zstd compression to dpkg [1] and apt [2].

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=892664
[2] https://salsa.debian.org/apt-team/apt/merge_requests/8

Zstd is a compression algorithm developed by Facebook that offers far
higher decompression speeds than xz or even gzip (at roughly constant
speed and memory usage across all levels), while offering 19 compression
levels ranging from roughly comparable to gzip in size (but much faster)
to 19, which is roughly comparable to xz -6:

In our configuration, we run zstd at level 19. For bionic main amd64,
this causes a size increase of about 6%, from roughly 5.6 to 5.9 GB.
Installs speed up by about 10%, or, if eatmydata is involved, by up to
40% - user time generally by about 50%.

Our implementations for apt and dpkg support multiple frames as used by
pzstd, so packages can be compressed and decompressed in parallel
eventually.

We are considering requesting a FFe for that - the features are not
invasive, and it allows us to turn it on by default in 18.10.

Thanks,
Balint and Julian

Raw Measurements
================

All measurements were performed on a cloud instance of bionic, in a
basic bionic schroot with overlay, on an SSD. In each pair of timings
below, the first line is before (xz) and the second is after (zstd).

Kernel install (eatmydata, perf report, time spent in compression)
------------------------------------------------------------------
Before: 54.79% liblzma.so.5.2.2
After:  11.04% libzstd.so.1.3.3

Kernel install (eatmydata)
--------------------------
12.49user 3.04system 0:12.57elapsed 123%CPU (0avgtext+0avgdata 68720maxresident)k
0inputs+1056712outputs (0major+159306minor)pagefaults 0swaps
5.60user 2.33system 0:07.07elapsed 112%CPU (0avgtext+0avgdata 81388maxresident)k
0inputs+1108720outputs (0major+171171minor)pagefaults 0swaps

firefox
-------
8.80user 3.57system 0:37.17elapsed 33%CPU (0avgtext+0avgdata 25260maxresident)k
8inputs+548024outputs (0major+376614minor)pagefaults 0swaps
4.52user 3.30system 0:33.14elapsed 23%CPU (0avgtext+0avgdata 25152maxresident)k
0inputs+544560outputs (0major+386394minor)pagefaults 0swaps

firefox eatmydata
-----------------
8.79user 2.87system 0:12.43elapsed 93%CPU (0avgtext+0avgdata 25416maxresident)k
0inputs+548016outputs (0major+384193minor)pagefaults 0swaps
4.24user 2.57system 0:08.54elapsed 79%CPU (0avgtext+0avgdata 25280maxresident)k
0inputs+544584outputs (0major+392117minor)pagefaults 0swaps

libreoffice
-----------
22.51user 7.65system 1:28.34elapsed 34%CPU (0avgtext+0avgdata 64856maxresident)k
0inputs+1376160outputs (0major+1018794minor)pagefaults 0swaps
11.34user 6.66system 1:18.04elapsed 23%CPU (0avgtext+0avgdata 64676maxresident)k
16inputs+1370112outputs (0major+1024989minor)pagefaults 0swaps

libreoffice eatmydata
---------------------
22.41user 6.82system 0:27.45elapsed 106%CPU (0avgtext+0avgdata 64772maxresident)k
0inputs+1376160outputs (0major+1035581minor)pagefaults 0swaps
10.86user 5.78system 0:17.70elapsed 94%CPU (0avgtext+0avgdata 64800maxresident)k
0inputs+1370112outputs (0major+1043637minor)pagefaults 0swaps

--
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer i speak de, en
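[Editorial aside: the size-vs-speed trade-off discussed in this thread is easy to reproduce on any machine with the xz and zstd command-line tools installed. A minimal sketch; the sample file is illustrative - for meaningful numbers, use a real unpacked .deb payload:]

```shell
# Generate a compressible sample file (illustrative).
head -c 10M /dev/zero > sample.bin

# xz -6 (roughly the level zstd -19 is compared against) vs zstd -19.
time xz   -6  -k -f sample.bin      # writes sample.bin.xz
time zstd -19 -k -f sample.bin      # writes sample.bin.zst

# Compare the resulting sizes...
ls -l sample.bin.xz sample.bin.zst

# ...and the decompression times, where zstd's advantage shows.
time xz   -d -c sample.bin.xz  > /dev/null
time zstd -d -c sample.bin.zst > /dev/null
```

On real package data, expect the .zst file to be somewhat larger (the thread quotes about 6% for bionic main) but noticeably faster to decompress.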
ntfs-3g issue caused by Windows 10 creators update
Hello.

I faced a problem with the latest available (in Ubuntu Xenial) ntfs-3g
package. It was unable to mount a large (3 TB) NTFS volume after the
volume had been used in Windows 10 Creators Update. The error message is
"$MFTMirr does not match $MFT (record 28)" and it states that it is
necessary to perform chkdsk /f on that volume. But even after this
procedure the problem still appears.

I also have another NTFS volume, whose size is 240 GB, and it mounts
successfully with the current ntfs-3g package. Maybe the problem appears
only with very large volumes.

So I found this patch made for Fedora:
https://bugzilla.redhat.com/attachment.cgi?id=1370318 and applied it to
the deb-src version of ntfs-3g, and now it finally works. Can we make
this patch part of the officially supported ntfs-3g package for Ubuntu
Xenial and later?

--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss
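[Editorial aside: the "applied it to the deb-src version" rebuild described above follows the usual source-package workflow. A sketch with the package-fetching steps shown as comments (they need deb-src entries in sources.list and build tools); the runnable part demonstrates the core patch step on a stand-in file and diff, not the actual Fedora patch:]

```shell
# The real workflow (commented out; names of the downloaded patch file
# and unpacked directory are stand-ins):
#   apt-get source ntfs-3g
#   sudo apt-get build-dep ntfs-3g
#   cd ntfs-3g-*/
#   patch -p1 < ~/Downloads/ntfs-3g-mftmirr.patch
#   dpkg-buildpackage -us -uc -b

# The core patch step, demonstrated on a stand-in source tree and diff:
mkdir -p demo/src && printf 'old line\n' > demo/src/file.c
cat > demo/fix.patch <<'EOF'
--- a/src/file.c
+++ b/src/file.c
@@ -1 +1 @@
-old line
+new line
EOF
cd demo && patch -p1 < fix.patch && cat src/file.c   # prints "new line"
```

For an officially supported fix, the patch would instead be added under debian/patches with a DEP-3 header and go through the SRU process.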