Re: git-ubuntu 0.7.3 in edge/candidate/stable channels [Was Re: git-ubuntu 0.7.1 in edge/candidate/stable channels]

2018-03-15 Thread Nish Aravamudan
On 12.03.2018 [11:59:11 -0700], Nish Aravamudan wrote:
> On Fri, Mar 2, 2018 at 2:00 PM, Nish Aravamudan
>  wrote:
> > On Thu, Mar 1, 2018 at 5:20 PM, Nish Aravamudan
> >  wrote:
> >> On 28.02.2018 [21:33:38 -0800], Nish Aravamudan wrote:
> >>> Hello all!
> >>>
> >>> I am happy to announce the release of git-ubuntu 0.7.1 to all channels
> >>> today.
> >>
> >> I encountered some unfortunate issues with snapcraft that hit
> >> xenial-updates just as I was building (LP: #1752481) and then realized
> >> we needed to snap the archive keyrings in order to verify recent archive
> >> files (LP: #1752656). I then made a typo while fixing the latter, so
> >> we're now at 0.7.3. This should build successfully in all channels
> >> shortly.
> >>
> >> Thanks to Sergio Schvezov for the snapcraft assist and Steve Langasek
> >> for the keyring suggestion.
> >
> > Well, it didn't quite go as planned. I found another bug (so we're at
> > 0.7.4) and also the snapcraft fix didn't resolve all the issues. So
> > I'm temporarily using a PPA that reverts the snapcraft in
> > xenial-updates back to 2.35.
> >
> > Kyle Fazzari is working on a proper fix, but if it can't be completed
> > soon, we might back out the snapcraft SRU in xenial-updates.
> 
> For completeness, we ended up backing out the snapcraft SRU via a
> re-upload of 2.35 to xenial-updates. The snapcraft team is working on
> fixes to the bugs found.
> 
> Meanwhile, a mass reimport ran last week and the existing whitelist +
> 1% of main was reimported. This resulted in 905 successful imports and
> 5 failed imports. The latter 5 will presumably be broken for a while,
> as we focus on upping our phasing before resolving those issues (LP:
> #1754898).

I have removed the repositories for the failure cases. They would not
be updated by the normal 'keep the repositories current' script anyway,
and we would rather have consistent repository contents.

> I am working right now on an issue with our default repository
> repointing script.

This was PEBKAC.

> Additionally, roughly 150 manual imports remain in the
> ~usd-import-team space, but those source packages were never added to
> the whitelist. They need to be reimported, as well (and added to the
> whitelist). I am waiting to ensure that there are no pending MPs for
> those source packages before reimporting.

This was completed this morning.

> Once that is done, we will start the 'keep-up' script again, which
> will spend some time catching up the repositories based upon Launchpad
> publishes since the mass reimport.

This has been started and is churning through roughly 11 days of
publishing backlog.

> Finally, we will then look to bump our phasing to 2% (of main), to
> estimate the disk usage as the phasing increases.

I am looking into this step now.

Thanks,
Nish

-- 
Nishanth Aravamudan
Ubuntu Server
Canonical Ltd

-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


Re: zstd compression for packages

2018-03-15 Thread Benjamin Tegge
On Tuesday, 13.03.2018 at 12:07 +1100, Daniel Axtens wrote:
> 
> 
> On Tue, Mar 13, 2018 at 1:43 AM, Balint Reczey wrote:
> > Hi Daniel,
> > 
> > On Mon, Mar 12, 2018 at 2:11 PM, Daniel Axtens
> >  wrote:
> > > Hi,
> > >
> > > I looked into compression algorithms a bit in a previous role, and
> > > to be honest I'm quite surprised to see zstd proposed for package
> > > storage. zstd, according to its own github repo, is "targeting
> > > real-time compression scenarios". It's not really designed to be
> > > run at its maximum compression level, it's designed to really
> > > quickly compress data coming off the wire - things like compressing
> > > log files being streamed to a central server, or I guess writing
> > > random data to btrfs where speed is absolutely an issue.
> > >
> > > Is speed of decompression a big user concern relative to file size?
> > > I admit that I am biased - as an Australian and with the crummy
> > > internet that my location entails, I'd save much more time if the
> > > file was 6% smaller and took 10% longer to decompress than the
> > > other way around.
> > 
> > Yes, decompression speed is a big issue in some cases. Please
> > consider the case of provisioning cloud/container instances, where
> > after booting the image plenty of packages need to be installed and
> > saving seconds matters a lot.
> > 
> > The zstd format also allows parallel decompression, which can make
> > package installation even quicker in wall-clock time.
> > 
> > Internet connection speed increases by ~50% on average per year
> > (according to this [3] study, which matches my experience), which
> > works out to more than 6% every two months.
> > 
> > 
> The future is pretty unevenly distributed, and lots of the planet is
> stuck on really bad internet still.
> 
> AFAICT, [3] is anecdotal, rather than a 'study' - it's based on data
> from 1 person living in California. This is not really
> representative. If we look at the connection speed visualisation from
> the Akamai State of the Internet report [4], it shows that lots and
> lots of countries - most of the world! - have significantly slower
> internet than that person. 
> 
> (FWIW, anecdotally, I've never had a residential connection get
> faster (except when I moved), which is mostly because the speed of
> ADSL is pretty much fixed. Anecdotal reports from users in developing
> countries and rural areas of developed countries are not encouraging
> either: [5].)
> 
> Having said that, I'm not unsympathetic to the use case you outline.
> I am just saddened to see the trade-offs fall against the interests of
> people with worse access to the internet. If I can find you ways of
> saving at least as much time without making the files bigger, would
> you be open to that?
> 
> Regards,
> Daniel
> 
> [4] https://www.akamai.com/uk/en/about/our-thinking/state-of-the-internet-report/state-of-the-internet-connectivity-visualization.jsp
> [5] https://danluu.com/web-bloat/

I want to mention that you can enable the ultra compression levels 20
to 22 in zstd, which usually achieve results comparable to the highest
compression levels of xz. There should be a level that matches the
output size of xz -6 while still decompressing faster.
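
For instance, a rough comparison one could run - the file name is
illustrative, and exact sizes and timings will of course vary with the
input:

  # --ultra is required to unlock levels above 19; -k keeps the input
  zstd --ultra -20 -k data.tar -o data.tar.zst
  xz -6 -k data.tar
  ls -l data.tar.zst data.tar.xz
  # decompression is where zstd should come out ahead
  time zstd -dc data.tar.zst > /dev/null
  time xz -dc data.tar.xz > /dev/null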

Best regards,
Benjamin





Re: ext4 metadata_csum and backwards compatibility

2018-03-15 Thread Simon Deziel
On 2018-03-14 07:17 AM, Robie Basak wrote:
> 3) We backport metadata_csum support to Xenial in an SRU[1] without
> changing the default there. Xenial users will be able to fsck
> Bionic-created ext4 filesystems. There will be forward compatibility
> problems when skipping across multiple LTSs (eg. Trusty accessing a
> Bionic-created ext4 filesystem), but not across any single LTS.

I'd vote for this ^, an SRU to Xenial. When a new LTS arrives, I
typically test it extensively in VMs running on a hypervisor that runs
the previous LTS. Being able to fsck the VM's filesystem is sometimes
convenient. Also, since metadata_csum has been enabled by default in
16.10+, I'd rather not go back in terms of default-enabled features,
especially if this one is now production ready.
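
For anyone wanting to check their own setup, a quick sketch (the device
names are illustrative):

  # show the feature flags of an existing filesystem
  dumpe2fs -h /dev/vda1 | grep -i features
  # create a filesystem without metadata_csum so an older e2fsck can
  # still check it
  mkfs.ext4 -O '^metadata_csum' /dev/vdb1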

Regards,
Simon





Re: zstd compression for packages

2018-03-15 Thread Julian Andres Klode
On Wed, Mar 14, 2018 at 02:40:01PM -0300, Marcos Alano wrote:
> Maybe run some tests to find the sweet spot between size and speed?

Well, that's what we did, and the sweet spot is -19, the maximum
non-ultra level.
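
For reference, a minimal sketch of that kind of test - the payload name
and level range are illustrative:

  for level in 15 17 19; do
      time zstd -$level -k -f data.tar -o data.tar.$level.zst
  done
  ls -l data.tar.*.zst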

-- 
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer  i speak de, en



Re: Launchpad i386 build: Memory exhausted

2018-03-15 Thread Dmitry Shachnev
On Wed, Mar 14, 2018 at 09:56:46AM +0100, Cesare Falco wrote:
> I'm asking everyone for advice: assuming that no i386 build seems possible
> any more, should I:
> - stop maintaining MAME
> - remove i386 from the supported archs
> - ... (any suggestion is welcome here!)

As Colin said, you should try to reduce memory usage by the linker.

Try:

- replacing -g with -g1, or maybe no debug symbols at all;
- disabling caching of symbol tables (-Wl,--no-keep-memory);
- adding -Wl,--reduce-memory-overheads;
- switching to another linker (bfd vs. gold) if none of the above helps.

Usually just using -g1 saves lots of memory.
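
For example, assuming the package uses the dpkg-buildflags defaults (an
assumption - whether mame's build system honours these variables is
untested), the flags could be appended in debian/rules:

  # in debian/rules (makefile syntax)
  export DEB_CFLAGS_MAINT_APPEND   = -g1
  export DEB_CXXFLAGS_MAINT_APPEND = -g1
  export DEB_LDFLAGS_MAINT_APPEND  = -Wl,--no-keep-memory -Wl,--reduce-memory-overheads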

--
Dmitry Shachnev

