Re: RFC: raising ca-certificates package Priority to standard or important

2021-01-22 Thread Peter Silva
https://www.eff.org/https-everywhere

https is getting everywhere. If you don't have CAs you cannot verify
https connections properly.
I think working https is going to be important even for almost all embedded
cases.  Most iot deployments
include something like calling the mothership, which ought to be https...
apt is generally https.
Priority-wise, I guess it should be considered part of TLS or libssl, so that
whenever one of those is installed, the CAs are also installed.  Again...
omitting TLS makes something so crippled as to be next to useless... it's
like omitting networking entirely at this point.
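
As a concrete illustration (a minimal sketch using only the Python standard
library; exact error text varies by version), certificate verification
succeeds with the system CA bundle and fails without one:

    import ssl
    import urllib.request

    # With the system CA bundle (what the ca-certificates package provides),
    # verification succeeds:
    ctx = ssl.create_default_context()  # loads the system trust store
    urllib.request.urlopen("https://deb.debian.org", context=ctx)

    # With an empty trust store -- roughly what a system without
    # ca-certificates looks like -- the same request fails:
    empty = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verifies, trusts nothing
    try:
        urllib.request.urlopen("https://deb.debian.org", context=empty)
    except urllib.error.URLError as e:
        print("verification failed:", e.reason)  # CERTIFICATE_VERIFY_FAILED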

On Fri, Jan 22, 2021 at 6:38 AM Steve McIntyre  wrote:

> Hey Julien,
>
> On Fri, Jan 22, 2021 at 12:00:56PM +0100, Julien Cristau wrote:
> >On Thu, Jan 21, 2021 at 02:47:25PM -0300, Antonio Terceiro wrote:
> >> On Thu, Jan 21, 2021 at 03:10:47PM +0100, Julien Cristau wrote:
> >> > And which of standard or important made most sense (AIUI, standard
> >> > means "installed by default in d-i" and important means "installed by
> >> > default in debootstrap").
> >>
> >> wget is already Priority: standard and recommends ca-certificates, so it
> >> seems to me that making it standard would be a noop in practice for most
> >> of the systems installed by d-i.
> >>
> >> On the other hand, all cases that I remember seeing a problem caused by
> >> missing ca-certificates was in systems not installed by d-i, such as
> >> containers, vm images, etc. Based on that, I would make it important.
> >
> >Here's my thinking on this:
> >I would expect "standard" to get installed on "general purpose" VM
> >images, and "important" *not* to get installed on "minimal" container or
> >VM images.  Looking at the docker debian image build script just now[1],
> >it seems to pull in required packages + iproute2 and ping, so it has its
> >own selection that doesn't include "important" priority.  So changing
> >the severity, by itself, won't change anything unless we go all the way
> >to "required" which feels like it'd be going too far (but then I also
> >don't think apt should be "required").
> >If there are specific examples where you think "important" would help
> >I'd be interested; right now I'm sort of favouring "standard" as good
> >enough.
>
> Sounds like good logic to me.
>
> Thanks for looking into this!
>
> --
> Steve McIntyre, Cambridge, UK.
> st...@einval.com
> Can't keep my eyes from the circling sky,
> Tongue-tied & twisted, Just an earth-bound misfit, I...
>
>


Re: What to do when DD considers policy to be optional? [kubernetes]

2020-04-10 Thread Peter Silva
On Fri, Apr 10, 2020 at 1:12 PM Russ Allbery  wrote:

> Dmitry Smirnov  writes:
>
> > Let's remember that Kubernetes was never in "stable" to begin with.
>
> > This is not to say that it couldn't be useful in "testing", "unstable"
> > or even "experimental". Many packages that may be considered not
> > suitable for "production" are nevertheless useful.
>
> Speaking as someone who previously supported having Kubernetes in Debian,
> I was using it directly from unstable.  For my purposes, it was fine that
> it was never part of a stable release.
>
> To be clear, I was mostly happy to have the clients.  The control plane
> and pod software for my purposes is provided by a platform provider, so is
> outside of my scope.
>
> --
> Russ Allbery (r...@debian.org)  
>
>
Debian Policy is there to shape packages so they fit into stable. If we
"know" that a package will never make it into stable, is Policy relevant?
Why have packagers do work that will be forever useless? Getting that sort
of stuff out of unstable and testing, when we know it will never go
further, is also a net benefit, reducing the noise and load for the devs.
A release blocker on one of those packages, on the other hand, is not a
benefit.  Forcing people to get stuff from unstable when they want to be
running stable (for everything but X) isn't great either.

A "package" that doesn't follow Debian Policy should be walled off and
obvious somehow. Its maintenance will be different from most of Debian's.
Since Debian Policy is sort of the heart of Debian... it basically means
that such packages can be built *for* Debian, but can never be *part of*
Debian...  Having more software available *for Debian* is still a good
thing, even if it isn't part of it.  Expanding the user base by enabling
projects with ... ahem... different maintenance cultures... is still a win.
Supporting stuff built *for* debian allows much newer, faster-paced
stuff to be made available for the operating system, without
jeopardizing the quality of stable.

fwiw, this is one of the things that Launchpad.net provides for Ubuntu
that doesn't seem to exist for Debian.


Re: What to do when DD considers policy to be optional? [kubernetes]

2020-04-08 Thread Peter Silva
doesn't this whole discussion just mean that k8s should just not be in
Debian?

It should be a third-party package, perhaps with a third-party repo, and
just not be in Debian at all.
If any means of packaging for a Debian release results in a package that is
essentially unsupported by upstream... what is the value of including it?
For stuff that moves too quickly... perhaps a
different repo like *forever-sid.d.o* could be set up... and have packages
built against releases, so people
have current software for Debian... without it being part of Debian.

That repo would have different rules... that loosens things up for
this kind of hairball package that isn't stable enough to benefit from
Debian's stability.




On Wed, Apr 8, 2020 at 4:36 PM Thomas Goirand  wrote:

> On 4/8/20 6:14 PM, Marc Haber wrote:
> > On Sun, 5 Apr 2020 23:16:51 +0100, Wookey  wrote:
> >> On 2020-04-05 21:15 +0200, Marc Haber wrote:
> >>> having an obsolete version of the software distributed
> >>> with/through Debian is (rightfully) seen a liabilty by some upstreams,
> >>> not as an asset.
> >>
> >> I think a more interesting/important question is whether users like
> >> it, rather than whether upstreams like it.
> >
> > I think it is also important to have our users be taken seriously by
> > upstreams. There is software that doesn't move as fast any more. Using
> > a two years old version of those is often fine.
> >
> > Kubernetes, docker etc, however, are fast moving targets. Nobody in
> > the uptream community is willing to even consider answering a question
> > about a version that is two years old. The dialog will inevitably be
> > "well, first you update to our latest version and verify whether your
> > question still applies, then come back with your question" "but I am
> > using the version in Debian stable!" "well, Debian is stupid! Use
> > ".
> >
> > This is not doing our users a favor. And it hurts the Project.
>
> I don't agree with this *at all*. It is not in the interest of our users
> to be forced to update the software they use for their infrastructure
> every few months. They don't want that. If upstream think that this is
> what users want, well upstream is wrong then. And the stability of
> Debian (understand: not a moving target, rather than bug free) is one of
> our very good point.
>
> Also, the docker world is not the only one to be this way. It used to be
> like this in OpenStack too. In the OpenStack world, they haven't changed
> the way they release (ie: every 6 months), but the user survey has shown
> that almost every user is lagging 4 or 5 versions behind, because
> upgrading the infrastructure is both difficult and time consuming. Over
> time, they became very helpful for back-porting fixes to EOL versions too.
>
> The main issue is that upstream wants to be able to do fast development,
> and focus on the development rather than on their users. Taking care of
> a long term release is time consuming. Taking care of multiple old
> release is very annoying (backporting fixes may not be always obvious).
>
> So yeah, probably upstream will reply with "Debian is stupid". Let them
> say it if they want to: that doesn't make them right. It only shows they
> are completely ignorant of what their users want, and the need of
> downstream distributions.
>
> The more there's going to be users going at them asking them about a 2
> year old release, the more they will realize that Debian isn't stupid,
> and that this is the way the final users want to consume their work. So
> it's good for us, and beneficial to the project. It's doing our users a
> favor, and it doesn't hurt us.
>
> >> Quite a lot of users just want to use stuff and so long as it works
> >> for their purposes they really don't know or care if it is 2 years
> >> old.
> >
> > And if it doesnt they want to be able to google for their issues or
> > ask the upstream community. You cannot ask the kubernetes community a
> > question about a kubernetes version that is two years old.
>
> Of course you can! If they choose to not answer, or say bad things about
> Debian, that doesn't make them smarter and us stupid. It just shows how
> careless upstream is.
>
> >> I quite often find myself in this situation. Quite a lot of
> >> software is not something you want to care about - you just want to
> >> use it.
> >
> > Quite a lot, yes. But there is software that doesn't work that way.
>
> Could you please explain why any software would be different? It's just
> upstream that has a super ego and think they are different. Nothing
> more, nothing less.
>
> >> So long as most users will find it works for them, then I think it's
> >> still really useful for us to package stuff,
> >
> > That is not going to happen with kubernetes, docker, node.js et al at
> > this current time.
>
> Like many other software stack, hopefully they will learn by this
> mistake and the release management will change.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
>


Re: email backend for fedmsg

2020-03-25 Thread Peter Silva
Most Sarracenia stuff is tied to AMQP, but the next-gen messages, called
v03 (version 3), use a JSON payload for all the information, and that
makes them somewhat protocol-independent.  There is also a 500-line MQTT
demo that implements a file replication network, using the same JSON
messages, and primed from an AMQP upstream.

https://github.com/MetPX/wmo_mesh

The peer code there is just a demonstration prototype, but it processes the
messages the same way as real Sarracenia.

That code has been run against mosquitto and EMQT, and I think another
broker, I forget... It worked without issues on all of them. MQTT interop
is flawless afaict.  Note: we were using MQTT v3; we have not played with v5.

Sarracenia essentially defines a JSON payload for advertising that a file
exists. That is a fairly popular problem, but if your problem isn't that,
then you should define a different payload.  It could be used for file
replication, or orchestration/workload co-ordination, or other things in
the IFTTT style... but in the end, this is just one application of a
message bus; it doesn't need to encompass all applications, but it is a
good way to get a useful thing implemented, so people see that the bus is
useful.  I think applications need to define their messages, and trying to
be too general makes them harder to understand and apply.
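
For flavour, here is a minimal sketch of publishing one such JSON
advertisement over MQTT (assumes a broker on localhost and the paho-mqtt
1.x client API; the topic and field names are illustrative, not the exact
v03 schema):

    import json
    import paho.mqtt.client as mqtt

    # Hypothetical advertisement: "this file exists, fetch it here".
    ad = {
        "pubTime": "20200325T120000.0",
        "baseUrl": "https://example.com/data/",
        "relPath": "observations/station42.csv",
        "size": 1024,
    }

    client = mqtt.Client()
    client.connect("localhost", 1883)
    client.loop_start()
    info = client.publish("xpublic/v03/observations", json.dumps(ad), qos=1)
    info.wait_for_publish()        # make sure the QoS 1 publish completes
    client.loop_stop()
    client.disconnect()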


On Wed, Mar 25, 2020 at 5:57 PM clime  wrote:

> 
>> I work in telecom for meteorology, and we ended up with a general method
>> for file copying (catchphrase: rsync on steroids*.) ( *every catchphrase is
>> a distortion, no dis to rsync, but in certain cases we do work much faster,
>> it just communicates the idea.) Sarracenia (
>> https://github.com/MetPX/Sarracenia) is a GPL2 app (Python and C
>> implementations) that use mozilla public license rabbitmq broker, as well
>> as openssh and/or any web server to do fastish file synching, and/or
>> processing/orchestration. The app is just json messages with file metadata
>> sent through the broker. Then you daisy chain brokers through clients.  No
>> centralization (every entity installs their own broker), No federated
>> identity required (authentication is to each broker, but they can pass
>> files/messages to each other.)
>> A firstish thing to do with it would be to sync the debian mirrors in
>> real-time rather than periodically.  Each mirror has a broker, they get
>> advertisements (AMQP messages containing JSON file metadata) download the
>> corresponding file, and re-advertise (publish on the local broker with the
>> local file URL) for downstream clients. You can then make a mesh of
>> mirrors, where, if each mirror is subscribed to at least two others, then
>> it can withstand the failure of any node.  If you add more connections, you
>> increase redundancy.
>> Once you have that sort of anchor tenant for an AMQP message bus, people
>> might want to use it to provide other forms of automation, but way quicker
>> and in some ways much simpler than SMTP.  but yeah... SMTP is a lot more
>> well-known/common. RabbitMQ is the industry dominant open solution for AMQP
>> brokers. sounds like marketing bs, but if you look around it is what the
>> vast majority are using, and there are thousands upon thousands of
>> deployments. It's a much more viable starting point, for stability, and a
>> lot less assembly required to get something going. Sarracenia makes it a
>> bit easier again, but messages are kind of alien and different, so it takes
>> a while to get used to them.
>> 
>
>
> Peter, I like the solution and for the mirrors it sounds great but I have
> a few nitpicks:
>
> - the file syncing part makes perfect sense for the debian mirrors
> but in general case you might only want to send a message and skip the file
> syncing part
> - I am currently, personally more intrigued by even more standard
> technologies than RabbitMQ and I believe that a good solution might lie
> there
>
> What I particularly like about Sarracenia is that it is decentralized
> because each host has its own broker - that I think is cool and I would
> like to potentially do something similar...
>
> clime
>
>
>
> On Wed, 25 Mar 2020 at 01:07, clime  wrote:
>
>> On Wed, 25 Mar 2020 at 01:00, clime  wrote:
>> >
>> > On Tue, 24 Mar 2020 at 22:45, Nicolas Dandrimont 
>> wrote:
>> > >
>> > > On Tue, Mar 24, 2020, at 21:51, clime wrote:
>> > > > On Tue, 24 Mar 2020 at 20:40, Nicolas Dandrimont 
>> wrote:
>> > > > >
>> > > > > Hi!
>> > > > >
>> > > > > On Sun, Mar 22, 2020, at 13:06, clime wrote:
>> > > > > > Hello!
>> > > > > >
>> > > > > > Ad. https://lists.debian.org/debian-devel/2016/07/msg00377.html
>> -
>> > > > > > fedmsg usage in Debian.
>> > > > > >
>> > > > > > There is a note: "it seems that people actually like parsing
>> emails"
>> > > > >
>> > > > > This was just a way to say that fedmsg never got much of a user
>> base in the services that run on Debian infra, and that even the new
>> services introduced at the time kept parsing emails.
>> > > >
>> 

Re: email backend for fedmsg

2020-03-24 Thread Peter Silva
MQTT is the best thing going for interop purposes.

On Tue, Mar 24, 2020 at 1:20 PM Jeremy Stanley  wrote:

> On 2020-03-24 13:09:35 -0400 (-0400), Peter Silva wrote:
> [...]
> > We could talk about the merits of various protocols (I see fedmsg
> > uses ZeroMQ) but that is a deep rabbit hole... to me, fedmsg looks
> > like it is making a ZeroMQ version of a broker (which is a bit
> > ironic given the original point of that protocol) trying to build
> > a broker ecosystem is hard. Using an existing one is much easier.
> > so to me it makes sense that fedmsg is not really working out.
> [...]
>
> In the OpenDev collaboratory we added an event stream for our
> services some years ago using the MQTT protocol (a long-established
> ISO/OASIS standard). I gather there was some work done to make
> fedmsg support MQTT as a result of that, so it might be an
> alternative to relying on ZeroMQ at least.
> --
> Jeremy Stanley
>


Re: email backend for fedmsg

2020-03-24 Thread Peter Silva
hi, totally different take on this...

We could talk about the merits of various protocols (I see fedmsg uses
ZeroMQ) but that is a deep rabbit hole... To me, fedmsg looks like it is
making a ZeroMQ version of a broker (which is a bit ironic given the
original point of that protocol).  Trying to build a broker ecosystem is
hard; using an existing one is much easier.  So to me it makes sense that
fedmsg is not really working out.

However,



I work in telecom for meteorology, and we ended up with a general method
for file copying (catchphrase: rsync on steroids*). (*Every catchphrase is
a distortion; no dis to rsync, but in certain cases we do work much faster.
It just communicates the idea.)  Sarracenia
(https://github.com/MetPX/Sarracenia) is a GPL2 app (Python and C
implementations) that uses the Mozilla-Public-License RabbitMQ broker, as
well as openssh and/or any web server, to do fastish file syncing and/or
processing/orchestration. The app is just JSON messages with file metadata
sent through the broker. Then you daisy-chain brokers through clients.  No
centralization (every entity installs their own broker), no federated
identity required (authentication is to each broker, but brokers can pass
files/messages to each other.)

A firstish thing to do with it would be to sync the debian mirrors in
real time rather than periodically.  Each mirror has a broker; mirrors get
advertisements (AMQP messages containing JSON file metadata), download the
corresponding file, and re-advertise (publish on the local broker with the
local file URL) for downstream clients. You can then make a mesh of
mirrors where, if each mirror is subscribed to at least two others, it
can withstand the failure of any node.  If you add more connections, you
increase redundancy.
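
A minimal sketch of one node in such a mesh (assuming a local RabbitMQ
broker, the pika library, and hypothetical exchange/queue names and message
fields; real Sarracenia adds checksums, retries, and much more):

    import json
    import pathlib
    import urllib.request

    import pika

    LOCAL_URL = "http://mirror.example.org/debian/"  # hypothetical local URL

    def on_advertisement(ch, method, properties, body):
        ad = json.loads(body)                        # JSON file metadata
        dest = pathlib.Path("/srv/mirror/debian") / ad["relPath"]
        dest.parent.mkdir(parents=True, exist_ok=True)
        # fetch the advertised file from the upstream mirror
        urllib.request.urlretrieve(ad["baseUrl"] + ad["relPath"], dest)
        ad["baseUrl"] = LOCAL_URL                    # re-advertise locally
        ch.basic_publish(exchange="xpublic",
                         routing_key=method.routing_key,
                         body=json.dumps(ad))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.basic_consume(queue="mirror_in",
                          on_message_callback=on_advertisement)
    channel.start_consuming()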

Once you have that sort of anchor tenant for an AMQP message bus, people
might want to use it to provide other forms of automation, way quicker
and in some ways much simpler than with SMTP.  But yeah... SMTP is a lot
more well-known/common. RabbitMQ is the industry-dominant open solution for
AMQP brokers.  Sounds like marketing bs, but if you look around it is what
the vast majority are using, and there are thousands upon thousands of
deployments. It's a much more viable starting point for stability, and a
lot less assembly is required to get something going. Sarracenia makes it a
bit easier again, but the messages are kind of alien and different, so it
takes a while to get used to them.




On Sun, Mar 22, 2020 at 8:24 AM clime  wrote:

> Hello!
>
> Ad. https://lists.debian.org/debian-devel/2016/07/msg00377.html -
> fedmsg usage in Debian.
>
> There is a note: "it seems that people actually like parsing emails"
>
> What about adding email backend to fedmsg then. Wouldn't it be an
> interesting idea? It could basically rely on postfix for sending
> messages, hence providing decentralization as well as high
> reliability. I think that amount of events that happen in distribution
> (like package update, package build) is never so huge that email
> infrastructure wouldn't handle it and also the machine mailing
> infrastructure could be optionally be separated from the human one if
> needed.
>
> So fedmsg would become a tiny wrapper over email that would just
> serialize and parse json data to and from email messages and check
> signatures.
>
> I am asking because I like the idea of distribution-independent
> infrastructure message bus that this project had.
>
> Btw. instead of json, yaml could be used so it is nicer to human eyes.
>
> clime
>
>


Re: RFC: Replacing vim-tiny with nano in essential packages

2020-03-19 Thread Peter Silva
Try sshing into a Windows machine:
the termcaps are all manner of fun.

On Thu, Mar 19, 2020 at 7:23 AM Adam Borowski  wrote:

> On Thu, Mar 19, 2020 at 11:34:10AM +0500, Lev Lamberov wrote:
> > Ср 18 мар 2020 @ 18:52 Adam Borowski :
> >
> > > Alas, our ed is basically:
> > > #!/bin/sh
> > > while read x;do echo '?';done
> >
> > That's not true. The ed package in the Debian archive is full GNU ed.
>
> I'm not talking about functionality under the hood, I'm bad-mouthing the
> user-friendliness.
>
> I used to be an ed user for a decade (I've coded on a game that offered
> only
> a line-based interface), but that was Beattie ed which was _massively_ more
> comfortable to use interactively than GNU ed (and probably way less
> powerful).  It was enough to bother with file transfers only for big edits.
>
> But that was a special case.  Today, even on a bad serial link, all you
> need
> is a visual editor that does _not_ obey termcap/terminfo.  In my
> experience,
> the 99% cause of breakage is:
> * weird ancient Unices: bad termcappage
> * Linux/BSD: wrong terminal size ("setterm --resize")
> (ignoring termcap works because last non-vt100ish terminals were made ~40
> years ago)
>
> Thus, I can't think of a scenario where ed would be preferred over a visual
> editor.  If such scenario exists, it's too obscure for the default small
> system.
>
>
> Meow!
> --
> ⢀⣴⠾⠻⢶⣦⠀
> ⣾⠁⢠⠒⠀⣿⡁ in the beginning was the boot and root floppies and they were good.
> ⢿⡄⠘⠷⠚⠋⠀   --  on #linux-sunxi
> ⠈⠳⣄
>
>


Re: RFC: Replacing vim-tiny with nano in essential packages

2020-03-18 Thread Peter Silva
fwiw... anyone who knows vi already knows ed; it's just the line-mode
commands.
You skip typing the : and that's it.

uh... fwiw, I had a mainframe-ish system I had to admin 30 years ago...
being a mainframe, it had no working TERMCAP, and the editor was ed.  yeah,
a bit painful; the only command you use in ed but not in ordinary vi is p.

On Wed, Mar 18, 2020 at 12:40 PM Theodore Y. Ts'o  wrote:

> On Mon, Mar 16, 2020 at 12:45:35PM +0100, Marco d'Itri wrote:
> > On Mar 16, Tomas Pospisek  wrote:
> >
> > > > Agreed: this is a very good idea since I really think that every
> default
> > > > install must provide something enough vi-compatible.
> > > I'd disagree. vi is very newbie unfriendly. OTOH I expect people that
> > Even if this were true (using vi is one of the most basic system
> > administration skills), I understand that we still provide nano.
>
> I've always considered /bin/ed the most basic system administration
> tool, since it doesn't require a working terminal or termcap entry.
> It works even if you are using an ASR-33 teletype.  :-)
>
> And at least for me, I find /bin/ed much more user friendly than vi,
> since it doesn't have as modal of a UI as vi.  (Vi has 3 modes, ed has
> only 2.)
>
> /bin/ed is also *much* smaller than even busybox.
>
> - Ted
>
>


Re: RFC: Replacing vim-tiny with nano in essential packages

2020-03-16 Thread Peter Silva
so maybe we just add nano-tiny as an option alongside vim-tiny,
because we understand vim is not newbie-friendly, but for all the old
hands, nano is not friendly to us.
234K is a small price to pay.

On Mon, Mar 16, 2020 at 7:27 PM Guus Sliepen  wrote:

> On Mon, Mar 16, 2020 at 01:02:47PM +, Wookey wrote:
>
> > I hadn't realised how fat nano is (not the only consideration of
> > course, but zile is very good on this measure and surprisingly
> > functionfull).
>
> You are comparing apples with oranges! The nano package comes with a lot
> of help files and translations. You need to compare things to nano-tiny:
>
> > Installed sizes:
> > zile: 365K
> > busybox: 786K
> > vim-tiny: 1547K
> > nvi: 1605K
> > busybox-static: 2045K
> > nano: 2469K
>
> nano-tiny: 234K
>
> --
> Met vriendelijke groet / with kind regards,
>   Guus Sliepen 
>


Re: moving mg from salsa to github?

2020-02-15 Thread Peter Silva
fwiw, looking at the repo on github, there are tags.  They're just dates.
Ideally one would get an idea of what the tags mean from upstream, but you
could just git clone using a tag. Also, github allows you to easily get a
tarball given a tag:

wget https://github.com/hboetes/mg/tarball/20180927



On Sat, Feb 15, 2020 at 8:36 AM Geert Stappers  wrote:

> On Sat, Feb 15, 2020 at 02:16:27PM +0100, Harald Dunkel wrote:
> > Hi folks,
> >
> > I am maintainer for mg, currently on salsa. Problem is, upstream
> > doesn't release tar balls anymore, but moved the code to github.
> > No tags.
> >
> > How can I tell Salsa?
>
> Question seen.
> However I think the question doesn't have an answer.
> Please revolt if you think otherwise.
>
>
> > Should I drop the upstream and pristine-tar
> > branches on Salsa and integrate the repository on github?
>
> No.
>
> > Would you suggest to move the debian part to github instead?
>
> No.
>
>
> > Every helpful comment is highly appreciated
> :-)
>
>
> The "problem" is "no more tarballs from upstream".
> (As in: The problem is NOT that upstream moved to some git repo server.)
>
> It is completely fine to keep all the Debian stuff at Salsa.
>
>
> IMHO boils the question of Original Poster down to
>   What, or which version, should be packaged,
>   when Upstream stopped doing releases?
>
>
> I see three possiblities:
>  * Talk with Upstream about version numbering
>  * Choose a version number scheme yourself
>  * Ask for further advice
>
>
> > Harri
> >
> > https://github.com/hboetes/mg
> > https://salsa.debian.org/debian/mg
> >
>
>
> Groeten
> Geert Stappers
> --
> Leven en laten leven
>
>


Re: Wifi en debian

2019-10-20 Thread Peter Silva
translation of the spanish email: disappointed that wifi didn't work out of
the box with Debian; thinks the wifi drivers are missing.

answer:

I don't think the drivers are the problem for wifi in Debian. Most, if not
all, wireless chipsets have open-source drivers which are included. The
problem is that many wireless chips require non-free firmware.  Check out
the list here:

https://en.wikipedia.org/wiki/Comparison_of_open-source_wireless_drivers

Debian is very principled about not including non-free software, but you
can build an installation medium that includes the firmware, so almost all
wifi should work out of the box.  See here:

https://www.debian.org/releases/buster/amd64/ch06s04.en.html



On Sun, Oct 20, 2019 at 5:42 PM riber enyos  wrote:

> Dear Debianites,
> These days I am trying out different Linux distributions, Debian among
> them. I believed it was one of the best, but I have realized that it is
> the worst, for my tastes.
> How can it be that a 2019 distribution comes without the wifi drivers
> installed??? Of all the distros, and I have tried quite a few, Debian is
> the only one that comes without wifi. I have searched the internet, and
> the only solution offered is to connect by cable in order to configure
> the wifi. But presumably, if I am connecting by wifi it is because I have
> no way to connect by cable. Is it so hard to ship with the drivers
> installed? Even in the most basic and modest distros you can connect by
> wifi. The truth is I do not understand the reason.
> Could you tell me what it is?
> Many thanks.
> Riber.
>


Re: [Idea] Debian User Repository? (Not simply mimicing AUR)

2019-04-07 Thread Peter Silva
On Sun, Apr 7, 2019 at 11:10 PM Ben Finney  wrote:

> Peter Silva  writes:
>
> > […] the launchpad.net model, which supports backporting seamlessly
> > and allows to support the same version on all distro versions, works
> > better for us. This is something a debian version of launchpad would
> > get us.
>
> How does it handle “seamlessly” changes that make a package incompatible
> with the already-released Debian stable? If it doesn't handle that, is
> it right to call that seamless?
>
>
For the package in question, the changes are bug fixes, 99% upward
compatible.  So yes, you're right, it can't be totally seamless; we have
release notes to cover breakage events, and other explicit communications.


> If one needs to keep a close eye on changes to make sure they can still
> be installed even on a years-old OS, the resulting packages can be
> placed in a custom repository set up with the instructions at
> <https://wiki.debian.org/DebianRepository/Setup>. What am I missing?
>
>
yes, it can be done, but it is a lot more work for individual packagers.

launchpad.net combines:
   - very few clicks to build custom repositories.
   - a build environment for each OS, so that it runs "debuild" in the
currently patched version of the OS for which the package is built.

It saves people from having to build their own custom repository, and from
having to maintain a build environment for all supported OS versions and
architectures.  On Ubuntu, packages are built for 14.04, 16.04, 18.04,
18.10, and 19.04, and I get all those just from clicking one box for each.
I think it also propagates re-building of packages when a build-dependency
changes, without my knowledge or interaction.  It leverages the Ubuntu
build farm for third-party packages.

With Debian, it's kind of all or nothing.  Either you're in Debian, and it
gets built on every platform using the build farm, or it's not, and you get
no help at all. Launchpad gives a nice middle road that suits us right now,
and if something similar were available for Debian, it would provide a
stepping stone to being in Debian proper.


-- 
>  \ “I think Western civilization is more enlightened precisely |
>   `\ because we have learned how to ignore our religious leaders.” |
> _o__)—Bill Maher, 2003 |
> Ben Finney
>
>


Re: [Idea] Debian User Repository? (Not simply mimicing AUR)

2019-04-07 Thread Peter Silva
On Sun, Apr 7, 2019 at 8:41 PM Roberto C. Sánchez 
wrote:

> On Sun, Apr 07, 2019 at 05:50:37PM -0400, Peter Silva wrote:
> >
> >Hiring debian devs to get the packages into debian proper could make
> >sense. One thing that dampens our enthusiasm for that at the moment is
> >that our packages are still very unstable, in the sense that we are
> >releasing materially better versions incrementally, say once or twice a
> >month.  It is sort of analogous to a rolling release.  That works fine
> >with the launchpad model, but if it gets baked into debian, then we need
> >to support some random old version for many years. Perhaps once it has
> >stabilized, that would be something we could work with, but for now, the
> >launchpad.net model, which supports backporting seamlessly and allows us
> >to support the same version on all distro versions, works better for us.
> >This is something a debian version of launchpad would get us.
> >
>
> You can already accomplish what you are describing today:
>
> - have packages uploaded to experimental
> - upload to unstable and file a release critical bug to prevent it
>   migrating to testing (look at https://bugs.debian.org/915050 for
>   instance)
>
> Both approaches get the package into Debian, available to users of
> unstable and/or experimental, as appropriate, and without risk of the
> package getting "baked in" to a Debian release for the long term.
>

OK for unstable and testing, but I want to provide packages for stable
versions of Debian using a separate repo that will get frequent updates,
even though the OS is stable. I get that with launchpad.net. Your proposal
makes no version ever available for a stable release.  Yes, it contradicts
the meaning of stable, but the idea is similar to the idea of using snaps,
where certain applications require current versions, while most of the OS
remains a stable platform.


Re: [Idea] Debian User Repository? (Not simply mimicing AUR)

2019-04-07 Thread Peter Silva
On Sun, Apr 7, 2019 at 1:27 PM Roberto C. Sánchez 
wrote:

> On Sun, Apr 07, 2019 at 10:13:58AM -0400, Peter Silva wrote:
> >fwiw,  our organization doesn't have any debian devs.  We have a few
> >packages that we develop and deploy
> >for our internal needs, and make available to the internet with public
> >repositories.  they are (perhaps not perfectly) debian compliant
> packages,
> >but we aren't blessed debian devs (and frankly cannot be bothered to
> >become them.)
>
> There are many Debian developers who work as consultants specifically on
> Debian-related work, either independently or as part of a company that
> offers Debian-related services.
>
> Since you have done most of the work, you could easily hire one of those
> folks to help with a small number of hours worth of effort to take the
> package through the process of getting it into Debian.
>
> You can post to the debian-jobs list or check the Debian consultants
> page on the main Debian website for candidates.
>
> Regards,
>
> -Roberto
> --
> Roberto C. Sánchez
>
>
Hiring debian devs to get the packages into debian proper could make sense.
One thing that dampens our enthusiasm for that at the moment is that our
packages are still very unstable, in the sense that the we are releasing
materially better version incrementally, say once or twice a month.  It is
sort of analogous to a rolling release.  That works fine with the launchpad
model, but if it gets baked into debian, then we need to support some
random old version for many years. Perhaps once it has stabilized, that
would be something we could work with, but for now, the launchpad.net
model, which supports backporting seamlesslly and allows to support the
same version on all distro versions, works better for us.  This is
something a debian version of launchpad would get us.


Re: is Wayland/Weston mature enough to be the default desktop choice in Buster?

2019-04-07 Thread Peter Silva
>
>
> > * RockPro64, used as a desktop (I'm typing these words on it):
> >   armsoc.  GNOME no workie.
>
> Hows the 3D performance on this?
>
>
https://www.cnx-software.com/2018/08/27/rockpro64-rk3399-board-linux-review-ubuntu-18-04/

71 fps for es2gears?

but that was a year ago... likely better now?




> > * N900:
> >   didn't try.  I don't suspect it could work, though.
>
> N900, the 10 year old mobile phone?  Is GNOME in Debian configured to
> use OpenGL ES, which is the only flavour this device talks?
>
> > * qemu-kvm on work desktop:
> >   [Host GPU is i915 (HD530)]: black screen in default buster's GNOME
> >   (thus Wayland), Cinnamon at least gives a message.  Very likely
> something
> >   with qemu's configuration -- but work time is not supposed to be spent
> >   wrangling desktop environment problems, thus I did not investigate.
>
> I had no problem starting gnome-shell with the qlx stuff, not that it
> makes any sense to do that.
>
> > On every single of the above setups XFCE works perfectly.
>
> But have you tried GNOME on Xorg, which is the question of this thread?
>
> Bastian
>
> --
> Four thousand throats may be cut in one night by a running man.
> -- Klingon Soldier, "Day of the Dove", stardate unknown
>
>


Re: [Idea] Debian User Repository? (Not simply mimicing AUR)

2019-04-07 Thread Peter Silva
fwiw, our organization doesn't have any debian devs.  We have a few
packages that we develop and deploy for our internal needs, and make
available to the internet with public repositories.  They are (perhaps not
perfectly) debian-compliant packages, but we aren't blessed debian devs
(and frankly cannot be bothered to become them.) (fwiw:
https://launchpad.net/~ssc-hpc-chp-spc/+archive/ubuntu/metpx-daily )

We have been publishing products on launchpad.net for years.  We were able
to do that relatively quickly.  We would love to be able to upstream to
debian, but haven't figured it out; there is a much higher barrier to
entry.  Our only other option at the moment is likely the suse build
service, but we haven't bothered to figure that out either... We use ubuntu
internally, and that's enough for internal use, so the others are
nice-to-haves.  Nearly anything on launchpad should be pretty
straightforward to adapt to whatever gets done for debian.  It also
provides an intermediate on-ramp for getting packages into Debian.

As non-debian devs, something like launchpad looks useful to us.

On Sun, Apr 7, 2019 at 9:54 AM Karsten Merker  wrote:

> On Sun, Apr 07, 2019 at 01:26:12PM +, Mo Zhou wrote:
>
> > The absense of a centralized, informal Debian package repository where
> > trusted users could upload their own packaging scripts has been
> > long-forgotten. As an inevitable result, many user packaging scripts
> > exist in the wild, scattered like stars in the sky, with varied
> > packaging quality. Their existence reflects our users' demand,
> > especially the experienced ones', that has not been satisfied by the
> > Debian archive. Such idea about informal packaging repository has been
> > demonstrated successful by the Archlinux User Repository (AUR). Hence,
> > it should be valuable to think about it for Debian.
> >
> > Assume that Debian has an informal packaging repository similar to AUR,
> > which distrbutes packaging scripts only and requires to be built
> > locally. According to my observation and experience, such a repository:
> >
> > 1. Allows packaging in some compromised manner to exist, which means
> > they dont fully comply with DFSG or Policy. This makes great sense for
> > several kinds of packages:
> >
> > (1) Packages that are extremely hard to made compliant to Policy. For
> > example, bazel the build system of TensorFlow and other Google products
> > - No Debian Developer can make it meet the Policy's requirement without
> > great sacrifice. The outcome doesn't worth the waste in time and energy.
>
> This is something that would probably be acceptable to me on
> Debian-hosted infrastructure, but ...
>
> > (2) Dirty but useful non-free blobs, such as nvidia's cuDNN (CUDA Deep
> > Neural Network) library, which dominates the field of high performance
> > neural network training and inference. I really hate reading NVIDIA's
> > non-free legal texts, and in such repository we can avoid reading the
> > license and just get the scutwork done and make users happy.
> >
> > (3) Data with obscure licensing. In this repository we can feel free to
> > package pre-trained neural networks or training data without carefully
> > examing the licensing.
>
> ... this is something that I personally have a big problem with
> because it would set a precedent that I don't want the Debian
> project to set.  We as a project host a non-free repository
> (which is fine for me), but before we take packages into
> non-free, we put a lot of effort into checking the licenses for
> problems (besides them being non-free).  Hosting a repository on
> Debian infrastructure that effectively states "we don't care for
> any license terms" is a no-go for me, even if it contains only
> packaging scripts and not the actual non-free components.
>
> Regards,
> Karsten
> --
> Ich widerspreche hiermit ausdrücklich der Nutzung sowie der
> Weitergabe meiner personenbezogenen Daten für Zwecke der Werbung
> sowie der Markt- oder Meinungsforschung.
>
>


Re: Bug#877900: How to get 24-hour time on en_US.UTF-8 locale now?

2019-02-07 Thread Peter Silva
iso_en?  That sounds smart...

English for most of the world, who aren't necessarily native English
speakers? https://en.wikipedia.org/wiki/International_English
Use ISO dates and such, and pick one spelling. As a Canadian, I'm
pretty sure about colour, but unclear about whether we should standardize
on disc. Dates should be ISO, even better if it used UTC as the timezone.
This would be a default that includes US keyboard bindings (by default),
as the easiest thing to default to during installation, etc.  But perhaps I
should be disqualified, being both a unix greybeard and a recovering ntp
admin.
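
(For the narrower question upthread, a workaround sketch, assuming both
locales are generated on the system: keep en_US.UTF-8 overall but take
time formatting from C.UTF-8, which uses a 24-hour clock.)

    import locale
    import time

    # setlocale() raises locale.Error if a locale isn't generated.
    locale.setlocale(locale.LC_ALL, "en_US.UTF-8")
    locale.setlocale(locale.LC_TIME, "C.UTF-8")  # 24-hour %X, e.g. 14:53:07
    print(time.strftime("%X"))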


On Thu, Feb 7, 2019 at 8:06 AM Adam Borowski  wrote:

> On Thu, Feb 07, 2019 at 02:55:33PM +0500, Roman Mamedov wrote:
> > So for those of us (the entire world), who have been relying on this
> behavior:
> >
> > > * en_US (.UTF-8) is used as the default English locale for all places
> that
> > >   don't have a specific variant (and often even then).  Generally,
> technical
> > >   users use English as a system locale
> >
> > How do we roll-back what you have done here, and still get en_US.UTF-8
> while
> > retaining the proper 24-hour time?
>
> > dpkg-reconfigure locales does not list "C.UTF-8" in the main "locales to
> > generate" list, but does offer it on the next screen as "Default locale
> for the
> > system environment". After selecting it, we get:
> >
> > # locale
> > LANG=C.UTF-8
> > LANGUAGE=
> > LC_TIME="en_US.UTF-8"
> > LC_ALL=en_US.UTF-8
> >
> > But still:
> >
> > # date
> > Thu 07 Feb 2019 09:53:47 AM UTC
>
> The root of this issue is worth raising on debian-devel:
>
> The en_US.UTF-8 locale has two purposes:
> • a locale for a silly country with weird customs (such as time going in
>   four discontinuous segments during the day, writing date in a
>   middle-endian format, an unit being shorter on land than surveyed but
>   longer than that in the  air, or another unit changing when measuring wet
>   vs dry vs slightly moist things)
> • base locale for the most of the world save for a few places (UK, AU, ...)
>   that have their specific locale -- and often even they use en_US for
>   consistency reasons.
>
>
> So I wonder what would be the best solution?  I can think of:
> • promoting C.UTF-8 in our user interfaces (allowing to select it in d-i,
>   making dpkg-reconfigure locales DTRT, making it the d-i default)
>   -- nice for Unix greybeards, but some users might want case-insensitive
>  sort, etc
> • inventing a new locale "en" without a country bias
>   -- good in the long term but problematic a month before freeze
>   -- could be good to have it anyway but not use it until after buster
> • ask glibc maintainers to revert the cherry-pick in #877900 for buster,
>   then pick a long-term solution
>
>
> One particular regression caused by this change is sorting no longer
> working: "12:01am" "1:01am" "12:01pm" "1:01pm" will be ordered wrong.
>
> On one hand, leftpondians may be entitled to their own locale.  On the
> other, let's punish the bastards for imperialism and imposing their own
> settings on the rest of the world. :p
>
>
> Meow!
> --
> ⢀⣴⠾⠻⢶⣦⠀
> ⣾⠁⢠⠒⠀⣿⡁ Remember, the S in "IoT" stands for Security, while P stands
> ⢿⡄⠘⠷⠚⠋⠀ for Privacy.
> ⠈⠳⣄
>
>


Re: FHS: Where to store user specific plugins / code

2018-03-01 Thread Peter Silva
another option:

-- it is best practice for daemons/services not to run as root.  They
should have an application-specific user.

-- some tools can be run in a systemish way by a specific user, but
also by other users in a less official way (think web server on a high
port instead of port 80.)

-- user preferences are standardized by freedesktop.org
   https://specifications.freedesktop.org/basedir-spec/latest/ar01s03.html

So preferences/plugin settings would go under ~/.config/package/ for
whatever user is running the tool, and state files under ~/.cache/package/.
The systemd .service file will have User= set, so the "normal" service
settings will be under that user's ~.  A minimal sketch of the lookup
follows.

It seems much cleaner to me than the httpd link farms, and it allows
users to spin up their own daemons (user httpd on a high port) with a
natural location for settings.
This aligns with systemd, where the --user flag lets users manage their
own daemons.
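
A minimal sketch of that lookup, following the basedir spec's documented
fallbacks ("mypkg" is a hypothetical package name):

    import os
    from pathlib import Path

    PKG = "mypkg"  # hypothetical package name

    def config_dir() -> Path:
        # basedir spec: use $XDG_CONFIG_HOME, defaulting to ~/.config
        base = os.environ.get("XDG_CONFIG_HOME") or Path.home() / ".config"
        return Path(base) / PKG

    def cache_dir() -> Path:
        # basedir spec: use $XDG_CACHE_HOME, defaulting to ~/.cache
        base = os.environ.get("XDG_CACHE_HOME") or Path.home() / ".cache"
        return Path(base) / PKG

    for d in (config_dir(), cache_dir()):
        d.mkdir(parents=True, exist_ok=True)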

On Thu, Mar 1, 2018 at 7:26 AM, Ian Jackson
 wrote:
> Marvin Renich writes ("Re: FHS: Where to store user specific plugins / code"):
>> [stuff]
>
> I agree completely with Marvin's message.
>
> Ian.
>



Re: Call for volunteers: FTP Team

2017-08-17 Thread Peter Silva
PNQteam naturally leads to Pinocchio ...  which isn't in Toy Story,
but you can't have everything.


On Thu, Aug 17, 2017 at 6:05 PM, Geert Stappers  wrote:
> On Thu, Aug 17, 2017 at 11:22:03PM +0200, Joerg Jaspert wrote:
>> On 14767 March 1977, Jonathan Carter wrote:
>> >> it has been quite a while since the last call for volunteers, so here is
>> >> an update: Yeah, we still need people, and we want you. Well, that is,
>> >> if you are a Debian Developer, for this. If you are not and want to
>> >> help, read the last paragraph please.
>> > If someone hypothetically joins, are they allowed to rename the FTP team
>> > to something that doesn't include "FTP"?
>>
>> GopherTeam?
>
> PNQteam
>
> Processing New Queue   Team.  Or another name for a team that does good work
> which has _NOTHING_ "File Transfer Protocol" related.
>
> We already did absolete ftp://
>
> Doing s/ftp.debian.org/archive.debian.org/  is very 2017.
> I mean ftp.debian.org is very 1995, okay 2005
>
>
> Groeten
> Geert Stappers
> --
> Leven en laten leven
>



Re: A radically different proposal for differential updates

2017-08-15 Thread Peter Silva
Isn't there kind of a universal issue that tar and compression happen
sort of in the wrong order?  Wouldn't it make more sense to make files
that were .gz.tar (i.e. compress the files individually, then have an
index into them via tar)?  Then tar works perfectly well for
extracting individual files without having to download the entire
thing.  For people who want the entire archive, no problem.  For people
who want to minimize client-side transfer, they can fetch byte ranges of
the archive to extract its table of contents.
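
A minimal sketch of the idea (gzip each member, then tar the compressed
members; any single member can later be pulled out and decompressed on its
own; file names here are hypothetical):

    import gzip
    import io
    import tarfile
    from pathlib import Path

    def build_gz_tar(paths, out="data.gz.tar"):
        # Plain, uncompressed tar of individually-gzipped members: the tar
        # index stays seekable while each member is still compressed.
        with tarfile.open(out, "w") as tar:
            for p in paths:
                blob = gzip.compress(Path(p).read_bytes())
                info = tarfile.TarInfo(name=p + ".gz")
                info.size = len(blob)
                tar.addfile(info, io.BytesIO(blob))

    def extract_member(archive, name):
        # Read and decompress a single member, ignoring all the others.
        with tarfile.open(archive, "r") as tar:
            return gzip.decompress(tar.extractfile(name + ".gz").read())

    # build_gz_tar(["etc/example.conf", "usr/bin/example"])
    # print(extract_member("data.gz.tar", "etc/example.conf"))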

The delta dpkgs are then just the .gz.tar of the new files. tar already
has --append, which might be used to just add the deltas to the
existing files, so they become a single dpkg that contains multiple
versions.

Once you can seek (and remotely use byte ranges) there are a whole lot
of options, and it is compressing after the index is built that takes
all those options away.
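
And the remote half: once a table of contents maps a member to an (offset,
size) within the archive, an HTTP Range request fetches just that slice.  A
sketch, assuming the server honours byte ranges and that offset/size point
at the data region of an individually-gzipped member (URL and numbers are
made up):

    import gzip
    import urllib.request

    def fetch_member(url, offset, size):
        rng = "bytes=%d-%d" % (offset, offset + size - 1)
        req = urllib.request.Request(url, headers={"Range": rng})
        with urllib.request.urlopen(req) as resp:  # expect 206 Partial Content
            return gzip.decompress(resp.read())

    # data = fetch_member("http://deb.example.org/pool/foo.gz.tar", 1536, 17688)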



On Tue, Aug 15, 2017 at 2:51 PM, Julian Andres Klode  wrote:
> On Tue, Aug 15, 2017 at 09:26:24AM +0200, Christian Seiler wrote:
>> Hi there,
>>
>> I've come to believe that binary diff packages are not the best way of
>> solving this issue. Intead I'd like to propse a radically different
>> solution to this issue.
>>
>> The gist of it: instead of adding a format for how deltas work, I
>> propose to introduce a new format for storing Debian packages that will
>> be used for both the initial installation _and_ incremental updates.
>>
>> This idea was inspired by the following talk given by Lennart
>> Poettering about a new tool he's written (which is already packaged for
>> Debian by the way):
>> https://www.youtube.com/watch?v=JnNkBJ6pr9s
>>
>>
>>
>> Now to my proposal:
>>
>> A Debian package currently consists of two files: control.tar.gz and
>> data.tar.xz (or .gz). What I want to propose is a new format that does
>> not contain a data.tar.xz at all. Instead I'd like to split the
>> data.tar.xz into chunks and have the new format only contain an index
>> that references these chunks. Let me call this new format "cdeb" for
>> "chunked deb".
>>
> [...]
>> Anyway: thoughts? Regards, Christian
>
> It's of course an awesome idea. But:
>
> I generally agree with the idea of chunk stores. They'd improve
> things a lot. Now, instead of chunking the tarfiles, chunk both
> the individual files, and the tarfiles. Then, with an index for
> the individual files in control.tar listing the chunks, you can
> easily reconstruct just the files that changed on your system
> and avoid any rebuilding of debs for upgrades :D
>
> That said, I believe that this change won't sell. Replacing the
> basic format the repository is made of won't fare well. Too many
> tools (most of which probably are not known) rely on the presence
> of deb files in archives.
>
> So as sad as it might be, I think we probably have to settle for
> delta files.
>
> --
> Debian Developer - deb.li/jak | jak-linux.org - free software dev
>   |  Ubuntu Core Developer |
> When replying, only quote what is necessary, and write each reply
> directly below the part(s) it pertains to ('inline').  Thank you.
>



Re: Proposal: A new approach to differential debs

2017-08-13 Thread Peter Silva
So in spite of being the *default*, it isn't that universal, and in
any event, we can just decide to change the default, no? One can tell
people with bandwidth limitations that their apt settings should
not delete packages after receipt, so that they can be used as the
basis for updates.  And these types of settings would appear to be
rather common already, so it isn't a huge change.

It strikes me as much simpler and lower-cost to add zsync to the current
repo/apt tools, and asking clients to do some caching to support
it is reasonable.


On Sun, Aug 13, 2017 at 1:19 PM, Christian Seiler <christ...@iwakd.de> wrote:
> On 08/13/2017 07:11 PM, Peter Silva wrote:
>>> apt by default automatically deletes packages files after a successful 
>>> install,
>>
>> I don't think it does that.
>
> The "apt" command line tool doesn't, but traditional "apt-get" does, as
> does "aptitude". This was documented in the release notes of Jessie and
> the changelog of the APT package when the "apt" wrapper was introduced.
>
> Regards,
> Christian



Re: Proposal: A new approach to differential debs

2017-08-13 Thread Peter Silva
> apt by default automatically deletes packages files after a successful 
> install,

I don't think it does that.  It seems to keep them around after install,
and even multiple versions.  I don't know the algorithm for the cache, but
it doesn't do what you say.

On my debian box, I have nothing but a straight install, changed nothing.

blacklab% cd /var/cache/apt/archives
blacklab% ls -al | wc -l
1450
blacklab% ls -al xsensors*
-rw-r--r-- 1 root root 17688 Mar  6 21:11 xsensors_0.70-3+b1_amd64.deb
blacklab% dpkg -l xsensors
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name  Version   Architecture  Description
+++-=-=-=-===
ii  xsensors  0.70-3+b1 amd64
hardware health information viewer
blacklab%

blacklab% ls -al xserver-xorg-video-nvidia_375*
-rw-r--r-- 1 root root 3129858 Feb 23 16:13
xserver-xorg-video-nvidia_375.39-1_amd64.deb
-rw-r--r-- 1 root root 3138320 May 28 11:28
xserver-xorg-video-nvidia_375.66-1_amd64.deb
-rw-r--r-- 1 root root 3101944 Jul 16 17:14
xserver-xorg-video-nvidia_375.66-2~deb9u1_amd64.deb
blacklab%

On Sun, Aug 13, 2017 at 12:43 PM, Julian Andres Klode <j...@debian.org> wrote:
> On Sun, Aug 13, 2017 at 10:53:16AM -0400, Peter Silva wrote:
>> You are assuming the savings are substantial.  That's not clear.  When
>> files are compressed, if you then start doing binary diffs, well it
>> isn't clear that they will consistently be much smaller than plain new
>> files.  it also isn't clear what the impact on repo disk usage would
>> be.
>
> The suggestion is for it to be opt-in, so packages can optimize their file
> layout when opting in - for example, the gzipped changelogs and stuff can be
> the rsyncable ones (this might need some changes in debhelper to generate
> diffable files if the opt-in option is set).
>
>>
>> The most straightforward option:
>> The least intrusive way to do this is to add differential files in
>> addition to the existing binaries, and any time the differential file,
>> compared to a new version exceeds some threshold size (for example:
>> 50%) of the original file, then you end up adding the sum total of the
>> diff files in addition to the regular files to the repos.  I haven't
>> done the math, but it's clear to me that it ends up being about double
>> the disk space with this approach. it's also costly in that all those
>> files have to be built and managed, which is likely a substantial
>> ongoing load (cpu/io/people)  I think this is what people are
>> objecting to.
>
> I would throw away patches that are too large, obviously. I think
> that twice the amount of space seems about reasonable, but then we'll
> likely restrict that to
>
> (a) packages where it's actually useful
> (b) to updates and their base distributions
>
> So we don't keep oldstable->stable diffs, or unstable->unstable
> diffs around. It's questionable if we want them for unstable at
> all, it might just be too much there, and we should just use them
> for stable-(proposed-)updates (and point releases) and stable/updates;
> and experimental so we don't accidentally break it during the
> dev cycle.
>
>>
>> A more intrusive and less obvious way to do this is to use zsync (
>> http://zsync.moria.org.uk/. ) With zsync, you build tables of content
>> for each file, using the same 4k blocking that rsync does. To handle
>> compression efficiently, there needs to be an understanding of
>> blocking, so a customized gzip needs to be used.  With such a format,
>> you produce the same .deb's as today, with the .zsyncs (already in
>> use?) and the author already provides some debian Packages files as
>> examples.  The space penalty here is probably only a few percent.
>>
>> the resource penalty is one read through each file to build the
>> indices, and you can save that by combining the index building with
>> the compression phase.  To get differential patches, one just fetches
>> byte-ranges in the existing main files, so no separate diff files
>> needed.  And since the same mechanisms can (should?) be used for repo
>> replication, the added cost is likely a net savings in bandwidth
>> usage, and relatively little complexity.
>>
>> The steps would be:
>>-- add gzip-ware flavour to package creation logic
>>-- add zsync index creation on repos. (potentially combined with first 
>> step.)
>>-- add zsync client to apt-* friends.
>>
>> To me, this makes a lot of sens

Re: Proposal: A new approach to differential debs

2017-08-13 Thread Peter Silva
You are assuming the savings are substantial.  That's not clear.  When
files are compressed, if you then start doing binary diffs, it
isn't clear that they will consistently be much smaller than plain new
files.  It also isn't clear what the impact on repo disk usage would
be.

The most straightforward option:
The least intrusive way to do this is to add differential files in
addition to the existing binaries; any time the differential file,
compared to a new version, exceeds some threshold size (for example:
50%) of the original file, you end up adding the sum total of the
diff files in addition to the regular files in the repos.  I haven't
done the math, but it's clear to me that it ends up being about double
the disk space with this approach. It's also costly in that all those
files have to be built and managed, which is likely a substantial
ongoing load (cpu/io/people).  I think this is what people are
objecting to.

A more intrusive and less obvious way to do this is to use zsync
(http://zsync.moria.org.uk/).  With zsync, you build tables of content
for each file, using the same 4k blocking that rsync does. To handle
compression efficiently, there needs to be an understanding of
blocking, so a customized gzip needs to be used.  With such a format,
you produce the same .debs as today, plus the .zsyncs (already in
use?), and the author already provides some debian Packages files as
examples.  The space penalty here is probably only a few percent.

The resource penalty is one read through each file to build the
indices, and you can save that by combining the index building with
the compression phase.  To get differential patches, one just fetches
byte ranges in the existing main files, so no separate diff files are
needed.  And since the same mechanisms can (should?) be used for repo
replication, the added cost is likely a net savings in bandwidth
usage, and relatively little complexity.

The steps would be:
   -- add the gzip-aware flavour to package creation logic
   -- add zsync index creation on repos (potentially combined with the
first step.)
   -- add a zsync client to apt-* and friends.

To me, this makes a lot of sense to do just for repo replication,
completely ignoring the benefits for people on low bandwidth lines,
but it does work for both.

On Sun, Aug 13, 2017 at 9:20 AM, Paul Wise  wrote:
> On Sun, Aug 13, 2017 at 5:38 AM, Adrian Bunk wrote:
>
>> It sounds like something that would have been a cool feature 20 years
>> ago when I was downloading Debian updates over an analog modem.
>>
>> Today the required effort, infrastructure and added complexity would
>> IMHO not be worth it for a potential few percent of bandwidth decrease.
>
> Low-speed connections and low bandwidth quotas (especially on mobile)
> are still common enough around the world that delta upgrades make a
> difference right now, IIRC even Google uses them for Chrome.
>
> --
> bye,
> pabs
>
> https://wiki.debian.org/PaulWise
>