Re: Salsa CI news

2020-02-09 Thread Dmitry Smirnov
On Saturday, 8 February 2020 1:49:20 PM AEDT Paul Wise wrote:
> There is one attribute of how Debian does things that clashes with
> being able to do this; service maintainers need to be able to update
> code on a different schedule to Debian stable and even backports
> time-frames.

When service maintainers build their services from upstream repositories,
there is no concern for the "stable" release schedule. They could use packages
from "testing" or "unstable", especially Golang ones, which are almost
always safe to install even on stable systems due to their statically linked
nature and few run-time dependencies.

The problem is a rather curious one. What would Debian be without packages?
Would there be anything left? The whole project is largely about good
packaging as a form of delivering free software with added value in regard
to integration, build-ability, consistency, respect for the FHS and DFSG,
continuous testing, etc., so as a project we are projecting the idea that
deploying and maintaining services installed from the package repository is
good. Certainly it is sometimes much easier to deploy and upgrade software
when it is installed from the official packages. But when service maintainers
refuse to use packaged software for no good reason, the message is not clear:
does the package maintainer not do a good enough job, or is the whole concept
wrong? It is as if we are saying: here, use the packages we have prepared
for you, but mind that they are not good enough for us (ourselves) to use...

In the case of Gitlab I recognise that Salsa could not be maintained from the
official packages. They are too fragile, often uninstallable even in
"unstable", and depend on the unofficial "fasttrack" repository. There are too
many dependencies and things get broken far too often in too many places.
In theory Gitlab could be in better shape with a larger team, but in the
current situation I see why Salsa operates from the vendor container image,
and that is reasonable.

But the case of Gitlab-Runner is different. The packaged version is mature
enough. It is trivial to incorporate it into a container image. I've been
running it in production for 3.5 years, ever since I packaged it.
It appears to me that Salsa admins don't use the packaged Gitlab-Runner simply
because they don't want to, and I don't understand why.

I think ideally service maintainers should be package co-maintainers. But
when there is a large burden in maintaining both the service and the
package, I understand how people might focus on just one thing.
But again, if everything we do here is about providing packaged software,
when is it ever the right thing not to use our own packages?
Certainly we don't trust upstream more than our fellow developers, right?

-- 
All the best,
 Dmitry Smirnov.

---

If liberty means anything at all, it means the right to tell people what
they do not want to hear.
-- George Orwell




Re: call for ftpmaster's transparency

2020-02-09 Thread Dmitry Smirnov
On Monday, 10 February 2020 1:01:05 PM AEDT Sean Whitton wrote:
> AIUI, the reason REJECT comments aren't public is because it might
> sometimes make people feel embarrassed.

Then many reviews of packaging work done by mentors would be
embarrassing too, but that's OK because everybody has a chance to learn from
those reviews. (Only those who don't do anything are never embarrassed.)


> ITPs are great for avoiding duplicated effort in most cases.  However,
> there are cases in which it is possible for someone to know pretty much
> for sure that there is no chance of any duplicated effort.  In such
> cases ITPs are busywork, which is demotivating to volunteers.

No. ITPs are opportunities to team up with others, not merely for
de-duplication.


> For example, if I break out some mature code from a project and make its
> first upstream release as an independent library, and then I want
> immediately to upload it to NEW so that the next release of the project
> it was broken out from can depend on the new library, there is no reason
> to file an ITP.  Since I am the upstream author and the code has only
> just been released, I can be confident no-one else is going to try to
> package it.

That might be a valid situation, but nevertheless how much effort is it to
file an ITP? Not much, even if filing a new ITP is not mandated.
But when an ITP is there, it can be referred to as a "blocked by" or
"blocking" bug, it can have "affects" relationships with other packages, etc.

Also you never know how long your package will stay in the NEW queue, and
during this time the lack of an ITP could affect developers' priorities.


> Another example is the Haskell team.  Due to the nature of the language,
> information on newly packaged libraries has to be committed to two
> different git repos, and so everyone working on Haskell in Debian is
> working from those two repos.  So again, no real danger of duplicated
> effort.

I've heard something similar about the Rust team. Fair enough, maybe there are
some legitimate cases when ITPs could be safely avoided. Though it is nice to
have an ITP anyway as a record of what is being introduced -- for example,
when a package is rejected, the ITP can capture the discussion and the reason
for rejection, which is always useful to have for posterity.

-- 
Best wishes,
 Dmitry Smirnov.

---

Richard Nixon got kicked out of Washington for tapping one hotel suite.
Today we're tapping every American citizen in the country, and no one has
been put on trial for it or even investigated. We don't even have an
inquiry into it.
-- Edward Snowden




Re: call for ftpmaster's transparency

2020-02-09 Thread Dmitry Smirnov
On Sunday, 9 February 2020 9:04:25 PM AEDT Michael Lustfield wrote:
> This is an understandable perspective, but secrecy probably isn't the best
> word.

Probably. If I had better linguistic faculties, I could have found a
better word. But I had to use what was available to me...


> I personally think this sounds like a fantastic (and not very difficult)
> idea.

Thank you.


> Where do you propose the bug mail be sent for NEW/binNEW packages without
> an ITP?

Same channels as usual. How can one comment on an ITP that does not exist?


> I suspect when you say, "member of ftp-masters team," what you mean is
> "FTP-Masters Trainee."

Correct.


> I agree that it could be valuable to see comments; however, they're almost
> always going to be from Trainees. Since we're not technically part of the
> team, it's important that we don't speak on behalf of the team. Publishing
> Trainee comments would effectively be doing that.

That's perfectly fine. I don't recall a single case when a package review was
not appreciated. A review or even a question asked on an ITP can be useful to
correct a problem or to provide more background. Whether the ITP feedback is
provided by a Trainee or not, it is useful either way.


> I would personally *LOVE* to see ITPs be a requirement for *ALL* new
> packages. Making it a requirement and expecting ftp-masters to ignore any
> upload until the ITP has existed for at least X days would be absolutely
> fantastic. It would fix some redundant library uploads (see
> golang/nodejs/etc.) and it would provide a mandatory level of review by
> the wider community.

I'm sure having it as a good practice would be enough, without mandating a
strong requirement to always have an ITP. There might be legitimate cases
for not filing an ITP, although I can think of only one: when a source
package is renamed...


> Back when I tried to get gitea packaged for main, I had a number of ITPs
> commented/closed mentioning the alternate library name or a reason it can't
> be packaged.

Makes sense.


> > I'd like Debian project leader to engage in the matter of improving
> > transparency of ftp-masters team operations and procedures.
> 
> This feels a lot like starting a GR and not allowing appropriate
> discussion. It's heavy-handed, isn't going to get anywhere, and is going
> to hurt feelings.

The project leader's duty is to facilitate communication. It is not wrong to
at least make him aware of the problem.


> > I want to encourage a public discussion regarding opening of the
> > ftp-master mail list to the public. Currently reasons for unjustified
> > secrecy of ftp- master processes is not explained...
> 
> It's often said that emotions don't play well with productive discussions.
> Adding phrases such as "where it belong", using "secrecy" over "privacy",
> calling it "unjustified", and immediately jumping to demands of the DPL are
> accusatory and inflammatory, and will likely just get you ignored or start
> an unproductive flame war.

It is my general observation that bug reporters (or those who raise concerns
on mailing lists) naturally tend to be perceived as over-emotional. That's
understandable, because they are either affected by the issue or concerned
enough to report it. And others probably take it less seriously, or they would
have reported it themselves...

Once again, "do not shoot the pianist", so to speak... I've expressed the
issue as best I could.


> Why do reviews take so long?

I'm not concerned about that, although it would be great to improve
processing time. IMHO the bigger problem is that queue processing is
unpredictable -- some packages, sometimes unimportant ones, are processed very
fast while others (including those that block bug fixes) can stay in the
queue for a very long time. It is very difficult to communicate the urgency of
a new package upload to the ftp-masters. I'd probably use the severity of the
pending ITP bug, but I'm not sure if that would be effective or even the right
thing to do...


> - The team is tiny

IMHO this is a very serious issue. There are too few ftp-masters, they are 
doing too much work, most certainly not delegating enough and not growing the 
team...


> - Much of the team seems very burned out

No wonder, given the weight of responsibilities. I'm sure we all much
appreciate their hard work...

-- 
Regards,
 Dmitry Smirnov.

---

We occasionally stumble over the truth but most of us pick ourselves up
and hurry off as if nothing had happened.
-- Winston Churchill




Re: call for ftpmaster's transparency

2020-02-09 Thread Mo Zhou
On Sun, Feb 09, 2020 at 07:01:05PM -0700, Sean Whitton wrote:
> One key problem with the current workflow is that it makes it very
> difficult to avoid reviewing identical files more than once.  Being able
> to avoid that would be a big improvement.

(I was just talking with Michael about this several minutes ago.)

Just leaking a part of my WIP work.

My core data structure looks like this

  {path: [hash, stamp, username, status, annotation]}

The "hash " field is a salted hash, calculated like this

  hash(data=read(path), salt=read(neighbor_license()))

This data structure is a fine-grained (per-path level) "accept/reject"
record.  Each path is a node. The "status" of a tree can be
automatically computed from its descendant nodes.

When a package enters NEW again, files with matching hashes will
automatically reuse the last status assigned by a human reviewer, where
the status is either "accept" or "reject".

There are still many other aspects in which I can reduce the time spent
by humans and improve efficiency.
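
To make the idea more concrete, here is a rough Python sketch of the
record keeping (names and the hashing details are illustrative only, not
the actual WIP code):

  import hashlib, os, time

  def neighbor_license(path):
      # Illustrative: use a license/copyright file next to `path` as the
      # salt, so the same bytes under different licensing hash differently.
      candidate = os.path.join(os.path.dirname(path), "LICENSE")
      return open(candidate, "rb").read() if os.path.exists(candidate) else b""

  def salted_hash(path):
      h = hashlib.sha256()
      h.update(neighbor_license(path))       # salt
      h.update(open(path, "rb").read())      # data
      return h.hexdigest()

  # {path: [hash, stamp, username, status, annotation]}
  records = {}

  def review(path, username, status, annotation=""):
      records[path] = [salted_hash(path), time.time(), username, status, annotation]

  def reuse_status(path):
      # On re-entry into NEW, an unchanged salted hash keeps the human decision.
      rec = records.get(path)
      return rec[3] if rec and rec[0] == salted_hash(path) else None

  def tree_status(paths):
      # The status of a tree is computed from its descendant nodes.
      statuses = [reuse_status(p) for p in paths]
      if any(s == "reject" for s in statuses):
          return "reject"
      return "accept" if all(s == "accept" for s in statuses) else "undecided"

The point is that a decision is keyed to the file content plus its
licensing context, so an unchanged file never needs a second human look.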



Re: call for ftpmaster's transparency

2020-02-09 Thread Sean Whitton
Hello,

On Sun 09 Feb 2020 at 04:04AM -06, Michael Lustfield wrote:

>> To make matters worse ftp-masters rarely leave their comments in ITP
>> issues. As I've recently learned that have profound effect on processing of
>> new packages.
>
> I personally think this sounds like a fantastic (and not very difficult) idea.

AIUI, the reason REJECT comments aren't public is because it might
sometimes make people feel embarrassed.

> I would personally *LOVE* to see ITPs be a requirement for *ALL* new packages.
> Making it a requirement and expecting ftp-masters to ignore any upload until
> the ITP has existed for at least X days would be absolutely fantastic. It 
> would
> fix some redundant library uploads (see golang/nodejs/etc.) and it would
> provide a mandatory level of review by the wider community.

ITPs are great for avoiding duplicated effort in most cases.  However,
there are cases in which it is possible for someone to know pretty much
for sure that there is no chance of any duplicated effort.  In such
cases ITPs are busywork, which is demotivating to volunteers.

For example, if I break out some mature code from a project and make its
first upstream release as an independent library, and then I want
immediately to upload it to NEW so that the next release of the project
it was broken out from can depend on the new library, there is no reason
to file an ITP.  Since I am the upstream author and the code has only
just been released, I can be confident no-one else is going to try to
package it.

Another example is the Haskell team.  Due to the nature of the language,
information on newly packaged libraries has to be committed to two
different git repos, and so everyone working on Haskell in Debian is
working from those two repos.  So again, no real danger of duplicated
effort.

> Why do reviews take so long?
> - The team is tiny
> - Much of the team seems very burned out
> - The ones that are active tend to stick to source or "unloved" tasks
> - There are some very large and/or messy packages that need review
> - There are a lot of redundant tasks and frequently-made mistakes
>   + A little more automation could help that
> - (my opinion) The tools are archaic, cumbersome, and inefficient
>   + Fixing this would be a very (non-technically) difficult task
>   + An idea I have would help to bring transparency to the process...
> ^ it's missing an interest requirement :(

One key problem with the current workflow is that it makes it very
difficult to avoid reviewing identical files more than once.  Being able
to avoid that would be a big improvement.

-- 
Sean Whitton




Re: Best practices for Debian developers who are also upstreams?

2020-02-09 Thread Sean Whitton
Hello,

On Sun 09 Feb 2020 at 10:57PM +02, Otto Kekäläinen wrote:

> Most interesting was perhaps Sean's git-remote-gcrypt. You even have an
> rpm spec file included, which helps illustrate this is a true upstream
> project and not a fully native Debian package.
>
> There does not seem to be any automation around release numbering —
> currently the repo tags, debian/changelog and redhat spec file are all
> bumped manually and independently.
> https://github.com/spwhitton/git-remote-gcrypt/commit/069f6ea33149fbaa8bd83423b7a7b591fcfed43b

Indeed.  If the package was more active I would implement some script
which would bump all the versions, which could be run right after
releasing.

> - Build .deb directly at upstream for every upstream commit: allow
> upstream developers to notice if they break Debian packaging to the
> level that it stops building. Upstream PRs (or MRs) should not be
> accepted if they make changes without considering if they break .deb
> (or .rpm if that is also included in the upstream repository).
> Effectively this requires that upstream git master branch has a
> debian/ directory.
>
> - Since there will be a debian/ directory, upstream might as well run
> full Debian QA on every commit. This will help detect if upstream
> developers make grave mistakes, like trying to run curl/git fetches
> during the build stage, introducing library symbols in the wrong way, or
> embedding something in the build that stops it from being a reproducible
> build anymore. Lintian even complains about spelling mistakes. Such QA
> items are not Debian specific – all code everywhere should be without
> spelling mistakes, all programs should stay reproducible if they already
> are, etc.

Right.  My sbuild-prerelease alias could be used in your CI to do this.

-- 
Sean Whitton




Re: Best practices for Debian developers who are also upstreams?

2020-02-09 Thread Richard Laager
On 2/8/20 7:57 PM, Simon Richter wrote:
> In my experience, it was often a good idea to separate my upstream and
> Debian hats.

+1

My current situation is that I'm a Debian packager who has, over time,
become more involved in upstream development and eventually ended up with
a commit bit.

On a different package, I'm in the process of taking over packaging for
something where I am already an upstream developer (or perhaps I should
say "was", as I haven't been active upstream in some time, though that
may change here).

That sounds like it may be different than your situation, but I hope I
can still offer some useful comments.

> At the same time, Debian packages have a well-defined installation
> procedure, so it is often possible to build a quick fix for a bug that
> will give users a working package, but where the patch is not suitable
> for upstream inclusion, like changing an init script or unit file.

+1. In such a case, you can clean that up into a more general solution
for upstream and later drop the Debian patch. I've had that exact thing
happen.

>> - have debian/ in upstream repository, and a CI that does the bulk of
>> Debian QA on every upstream commit and immediately gives feedback to
>> upstream devs that "break" something from Debian QA point of view

I'm in the process of packaging something where upstream (which is not
me) puts their debian directory (for CI purposes) as packaging/debian.
That allows them to do real Debian packaging for CI, and update that as
necessary, but keeps it out of the way of the real "debian" directory.
You might consider that option. Upstream's CI presumably has to
move/copy/link ./packaging/debian to ./debian before kicking off the
Debian build, but that's just one extra step.
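
For illustration, that pre-step can be tiny; a hypothetical Python version
(paths assumed from the layout described above):

  # Hypothetical CI pre-step: expose upstream's packaging/debian as ./debian
  # before invoking the normal Debian build tooling.
  import shutil
  shutil.copytree("packaging/debian", "debian", dirs_exist_ok=True)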

>> - bonus: import upstream releases as git operations

Do this either way! :)

In the packages I maintain, my debian/README.source says:
The `master` branch is upstream's `master` branch.
...
The `upstream/latest` branch contains the unpacked contents of the orig
tarball.  It is branched off the `master` branch, so the delta from
`master` (at the branch point) to `upstream/latest` contains generated
files that exist in the tarball but not in the upstream repository.

That said, I prefer to keep separate git checkouts for the package and
upstream to avoid the potential for accidents like pushing upstream pull
request branches to Debian Salsa or, far more likely, accidentally
pushing the current master to Debian Salsa.

> Maintaining debian/changelog inside a version control system is a bit of
> a category error, because it is a form of version control system log,

I'm a big fan of using git-buildpackage and `gbp dch`. My current
workflow (somewhat "copied" from e.g. lintian) is that debian/changelog
contains a stub like this until it is time to tag a Debian package release:

ntpsec (1.1.8+dfsg1-4+git) UNRELEASED; urgency=medium

  * UNRELEASED

 -- Richard Laager   Sat, 11 Jan 2020 22:49:28 -0600

Note that it is critical that the version in Salsa NOT be the same as
the version in the archive if you are using Salsa CI to test changes.
That's why the +git is added. If they are the same, when the CI code
tries to install the package, you'll have problems because of the
conflicting version numbers.
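
If you want to double-check that ordering, python3-apt (if installed)
compares versions the same way dpkg does; the versions below are just the
example from above:

  import apt_pkg
  apt_pkg.init()
  # A positive result means the +git version sorts higher, so the CI build
  # upgrades over the archive package instead of colliding with it.
  print(apt_pkg.version_compare("1.1.8+dfsg1-4+git", "1.1.8+dfsg1-4"))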

-- 
Richard





Re: Best practices for Debian developers who are also upstreams?

2020-02-09 Thread Otto Kekäläinen
Hello!

Currently the project where I am closest to this goal is rdiff-backup.
The upstream debian/ is almost 1:1 with the downstream debian/. I've
made a dh_gencontrol override to ensure the upstream CI
always builds binaries with the correct version number inherited from
a git tag in upstream:
https://github.com/rdiff-backup/rdiff-backup/blob/master/debian/rules
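
The gist of the version derivation, sketched in Python (the real override
lives in debian/rules; the tag format and version mangling here are just
an illustration):

  import subprocess
  # e.g. "v2.0.0" on a tagged commit, or "v2.0.0-12-g3f5c2a1" a bit later
  describe = subprocess.run(["git", "describe", "--tags"],
                            capture_output=True, text=True,
                            check=True).stdout.strip()
  tag, *rest = describe.lstrip("v").split("-")
  # Tagged commit -> "2.0.0"; otherwise -> "2.0.0+git12.3f5c2a1"
  version = tag if not rest else "{}+git{}.{}".format(tag, rest[0], rest[1].lstrip("g"))
  print(version + "-1")   # Debian revision appended for the CI build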

What remains is to decide what to do about debian/changelog. The
version there still leaks the source package version number that ends
up in *.changes and *.dsc...

Otherwise it is rolling pretty well. The upstream .travis-ci.yml
builds the .deb just like Debian will eventually do (apart from some
version and changelog differences), and all bugs I encounter in Debian
or elsewhere can be fixed directly upstream for everybody's
benefit. My work on the quality of the software is directly beneficial
upstream and "globally", not just for Debian. I still have a
separate Debian repo on salsa.debian.org, but in future I think I'll
have a debian/master branch directly in the upstream repository and
configure git-buildpackage to treat 'master' as the upstream branch.

One of the best inspirations for this was
https://vincent.bernat.ch/en/blog/2019-pragmatic-debian-packaging and
examples at 
https://github.com/vincentbernat/pragmatic-debian-packages/tree/master
but I am still looking to optimize the process as a whole.

I don't feel there is any problem with "two hats". Pretty much
everything the Debian policy states about software quality applies
universally to any open source project. The things that are very
Debian specific will only be expressed in code under debian/ and not
affect other distros. If this works out well, I'll expand to become
upstream in all projects I maintain in Debian, if the upstreams trust
me enough to give me write access. The closer a maintainer is to the
upstream, the better everything will be, I reason.

- Otto



Re: Best practices for Debian developers who are also upstreams?

2020-02-09 Thread Otto Kekäläinen
Hello!

Thanks for all the examples (LXQt [where I looked at the sources of
lxqt-panel], git-remote-gcrypt, nbd, review, glibc).

Most interesting was perhaps Sean's git-remote-gcrypt. You even have an
rpm spec file included, which helps illustrate this is a true upstream
project and not a fully native Debian package.

There does not seem to be any automation around release numbering —
currently the repo tags, debian/changelog and redhat spec file are all
bumped manually and independently.
https://github.com/spwhitton/git-remote-gcrypt/commit/069f6ea33149fbaa8bd83423b7a7b591fcfed43b

In the other cases I did not see a real feedback loop where upstream
developers would be exposed to Debian policy requirements or Debian
QA.

Seems many misunderstood my list of goals. My emphasis is not on the
git branch layout or patch handling. So let me rephrase my goals from
a quality assurance perspective:

- Build .deb directly at upstream for every upstream commit: allow
upstream developers to notice if they break Debian packaging to the
level that it stops building. Upstream PRs (or MRs) should not be
accepted if they make changes without considering if they break .deb
(or .rpm if that is also included in the upstream repository).
Effectively this requires that upstream git master branch has a
debian/ directory.

- Since there will be a debian/ directory, upstream might as well run
full Debian QA on every commit. This will help detect if upstream
developers make grave mistakes, like trying to run curl/git fetches
during the build stage, introducing library symbols in the wrong way, or
embedding something in the build that stops it from being a reproducible
build anymore. Lintian even complains about spelling mistakes. Such QA
items are not Debian specific – all code everywhere should be without
spelling mistakes, all programs should stay reproducible if they already
are, etc.

Now some ask: is it reasonable to require all upstream developers to
follow Debian QA? No, I don't think so. If the CI runs automatically
and the QA is automatic, then developers will be served with
in-context errors and warnings at the time they are changing some
code. For everybody to know the Debian Policy by heart and so on is
naturally unreasonable, but reading a Lintian warning about a spelling
error or broken multi-arch is not hard or unreasonable, and when those
things are addressed the upstream code base benefits and all
distributions benefit.

The contrary is in my opinion unreasonable: to expect the Debian
maintainer to fix tens of upstream issues at Debian import time, ranging
from fixing a failing test suite to tracking down why the build is no
longer reproducible. Yes, the maintainer could file N upstream bugs, but
upstreams are less likely to fix them (as they might have switched
context and don't want to revisit something they coded several months
back), and even if upstreams fix them, the maintainer would need to wait
some weeks, then manually cherry-pick and patch the Debian version, and
then do extra work following up on all those patches and dropping or
refreshing them on the next upstream release.

Having a debian/ in upstream, having .deb building as part of the
upstream CI, and keeping the delta between the Debian debian/ and the
upstream debian/ as small as possible makes life much easier for the
Debian maintainer, and in general all distros and the whole upstream
code base benefit from the extra QA.

- Otto



Bug#951018: ITP: ruby-tty-spinner -- Library for showing a spinner icon for terminal tasks that have non-deterministic time frame

2020-02-09 Thread Gabriel Filion
Package: wnpp
Severity: wishlist
Owner: Gabriel Filion 

* Package name: ruby-tty-spinner
  Version : 0.9.3
  Upstream Author : Piotr Murach 
* URL : https://ttytoolkit.org/
* License : expat
  Programming Lang: Ruby
  Description : Library for showing a spinner icon for terminal tasks that 
have non-deterministic time frame

tty-spinner provides a selection of different text-based animations that can
be shown while the user is waiting on a task running in a terminal that will
end at some unknown time in the future, or whose completion time can only be
estimated.

Those tasks will usually be waiting for some I/O, for example a download or a
task that was requested from a service that will give an answer whenever the
response is ready.


This package is currently required for packaging puppet-development-kit
(pdk), but might be needed for other ruby-based scripts expected to run in a
terminal.

I plan on maintaining this library within the ruby team. I will ask for
sponsorship from within the team to ensure that I follow the team's policies
properly.



Re: GSoC projects about SUSI.AI

2020-02-09 Thread Norbert Preining
Hi Mo,

> "Packages involved" means the packages that you intend to upload to our
> archive. The freeness of plain text files (e.g. code) is easy to judge.
> When you have doubt whether to upload a binary blob (pretrained model)
> to main or contrib, feel free to ask me publically.

Thanks, I will keep that in mind.

> For example, these looks like typical questionable models:
> https://github.com/Kitt-AI/snowboy/tree/master/resources/models

Not only that ;-) There is a binary blob, too, so this is anyway a no-go
for Debian.

> > What are the definitions of DFSG-compliant for models? I guess that is
> > written in [1]
> 
> I interpret DFSG-compliant model as the "Free Model" defined in [1].
> I need to update the document when I got enough free time to do so.

Yes, but there was no actual link [1] in your first post,
which is why I asked what the actual location of [1] is!

Best

Norbert

--
PREINING Norbert   http://www.preining.info
Accelia Inc. + IFMGA ProGuide + TU Wien + JAIST + TeX Live + Debian Dev
GPG: 0x860CDC13   fp: F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C DC13



Re: Bug#950760: RFS: libbpf/0.0.6-1 -- eBPF helper library (development files)

2020-02-09 Thread Sudip Mukherjee
On Fri, Feb 7, 2020 at 7:58 PM Sudip Mukherjee
 wrote:
>
> On Fri, Feb 7, 2020 at 10:17 AM Sudip Mukherjee
>  wrote:
> >
> > On Thu, Feb 6, 2020 at 10:53 PM Christian Barcenas
> >  wrote:
> > >
> > > I just noticed that your packaging repo is currently empty.
> > > Would you be able to push your current progress to Github
> > > so that it's easier to review the source package?
> >
> > Pushed now. Its without any epoch in the version. I will add the epoch
> > and push again tonight after back from $dayjob.
>
> Also pushed in wip/epoch branch with the epoch in only binary packages
> for your review. I have not yet uploaded to mentors.

And I have now pushed to mentors the package with epoch only in the
binary packages and I hope that is the consensus here.


-- 
Regards
Sudip



Re: call for ftpmaster's transparency

2020-02-09 Thread Xavier
Le 09/02/2020 à 18:12, gregor herrmann a écrit :
> On Sun, 09 Feb 2020 04:04:25 -0600, Michael Lustfield wrote:
> 
>> I would personally *LOVE* to see ITPs be a requirement for *ALL* new 
>> packages.
> 
> Fine with me.
> 
>> Making it a requirement and expecting ftp-masters to ignore any upload until
>> the ITP has existed for at least X days would be absolutely fantastic. 
> 
> Ehm, please, no.
> I would find it highly interruptive for my work if I'd have to wait
> for X days.

+1: don't add another delay to the NEW queue!

>> It would
>> fix some redundant library uploads (see golang/nodejs/etc.) and it would
>> provide a mandatory level of review by the wider community.
>> Back when I tried to get gitea packaged for main, I had a number of ITPs
>> commented/closed mentioning the alternate library name or a reason it can't 
>> be
>> packaged.
> 
> Maybe that's helpful for some teams, in the perl team our tools
> (dh-make-perl in particular) check for existing packages and existing
> wnpp bugs.

Same for the JS Team; our npm2deb tool shows if a library already exists.

>> Why do reviews take so long?
> 
> As a side note: Not all reviews take long; there seems to be quite
> some variance in the time they take.
> 
> 
> Cheers,
> gregor, who's usually very happy with the turnaround time of
> NEW packages

Same when I'm working in the Perl Team, but not when I'm packaging Node.js
modules :-/

Cheers,
Xavier



Re: Y2038 - best way forward in Debian?

2020-02-09 Thread Ben Hutchings
On Sun, 2020-02-09 at 11:57 +0100, Florian Weimer wrote:
> * Ben Hutchings:
> 
> > If I recall correctly, glibc *will* provide both entry points, so there
> > is no ABI break.  But the size of time_t (etc.) exposed through libc-
> > dev is fixed at glibc build time.
> 
> Is this a Debian-specific decision?

I thought that was the *upstream* decision, but perhaps that's still not
decided after all?

Ben.

> There has been a proposal upstream not to support 32-bit time_t for
> new applications at all, but I don't think we will go in that
> direction.
-- 
Ben Hutchings
The world is coming to an end.  Please log off.






Re: call for ftpmaster's transparency

2020-02-09 Thread gregor herrmann
On Sun, 09 Feb 2020 04:04:25 -0600, Michael Lustfield wrote:

> I would personally *LOVE* to see ITPs be a requirement for *ALL* new packages.

Fine with me.

> Making it a requirement and expecting ftp-masters to ignore any upload until
> the ITP has existed for at least X days would be absolutely fantastic. 

Ehm, please, no.
I would find it highly interruptive for my work if I'd have to wait
for X days.

> It would
> fix some redundant library uploads (see golang/nodejs/etc.) and it would
> provide a mandatory level of review by the wider community.
> Back when I tried to get gitea packaged for main, I had a number of ITPs
> commented/closed mentioning the alternate library name or a reason it can't be
> packaged.

Maybe that's helpful for some teams, in the perl team our tools
(dh-make-perl in particular) check for existing packages and existing
wnpp bugs.
 
> Why do reviews take so long?

As a side note: Not all reviews take long; there seems to be quite
some variance in the time they take.


Cheers,
gregor, who's usually very happy with the turnaround time of
NEW packages

-- 
 .''`.  https://info.comodo.priv.at -- Debian Developer https://www.debian.org
 : :' : OpenPGP fingerprint D1E1 316E 93A7 60A8 104D  85FA BB3A 6801 8649 AA06
 `. `'  Member VIBE!AT & SPI Inc. -- Supporter Free Software Foundation Europe
   `-   NP: Ry Cooder: Poor Man's Shangri-La




Bug#950996: ITP: octave-matgeom -- computational geometry for Octave

2020-02-09 Thread Rafael Laboissiere

Package: wnpp
Severity: wishlist
Owner: Rafael Laboissiere 

* Package name: octave-matgeom
  Version : 1.2.2
  Upstream Author : David Legland  and Juan Pablo 
Carbajal 
* URL : https://octave.sourceforge.io/matgeom/
* License : BSD-2-clause and GPL-3+
  Programming Lang: Octave
  Description : Computational geometry for Octave

This package contains a geometry toolbox for 2D/3D geometric computing in
Octave, a numerical computation software. It contains several hundred
functions for the creation and manipulation of 2D and 3D shapes such as
point sets, lines, polygons, 3D meshes, ellipses, etc.


This Octave add-on package is part of the Octave-Forge project.

A preliminary version of the Debianized package is available at:

https://salsa.debian.org/pkg-octave-team/octave-matgeom



Re: Heads up: persistent journal has been enabled in systemd

2020-02-09 Thread Hideki Yamane
Hi,

 Thanks for your heads up.

On Sat, 1 Feb 2020 04:05:55 +0100
Michael Biebl  wrote:
> with today's upload of systemd 244.1-2 I finally enabled persistent
> journal by default [1]. It has been a long requested feature.

 I read this thread and other info; my thoughts are:


Pros)

 Well, having read upstream's documentation [1], I prefer the persistent
 journal if the features are implemented as described. In particular, the
 journald logging feature seems to be more reliable (tamper-proof) than
 normal syslog's: e.g. if someone hacks your system and you find out, plain
 log files are NOT easily trusted information. However, if you've enabled
 the persistent journal, it's hard to falsify.


 This change is non-destructive; users can change the default setting to
 go back to rsyslog, as Michael notes.


Cons)

 There are some regressions at "reading" logs (e.g. we must specify syslog
 facilities by number; see the sketch below) [2].

 Users will have to learn new "How to use journalctl" habits.
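
 For example, even with the python3-systemd bindings the facility has to
 be given as a number (illustrative sketch):

  from systemd import journal

  j = journal.Reader()
  j.this_boot()
  j.add_match(SYSLOG_FACILITY=4)   # 4 = "auth"; the field is stored numerically
  for entry in j:
      print(entry.get('MESSAGE', ''))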


Note)
 
 Ubuntu already did this change on upgrades and it went smoothly.
 



 So, my conclusion is

 Use "reliable logging system" is good for our users, so I prefer it as
 the default. If you need more flexibility, then you can install rsyslog
 or something for your logging system - yes, you have a choice! :)


[1] 
https://docs.google.com/document/pub?id=1IC9yOXj7j6cdLLxWEBAGRL6wl97tFxgjLUEHIX3MSTs
[2] https://github.com/systemd/systemd/issues/9716



-- 
Regards,

 Hideki Yamane henrich @ debian.org/iijmio-mail.jp



Re: Best practices for Debian developers who are also upstreams?

2020-02-09 Thread Florian Weimer
* Otto Kekäläinen:

> Is somebody else already doing something similar like this?

We are doing this with glibc in Fedora, which is not Debian, but kind
of similar.  We try to push all backportable fixes to the upstream
release branches (and master) and synthesize new pseudo-release
tarballs from those branches.  This means that (the equivalent of)
.orig.tar.xz reflects upstream contents, but is not necessarily an actual
release tarball.

This means that the set of downstream patches is reasonably small, and
grows only moderately due to ongoing bug-fixing.  In some cases, we
want to backport things into Fedora which do not seem appropriate for
upstream, and those end up as patch files.

The distribution packaging bits do not live in the upstream
repository, but we have special (glibc-specific) scripts which allow
us to merge in upstream changes (using git cherry-pick and git rebase),
so this is mostly a matter of training people to use these scripts.

I think this approach is only possible because we have stable release
branches upstream where we can backport the things we are interested
in (and other contributors are reasonably conservative), and we are
willing to respin the tarballs.  It simplifies matters for the
uploaders, but we cannot easily bisect individual changes/upstream
commits because they do not exist downstream, and the upstream
repository lacks the distribution customizations.



Re: Y2038 - best way forward in Debian?

2020-02-09 Thread Florian Weimer
* Ben Hutchings:

> If I recall correctly, glibc *will* provide both entry points, so there
> is no ABI break.  But the size of time_t (etc.) exposed through libc-
> dev is fixed at glibc build time.

Is this a Debian-specific decision?

There has been a proposal upstream not to support 32-bit time_t for
new applications at all, but I don't think we will go in that
direction.



Re: call for ftpmaster's transparency

2020-02-09 Thread Mo Zhou
Hi Niels,

On Sun, Feb 09, 2020 at 11:22:46AM +0100, Niels Thykier wrote:
> For the parts involving tooling, are there bugs/salsa issues describing
> the issue so a "non-FTP-team"-member can take a stab at fixing them?

First of all, the major problem we are talking about (the reviewing
process being so slow) is not well defined, because what is being
optimized is not machine code but a human working procedure.

So it's not easy for people to open a bug and describe the definite
problem they have found. For instance, bugs saying "hi dak devs, I found
the tool not friendly enough for human efficiency, please fix it" are
very likely terrible bug reports.

Something new needs to be invented (and eventually incorporated into
somewhere like dak). I happen to have a bottom-up idea that is still a
work-in-progress [1]. Although I believe what I'm doing can greatly
facilitate my NEW reviewing process (as a trainee), I still suffer from
a lack of time and energy to push the draft forward.

I had some further private discussion with Michael and Dmitry. People's
opinions on the solution differ. So I speculate that the community will
have to come up with some more concrete ideas and experiment with them a
little bit to settle the whole NEW-SO-SLOW issue.

[1] see my old post: "Idea: intermediate  license review"



Re: call for ftpmaster's transparency

2020-02-09 Thread Jonas Smedegaard
Quoting Michael Lustfield (2020-02-09 11:04:25)
> On Thu, 06 Feb 2020 10:32:42 +1100
> Dmitry Smirnov  wrote:
> 
> > IMHO it is disturbing that one of the most essential processes in 
> > Debian -- acceptance of new and modified packages -- operates almost 
> > in secrecy.
> 
> This is an understandable perspective, but secrecy probably isn't the 
> best word.
> 
> > To make matters worse ftp-masters rarely leave their comments in ITP 
> > issues. As I've recently learned that have profound effect on 
> > processing of new packages.
> 
> I personally think this sounds like a fantastic (and not very 
> difficult) idea.

Me too.


> Where do you propose the bug mail be sent for NEW/binNEW packages 
> without an ITP?

I suggest (for now) to use our issue tracker only for packages with an 
ITP.


> > One of my packages spent a year in the NEW queue at some point 
> > raising to position number 4. Apparently before release of Buster 
> > (2019-07-06) member of ftp-masters team left an internal (invisible 
> > to the public) comment on my package that was not communicated to me 
> > until 7 months later when my package was rejected based on that 
> > comment. The comment could have been addressed without delay if it 
> > was left on the corresponding ITP issue where it belong.
> 
> I suspect when you say, "member of ftp-masters team," what you mean is 
> "FTP-Masters Trainee." FWIW- Trainees are not technically part of the 
> team. We get just enough access to be able to provide package reviews. 
> Those reviews are then either discussed with us or sent back in a 
> rejection/prod message.
> 
> I agree that it could be valuable to see comments; however, they're 
> almost always going to be from Trainees. Since we're not technically 
> part of the team, it's important that we don't speak on behalf of the 
> team. Publishing Trainee comments would effectively be doing that.

I suggest (for now) that ftp trainees CC an ITP (when such ITP exists) 
when they share their findings with ftp-masters.  To help avoid 
misunderstandings, such messages could begin with something like this:

  NB! This is no FTP-Masters ruling (just suggestions from a Trainee).

Is anything stopping Trainees from voluntarily changing their praxis 
right now to cc ITPs when available?


> > A precious time was lost but more importantly one can see that 
> > current process requires an extra effort to communicate with 
> > maintainers -- a something that would not be necessary if 
> > ftp-masters use the official channel that exist specifically to 
> > discuss introduction of new packages -- ITP bug reports. [...]
> 
> I would personally *LOVE* to see ITPs be a requirement for *ALL* new 
> packages. Making it a requirement and expecting ftp-masters to ignore 
> any upload until the ITP has existed for at least X days would be 
> absolutely fantastic. It would fix some redundant library uploads (see 
> golang/nodejs/etc.) and it would provide a mandatory level of review 
> by the wider community.
> 
> Back when I tried to get gitea packaged for main, I had a number of 
> ITPs commented/closed mentioning the alternate library name or a 
> reason it can't be packaged.

I think we don't need mandatory ITPs to get the ball rolling on better 
transparency.

I suggest that (for now) we just make transparency yet another argument 
for voluntarily filing ITPs.


 - Jonas

-- 
 * Jonas Smedegaard - idealist & Internet-arkitekt
 * Tlf.: +45 40843136  Website: http://dr.jones.dk/

 [x] quote me freely  [ ] ask before reusing  [ ] keep private




Bug#950988: ITP: libcglm -- Optimized OpenGL Mathematics library for C

2020-02-09 Thread Leon Marz
Package: wnpp
Severity: wishlist
Owner: Leon Marz 

* Package name: libcglm
  Version : 0.6.2
  Upstream Author : Recep Aslantas 
* URL : https://github.com/recp/cglm
* License : MIT
  Programming Lang: C
  Description : Optimized OpenGL Mathematics library for C

cglm is an optimized 3D math library written in C99 (compatible with C89).
It is similar to the original glm library except this is mainly for C.

This library is useful when you're writing an OpenGL/Vulkan program in C
and don't want to switch to C++ just to be able to include glm.
It uses the same header-only approach as glm, but also supports pre-compiled
function calls.



Re: call for ftpmaster's transparency

2020-02-09 Thread Niels Thykier
Michael Lustfield:
> [...]
> 
> I too would love to engage in a civil discussion about ways to improve the
> situation. Let's start with this-
> 
> Why do reviews take so long?
> - The team is tiny
> - Much of the team seems very burned out
> - The ones that are active tend to stick to source or "unloved" tasks
> - There are some very large and/or messy packages that need review
> - There are a lot of redundant tasks and frequently-made mistakes
>   + A little more automation could help that
> - (my opinion) The tools are archaic, cumbersome, and inefficient
>   + Fixing this would be a very (non-technically) difficult task
>   + An idea I have would help to bring transparency to the process...
> ^ it's missing an interest requirement :(
> 

For the parts involving tooling, are there bugs/salsa issues describing
the issue so a "non-FTP-team"-member can take a stab at fixing them?

~Niels



Re: call for ftpmaster's transparency

2020-02-09 Thread Michael Lustfield
On Thu, 06 Feb 2020 10:32:42 +1100
Dmitry Smirnov  wrote:

> IMHO it is disturbing that one of the most essential processes in Debian
> -- acceptance of new and modified packages -- operates almost in secrecy.

This is an understandable perspective, but secrecy probably isn't the best word.

> To make matters worse ftp-masters rarely leave their comments in ITP
> issues. As I've recently learned that have profound effect on processing of
> new packages.

I personally think this sounds like a fantastic (and not very difficult) idea.

Where do you propose the bug mail be sent for NEW/binNEW packages without an
ITP?

> One of my packages spent a year in the NEW queue at some point raising to
> position number 4. Apparently before release of Buster (2019-07-06) member
> of ftp-masters team left an internal (invisible to the public) comment on
> my package that was not communicated to me until 7 months later when my
> package was rejected based on that comment. The comment could have been
> addressed without delay if it was left on the corresponding ITP issue where
> it belong.

I suspect when you say, "member of ftp-masters team," what you mean is
"FTP-Masters Trainee." FWIW- Trainees are not technically part of the team. We
get just enough access to be able to provide package reviews. Those reviews are
then either discussed with us or sent back in a rejection/prod message.

I agree that it could be valuable to see comments; however, they're almost
always going to be from Trainees. Since we're not technically part of the team,
it's important that we don't speak on behalf of the team. Publishing Trainee
comments would effectively be doing that.


> A precious time was lost but more importantly one can see that current
> process requires an extra effort to communicate with maintainers -- a
> something that would not be necessary if ftp-masters use the official
> channel that exist specifically to discuss introduction of new packages --
> ITP bug reports.
> [...]

I would personally *LOVE* to see ITPs be a requirement for *ALL* new packages.
Making it a requirement and expecting ftp-masters to ignore any upload until
the ITP has existed for at least X days would be absolutely fantastic. It would
fix some redundant library uploads (see golang/nodejs/etc.) and it would
provide a mandatory level of review by the wider community.

Back when I tried to get gitea packaged for main, I had a number of ITPs
commented/closed mentioning the alternate library name or a reason it can't be
packaged.

> I'd like Debian project leader to engage in the matter of improving
> transparency of ftp-masters team operations and procedures.

This feels a lot like starting a GR and not allowing appropriate discussion.
It's heavy-handed, isn't going to get anywhere, and is going to hurt feelings.

> As very minimum I recommend to change current ftp-master procedures to use
> ITP bugs instead of internal comments whenever possible, for the sake of
> transparency and to optimise communication.

I replied to this idea above.

> I want to encourage a public discussion regarding opening of the ftp-master
> mail list to the public. Currently reasons for unjustified secrecy of ftp-
> master processes is not explained...

It's often said that emotions don't play well with productive discussions.
Adding phrases such as "where it belong", using "secrecy" over "privacy",
calling it "unjustified", and immediately jumping to demands of the DPL are
accusatory and inflammatory, and will likely just get you ignored or start an
unproductive flame war.


I too would love to engage in a civil discussion about ways to improve the
situation. Let's start with this-

Why do reviews take so long?
- The team is tiny
- Much of the team seems very burned out
- The ones that are active tend to stick to source or "unloved" tasks
- There are some very large and/or messy packages that need review
- There are a lot of redundant tasks and frequently-made mistakes
  + A little more automation could help that
- (my opinion) The tools are archaic, cumbersome, and inefficient
  + Fixing this would be a very (non-technically) difficult task
  + An idea I have would help to bring transparency to the process...
^ it's missing an interest requirement :(

-- 
Michael Lustfield




Re: Best practices for Debian developers who are also upstreams?

2020-02-09 Thread Wouter Verhelst
On Sat, Feb 08, 2020 at 10:07:48PM +0200, Otto Kekäläinen wrote:
> Hello!
> 
> I've ended up in being both the maintainer in Debian and an upstream
> developer for a couple of packages and I have been fantasizing about
> how to optimize my workflow so that I primarily fix all bugs and do QA
> directly on the upstream development version (=upstream git master)
> and then have only a very small overhead work then importing and
> uploading new upstream releases in Debian.

So, I have four packages that are in various ways similar to this:

- nbd, which I started maintaining in Debian before becoming upstream
  for it;
- SReview, which I started maintaining upstream before uploading it to
  Debian;
- fdpowermon, which I released first by uploading it to Debian;
- ola, which I maintain for Debian and do not really maintain upstream,
  but which upstream did give me commit rights to their repository for.

I have separate upstream and debian branches for nbd; however, if
someone reports a bug, I will fix it upstream first, and then (possibly,
if applicable and it's too soon to release an upstream version)
cherry-pick the patch to the debian branch. There is no debian/
directory in the upstream branch. Updating to a new upstream release for
Debian involves a simple "git merge" of the upstream branch, which will
always work if I never violated that policy. Needless to say, nbd is not
a native package.

I do not maintain separate branches for SReview or fdpowermon. The
difference between the two, however, is that SReview is uploaded to CPAN
as well as Debian. The CPAN upload does not contain the debian/
directory, and I do use the tarball of the CPAN upload as the
orig.tar.gz for the Debian upload. However, they're both built from the
same git checkout. Fdpowermon, on the other hand, I do not upload to
CPAN (it's too simple for that); it is instead uploaded as a native
package to Debian.

For ola, upstream at one point committed a squash commit of all the
Debian patches I had committed to my debian branch at that point in
time. The next time I tried to merge their newest release by way of the
git tag, things went a bit haywire. I do not recommend this approach.

-- 
To the thief who stole my anti-depressants: I hope you're happy

  -- seen somewhere on the Internet on a photo of a billboard