Re: Let's start salvaging packages! -- let's start working on the text.
Dear Tobias,

> [1] https://pad.riseup.net/p/debian-salvaging-packages-keep

Thanks for moving forward with your proposal. I'll have a poke at the
Etherpad in the upcoming days or so.

Whilst you outline a plan of sorts, do you have any rough timetable in
your head that you would like to share? That might help focus or
motivate deeper discussion to some degree.

> The discussion can and should of course continue, especially as it is
> vacation time atm.

Combined with post-Debconf tiredness/blues…

Best wishes,

--
      ,''`.
     : :'  :     Chris Lamb
     `. `'`      la...@debian.org / chris-lamb.co.uk
       `-
Re: What to do about packages with dead alioth lists as the maintainer.
peter green wrote...

> Nearly 3 months ago there was a mass bug filing on packages with dead
> alioth lists as maintainer. Many of these bugs are still open with no
> maintainer response

Yeah, but also a lot of people have already reacted. I got a lot of
e-mails sent to -close lately ...

> (note: it appears that the submitter of the bugs tried to usertag them
> but failed to actually do so)

Possibly. The tags were included in the announcement, nobody bothered
to comment.

> What should be done about these in the event that the maintainers
> don't sort it out? is it reasonable to make a NMU promoting the first
> co-maintainer to maintainer? is it reasonable to make a NMU orphaning
> the package? (and if-so should the list of co-maintainers be left in
> place?) In either case should a final warning be sent to the package's
> co-maintainers?

Orphaning the packages is quite harsh when it seems at least some
uploaders are still active - although not necessarily on that
particular package, so they certainly should see a gentle warning
beforehand. The bottom line still is: Maintainers should be encouraged
to do their chores, not be hushed away. Also, orphaning is rather the
last resort, the list of packages in that state is already way too
huge.

Looking at the current state I'm quite undecided. It's hard to tell why
some maintainers don't react. The first package I checked even had an
upload recently, ignoring the RC bug. They don't know? They don't care?
At the same time I'm not sure how much effort should be spent to
enforce a rule (the Debian policy thingie on address in Maintainer:) -
a rule that I consider important, that's why I did this MBF - while
many people appear to have a more relaxed view on this.

For the time being I'm thinking about technical means (yes, although
they don't solve social problems): It seems these bugs do not block
testing transition, that could be changed. And ftp-master could
autoreject packages with a defunct alioth address, the list is public.
Still, all this is an issue, and it should be resolved at freeze time.

    Christoph
Bug#906008: ITP: lizzie -- GUI for analyzing games in real time using Leela Zero
Package: wnpp
Severity: wishlist
Owner: Ximin Luo

* Package name    : lizzie
  Version         : 0.5
  Upstream Author : featurecat
* URL             : https://github.com/featurecat/lizzie
* License         : GPL-3
  Programming Lang: Java
  Description     : GUI for analyzing games in real time using Leela Zero

Features include:
- show win rates and confidence levels for selected moves on the board
- show best move sequence continuation, for these selected moves
- display a graph of winrate against move number
- show whole game history including forked moves
- interactive play including undo/redo
- load and save games in SGF format
Re: GCC and binutils updates for buster
On Mon, Aug 13, 2018 at 1:19 AM, Manuel A. Fernandez Montecelo wrote:
> 2018-07-30 22:36 Adrian Bunk:
>>
>> And the next burden will be if riscv64 gets added in bullseye.
>
> [*] Unlike other arches, this one is not restricted to a single vendor
>     so hardware can be annouced at any time from unexpected parties;
>     still, only a few months are left.

Only a few months are left before buster, but Adrian was talking about
bullseye, which is quite a long way off.

--
bye,
pabs

https://wiki.debian.org/PaulWise
Bug#906002: ITP: node-has-object-spread -- Runtime detection of ES6 spread syntax
Package: wnpp
Severity: wishlist
Owner: ro...@debian.org
X-Debbugs-CC: debian-devel@lists.debian.org

* Package name    : node-has-object-spread
  Version         : 1.0.0
  Upstream Author : Renée Kooi
* URL             : https://github.com/goto-bus-stop/has-object-spread
* License         : Apache-2.0
  Programming Lang: JavaScript
  Description     : Runtime detection of ES6 spread syntax

 This package is a unit test helper detecting whether JavaScript (ES6)
 spread syntax is supported by the JavaScript engine.
 .
 Spread syntax allows an iterable such as an array expression or string
 to be expanded in places where zero or more arguments (for function
 calls) or elements (for array literals) are expected, or an object
 expression to be expanded in places where zero or more key-value pairs
 (for object literals) are expected.
 .
 This package is a build dependency of browserify, a JavaScript tool
 that allows developers to write Node.js-style modules that compile for
 use in the browser.
 .
 Node.js is an event-based server-side JavaScript engine.
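For illustration, a runtime feature test of this kind could be sketched
as follows. This is a hypothetical sketch, not the package's actual
implementation; the function name is made up:

```javascript
// Hypothetical sketch of runtime object-spread detection -- NOT the
// actual code of node-has-object-spread.
function hasObjectSpread() {
  try {
    // Build the expression via Function() so this file still parses
    // on engines that reject spread syntax at parse time.
    return new Function('return ({ ...{ a: 1 } }).a === 1')();
  } catch (e) {
    return false; // SyntaxError: spread syntax unsupported
  }
}

console.log(hasObjectSpread()); // true on Node.js 8.3 or later
```

The try/catch around a `new Function(...)` call is the usual trick for
probing syntax support without breaking parsing of the probing file
itself.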
Bug#905994: O: libtool
Package: wnpp

I'm orphaning libtool. It currently has 1 RC bug, and the last NMU at
least seems to cause a regression.

Kurt
Bug#905987: ITP: node-make-generator-function -- Unit testing helper for Node.js returning a generator function
Package: wnpp
Severity: wishlist
Owner: ro...@debian.org
X-Debbugs-CC: debian-devel@lists.debian.org

* Package name    : node-make-generator-function
  Version         : 1.1.0
  Upstream Author : Jordan Harband
* URL             : https://github.com/ljharb/make-generator-function
* License         : Expat
  Programming Lang: JavaScript
  Description     : Unit testing helper for Node.js returning a generator function

 This package allows one to create portable unit tests using ES6
 generator functions.
 .
 This module returns an (arbitrary) generator function, or undefined if
 generator syntax is unsupported. It can also return a concise
 generator function, if the JavaScript engine supports that form.
 .
 A generator is a function that can stop midway and then continue from
 where it stopped. A generator looks like a function but behaves like a
 JavaScript iterator.
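To illustrate the idea, a helper along these lines could look like the
following. This is a hypothetical sketch, not the package's actual
code; the function name is invented:

```javascript
// Hypothetical sketch of the idea behind make-generator-function --
// NOT the package's actual implementation.
function getGeneratorFunction() {
  try {
    // Built via Function() so this file parses even on engines
    // without generator support.
    return new Function('return function* () { yield 1; yield 2; };')();
  } catch (e) {
    return undefined; // generator syntax unsupported
  }
}

const gen = getGeneratorFunction();
if (gen) {
  const it = gen();             // a generator behaves like an iterator
  console.log(it.next().value); // 1
  console.log(it.next().value); // 2
  console.log(it.next().done);  // true (generator exhausted)
}
```

A test suite can then branch on the return value: run generator-based
tests when a function comes back, and skip them when it is undefined.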
Re: What to do about packages with dead alioth lists as the maintainer.
On Sun, Aug 12, 2018 at 07:13:06PM +0100, peter green wrote:
> Nearly 3 months ago there was a mass bug filing on packages with dead
> alioth lists as maintainer. Many of these bugs are still open with no
> maintainer response
>
> https://bugs.debian.org/cgi-bin/pkgreport.cgi?include=subject%3Alists.alioth.debian.org;submitter=debian.axhn%40manchmal.in-ulm.de
>
> What should be done about these in the event that the maintainers
> don't sort it out? is it reasonable to make a NMU promoting the first
> co-maintainer to maintainer? is it reasonable to make a NMU orphaning
> the package? (and if-so should the list of co-maintainers be left in
> place?) In either case should a final warning be sent to the package's
> co-maintainers?

A side effect of these bugs is a query: "is anyone still maintaining
this?". Thus, it feels really wrong to NMU to falsely claim so.

Unlike single-person maintainers where there's MIA (which still doesn't
detect a person active elsewhere but neglecting the package), there's
no real way to detect an AWOL team. Except this. Thus, we have a nice
list of no longer active teams. That's good, we should make this happen
more often.

So, what about sending a warning now then orphaning in, say, 3-6 months
from now?

Meow.
--
⢀⣴⠾⠻⢶⣦⠀ So a Hungarian gypsy mountainman, lumberjack by day job,
⣾⠁⢰⠒⠀⣿⡁ brigand by, uhm, hobby, invented a dish: goulash on potato
⢿⡄⠘⠷⠚⠋⠀ pancakes. Then the Polish couldn't decide which of his
⠈⠳⣄ adjectives to use for the dish's name.
What to do about packages with dead alioth lists as the maintainer.
Nearly 3 months ago there was a mass bug filing on packages with dead
alioth lists as maintainer. Many of these bugs are still open with no
maintainer response:

https://bugs.debian.org/cgi-bin/pkgreport.cgi?include=subject%3Alists.alioth.debian.org;submitter=debian.axhn%40manchmal.in-ulm.de

(note: it appears that the submitter of the bugs tried to usertag them
but failed to actually do so)

What should be done about these in the event that the maintainers don't
sort it out? Is it reasonable to make an NMU promoting the first
co-maintainer to maintainer? Is it reasonable to make an NMU orphaning
the package? (And if so, should the list of co-maintainers be left in
place?) In either case, should a final warning be sent to the package's
co-maintainers?
Re: GCC and binutils updates for buster
2018-07-30 22:36 Adrian Bunk:
> And the next burden will be if riscv64 gets added in bullseye.

Not likely, I think, since for example there's almost no hardware
available for end-users to buy (or to use for buildds), and this will
probably be the case at least until the freeze [*].

Another reason is that there are missing key components that need to
get into the main upstream repos, like GDB, LLVM, Rust, JIT support for
OpenJDK, etc. GDB is being upstreamed right now, but there's still a
way to go for the rest.

[*] Unlike other arches, this one is not restricted to a single vendor,
so hardware can be announced at any time from unexpected parties;
still, only a few months are left.

--
Manuel A. Fernandez Montecelo
Re: changing git tags on the remote repo
On 2018-08-12 14:35:22 +0200 (+0200), Carsten Schoenert wrote:
[...]
> that's a feature.
> Normally you don't want this and nobody can delete tags unintentionally
> as there is normally no reason to change history on a public git tree.
> The normal case is to create new tag with the according commit SHA
> reference.
>
> https://docs.gitlab.com/ee/user/project/protected_tags.html
>
> You can modify the behavior for your git tree, but really be careful if
> you remove this protection! As said, you really don't want to do this! :)

And probably the biggest reason _why_ you don't want to do this is that
tag deletion/replacement doesn't propagate via pull or remote update.
You can of course (with appropriate access) delete and replace a tag on
the remote, but people who have already cloned from it will never see
that change (well, except for changes to "lightweight" tags, but those
are really just a symlink to a ref and not a typical tag object).
Treating published tags as if they can't be changed is far more
friendly to other users of your repositories.
--
Jeremy Stanley
Re: HELP WANTED: security review / pam experts for su transition
Hello again,

My previous mail didn't result in any feedback, so let me try again
with some more detailed questions that might be easier to discuss,
related to the PAM configuration of su (and su-l).

As people are likely aware, the su takeover has now happened and login
(src:shadow) no longer ships su, in favour of it being shipped in
util-linux (src:util-linux) instead. The /etc/pam.d/su configuration
was carried over directly from the old src:shadow (login) su, and might
be less than ideal for the util-linux su implementation. The new
/etc/pam.d/su-l configuration didn't exist before, but mostly just
includes the su pam config for now. Mainly mentioning su-l because we
have the option to differentiate between what pam config we want for
'su -' vs 'su'.

1/ One new issue that has bitten some people is that shadow su used to,
even when you ask for a new clean environment, still copy over DISPLAY
and XAUTHORITY. Apparently some people relied on that, even though X
doesn't really give you any real privilege separation. On Fedora the su
pam configuration includes pam_xauth, which I assume should solve the
same problem. Should we add pam_xauth to /etc/pam.d/su as well? (For
now it's just mentioned in util-linux.NEWS and left to the user to edit
the pam configuration as they find suitable, but if we can't even
figure out the right choice I doubt users will.)

2/ There are some longstanding issues with the pam configuration which
existed before the switch, but seem reasonable to address still. For
example #711104 asks for 'su -' to reset umask. Should we include
pam_umask? Maybe even in /etc/pam.d/su, so both 'su' and 'su -' get
umask reset? cf. how the carried-over /etc/pam.d/su already contains
pam_limits.

3/ The Fedora pam configuration seems to contain several more
differences. Anyone interested in investigating and comparing them?
Possibly our ancient su pam config should get a complete overhaul,
rather than just poking at details one by one.

Regards,
Andreas Henriksson
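For concreteness, the kind of additions being discussed in points 1/
and 2/ would look roughly like the following lines in /etc/pam.d/su.
This is an untested sketch, not a recommendation; the control field
('optional' here) and any module options would need exactly the review
this mail is asking for:

```
# Sketch only -- not a tested configuration.
# Forward X credentials to the target user (point 1/):
session    optional   pam_xauth.so
# Reset umask from /etc/login.defs or GECOS (point 2/, cf. #711104):
session    optional   pam_umask.so
```

Both pam_xauth and pam_umask are standard Linux-PAM session modules;
the open question above is whether they belong in su, su-l, or neither.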
Re: changing git tags on the remote repo
Hi Holger,

On 12.08.18 at 14:17, Holger Wansing wrote:
> Hi,
>
> Alf Gaida wrote:
>> git push --tags --force - if one have the needed rights and the remote
>> settings allow it.
>
> This goes at least so far, that I get a clear error message:
>
> remote: GitLab: You are not allowed to change existing tags on this project.

that's a feature.
Normally you don't want this, and nobody can delete tags
unintentionally, as there is normally no reason to change history on a
public git tree. The normal case is to create a new tag with the
according commit SHA reference.

https://docs.gitlab.com/ee/user/project/protected_tags.html

You can modify the behavior for your git tree, but really be careful if
you remove this protection! As said, you really don't want to do
this! :)

--
Regards
Carsten Schoenert
Re: changing git tags on the remote repo
Hi,

Alf Gaida wrote:
> git push --tags --force - if one have the needed rights and the remote
> settings allow it.

That only gets me as far as a clear error message:

remote: GitLab: You are not allowed to change existing tags on this project.

Thanks anyway

Holger
--
Holger Wansing
PGP-Fingerprint: 496A C6E8 1442 4B34 8508 3529 59F1 87CA 156E B076
Re: changing git tags on the remote repo
git push --tags --force - if one has the needed rights and the remote
settings allow it.

Cheers
Alf
Re: changing git tags on the remote repo
Holger Wansing wrote:
> I am curious about how to change an already existing git tag afterwards
> (means: change the commit it points to).
> Locally, I can change an existing tag, and then create it newly.
> But I cannot push it to the remote repo (get
> "! [rejected] 139 -> 139 (already exists)"
> There is -f (--force) option to replace an existing tag and locally it
> seems to work, since it says
> "Tag '139' updated (was 02108ec)"
> but the push to remote repo fails nevertheless.
> Any help?

Iirc you need to delete the remote tag first.
https://stackoverflow.com/questions/5480258/how-to-delete-a-git-remote-tag

cu Andreas
--
`What a good friend you are to him, Dr. Maturin. His other friends are
so grateful to you.'
`I sew his ears on from time to time, sure'
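The full round trip (retag locally, delete the tag on the remote, push
the corrected tag) can be sketched as below. This is a self-contained
demo against a scratch "remote" in a temporary directory, so it can be
run safely; the tag name '139' follows Holger's example:

```shell
# Demo: replacing a wrongly-placed tag on a remote.
# Uses throwaway repos in a temp dir -- safe to run anywhere.
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init --bare -q remote.git            # stand-in for the real remote
git clone -q remote.git work && cd work
git config user.email demo@example.org
git config user.name  Demo

echo one >  file; git add file; git commit -qm 'first'
echo two >> file;               git commit -aqm 'second'

git tag 139 HEAD~1                       # oops: tagged the wrong commit
git push -q origin HEAD 139

git tag -f 139 HEAD                      # fix the tag locally...
git push -q origin :refs/tags/139        # ...delete it on the remote...
git push -q origin 139                   # ...and push the corrected one

git ls-remote origin refs/tags/139       # now points at the right commit
```

Note the caveat raised elsewhere in this thread: anyone who already
fetched the old tag will not see the replacement propagate via a plain
pull, so on a shared repository this should stay a last resort.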
changing git tags on the remote repo
Hi,

I am curious about how to change an already existing git tag afterwards
(meaning: change the commit it points to).

Locally, I can change an existing tag, and then create it newly. But I
cannot push it to the remote repo; I get

"! [rejected] 139 -> 139 (already exists)"

There is a -f (--force) option to replace an existing tag, and locally
it seems to work, since it says

"Tag '139' updated (was 02108ec)"

but the push to the remote repo fails nevertheless. Any help?

Holger
--
Holger Wansing
PGP-Fingerprint: 496A C6E8 1442 4B34 8508 3529 59F1 87CA 156E B076
Bug#905952: ITP: netgen-lvs -- Netlist comparison - Layout vs Schematic (LVS)
Package: wnpp
Severity: wishlist
Owner: Ruben Undheim

* Package name    : netgen-lvs
  Version         : 1.5.105
  Upstream Author : Tim Edwards
* URL             : http://opencircuitdesign.com/netgen/
* License         : GPL
  Programming Lang: C
  Description     : Netlist comparison - Layout vs Schematic (LVS)

 Netgen is a tool for comparing netlists, a process known as LVS, which
 stands for "Layout vs. Schematic". This is an important step in the
 integrated circuit design flow, ensuring that the geometry that has
 been laid out matches the expected circuit. Very small circuits can
 bypass this step by confirming circuit operation through extraction
 and simulation. Very large digital circuits are usually generated by
 tools from high-level descriptions, using compilers that ensure the
 correct layout geometry. The greatest need for LVS is in large analog
 or mixed-signal circuits that cannot be simulated in reasonable time.
 Even for small circuits, LVS can be done much faster than simulation,
 and provides feedback that makes it easier to find an error than does
 a simulation.

The source package name "netgen" is reserved by another package, so I
am reserving "netgen-lvs" for this program.

I plan to maintain it in the Debian Electronics team.
Bug#905950: ITP: python-gdsii -- Library to handle GDSII files
Package: wnpp
Severity: wishlist
Owner: Ruben Undheim

* Package name    : python-gdsii
  Version         : 0.2.1
  Upstream Author : Eugeniy Meshcheryakov
* URL             : https://pythonhosted.org/python-gdsii/
* License         : LGPL-3+
  Programming Lang: Python
  Description     : Library to handle GDSII files

 python-gdsii is a library that can be used to read, create, modify and
 save GDSII files. It supports both low-level record I/O and a
 high-level interface to GDSII libraries (databases), structures, and
 elements.

I plan to maintain it in the Debian Python Modules team.
Re: Let's start salvaging packages! -- let's start working on the text.
Dear -devel,

it seems the discussion is quieter than I anticipated... So, as an
optimist, I'm assuming this is because the proposal has a kind of rough
consensus, and I will now plan for the next steps. The discussion can
and should of course continue, especially as it is vacation time atm.

So, the next steps:

- Editing platform: To enable collaboration on the text (I have been
  approached by people wanting to help writing it) I've set up an
  Etherpad at [1].

- I'm in favour of enrico's idea to split the text into two parts:
  1) description of the salvaging process for dev-ref
  2) carving out all the details to a wiki page.

- I'll try to reserve some time in the next days for a very rough
  outline of the text, to provide further sparks to ignite the
  discussion.

[1] https://pad.riseup.net/p/debian-salvaging-packages-keep

--
tobi
added additional information to bug #382390 (bugs.kde.org)
added additional information to bug #382390

https://bugs.kde.org/show_bug.cgi?id=382390
Corresponding source for a machine-learning data set (was: Concerns to software freedom when packaging deep-learning based appications.)
Ian Jackson writes:

> […] In the case of a pretrained neural network, the source code is the
> training data.
>
> In fact, they are probably not redistributable unless all the training
> data is supplied, since the GPL's definition of "source code" is the
> "preferred form for modification". For a pretrained neural network
> that is the training data.

One hopeful sign is that people are addressing the need for the source
form of a machine-learning product. In this case, it's by offering a
version control system customised to machine-learning data.

<https://blog.dataversioncontrol.com/data-version-control-tutorial-9146715eda46>

Is that an approach we can recommend to upstream developers who publish
machine-learning data sets?

--
 \      “A child of five could understand this. Fetch me a child of
  `\    five.” —Groucho Marx
_o__)
Ben Finney