Re: Changes to abi=+time64 behavior (was Re: 64-bit time_t transition in progress)

2024-02-09 Thread Peter Green


So when introducing a new soname (not just a new package name), should one
move to time64 even on i386?


The problem with doing this is that

1. A reverse dependency may depend on more than one library that uses time_t
   in its API. Said reverse dependency could not sanely be built if libfoo
   used 32-bit time_t in its API while libbar used 64-bit time_t in its API.
2. If any of your reverse dependencies are libraries that expose time_t in
   their API, they would have a similar problem.





Re: Mapping Reproducibility Bug Reports to Commits

2021-11-14 Thread peter green


I am a researcher at the University of Waterloo, conducting a project to study 
reproducibility issues in Debian packages.

The first step for me is to link each Reproducibility-related bug at this link: 
https://bugs.debian.org/cgi-bin/pkgreport.cgi?usertag=reproducible-bui...@lists.alioth.debian.org
 to the corresponding commit that fixed the bug.

However, I am unable to find an explicit way of doing so programmatically.
Please assist.


There is no explicit link.

Most (but not all) Debian packages are maintained in a VCS, and there are
fields in the source package that identify the location and type of the VCS
(almost certainly git nowadays), but there are multiple different workflows
in use. git-buildpackage is the most common and normally uses a
"patches-unapplied" git tree, but there is also dgit, which uses a
"patches-applied" git tree. Git trees may or may not contain the upstream
source. At least one language community uses a system where the git tree
stores files that are used to generate the Debian packaging rather than the
final Debian packaging itself.

Also, maintainer practices for structuring commits vary: some maintainers
update the changelog at the same time as making the actual changes, others
update the changelog in a batch later.

Sometimes bugs aren't closed from the changelog at all but are instead
closed by the maintainer after the upload, particularly if the maintainer
is not sure whether a change will fix the bug.

With all that said, it's probably doable to develop heuristics that map
bug numbers to commits in most cases. An outline might be (a rough sketch
follows the list):

* Check whether the package has a VCS and the relevant changelog can be
found in said VCS; if there is no VCS, give up and refer the bug for human
attention.
* Map the bug number to a changelog line (if there is no such mapping, give
up and refer the bug for human attention).
* Determine which commit added the changelog line (e.g. with git blame) and
see whether there are actual code changes in that commit. If so, take it as
the probable commit; if not, search backwards a bit for a commit message
that matches the changelog line.
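
As a minimal sketch of those steps, assuming the package's git tree is
checked out in the current directory and that the changelog closes the bug
with a conventional "Closes: #NNNNNN" line (the script name and details
here are illustrative, not a finished tool):

#!/usr/bin/python3
#bug2commit.py (hypothetical): guess the commit that fixed a bug number.
import re
import subprocess
import sys

def commit_closing_bug(bugno):
    #step 1: map the bug number to a debian/changelog line
    with open('debian/changelog') as f:
        for lineno, line in enumerate(f, start=1):
            if re.search(r'[Cc]loses:\s*#?%d\b' % bugno, line):
                break
        else:
            return None  #no mapping: refer the bug for human attention
    #step 2: ask git which commit added that line
    out = subprocess.run(
        ['git', 'blame', '-L', '%d,%d' % (lineno, lineno),
         '--porcelain', 'debian/changelog'],
        capture_output=True, text=True, check=True).stdout
    return out.split()[0]  #first token of porcelain output is the commit id

if __name__ == '__main__':
    print(commit_closing_bug(int(sys.argv[1])))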

Another option, having guessed a range of commits from the changelog and/or
from comparing the VCS to the source packages, may be to run a bisection,
though this would likely require some effort to detect which workflow is in
use.



Re: Re: Split Packages files based on new section "buildlibs"

2021-02-17 Thread Peter Green

> The same applies to the GNOME/GTK stack, where Flatpak is the way to go
> for active development. libgtk-3-dev is really only for building Debian
> packages from their point of view, too.

Perhaps, but what matters is not upstream's point of view but Debian
users' point of view.

My perception is that when C/C++ users on Debian want a library to compile
stuff against, their first port of call is "apt-get install libwhatever-dev".
Building C/C++ libraries from upstream sources is considered a last resort.

Whereas for Rust users, no one seems to build or distribute binary
libraries, and the done thing is to use cargo, which automatically downloads
and builds the libraries your project needs. You have to actively go out of
your way to make cargo use Rust libraries from Debian rather than those from
crates.io.

Upstream GNOME developers may use Flatpak, but I think upstream GNOME
developers are a small subset of the people building stuff against GTK.





Re: Release status of i386 for Bullseye and long term support for 3 years?

2020-12-12 Thread peter green

   Then there was the short netbook boom, but AFAIR some early ones
   had 64bit CPUs but 32bit-only firmware.

My memory is that at the height of the boom the dominant processors
were the N270 and N280, which are 32-bit only. By the time 64-bit
netbook processors showed up the boom was on the decline.


There are at least two more:


5. People running Debian on virtual machines.

You can run an i386 VM with VMware or VirtualBox with no special
hardware support. An x86-64 VM, on the other hand, requires VT-x
(or the AMD equivalent). While processor support for this is
the norm nowadays, it's still often disabled by default,
which can be a pain if you need to get IT support to access
BIOS setup on a machine.

i386 hardware is so numerous and widely spread that every tiny fraction
of i386 users might be more users than half of our release architectures
combined. It is not even clear whether this is just an exaggeration or
might be literally true:


i386 still gives 17281 popcon submissions; that is about
a tenth of amd64, but it's also over 10 times the next highest port.

Now that probably doesn't reflect true usage; in particular,
users who install using images tend to miss out on the popcon
question. But I still suspect that i386 is up there in the top
few most-used architectures.



Dependencies on obsolete puppet-common transitional package (potential mass bug filing).

2020-03-22 Thread peter green

The puppet source package recently dropped the puppet-common binary
package, which has been a transitional dummy package since stretch.

Unfortunately there are still a substantial number of packages depending on it. 
They are listed by maintainer at the end of this mail (the list is based on 
packages that currently have the issue in bullseye).

I filed bugs for the first couple I spotted, but then started to wonder
whether, given that nearly all of the packages involved are maintained by
the OpenStack team, a more centralised approach would be better. If I get
no response, however, I will go ahead with a mass bug filing so the testing
autoremoval system can do its thing.

Debian Ruby Extras Maintainers 

ruby-rspec-puppet

Debian OpenStack 
puppet-module-aboe-chrony
puppet-module-antonlindstrom-powerdns
puppet-module-aodh
puppet-module-arioch-redis
puppet-module-barbican
puppet-module-ceilometer
puppet-module-ceph
puppet-module-cinder
puppet-module-cloudkitty
puppet-module-congress
puppet-module-debian-archvsync
puppet-module-deric-zookeeper
puppet-module-designate
puppet-module-glance
puppet-module-gnocchi
puppet-module-heat
puppet-module-heini-wait-for
puppet-module-horizon
puppet-module-icann-quagga
puppet-module-icann-tea
puppet-module-ironic
puppet-module-joshuabaird-ipaclient
puppet-module-keystone
puppet-module-magnum
puppet-module-manila
puppet-module-michaeltchapman-galera
puppet-module-murano
puppet-module-neutron
puppet-module-nova
puppet-module-octavia
puppet-module-openstack-extras
puppet-module-openstacklib
puppet-module-oslo
puppet-module-ovn
puppet-module-panko
puppet-module-placement
puppet-module-puppetlabs-haproxy
puppet-module-puppetlabs-rabbitmq
puppet-module-puppetlabs-rsync
puppet-module-rodjek-logrotate
puppet-module-sahara
puppet-module-swift
puppet-module-theforeman-dns
puppet-module-voxpupuli-alternatives
puppet-module-voxpupuli-collectd
puppet-module-voxpupuli-corosync
puppet-module-voxpupuli-ssh-keygen
puppet-module-vswitch

PKG OpenStack 
puppet-module-adrienthebo-filemapper
puppet-module-camptocamp-kmod
puppet-module-camptocamp-openssl
puppet-module-duritong-sysctl
puppet-module-nanliu-staging
puppet-module-puppetlabs-mongodb
puppet-module-puppetlabs-tftp
puppet-module-puppetlabs-vcsrepo
puppet-module-richardc-datacat
puppet-module-saz-rsyslog
puppet-module-saz-ssh
puppet-module-sbitio-monit

Debian OpenStack 
puppet-module-puppet-community-mcollective



re: git-buildpackage to be autoremoved due to python2 transition

2020-02-27 Thread peter green

Relevant packages and bugs:
  943107 git-buildpackage: Python2 removal in sid/bullseye

This bug is not marked as RC.

Nevertheless, I believe this bug report is in fact a false positive. From
what I can tell, git-buildpackage, even in buster, does not (build-)depend
on Python 2 or any Python 2 modules.

It does build-depend on python-pydoctor, but according to a recent entry in
the pydoctor changelog, that package "is a Python application and not used
as a module".

It would make sense to change the build-dependency to pydoctor in the next
upload, but it's probably not worth making an upload just for that change.

  937132 nevow: Python2 removal in sid/bullseye

Depended on by pydoctor in testing, but not in unstable. Should stop being a 
problem for git-buildpackage when pydoctor migrates.

  938622 tahoe-lafs: Python2 removal in sid/bullseye

Listed as a "blocker" of the above bug but not currently in testing.
Personally, I advocate ignoring "blockers" that are not in testing, but I'm
not sure whether consensus has been reached on that.


Bugs which you may notice which are now not so relevant any more
because they have been fixed in sid (but not yet migrated):
  950216 [git-buildpackage] missing xsltproc autopkg test dependency
Fixed in sid; migration blocked by FTBFS due to pydoctor
breakage (#949232).  When pydoctor has migrated, reattempting
build (eg by re-upload) should fix this.

Builds happen in unstable, so there is no need to wait for pydoctor to
migrate to testing before retrying the build. I just requested a retry and
the package built successfully. I'd expect it to migrate as soon as dak and
britney process the binary.

  949232/948831 [pydoctor] needs to depend on cachecontrol
  952546 [pydoctor] d/copyright & DFSG issues
  937421 [pydoctor] Python2 removal in sid/bullseye

Should hopefully be fixed in a few days when pydoctor migrates to testing;
I'm not seeing any obvious blockers for that right now.



re: Debian Buster will only be 54% reproducible (while we could be at >90%)

2019-03-06 Thread peter green

Because of their design, binNMUs are unreproducible; see #894441 [3] for
the details (in short: binNMUs are not what they are meant to be: the
source is changed and thrown away)

To be specific: the source tree is extracted, an entry is added to
debian/changelog, and then the package is built. This modified source tree
is not retained.

It seems to me that binNMUs could be made reproducible by storing the
debian/changelog modification in the buildinfo, then re-applying it at
reproduction time (a sketch follows).
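
As a minimal sketch of that idea, assuming a .buildinfo that records the
binNMU changelog entry in a Binary-Only-Changes field (newer dpkg versions
write one) and that we are run from the extracted source tree:

#!/usr/bin/python3
#Prepend the binNMU changelog entry recorded in a .buildinfo file to
#debian/changelog, so the rebuild starts from the same source tree.
import sys

def binary_only_changes(path):
    #minimal deb822 multiline-field extraction; a real tool would use
    #python-debian instead
    entry, found = [], False
    with open(path) as f:
        for line in f:
            if found:
                if line.startswith(' '):
                    text = line[1:].rstrip('\n')
                    #a lone "." encodes a blank line in deb822 fields
                    entry.append('' if text == '.' else text)
                else:
                    break
            elif line.startswith('Binary-Only-Changes:'):
                found = True
    return '\n'.join(entry)

entry = binary_only_changes(sys.argv[1])  #e.g. foo_1.0-1+b1_amd64.buildinfo
if entry:
    with open('debian/changelog') as f:
        old = f.read()
    with open('debian/changelog', 'w') as f:
        f.write(entry + '\n\n' + old)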



Re: Recreating history of a package

2019-02-16 Thread peter green

On 12/02/19 13:26, Ian Jackson wrote:

peter green writes ("Re: Recreating history of a package"):

https://github.com/plugwash/autoforwardportergit/blob/master/pooltogit will
take dscs in a pool structure and import them into git repos (one per source
package) using dgit, building the history based on the changelogs. It can
even follow history across source package renames.

It also has the ability to use snapshotsecure to download parent versions
from snapshot.debian.org. As the code stands it only uses that functionality
to request the immediate parents of local versions, but it could easily be
modified to grab the entire history of the package (as defined by its
changelog).

Cool, thanks!  Have you considered making a package of it?

In its present form it is specialized to the needs of autoforwardportergit,
so packaging it separately in its present form doesn't make much sense
(if/when I get around to packaging autoforwardportergit, it will be packaged
as part of that). It would certainly be possible to generalize it, but that
would require thought/decisions on how best to do so.

Thinking more about the possibility of importing the entire history of a
source package, it is more problematic than my off-the-cuff reply implied.
It would be easy to modify pooltogit to try to retrieve the entire history,
but for a large proportion of packages this would fail to import, for
several reasons.

1. Changelogs sometimes include versions that were never uploaded to
Debian. I suspect they also sometimes include versions that were uploaded
but were superseded before they made it into a snapshot.
2. snapshot.debian.org is only offered over plain, insecure HTTP. Recent
versions of packages can be verified against the Packages/Sources files,
which can in turn be verified with gpg, but older versions are more
problematic to verify, as the relevant Packages/Sources files are only
signed with 1024-bit keys or not signed at all. This is made worse by the
fact that snapshot.debian.org has an API to obtain the first snapshot a
package is available in, but no API to find the last snapshot it was
available in (a sketch of the version lookup follows the list).
3. Some packages aren't on snapshot.debian.org at all due to age.
4. Some packages are blocked on snapshot.debian.org due to license issues.
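
For what it's worth, listing the versions snapshot.debian.org knows for a
source package is easy via its machine-readable API; verifying what you
then download is the hard part. A minimal sketch (endpoint per the
published API; plain HTTP as discussed above):

#!/usr/bin/python3
#List the versions snapshot.debian.org records for a source package.
import json
import urllib.request

def snapshot_versions(srcpkg):
    #/mr/package/<name>/ returns JSON with a "result" list of versions
    url = 'http://snapshot.debian.org/mr/package/%s/' % srcpkg
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [r['version'] for r in data['result']]

print(snapshot_versions('hello'))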



Re: Recreating history of a package

2019-02-11 Thread peter green

Alternatively if one wanted to get more sophisticated than just
importing every version from snapshot in version number order, one
might write something to look inside the package at the changelogs to
try to discern the branch structure.

https://github.com/plugwash/autoforwardportergit/blob/master/pooltogit will 
take dscs in a pool structure and import them into git repos (one per source 
package) using dgit, building the history based on the changelogs. It can even 
follow history across source package renames.

It also has the ability to use snapshotsecure to download parent versions from 
snapshot.debian.org , As the code stands it only uses that functionality to 
request immediate parents of local versions, but it could be easily modified to 
grab the entire history of the package (as defined by it's changelog).



Re: Installer: 32 vs. 64 bit

2018-10-27 Thread peter green

Why are they creating 32-bit virtual machines?


At least with VirtualBox, 32-bit VMs can run on any host. 64-bit VMs
require VT-x, which is all too often disabled in the BIOS.



What to do about packages with dead alioth lists as the maintainer.

2018-08-12 Thread peter green

Nearly 3 months ago there was a mass bug filing on packages with dead
alioth lists as maintainer. Many of these bugs are still open with no
maintainer response.

https://bugs.debian.org/cgi-bin/pkgreport.cgi?include=subject%3Alists.alioth.debian.org;submitter=debian.axhn%40manchmal.in-ulm.de

(note: it appears that the submitter of the bugs tried to usertag them but 
failed to actually do so)

What should be done about these in the event that the maintainers don't
sort it out? Is it reasonable to make an NMU promoting the first
co-maintainer to maintainer? Is it reasonable to make an NMU orphaning the
package? (And if so, should the list of co-maintainers be left in place?)
In either case, should a final warning be sent to the package's
co-maintainers?




Re: RFR: email about regressions [was: Dealing with ci.d.n for package regressions]

2018-05-31 Thread peter green

> In my perception, the biggest reason is a social one. There is resistance
> to the fact that issues with autopkgtests out of one's control can block
> one's package (this is quite different than in Ubuntu).

Can you elaborate on how this is different than in Ubuntu?  It sounds
pretty similar to me, except for being a delay instead of a block.  Or
did you mean that the social consequences are different?

(note: I'm only loosely familiar with Ubuntu, please correct me if any of
this is wrong).

Debian has a strong concept of package maintainership. Each maintainer (or
sometimes team) essentially "owns" a package and is supposed to take
responsibility for fixing issues in that package. Library maintainers can
NMU their rdeps, but there are time-consuming procedures surrounding that,
and of course the library maintainer may not be enough of an expert on the
rdep to fix it.

So a big social question for library maintainers becomes "to what extent do issues 
in reverse dependencies get to block updates to my library?"

And the answer in recent times (particularly since the introduction of
"smooth updates" to Britney) has been "for the most part they don't". The
library migrates to testing and the rdeps remain buggy in testing; if the
bug is RC and left unfixed for long enough they will be removed, and if the
bug is not RC it may just persist into the release.

Enforcing autopkgtests would radically change that dynamic.

Further, Debian has this strange permissions system where any DD can
technically upload any package and anyone can technically close any bug,
but if you want something more unusual done you have to go through the
small handful of people who have permission to do it. I expect a lot of DDs
are worried that overriding a broken autopkgtest will fall into the latter
category.

My understanding is that Ubuntu has a rather different social structure.
Universe is an explicit second-class citizen (does the Ubuntu autopkgtest
implementation allow universe packages to block main packages? If so, how
do you typically respond to that?), individual packages don't really have
maintainers, many developers are paid by Canonical, etc.



Re: Removing packages perhaps too aggressively?

2018-02-01 Thread peter green

If you do reintroduce it, please note the extra steps (reopening bugs
in particular)

On that note, one thing that doesn't seem to be easy/well documented is how
to go about finding the bugs that affected a package at the time of its
removal. If I go to the bugs page for the package and select "archived and
unarchived", I see a bunch of resolved bugs, but other than opening them up
individually I don't see a good way to tell the difference between ones
that were actually fixed and ones that were open at the time of the
removal. (A rough sketch of one heuristic follows.)
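
A rough heuristic sketch using python-debianbts (the SOAP interface to the
BTS): treat a bug as possibly open at removal time if it is still open now,
or if its log was last touched after the removal date. The parameter and
attribute names here are from python-debianbts but should be treated as
assumptions to verify, and the removal date must be supplied by hand:

#!/usr/bin/python3
from datetime import datetime
import debianbts

def maybe_open_at_removal(package, removal_date):
    numbers = debianbts.get_bugs(package=package, archive='both')
    for bug in debianbts.get_status(numbers):
        #a resolved bug whose log was last touched after the removal was
        #plausibly closed by the removal itself rather than by a fix
        if not bug.done or bug.log_modified >= removal_date:
            yield bug.bug_num, bug.subject

for num, subject in maybe_open_at_removal('somepackage',
                                          datetime(2018, 1, 15)):
    print(num, subject)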




autoforwardportergit call for testing.

2018-01-04 Thread Peter Green

I have been working on a tool called Autoforwardportergit for automating
the process of merging downstream changes with new versions from Debian. A
downstream in this case could be anything from your private modified
versions of some packages to a major derivative distribution.

It was created for use by Raspbian, but I have been working to generalize
and document it such that others can use it too. I have just written a
tutorial which demonstrates how to set it up and use it.

At this point I would really appreciate people trying it out and telling me
what they think.

It can be found at https://github.com/plugwash/autoforwardportergit. The
tutorial can be found at
https://github.com/plugwash/autoforwardportergit/blob/master/tutorial/README.md

If there is sufficient interest I will package it and upload it to Debian.



Re: Alioth: the future of mailing lists

2017-09-17 Thread peter green

On 17/09/17 10:38, Alexander Wirt wrote:


If you currently manage a user-support or discussion list, or run one
of the big teams


Just because a team isn't big or established doesn't mean they don't need a
place to discuss issues relating to their activities, some of which do not
relate to any one particular package. Contributors should be able to
self-organise within the project, and for many DDs email is their primary
means of communication. Sure, you can just cc everyone involved, but then
there is no archive and no way for new contributors to jump in.

I find the current proposal to drop alioth mailing lists without a real 
replacement to be a major step backwards.




Re: Summary of the 2038 BoF at DC17

2017-09-17 Thread peter green

Firstly: developers trying to be *too* clever are likely to only make
things worse - don't do it! Whatever you do in your code, don't bodge
around the 32-bit time_t problem. *Don't* store time values in weird
formats, and don't assume things about it to "avoid" porting
problems. These are all going to cause pain in the future as we try to
fix problems.

For the time being in your code, *use* time_t and expect an ABI break
down the road. This is the best plan *for now*.


I find this argument unconvincing.

If a library is new or is going to have an ABI break anyway, then by moving
to 64-bit time in its interfaces now it can avoid another ABI break down
the road.

Similarly, if someone is introducing a new version of a file format anyway,
moving to 64-bit time at the same time as making other changes avoids
breaking things twice.




Re: Re: source-only uploads

2017-09-17 Thread peter green

Andrey Rahmatullin writes ("Re: source-only uploads"):
> On Fri, Sep 01, 2017 at 12:47:41PM +0200, Emmanuel Bourg wrote:
> > Just yesterday I completely broke a key package used to build
> > many Java packages, and I couldn't even rebuild it to fix the issue.
>
> Why? Does it B-D on itself?

And, if it does, can it not be built using stretch?

Ian.

I don't know about Java, but I had an issue with freepascal not so long ago
(back when jessie was stable and stretch was testing).

A change in glibc broke freepascal on powerpc stretch/sid to the point that
it wouldn't install. Freepascal needs itself to build. Sid's freepascal
would not build in jessie due to using newer debhelper features.

To fix this I had to take sid's freepascal, apply the upstream patch for
the glibc issue, hack it up so it would build in a jessie environment,
build it in a jessie environment on the porterbox, and install the binaries
from that build into a sid environment in qemu (because self-built packages
can't be installed on porterboxes).

This kind of stuff does happen and we need to be able to deal with it.

Having said that, I believe the default should be to throw away
maintainer-built binaries; they should only be accepted if the developer
explicitly asks for it.



Re: Re: Making Debian ports less burdensome

2016-02-28 Thread peter green


I would have thought porters would be following the buildd/piuparts/CI
pages for their port (where available) and thus would not need to be
notified about arch-specific FTBFS or testing issues. If porters are
relying on package maintainers or some automation to notify them
instead of being pro-active about builds and testing, that isn't going
to produce a high-quality port.

There is a tool for checking uninstallable and out of date packages already:

The problem with the current tools is they don't do a good job of:

1: separating out port-specific issues versus general issues.
2: separating out issues with packages that actually matter* versus
low-popcon leaf packages that could be removed with minimal impact.
3: separating out very new issues, versus older issues without bug
reports, versus issues that are already being actively discussed, versus
stale issues with bug reports but no patches, versus stale issues with
bug reports and patches.
4: linking the automatic detection of an issue to the bug report on the
issue (afaik the dose pages have no mechanism for this; the buildd pages
have "failing reasons" but they are little used because only buildd
admins can set them). Also linking together reports of essentially the
same issue from multiple sources (i.e. a package is uninstallable
*because* it failed to rebuild for a transition).
5: in the dose case, separating out arch-specific packages (which are not
allowed to be uninstallable) from arch all packages (which are allowed
to be uninstallable). It is indicated in the list with an [all] tag, but
spotting the handful of uninstallable arch-specific packages in the much
larger list of uninstallable arch all packages for testing isn't easy
(for unstable the information is even less useful because of the massive
number of entries that have nothing to do with ports).
6: indicating issues that are both blocking transitions (so need to be
dealt with urgently) and specific to a handful of architectures.


Improving the existing tools might help with some of the issues, but I
can see advantages in a porter-focussed tool that collects information
from multiple sources.


* Exactly where to draw the line on "actually matter" is potentially a 
subject of some debate.




Re: Who is the most energetic DD

2015-10-03 Thread peter green


In the past I didn't believe any person could maintain
more than 500 packages by themselves. However, when
actually analysing Sources.gz from a Debian mirror,
astonishing things emerge.

There really are DDs who maintain more than 500 packages
in the main section.

You mean there are DDs listed in the maintainer field of more than 500
packages.


In the past, before "team uploads" were formalised, it used to be common
practice to add yourself to the maintainer list when uploading a
team-maintained package you were on the team for but not on the maintainer
list for, even if you had no particular interest in that package. People
working on getting large transitions to go through can also end up touching
a lot of packages.





Re: Re: is the whole unstable still broken by gcc-5?

2015-09-18 Thread peter green

The unusual problems with the g++-5 transition are:

Another big one is that the decision was taken NOT to
change the sonames when making sub-transitions. I
presume this was done to avoid deviating from upstream
sonames, but the flip side has been that major
sub-transitions have become "all or nothing".

Afaict this is the biggest contributor to the upgrade
pain that testing and unstable users are experiencing,
and it will also likely be a major source of pain down
the line for people trying to migrate from jessie to
stretch.




scripts for merging local changes

2015-05-24 Thread peter green
I just hacked together some scripts to deal with merging local changes
(provided in the form of a debdiff) with a new version from Debian
(provided in the form of a dsc). The code is in two parts: the overall
control code is a bash script, while the changelog processing is in Python.


I still need to handle some corner cases, like preventing a buildup of
notices when forward-porting from a forward port, and handling the case
where the local part of the version number needs to go somewhere other
than the end to satisfy version constraints.


Pabs suggested that something like this should go in devscripts, but it
will need a fair bit of cleanup and generalisation before that can happen.


Anyway, here it is in its first rough-cut form. Please feel free to
comment/criticise/improve.


[attachment: autoforwardporter.sh, a shell script]
#!/usr/bin/python3
#(C) 2015 Peter Michael Green plugw...@debian.org
#This software is provided 'as-is', without any express or implied warranty. In
#no event will the authors be held liable for any damages arising from the use
#of this software.
#
#Permission is granted to anyone to use this software for any purpose, including
#commercial applications, and to alter it and redistribute it freely, subject to
#the following restrictions:
#
#1. The origin of this software must not be misrepresented; you must not claim
#that you wrote the original software. If you use this software in a product,
#an acknowledgment in the product documentation would be appreciated but is
#not required.
#
#2. Altered source versions must be plainly marked as such, and must not be
#misrepresented as being the original software.
#
#3. This notice may not be removed or altered from any source distribution.

from debian import changelog
import sys
oldchangelogfile = sys.argv[1]
newchangelogfile = sys.argv[2]
distribution = sys.argv[3]
date = sys.argv[4]
f = open(oldchangelogfile)
c = changelog.Changelog(f)
entries = []
#unfortunately the changelog module doesn't let us directly access its list
#of changes, only an iterator over it, so we have to make our own list,
#so that we can perform a reverse iteration (the changelog module gives us
#the most recent entry first, we want oldest first)
for entry in c:
    entries.append(entry)

newf = open(newchangelogfile)
newc = changelog.Changelog(newf)
#mutter (3.14.4-1+rpi1) stretch-staging; urgency=medium
print(newc.package+' ('+str(newc.get_version())+'+rpi1) '+distribution+'; urgency=medium')
print()

for entry in reversed(entries):
	print('  [changes brought forward from '+str(entry.version)+' by '+entry.author+' at '+entry.date+']')
	lines = entry.changes()[:] #copy this so we don't modify the library's
	   #version of it.
	while (lines[0] == ''):
		 del lines[0]
	for line in lines:
		print(line)

print(' -- Raspbian forward porter r...@raspbian.org  '+date)
print()
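
A hypothetical invocation of the Python part, with made-up file names (the
four arguments are the old changelog, the new changelog, the target
distribution, and an RFC 2822 date for the trailer line):

python3 mergechangelog.py old/debian/changelog new/debian/changelog \
    stretch-staging "$(date -R)"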

Re: Proposal: enable stateless persistant network interface names

2015-05-09 Thread peter green


The main downside is that by nature the device names are not familiar
to current admins yet. For BIOS-provided names you get e.g. ens0; for
PCI slot names, enp1s1 (ethernet) or wlp3s0 (wlan). But that's a
necessary price to pay (biosdevname names look similar).

The stability of these names appears to be an illusion.

The path-based names use the PCI bus number as their root. PCI bus
numbers are dynamically allocated as the BIOS enumerates the busses.
Other than the root PCI bus, they aren't a fundamental characteristic of
the hardware. Installing or removing an expansion card can add or remove
PCI busses from the system and hence risks changing bus numbers. I'm
sure I even recall one case of a laptop with switchable graphics where
switching the graphics setup changed the PCI bus numbers.


Someone else has raised concerns about the stability of BIOS-based names
across BIOS updates.


I feel this change is likely to make things better for companies that
want to deploy images to loads of identical machines and rarely modify a
system, but worse for those of us with more ad-hoc hardware arrangements.
The current system really works quite well for individual machines with
ad-hoc changes: my interfaces have consistent, easy-to-remember names
regardless of where I plug them in, and if I do have to replace an
interface card it's easy enough to open the persistent net rules file
and give the replacement interface the number of the interface it replaced.






Re: Debian Archive architecture removals

2015-05-05 Thread peter green


  Perhaps we need a political decision here?

I think it's mostly a practical one, as I don't see much disagreement
about the objectives here: What is the best way to arrange things to
support 'released, supported, all-equal' ports vs 'best-effort, let
them get out of sync' 2nd-class ports (both on the way up ('upcoming')
and on the way down ('legacy')).
It seems to me that second-class ports can be divided into three rough
categories: 'new ports that are up and coming' (arm64 and ppc64el were
in this category until recently; x32 could arguably be included too),
'legacy ports where maintenance has slipped to the point they got kicked
out of the set of first-class ports' (alpha, m68k, etc.) and 'ports that
despite being around for years never made it to the set of first-class
ports' (hurd-i386, ppc64, sparc64, sh4, powerpcspe, arguably x32).


Now on to the political side.

What should the expectations of maintainers of second-class ports be?
Should they expect reasonable patches to be merged? Who gets to define
reasonable? What, if anything, should their recourse be if/when
maintainers either ignore or actively stonewall them? Is it ok for
maintainers of second-class ports to NMU when they are ignored by
package maintainers? If package maintainers stonewall maintainers of
second-class ports, should they refer the matter to the technical
committee? Should porter expectations be different between 'upcoming',
'legacy', and 'never made it' ports? Should reports be clearly labelled
so that maintainers can quickly tell whether the reported problem relates
to a first-class or a second-class port? Should second-class ports be
labelled as unofficial?







Re: Re: Debian Archive architecture removals

2015-05-04 Thread peter green


Was that before or after arm64 and ppc64el migrated off ports to the
main archive?
I'm pretty sure ppc64el was never on debian-ports; it went straight from
an IBM-run repository to the main archive.






Re: Re: Qt4's status and Qt4's webkit removal in Stretch

2015-05-02 Thread peter green


  algobox

This one looks like a (partial) false positive: the outdated 0.8 source
is still around in Sid since the 0.9 version doesn’t build under sparc
(because it can’t…),

Looks like qtwebkit-opensource-src needs a symbols file update for sparc.





Re: r-base-core upload to unstable does not respect freeze policy

2014-11-20 Thread peter green


Hmmm, this is what I missed. :-(  I guess the only chance is to upload
to t-p-u, right? 
  
Afaict you could do a source amd64 arm64 armel armhf i386 mips mipsel
powerpc ppc64el s390x upload to unstable, so that binaries for all
release architectures were supplied by you rather than by buildds.


It would be a PITA to do (build all the binaries, bring them back to a
single box and mergechanges them into a single upload) but I'm pretty
sure it would work.






Re: Removing 2048 bit keys from the Debian keyrings

2014-08-31 Thread peter green

Jonathan McDowell wrote:

I would ask that DDs make some effort to help
those with weak keys get their new, stronger keys signed. Please sign
responsibly[4],
If you have signed someone's old key, is it considered responsible to
sign their new key based on a transition statement signed by the old
key? Or is a new face-to-face meeting required? I've seen plenty of
(sometimes conflicting) advice on signing keys of a person you have
never signed keys for before, but not much on the transition situation.
(Note: this is a general question to consider; I'm not personally in a
position where it would apply.)


My understanding is that the NSA and similar organisations can probably
crack 1024-bit keys, but the cost of doing so (assuming there hasn't been
some secret mathematical breakthrough) is likely sufficiently high that
it would be cheaper to infiltrate Debian the old-fashioned way (false
passports, putting agents through the NM process, etc.). Is that
understanding correct?






Re: gnutls28 transition

2014-05-03 Thread peter green

Dimitri John Ledkov wrote:

Hello all,

gmp has been recently re-licensed and all architectures and ports have
the updated gmp in jessie/sid. Well, all but powerpcspe & x32, both of
which recently have negative slopes on their build status graphs.
Thus GPLv2- and LGPLv3-compatible software packages can link against gnutls28.

Should we start the transition to gnutls28 by default, for all packages
that are compatible?

Can powerpcspe & x32 porters try to get the latest gmp built?
  
Personally I'd add a (build-)depends on the re-licensed gmp in the next
gnutls28 upload. That way packages can (build-)depend on the new gnutls
and be assured of getting a GPLv2-compatible version.







RSA vs ECDSA (Was: Bits from keyring-maint: Pushing keyring updates. Let us bury your old 1024D key!)

2014-03-04 Thread peter green


I am not sure what's the timeframe for GnuPG 2.1.0[1] release, but would
it be possible to skip the RSA and go directly for ECDSA, before we
start deprecating DSA? Or at least have an option to do so? (Well,
unless GnuPG 2.1 release is too much far in the future.)
  
IMO we need to phase out 1024-bit RSA/DSA keys as soon as reasonably
practical. Even if gnupg 2.1 were released tomorrow, we would still have
the problem of Debian stable releases and other distros carrying older
versions.


Also ECDSA shares with DSA the serious disadvantage over RSA that making 
signatures on a system with a broken RNG can reveal the key.







re: conflict between system user and normal user

2014-02-07 Thread peter green


What is the correct way to deal with this kind of problem? I cannot find
anything in policy about conflicts between system and non-system users.
  
I don't think there is much that can really be done to fix the
fundamental problem, which is that system users and regular users have to
live in the same namespace, causing a risk of conflicts.


There are two things I can see you could do to improve the situation
with your package.
1: Fail early; it's better to have preinst fail than it is to start
creating stuff with the wrong permissions/ownership.
2: Choose a less generic name that is less likely to cause conflicts. Do
you plan to use this user only for the db? If so, tango-db might make
sense; if not, maybe something like tango-control-system.






Re: amd64 arch and optimization flags?

2014-02-07 Thread peter green


 this is dangerous: it changes results, sometimes significantly (e.g. for
 complex numbers); only use it if you don't care about correctness or have
 verified it's still correct.

IME, audio processing software can get away with it. Csound and its
library of 400+ opcodes has been built with this option and I have had no
complaints yet.
  

Quite likely, yes.

What is dangerous about this option? I see a warning in the gcc docs,
  
Floating point numbers are an approximation of the real numbers. IEEE
754 defines the format of floating point numbers (on all common
modern platforms) and also sets rules for how accurate that
approximation should be and how exceptional conditions should be handled.


-ffast-math tells the compiler that you are more interested in speed
than in accuracy, repeatability and the handling of exceptional conditions,
so it may use methods that don't follow the precise rules laid down in
IEEE 754.


but how can I know if my program relies on a precise definition of IEEE
floating point?

With difficulty. You really need to understand exactly what the code is
doing and how sensitive it is to accuracy in the calculations.
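
As a minimal illustration of the kind of thing that changes: floating
point addition is not associative, so the re-association that -ffast-math
permits can alter results (Python shown here, but the same holds for C
doubles):

a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  #prints 1.0
print(a + (b + c))  #prints 0.0: the 1.0 is lost when rounding b + c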






Re: Bits from the Release Team (Jessie freeze info)

2013-10-26 Thread peter green

Johannes Schauer wrote:

Until these two issues are fixed we will not be able to get an algorithmic
answer to the question of what constitutes the minimum required set of
packages.
  
There is also the complication of what I will call non-key self-building
compilers; fpc is an example.


These are not needed to bootstrap the core of Debian, but if one wants to
bootstrap all of Debian they will need to be built. Since the only way
to build them is with themselves, they cannot be bootstrapped natively,
even with the help of build profiles. So the only way to bootstrap them
is to either cross-build them or start with a binary from upstream.







Re: skipping bioinformatics on some architectures ?

2013-10-19 Thread peter green


Right now, we have the problem that an upload of a
 previously compiled source package that’s “totally unimportant” will be
 sorted before all source packages in state “uncompiled”.

Only if we also get a waiver that allows testing to go out-of-sync for these
arches.  Otherwise, no thanks.
  
For release architectures built by the debian.org build services, I
agree that out-of-date packages should continue to be prioritised. If,
due to short-term conditions, a release architecture is behind on
building, then it's usually* more important to keep packages up to date
so they don't block testing transitions than it is to attempt to build
packages the architecture has never built before.


If a release architecture is getting behind on building on a long-term
basis, then IMO either more buildd hardware should be obtained or the
port should lose its release status.


But that isn't what we are talking about here; we are talking about an
architecture that was kicked out of testing and out of the official
archive years ago, degraded to an almost unusable state, and is now
attempting to become usable again. For them it's probably far more
important to build as many important-but-not-yet-built packages as
possible than it is to make sure every package they have built is up to
date.


* Though there are corner cases where a new package is actually pretty 
important because some other package has been updated to have a 
dependency on it.






re: Compatibility of libs for Berkeley DB (libdb5.1-dev or libdb4.8-dev)

2013-10-08 Thread peter green


I cannot install libdb4.8-dev + libdb4.8, because it conflicts with libdb5.1.

This does not seem to be true: the dev packages conflict, but afaict the
libraries themselves (at least the versions from Debian squeeze and
wheezy) do not. So as long as you don't need libdb5.1-dev installed you
should be fine.


If you need to use other dev packages that depend on the db dev packages
to build bitcoind, then I would suggest building it in a squeeze chroot.
Once you have built it, it should be easy enough to install the required
binary libraries on your wheezy system, as unlike dev packages they
rarely conflict.


Please direct further queries to debian-user.





Is udev's new network naming really as stable as they claim? (was: Re: overriding udev rules)

2013-09-24 Thread peter green


They are stable as long as the kernel and the hardware do not change too
much; e.g. enabling the other graphics card in a hybrid setup
sometimes adds a PCIe bus, so all names shift around.
  
Or adding something like a firewire card which happens to be based on a
PCIe-to-PCI bridge chip would also add a bus and therefore has the
potential for names to shift around.


The new scheme seems to have the same problem the original kernel scheme
had, but moved one level up. Instead of network names depending on the
order in which the kernel enumerated network adaptors, they now depend on
the order in which the BIOS enumerated "PCI busses"*. Is that an
improvement over just letting the kernel assign names? Is it an
improvement over Debian's scheme?


For servers the answer is probably yes: servers often have a lot of
network adaptors, adaptors may be replaced by maintenance personnel who
are different from those setting up the OS, and the chances of people
adding or removing hardware that creates extra PCI busses are pretty low.


For desktops, on the other hand, I'm inclined to believe the answer is no.
Desktops rarely have more than one or two network adaptors, but they are
much more likely than servers to have things like graphics cards,
firewire cards or serial cards added or removed, which can mess with
the PCI bus numbers.


* in quotes because on modern hardware a logical "PCI bus" may or may not
represent a real PCI bus.






re: Non-identical files with identical md5sums on Debian systems?

2013-08-06 Thread peter green

I do occasionally check for identical files on different systems by
comparing their md5sums. So, just out of interest, could someone tell me
(how to find out) how many non-identical files with identical md5sums
there are there on a typical (say, amd64) Debian system?

Assuming the output of md5 is random, uncorrelated 128-bit binary numbers,
and making a couple of other approximations, we can approximate the
expected number of collisions with the formula

(n*(n-1)/2)/(2^128)

where n is the number of unique files on your system.

I used the command "cat /var/lib/dpkg/info/*.list | wc -l" to get an
approximation of the number of Debian files on my main Debian system,
which has lots of stuff installed. I will assume all these files are
unique.

plugwash@debian:~$ cat /var/lib/dpkg/info/*.list | wc -l
304431

So the expected number of md5 collisions would be approximately

((304431*304430)/2)/(2^128)

Plugging that into octave gives us an answer of

octave:1> ((304431*304430)/2)/(2^128)
ans =  1.3618e-28

The bottom line is that under practical conditions, the only way you
are going to see two files with the same md5 is if someone went
out of their way to create them and sent them to you.






Re: epoch fix?

2013-05-07 Thread peter green


But either way, the problem is that .dsc and .deb version numbers are
not used only by dpkg.  Lots of tools use them, inside and outside of
Debian packages, inside and outside of Debian infrastructure.  We
cannot be sure that they all use dpkg's own interfaces to do so (e.g.
dpkg --compare-versions, perl -MDpkg::Version).
  

Yes.

Not to mention that just because the Debian archive only cares about
version numbers within the last few releases does not mean other tools
may not care about maintaining a sane ordering over longer periods.


I strongly believe that a proposal that changes version numbers in a way
that breaks the assumption that a<b and b<c implies a<c is a horrible
idea and a cure worse than the disease.
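
For illustration, this is the ordering (and its transitivity) that tools
outside dpkg rely on, here via python-debian's version comparison; any
scheme that rewrites version strings must keep these comparisons intact:

from debian.debian_support import Version

#Debian version ordering is total and transitive; tools sort on it
assert Version('1.0-1') < Version('1.0-2') < Version('1.1-1')
#an epoch dominates everything after it
assert Version('1:0.9') > Version('2.0')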







Re: History of Debian bootstrapping/porting efforts

2012-11-22 Thread peter green


Since yesterday, my tools can now finally turn the whole dependency
graph
Does this whole dependency graph include the implicit build-dependency 
every package has on build-essential?



The above case for example has no alternative solution, as the cycle is
of length two and has no other way of breaking it than building
pkg-config without libglib2.0-dev. Since this is unlikely to be possible

I don't see why it would be impossible to hack up the glib source
package to not rely on pkg-config. Whether that is a good idea or not is
another matter.

 and since the assumption is that only
build dependencies might be dropped when necessary but not binary
dependencies, a possible solution might be cross compilation.
  
It seems pretty clear to me that there is a core of software that will
need to be cross-built as the first stage of bootstrapping a port.
Obviously essential and build-essential fall into this category, but
while I'm sure there are ways one could hack away the cycles and make
things like pkg-config and debhelper natively bootstrappable, I don't
think there is much point in doing so.


What I'd ideally like to see is a tool able to generate a directed
acyclic graph of build jobs (some cross, some native; there should be an
option in the tool as to whether to prefer native or cross-build jobs)
that takes the user from having no packages for the target architecture
to having a set of bootstrap packages that can be used to seed the
regular building process. (A toy sketch follows.)
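
A toy sketch of that idea using the Python standard library's topological
sorter (Python 3.9+); the jobs and edges here are entirely made up for
illustration:

from graphlib import TopologicalSorter

#job -> set of jobs that must complete first; "cross:" jobs run on the
#build machine, "native:" jobs run on the target using earlier stages
jobs = {
    'cross:gcc':         set(),
    'cross:glibc':       {'cross:gcc'},
    'cross:coreutils':   {'cross:glibc'},
    'native:pkg-config': {'cross:glibc', 'cross:coreutils'},
    'native:glib2.0':    {'native:pkg-config'},
}

for job in TopologicalSorter(jobs).static_order():
    print(job)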








library linking problems with multiarch cross-building

2012-10-05 Thread peter green
My previous experiments with multiarch cross-building had run into lots
of build-dependency and tool problems, and as a result I hadn't got
around to actually building anything beyond trivial test programs.


Unfortunately I have now discovered that when I try to link against
some libraries I get link errors.


root@debian:/# arm-linux-gnueabihf-gcc test.c -lglib-2.0
/usr/lib/gcc/arm-linux-gnueabihf/4.7/../../../../arm-linux-gnueabihf/bin/ld: 
warning: libpcre.so.3, needed by 
/usr/lib/arm-linux-gnueabihf/libglib-2.0.so, not found (try using -rpath 
or -rpath-link)
/usr/lib/gcc/arm-linux-gnueabihf/4.7/../../../../arm-linux-gnueabihf/bin/ld: 
warning: libpthread.so.0, needed by 
/usr/lib/arm-linux-gnueabihf/libglib-2.0.so, not found (try using -rpath 
or -rpath-link)
/usr/lib/gcc/arm-linux-gnueabihf/4.7/../../../../arm-linux-gnueabihf/bin/ld: 
warning: librt.so.1, needed by 
/usr/lib/arm-linux-gnueabihf/libglib-2.0.so, not found (try using -rpath 
or -rpath-link)
/usr/lib/arm-linux-gnueabihf/libglib-2.0.so: undefined reference to 
`pthread_rwlock_wrlock@GLIBC_2.4'


Any thoughts on what could be causing this and how best to fix it?





thoughts on using multi-arch based cross-building

2012-09-30 Thread peter green
I've been attempting to use multi-arch for cross-building packages for
raspbian (a Debian derivative I am working on for armv6 hardfloat) and
have run into a few things which I thought I'd share and/or ask about.



Build-depends installation:
apt-get build-dep is fine if you are building an unmodified package from 
a repo but it's of no use if you have modified the build-dependencies to 
make them satisfiable.


dpkg-checkbuilddeps doesn't tell me what architecture the packages need
to be for, and I'm not sure it can (since to do so it would need to know
whether packages that are not installed are multi-arch foreign or not).


Does a tool exist that can be told to install the build-depends needed to
build the debianised source tree in directory X for architecture Y? If
not, IMO such a tool (or a new option in an existing tool) needs to be
created.



Pkg-config:
A solution needs to be found for this. So far I have worked around it by
hacking the package to be multi-arch foreign and then manually creating
the symlink to the cross wrapper, but there has to be a better solution.



Packages that need a specific gcc version:
Sometimes packages need to be built using a non-default gcc version. We 
would rather they didn't but i'm sure there will always be such cases.


Conventionally in such cases I've added a build-depends on gcc-<version>
and then set CC to gcc-<version>, but this obviously isn't suitable for
cross-building.


Setting the CC environment variable is easy to fix (set it to
<triplet>-gcc-<version>, which afaict is fine for both native and cross
building), but I can't think of a good and simple solution to the
build-dependency problem, since the package name for the cross-compiler
depends on the architecture.



Arch all development dependency packages:
In Debian there are some development dependency packages, typically
packages that depend on the latest version of a library. Since these
packages don't contain anything that is actually architecture-specific,
they are usually arch all. One example is tcl-dev.


The problem is that dpkg/apt always treat arch all packages the same as
packages for the native architecture, making these arch all packages
useless for cross-building.


I see two possible solutions to this:
1: make those dependency packages arch any. This will take up a bit of
archive space, but since the packages in question are empty anyway it
shouldn't be too bad.

2: introduce a concept of effective architecture(s) for arch all packages.





nacl and CPU frequency.

2012-09-22 Thread peter green
I'm trying to get nacl built on more architectures in Debian; in
particular I want to see it build on arm*.


In order to build successfully, nacl needs to determine the CPU frequency
(the CPU frequency determined at build time is not used in the final
binaries afaict, but if it's not determined then the build will fail, as
nacl will consider the implementation broken, and if it can't find any
non-broken implementations it won't build).


Currently the generic implementations (there are also some CPU-specific
implementations) do this by looking at various locations in /proc and
/sys, specifically (in order):


/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
/sys/devices/system/cpu/cpu0/clock_tick
/proc/cpuinfo

If it fails to find the info in /sys or /proc, it then falls back to
trying lsattr and psrinfo, but neither of these seems to exist in current
Debian.


Unfortunately it seems /sys/devices/system/cpu/cpu0 doesn't exist on all
systems (even if /sys is mounted), and while /proc/cpuinfo always seems
to exist, it doesn't always have CPU frequency information. (A sketch of
the probing order appears below.)
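
For concreteness, a minimal Python rendering of that probing order (note
the /proc/cpuinfo field name used here is the x86 one and varies by
architecture, which is exactly the problem):

import re

def cpu_freq_hz():
    #try the cpufreq sysfs node first; sysfs reports kHz
    try:
        path = '/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq'
        with open(path) as f:
            return int(f.read()) * 1000
    except OSError:
        pass
    #fall back to /proc/cpuinfo ("cpu MHz" is x86-specific)
    try:
        with open('/proc/cpuinfo') as f:
            for line in f:
                m = re.match(r'cpu MHz\s*:\s*([\d.]+)', line)
                if m:
                    return int(float(m.group(1)) * 1e6)
    except OSError:
        pass
    return None  #mirrors nacl's failure mode: frequency unknown

print(cpu_freq_hz())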


Now for my questions (the first is directed at developers in general;
the second is directed at Sergiusz, but if others know the answer I
wouldn't mind them answering it):


Is there a portable way of determining CPU frequency in Debian?
Do you know how important it is to have an accurate CPU frequency
determination for nacl? E.g. if the true CPU frequency can't be
determined, would it be ok to use bogomips instead?






Re: nacl and CPU frequency.

2012-09-22 Thread peter green

Russell Coker wrote:

On Sun, 23 Sep 2012, peter green plugw...@p10link.net wrote:
  
In order to build successfully nacl needs to determine the CPU frequency 
(the CPU frequency determined at build time is not used in the final 
binaries afaict but if it's not determined then the build will fail as 
it will consider the implementation broken and if it can't find any 
non-broken implementations it won't build).



If the build process is trying to discover information that it then
discards then why not just patch it to not do so?

It's not the build process itself doing the determination; it's the code
being built and tested (as I said, the build process tries various
implementations until it finds one that works).

So sure, I could patch the build process to force it to build the generic
implementation without testing it, but if it then doesn't work for many
users I won't really have gained much.

Surely you as the DD can determine which architectures aren't broken

Not being able to read the clock speed doesn't seem to be determined by
Debian architecture; it seems to be kernel related. E.g. on my
beagleboard XM I can see the clock speed in /sys but on my imx53 I
can't. On my amd64 laptop I can't see the speed in /sys but I can see it
in /proc/cpuinfo.








Re: nacl and CPU frequency.

2012-09-22 Thread peter green


On 22/09/12 15:28, peter green wrote:
 I'm trying to get nacl built on more architectures in debian, in
 particular I want to see it build on arm*.
 
 In order to build successfully nacl needs to determine the CPU frequency


I think you need to analyse what it's doing with the CPU frequency,
because on most modern hardware the frequency changes due to power
saving and other stuff.
  

It would appear that nacl is designed to have a concept of measuring elapsed
time in CPU cycles. But when measurements in true CPU cycles (obtained
using inline assembler) are not available it falls back to using time values 
from the OS and multiplying them by the number of CPU cycles per second. 
This appears to be used for measuring performance of various things.


I therefore conclude that if it's not returning elapsed times in true
CPU cycles, it probably doesn't matter much if the supposed CPU speed
and the real CPU speed are not exactly the same.

As such, unless someone objects I plan to patch the code to fall back
to using bogomips as a pseudo CPU speed if the true CPU speed cannot be
determined.
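
For concreteness, the fallback I have in mind would be something along
these lines (a sketch, not the final patch):

  /* Sketch of the proposed fallback: read the bogomips value from
   * /proc/cpuinfo and treat it as a pseudo clock speed. Not the
   * final patch. */
  #include <stdio.h>

  static double bogomips_pseudo_hz(void)
  {
      char line[256];
      double bm;
      FILE *f = fopen("/proc/cpuinfo", "r");

      if (!f)
          return 0;
      while (fgets(line, sizeof line, f)) {
          /* the kernel spells this "bogomips" or "BogoMIPS"
           * depending on architecture */
          if (sscanf(line, "bogomips : %lf", &bm) == 1 ||
              sscanf(line, "BogoMIPS : %lf", &bm) == 1) {
              fclose(f);
              return bm * 1e6; /* treat one bogomips as 1 MHz */
          }
      }
      fclose(f);
      return 0;
  }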






assumptions about the build environment.

2012-09-21 Thread peter green
While working on debian, one thing I have not managed to find is
documentation on what packages can and can't assume about the build
environment. Does such documentation exist, and if not, should it be
created?


Some specific cases I'm wondering about:

I just discovered that on my beagleboard XM (under armhf sid) nacl
(which previously built on a debian experimental armhf buildd but not a
debian unstable armhf buildd) will build if /sys is mounted but will
not build if it is not mounted. Can packages assume that /sys will be
mounted in the build environment or not?


IIRC it is generally established that packages are not allowed to rely
on an internet connection during build, but if one is present are they
allowed to assume it's non-broken? I recently came across a package
(sslh) which fails to build in the presence of nxdomain hijacking. Is
that a bug?


Some time ago I found a package (I think it was openjdk, but I don't
remember for sure) that relied on uname -r, such that linux32 had to be
used to build it in an i386 chroot on an amd64 host. However, since
then I'm pretty sure I've seen similar cases with other packages on
other architectures being treated as bugs.







Re: assumptions about the build environment.

2012-09-21 Thread peter green

peter green wrote:
Some time ago I found that a package (I think it was openjdk but I 
don't remember for sure) which relied on uname -r

sorry, I meant -m not -r





re: dpkg-buildpackage now sets DEB_BUILD_HOST etc for you?

2012-03-29 Thread peter green

Now, you can build packages without using dpkg-buildpackage by calling
rules directly, and in that case the rules file would need to call
dpkg-architecture, but someone would have to convince me that that was
an interface worth supporting for non-native builds

The big reason it's worth supporting IMO is that with most packages you
can resume after a failed build by manually running debian/rules
build. When fixing compile errors in a large package I don't want to
have to restart the build from scratch after every file I fix.
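
i.e. the workflow is something like:

  $ dpkg-buildpackage -us -uc   # build fails partway through
    ... fix the offending source file ...
  $ debian/rules build          # make resumes roughly where it stopped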


Of course I will do a proper build with dpkg-buildpackage at the end,
but only after I've fixed all the compile errors.





suggestion: build-blockers field in control file

2011-11-29 Thread peter green
Some packages have runtime dependencies on packages that they do not
have corresponding build-dependencies for. This leads to the building
of uninstallable packages, which in turn leads to problems with the
testing transition of packages.


Currently there are two workarounds for this situation

1: manually alter the package's architecture list to limit building to
those architectures where the runtime dependencies are available

2: add an artificial build-dependency

Neither is ideal: the first must be manually undone if and when the
dependencies do become available. The second is an abuse of the
build-depends field (the package isn't REALLY needed for building) and
causes packages to be unnecessarily installed in build environments
(both on autobuilders and for those manually building the package),
wasting time and network bandwidth.


I therefore propose a new control field for source packages,
build-blockers. Autobuilder management systems should generally treat
build-blockers the same as build-depends, but the systems that actually
do the building need not take any notice of them.
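
As a sketch (the field name and exact semantics are of course up for
discussion):

  Source: foo
  Build-Depends: debhelper (>= 7)
  Build-Blockers: libbar-runtime (>= 2.0)

wanna-build would hold the package in dep-wait until libbar-runtime is
available on the architecture in question, but sbuild would not install
libbar-runtime into the build chroot.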


What do others think?





re: what if a package needs to be recalled

2011-11-20 Thread peter green

Just curious, let's say version 15.xxx of a package is released but then
found to be faulty, and upstream isn't releasing a new version soon.

OK... faulty is a rather vague term.


Can the developer somehow recall it?

Not really. It's probably theoretically possible to remove a package
from debian and then reupload a lower version, but it would require the
intervention of the ftpmasters, and it wouldn't achieve much because,
as you say, people would have already upgraded.


Or he can repackage 14.xxx as 15.xxx.1 but then other
packages depending on  14 etc. will get the version wrong and the
numbering will be misleading.

It's possible to use a version number like 15.xxx+really14.xxx, but
it's ugly to say the least.

It's also possible to use an epoch, e.g. 1:14.xxx; the downside of that
approach is that the package has to carry the epoch forever.

Afaict it's pretty rare that a package so badly broken that reversion
is considered the only reasonable course of action makes it into
debian.






Bug#601455: general: can't stop daemon using /etc/init.d/foo stop when, disabled via /etc/default/foo

2011-10-20 Thread peter green

Many packages seem to provide ENABLE/DISABLE variables in
/etc/default/foo, providing a confusing red herring for this
task --- a second method which does not work nearly as well,
as you pointed out
Though there are some situations where it is necessary. Consider vtund
for example, which has separate enable/disable flags for running in
server and client modes (with the potential for multiple separate
client instances).



A complicating factor is that the sysadmin may already have customized
some ENABLE/DISABLE settings and a move like this should not override
their settings.  So perhaps packages should stop advertising the
ENABLE/DISABLE vars in /etc/default/package, but continue to respect
them when set.

Regardless of any plan to discourage use of the /etc/default mechanism
(I think removing it altogether is not really reasonable), I think the
original bug of being unable to stop a daemon after disabling it in
/etc/default still needs to be fixed.








Re: python 2.7 in wheezy

2011-10-07 Thread peter green

Ummm ... don't we strongly encourage all package maintainers to read
d-d-a?  If not, we should.  It is very low traffic and sometimes
important.



Sure: “All developers are expected to be subscribed to this list.” [0],
but Oliver was referring to “users”. On the other hand, his example mail
(To: duplic...@packages.debian.org) is obviously sent to developers, so
I'd guess no harm is done for our users.
Remember the removal mails also end up in the history on packages.qa.debian.org 
which afaict is the main point of reference for those who are trying to find 
out what is going on with a package.


Not everyone who has an interest in why the package they rely on is suddenly
no longer in testing is a dd or a package maintainer.






Re: Using -Werror in CFLAGS for a debian package build

2011-05-22 Thread peter green

(note: this message quotes from multiple mails by different people)

Wouter> First and foremost, I do not believe that setting -Werror in a
Wouter> debian/rules file is the best way to maintain a package; -Werror is a
Wouter> development option which should not be used in a release build (which a
Wouter> Debian package is). When a package sets -Werror in its debian/rules
Wouter> file, it *will* FTBFS when the default compiler is changed, while not
Wouter> necessarily breaking the package itself. I don't think it adds value.
The thing is that there are some warnings that really SHOULD be errors,
as code that generates them is almost certainly wrong. Generally a
build failure is less serious than a subtly broken package.

A package maintainer could try and identify such warnings individually 
but I don't think many maintainers would be willing to go to that effort.


Maybe what is really needed is a -Werror=serious or similar option that
turns the worst warnings (stuff like converting a pointer to/from an
integer of the wrong size, incompatible implicit declarations and other
stuff that indicates something is SERIOUSLY wrong) into errors while
leaving minor warnings (things that really just indicate that the code
could do with a little cleanup) as warnings.
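
GCC can already promote individual warnings to errors with
-Werror=<warning>, so maintainers who want this today can approximate
it by listing the serious ones by hand. For example (my own
illustration), code like this only warns by default but is almost
certainly wrong:

  /* Compiled with e.g.
   *   gcc -Werror=implicit-function-declaration \
   *       -Werror=pointer-to-int-cast
   * these become hard errors while cosmetic warnings stay warnings. */
  void *get_buffer(void);

  int main(void)
  {
      int fd = opn("/dev/null");  /* typo: implicit declaration,
                                     assumed to return int */
      long v = (int)get_buffer(); /* pointer truncated through a
                                     32-bit int on 64-bit targets */
      return fd + (int)v;
  }

What's missing is a single -Werror=serious style umbrella so that every
maintainer doesn't have to curate the list themselves.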

Russ> I was a bit dubious about it as well, for the reasons you state, but
Russ> please note that GCC 4.6 introduced a major new warning about set but
Russ> unused variables and, so far, every large or medium C code base that I
Russ> have has had at least one instance of that warning.  And I'm usually
Russ> pretty picky about such things.
Russ>
Russ> If -Werror had not been disabled for this warning, my guess is that nearly
Russ> every package using -Wall -Werror not previously tested with 4.6 would
Russ> FTBFS.
Is this really THAT big a deal? Is it really worth making dubious
changes to build dependencies (gcc in this case, but a similar saga is
going on with dash) to temporarily hide (and therefore make harder to
fix) FTBFS bugs that are usually trivial to fix in the package that
suffers from them? (Worst case you can just change the CFLAGS in this
case, or set CONFIG_SHELL in the dash/configure case.)


Yes, it may mean some packages need a sourceful NMU rather than a
binNMU to transition, but is that really such a huge deal?





Re: piuparts-MBF: prompting due to modified conffiles which where not modified by the user

2009-08-25 Thread peter green

So, what do you suggest for this? Of course, this file _is_ a conffile
(i.e. should never be automatically overwritten, so just moving it
over to /var/lib is not just compiling with a different path set). If
I don't automatically upgrade the file, users will end up with a
confused daemon unable and unwilling to move on. How should I proceed
with this?


My understanding is that in this situation the correct solution is not
to ship the file in the package at all, and instead have the maintainer
scripts create/edit it on install/upgrade.





re: What happens to snapshot.debian.net?

2009-03-24 Thread peter green

I am trying to get rid of the problem with lvm2. Therefore I have to
get some old packages from snapshot.debian.net. Unfortunately it seems
to be broken for some time now.

While the syntax you use doesn't seem to work anymore and /archive
appears empty, it seems you can still browse directly to a year,

e.g. for 2009: http://snapshot.debian.net/archive/2009/ (the trailing /
matters).

Interestingly, it appears the requests are being proxied to
hikaru.fsij.org.






Re: mass bug filing for undefined sn?printf use

2009-01-16 Thread peter green

IMHO any bugs filed merely due to the presence of the code without the
 means to trigger the error in normal builds should be wishlist.
What is particularly insidious about this issue is that it could easily
be activated by accident if the maintainer or an NMUer builds and
uploads a new version of the package on a system/chroot that happens to
have hardening-wrapper installed (most likely left over from building a
previous package).
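
For those who haven't seen it, the construct at issue is (as I
understand the MBF) along these lines:

  /* Undefined: the destination buffer also appears as a source
   * argument. It often appears to work with plain glibc, but the
   * fortified variants pulled in by hardening-wrapper are entitled
   * to abort or mangle the output. */
  #include <stdio.h>

  int main(void)
  {
      char buf[64] = "base";
      snprintf(buf, sizeof buf, "%s-suffix", buf);
      puts(buf);
      return 0;
  }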


IMO, because it can lead to packages that were not previously broken
breaking after a rebuild, this deserves a severity of at least normal.






Re: [Fwd: Re: Debian Live Lenny Beta1]

2008-09-06 Thread peter green

I have tried the amd64-version on a Lenovo R61 as well as on my
Macbook.  Maybe I should try the i386 on the Macbook because it did
not boot properly and I could not use the keyboard.


someone already reported this (it's a problem with syslinux), but i
have little to no hope that this will get fixed; i don't have access to
macbook hardware.



Unfortunately I don't think this is a bug in syslinux; I have seen it
happen with lilo and with windows CDs as well. I believe it is a bug in
Apple's compatibility support module, and it seems to be somewhat
sensitive to the exact boot method. I haven't tried debian-live, but
with other CDs I find it usually succeeds if I remove all power from
the machine before starting. I then hold the C key during bootup to
boot the CD and release it as soon as I hear the CD drive spin up. If I
boot through rEFIt it always fails.










re: 1 of 400 dpkg databases corrupt?

2008-08-23 Thread peter green

I suspect this means that 0.25% (1 of 400) of the machines reporting to
popcon.debian.org got a corrupt/inconsistent dpkg database.
Afaict dpkg has no mechanism in place for detecting or recovering from
database corruption. The format of the database means that corruption
tends to lead to dpkg forgetting that a number of packages exist and
thinking one package has a very long description.

The next time the user uses apt it will complain about broken packages
and advise the user to run apt-get -f install. Apt will then reinstall
forgotten packages that other packages depend on, but if nothing
depends on a forgotten package it will be left present on the system
but absent from the dpkg database.
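
To make the failure mode concrete: /var/lib/dpkg/status is a flat text
file of stanzas separated by blank lines, roughly

  Package: foo
  Status: install ok installed
  Version: 1.0-1
  Description: the foo utility

  Package: bar
  Status: install ok installed
  ...

so (as I understand the parser's behaviour) corruption that destroys
the blank-line separators can make everything up to the next intact
stanza boundary parse as the tail of a single field, typically a
description, at which point the packages in between simply vanish from
dpkg's view.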









Re: Include on first iso: m-a, build essential, kernel headers

2008-07-17 Thread peter green

It looks like the search you tried is just broken.

The search tool works, but it is rather dumb and the instructions are
misleading. Searching for _i386 will only find packages with _i386 in
the filename, so it will NOT find arch all packages like
module-assistant. There does not seem to be a way to search for arch
all packages within only the images for a particular architecture.






Re: Considerations for lilo removal

2008-06-16 Thread peter green



I am wondering if it is a good idea to remove lilo entirely. At the
moment, lilo has been pulled from testing, and the code is in a shape

Can either version of grub handle all the cases that lilo can? For
example, can either of them handle the situation where root is on lvm
and there is no separate /boot partition? Last I checked, d-i defaulted
to lilo in that situation.


If not, then removing lilo will leave d-i with no ability to install a
bootloader in those situations and, worse, leave some users with no
upgrade path.






Re: what about an special QA package priority?

2008-05-21 Thread peter green

none*. And not cleaning up yourself also improves performance for short
running apps.

How so?


The libraries request memory from the kernel in pages (4k on i386; this
will vary on other architectures), then run their own heap management
system within those pages. Some memory managers will return pages to
the OS when they become completely empty; others will not.

When the application quits, the kernel cleans it up: every page it owns
is reclaimed without even having to look at the memory manager
structures inside.

In other words, freeing the memory you have allocated before quitting
takes time and achieves nothing useful.
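
A toy illustration of the trade-off:

  /* Toy example: builds a large linked list, then exits. Walking the
   * list again just to free() every node would take time; returning
   * from main() and letting the kernel reclaim the whole address
   * space achieves the same cleanup for free. */
  #include <stdlib.h>

  struct node { struct node *next; char payload[100]; };

  int main(void)
  {
      struct node *head = NULL;
      int i;

      for (i = 0; i < 1000000; i++) {
          struct node *n = malloc(sizeof *n);
          if (!n)
              abort();
          n->next = head;
          head = n;
      }
      /* ... do the real work with the list ... */

      return 0; /* no free() loop: the kernel reclaims the pages */
  }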








re: Sorting out mail-transport-agent mess

2008-05-15 Thread peter green
2) Introduce a default-mta package (currently) depending on exim4. All 
packages requiring a MTA should depend on default-mta | mail-transport-agent.  
This will have the extra advantage that we (and others like CDDs and derived 
distros) easily could swap default MTA.
What concerns me about this approach is that it could easily end up
with dist-upgrades swapping out users' mail systems without warning. I
would consider such behaviour unacceptable, as it could easily cause
mail loss if the user has a customised configuration.

It seems to me that the ideal solution would be to fix apt/the
repository system so that the default for a virtual package can be
explicitly designated.

Failing that, how about a default-mta virtual package that is provided
by *exactly one* real package? That way it is easy to change the
default, but upgrades should stay with the MTA they already have. Under
that system changing the default MTA would require changes to two
packages, which seems manageable to me.
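
Concretely, packages needing an MTA would declare

  Depends: default-mta | mail-transport-agent

and, under my variant, exactly one real MTA (currently exim4) would
carry Provides: default-mta. On new installs apt picks the default; on
upgrades a system that already has some other MTA installed still
satisfies mail-transport-agent and is left alone. Changing the default
then means moving the Provides from one package to another.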





[OT] Need old Packages.gz and Release Files

2008-04-26 Thread peter green

I have had an accident on my Debian-Archiv-Server and unfortunately the
files Packages.gz, Packages.bz2, Sources.gz, Sources.bz2 and Release
from the directories

Afaict snapshot.debian.net has woody down to r4 and all point releases
of sarge and etch.

For older point releases of woody you could grab the jigdo/template
files for the CDs from http://cdimage.debian.org/cdimage/archive/jigdo/.
You will have to reconstruct a single big Packages/Release file set
from the broken-down ones on the CDs yourself, though, and
unfortunately downloads of those .jigdo files seem to be hanging for me
at the moment.


For potato I doubt you have much hope except for the last point release
(which is available on archive.debian.org).






Re: Firefox bugs mass-closed.

2007-10-20 Thread peter green

We encourage people to not file duplicate bug reports, and check the BTS
first. So I check the BTS, the bug is there, I don't file a new one (I
do send a me too). 6 weeks later, the bug is closed because the
submitter's email is bouncing and he's on vacation anyway.
It sounds like what is really required is the ability for a bug to have
multiple reporters recorded. Then any queries about a bug could be sent
to all the reporters, not just the one who happened to submit the
report first.



