Bug#893702: Please stop build-depending on pdftk

2018-03-22 Thread Vagrant Cascadian
Control: affects 892539 diffoscope

On 2018-03-21, Matthias Klose wrote:
> pdftk still depends on GCJ, and is likely to be removed when gcj is
> removed. Please stop build-depending on pdftk.

FWIW, there's a reference to a fork of pdftk that doesn't require gcj:

  https://bugs.debian.org/892539


live well,
  vagrant


signature.asc
Description: PGP signature
___
Reproducible-builds mailing list
Reproducible-builds@lists.alioth.debian.org
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/reproducible-builds

Re: Please review the draft for week 139's blog post

2017-12-26 Thread Vagrant Cascadian
On 2017-12-26, Ximin Luo wrote:
> This week's blog post draft is now available for review:
>
> https://reproducible.alioth.debian.org/blog/drafts/139/

The links to some of alioth's git repositories (maybe only the
jenkins-related ones) are apparently not valid at the moment; I'm
assuming this is due to the deprecation of alioth.debian.org, and/or a
migration away from alioth that is in progress?

  https://anonscm.debian.org/git/reproducible/jenkins.debian.net.git/commit/?id=874ff3e9
  https://anonscm.debian.org/git/reproducible/jenkins.debian.net.git/commit/?id=dd9b5305

This makes me wonder if we'll need to update all the old posts as well
so they still have valid links...

I'm not sure what the "correct" links are, so I'm not sure what to
update them to.


live well,
  vagrant



Re: Making schleuder build reproducibly

2017-11-07 Thread Vagrant Cascadian
On 2017-11-04, Holger Levsen wrote:
> On Mon, Oct 30, 2017 at 06:21:39PM +0100, Georg Faerber wrote:
>> @dkg: It seems there is still a bug / race in dirmngr, which leads to
>> errors like "can't connect to '127.0.0.1': no IP address for host" and
>> in turn "marking host '127.0.0.1' as dead". See the attached debug log for
>> details; the log was taken on October 1st with dirmngr out of unstable.
>> I'm happy to debug this further, if needed.
>
> indeed, random success+failure is visible for 3.2.1-1 on armhf:
>
> https://tests.reproducible-builds.org/debian/rb-pkg/buster/armhf/schleuder.html

While there are some successes, they seem to be a rare minority if you
filter by test history for armhf (upper left).

The last armhf build failure was due to an unreachable local keyserver.
The armhf builders all have firewalls, but that wouldn't explain why it
sometimes succeeds and usually fails, and I didn't set up anything
specifically firewalling localhost...


live well,
  vagrant



Bug#876239: reproducible: re-deploy odxu4 (disk failure)

2017-10-01 Thread Vagrant Cascadian
Package: jenkins.debian.org
Severity: normal
X-Debbugs-Cc: reproducible-builds@lists.alioth.debian.org

odxu4-armhf-rb.debian.net had a critical disk failure.

It's been reinstalled, so the ssh keys have changed:

256 SHA256:qdZCQDO2crGdXaDopH9OP5qen3XHml3Y68/BsEmmP8I root@odxu4a (ECDSA)
256 SHA256:hINS3Xrk/fLz87Ogak/iZIJY9vy4NeGFVCi+5/0O7DA root@odxu4a (ED25519)
2048 SHA256:2rwh4vwqwICL+52nUbDFF7iksu2hBYKEdQ6l7RL4cTo root@odxu4a (RSA)

The hostname, IP address, and ssh port should all be the same, and it
hasn't been removed from the jenkins git repository... so everything
just needs to be re-deployed on the new install.

live well,
  vagrant



Re: Please review the draft for week 126's blog post

2017-09-24 Thread Vagrant Cascadian
On 2017-09-24, Mattia Rizzolo wrote:
> On Sun, Sep 24, 2017 at 07:14:22PM +0100, Chris Lamb wrote:
>> Naturally, please let know if there are other things like this.
>
> What I was on about is that the sentence implies that what's *right after*
> is the complete changelog of the release, which is not right.
> I've committed a change to the template; could you please check
> whether it looks alright to you?

It does appear to imply that all the following changes are in the
released version, which I don't think is correct.

For example, I don't think "--auto-build" was part of reprotest 0.7,
even if it was committed in the same week 0.7 was released ... ideally
there would be an automated way to distinguish between commits not
included in the released version and those that are.

Looking at:

  https://anonscm.debian.org/git/reproducible/misc.git/tree/reports/bin/generate-draft.template?id=c1a30a0

I guess referencing the changelog helps, though who actually follows
links? :)


live well,
  vagrant



Bug#876239: reproducible: re-deploy cbxi4a (disk failure)

2017-09-19 Thread Vagrant Cascadian
Package: jenkins.debian.org
Severity: normal
X-Debbugs-Cc: reproducible-builds@lists.alioth.debian.org

cbxi4a-armhf-rb.debian.net had a critical disk failure.

It's been reinstalled, so the ssh keys have changed:

256 SHA256:C8nL+YAOEhjWYH2tkeoP00sfiWi4bI2ZlI400idPBqU root@cbxi4a (ECDSA)
256 SHA256:oxxy+996R6nClC9ors/Py20vsJEjN2HrGgzvhsSwKfw root@cbxi4a (ED25519)
2048 SHA256:EDUXTiDwa7/W2WAooNdHJNTJJd+GJZypCsqcJ7axLEM root@cbxi4a (RSA)

The hostname, IP address, and ssh port should all be the same, and it
hasn't been removed from the jenkins git repository... so everything
just needs to be re-deployed on the new install.


live well,
  vagrant



Bug#876212: reproducible: re-add armhf node ff64a

2017-09-19 Thread Vagrant Cascadian
Package: jenkins.debian.org
Severity: normal
Tags: patch
X-Debbugs-Cc: reproducible-builds@lists.alioth.debian.org

Welcome back ff64a-armhf-rb.debian.net. Specs and fingerprints are
unchanged.

With the 4.14-rc1 kernel, the firefly-rk3399 boards have working cpufreq
support, and so the CPU can run at full speed (on most of the cores, at
least).

Branch for jenkins.debian.net "welcome-back-ff64a" available at:

  https://anonscm.debian.org/cgit/users/vagrant/jenkins.debian.net.git/log/?h=welcome-back-ff64a

Hopefully everything needed to enable these is in those commits.

Thanks for maintaining jenkins!

live well,
  vagrant



Re: Bug#876055: Environment variable handling for reproducible builds

2017-09-18 Thread Vagrant Cascadian
On 2017-09-18, Vagrant Cascadian wrote:
> On 2017-09-18, Russ Allbery wrote:
>> Daniel Kahn Gillmor <d...@fifthhorseman.net> writes:
>>> On Sun 2017-09-17 16:26:25 -0700, Russ Allbery wrote:

>>> Does everything in policy need to be rigorously testable?  or is it ok
>>> to have Policy state the desired outcome even if we don't know how (or
>>> don't have the resources) to test it fully today.
>>
>> I don't think everything has to be rigorously testable, but I do think
>> it's a useful canary.  If I can't test something, I start wondering
>> whether that means I have problems with my underlying assumptions.
>>
>> In particular, for (1), we have no comprehensive list of environment
>> variables that affect the behavior of tools, and that list would be
>> difficult to create.  Many pieces of software add their own environment
>> variables with little coordination, and many of those variables could
>> possibly affect tool output.
>
> There is a huge difference between variables that *might* affect the
> build as an unintended input that gets stored in a resulting package in
> some manner, and variables that are designed to change the behavior of
> parts of the build toolchain.
>
> I consider unintended variables that affect the build output a bug, and
> variables designed and intended to change the behavior of the toolchain
> expected, reasonable behavior.

Ok, after discussing on IRC a bit, I figured it might be worth
expanding on that point...


The environment variables (and other variations) used by the
reproducible builds test infrastructure:

  https://tests.reproducible-builds.org/debian/index_variations.html

I'll try to summarize the rationale for each of the variables used,
many of which have had actual impacts on the results of the builds:


CAPTURE_ENVIRONMENT, BUILDUSERID, BUILDUSERNAME

Some builds capture the entire environment, or most of the environment;
setting arbitrary environment variables can help detect this.

TZ

The timezone used can change the results of embedded timestamps.

LANG, LANGUAGE, LC_ALL

The locale and language settings definitely change the strings embedded
in some binaries, if tool output is translated.

PATH, USER, HOME

Some builds embed these.

DEB_BUILD_OPTIONS=parallel=N

The level of parallelism can change the build output, although other
values in DEB_BUILD_OPTIONS might reasonably be expected to change
output (e.g. noautodbgsym).


None of the above variables should change the resulting built package,
with the possible exception of some other values of DEB_BUILD_OPTIONS.

On the other hand, I would expect variables such as CC, MAKE,
CROSS_COMPILE, CFLAGS, etc. to reasonably and likely change the result
of the built package. They are, in a sense, part of the build toolchain
environment.
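As a toy sketch of the two classes described above (the helper function and the variable lists are purely illustrative, not part of any existing tool, and deliberately non-exhaustive):

```python
# Illustrative only: encode the two classes of environment variables
# discussed above. The lists here are examples, not exhaustive.
INCIDENTAL = {
    "TZ", "LANG", "LANGUAGE", "LC_ALL", "PATH", "USER", "HOME",
    "BUILDUSERID", "BUILDUSERNAME", "CAPTURE_ENVIRONMENT",
}
INTENTIONAL = {"CC", "MAKE", "CROSS_COMPILE", "CFLAGS", "DEB_BUILD_OPTIONS"}

def classify(var):
    """Classify a variable: an incidental one leaking into build output
    is a reproducibility bug; an intentional one is an expected
    toolchain input."""
    if var in INTENTIONAL:
        return "intentional toolchain input"
    if var in INCIDENTAL:
        return "incidental; affecting output is a bug"
    return "unknown; needs case-by-case review"

print(classify("TZ"))      # -> incidental; affecting output is a bug
print(classify("CFLAGS"))  # -> intentional toolchain input
```

The point is only that the boundary is expressible, even if policy language would still need the "reasoned interpretation" mentioned below.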


Without generating comprehensive blacklists and/or whitelists, is it
plausible to come up with a policy description of the above two classes
of variables? Given the above lists, it seems relatively obvious to me
that there are basically two classes of variables, but I'm at a loss for
how to really describe it in policy.

You could give a reasonable test of:

  Is this variable intended to change the results of the binary, or is
  it changing the build as an unintended side-effect?

That does require reasoned interpretation, though. I envision such tests
being used in bug reports relating to reproducibility issues, on a
case-by-case basis.


It doesn't solve the testability issue on a policy level, but that could
possibly be addressed outside of policy through best practices for
reproducibility documentation.


live well,
  vagrant



Re: [Pkg-zsh-devel] Bug#764650: zsh: FTBFS with noatime mounts

2017-09-18 Thread Vagrant Cascadian
On 2017-09-18, Axel Beckert wrote:
> Control: retitle -1 zsh: FTBFS with noatime mounts (e.g. on reproducible 
> builds armhf nodes)
...
> We still see this issue with reproducible builds on armhf in unstable as
> well as stretch:
> https://tests.reproducible-builds.org/debian/rb-pkg/stretch/armhf/zsh.html
> https://tests.reproducible-builds.org/debian/rb-pkg/unstable/armhf/zsh.html
> https://tests.reproducible-builds.org/debian/rbuild/stretch/armhf/zsh_5.3.1-4.rbuild.log
> https://tests.reproducible-builds.org/debian/rbuild/unstable/armhf/zsh_5.4.2-1.rbuild.log
> (Cc'ing the Reproducible Builds Folks for their information.)
>
> While I don't know for sure if those nodes use noatime, it explains
> well why exactly this tests only fails on the slowest architecture of
> reproducible builds.

I can confirm that the reproducible builds armhf builders use noatime on
the filesystem used for building packages.
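For anyone wanting to check their own builders, here is a rough, Linux-specific sketch of testing whether a path lives on a noatime mount (the helper name is made up, and the /proc/self/mounts parsing is simplified):

```python
import os

def has_noatime(path):
    """Return True if the filesystem holding `path` is mounted noatime.

    Simplified sketch: picks the longest mountpoint in /proc/self/mounts
    that is a prefix of the resolved path; ignores octal-escaped
    characters in mountpoint names.
    """
    path = os.path.realpath(path)
    best_mount, best_opts = "", ""
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            mountpoint, options = fields[1], fields[3]
            prefix = mountpoint.rstrip("/") + "/"
            if (path == mountpoint or path.startswith(prefix)) \
                    and len(mountpoint) > len(best_mount):
                best_mount, best_opts = mountpoint, options
    return "noatime" in best_opts.split(",")

print(has_noatime("/tmp"))
```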


live well,
  vagrant



Bug#874682: reproducible: New and old armhf/arm64 nodes (Jetson-tx1, Jetson-tk1)

2017-09-08 Thread Vagrant Cascadian
Package: jenkins.debian.org
Severity: normal
Tags: patch
X-Debbugs-Cc: reproducible-builds@lists.alioth.debian.org

One new machine ready for incorporation into the armhf build network,
and one old one ready to be reactivated.

Welcome back jtk1a-armhf-rb.debian.net. Specs and fingerprints are
unchanged.

jtx1c-armhf-rb.debian.net:
Jetson-tx1, quad-core (big.LITTLE Cortex-A53/A57), ~3.5GB ram,
   native sata SSD ~120G
ssh port: 2254
ssh fingerprints:
256 SHA256:Us0dqF9E5ZSqzh0Pjb/DHCr0XxVFTsVXEzrtZEMEUfk root@jtx1c (ECDSA)
256 SHA256:m/tBhpGDcX+x7N9jqdhgyR8JhcBMY0pOtC0GnfzAKmc root@jtx1c (ED25519)
2048 SHA256:93NStk0XCQuqw+DBoCBCL+RyFlFngxnoPUzQdcH+P1k root@jtx1c (RSA)

Branch for jenkins.debian.net "armhf-jtk1a-jtx1c" available at:

  https://anonscm.debian.org/cgit/users/vagrant/jenkins.debian.net.git/log/?h=armhf-jtk1a-jtx1c

Hopefully everything needed to enable these is in those commits.

Thanks for maintaining jenkins!

live well,
  vagrant



Re: GCC build-path patch getting blocked

2017-09-04 Thread Vagrant Cascadian
On 2017-08-16, Ximin Luo wrote:
> It looks like the GCC reviewer that looked at my patch this time
> around, really doesn't like environment variables. They seem to be
> happy to support the variable (including the syntax) as a command-line
> flag however.

> The original patch fixed ~1800 packages, which were unreproducible due
> to a combination of (a) __FILE__, (b) CFLAGS et al being embedded into
> the output, and (c) packages/upstreams not honoring CFLAGS in the
> first place, and (d) possibly other reasons.
...
> For these reasons, I have the following proposal, as a work around for the 
> time being:
>
> 1. Patch GCC to support BUILD_PATH_PREFIX_MAP as a command-line flag. this 
> will at least fix packages affected by (a).

What about a compromise, patching GCC to support a commandline flag
"--respect-build-path-prefix-map-variable" that tells it to respect the
BUILD_PATH_PREFIX_MAP as an environment variable?

The commandline flag could essentially be a boolean, and would fix (a)
as well as (b).

Or maybe GCC is just fundamentally opposed to environment variables at
all?


This is perhaps a corollary to the additional variable added to some of
the latex toolchain that basically said "really, use SOURCE_DATE_EPOCH
for the current time, please, really".
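For reference, the convention such tools follow for SOURCE_DATE_EPOCH is roughly this (a minimal sketch of the specification's intent, not any particular tool's code):

```python
import os
import time

def build_timestamp():
    """Use SOURCE_DATE_EPOCH as "now" when it is set, per the
    reproducible-builds convention; otherwise fall back to the clock."""
    sde = os.environ.get("SOURCE_DATE_EPOCH")
    return int(sde) if sde is not None else int(time.time())

os.environ["SOURCE_DATE_EPOCH"] = "1500000000"
print(build_timestamp())  # -> 1500000000
```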


live well,
  vagrant



Bug#868569: New armhf/arm64 nodes (Jetson-tx1, Jetson-tk1)

2017-07-16 Thread Vagrant Cascadian
Package: jenkins.debian.org
Severity: normal
Tags: patch
X-Debbugs-Cc: reproducible-builds@lists.alioth.debian.org

I've made a jenkins.debian.net branch "new-armhf-jtk1b-jtx1b" that adds
the recently announced boards:

  https://anonscm.debian.org/cgit/users/vagrant/jenkins.debian.net.git/

  https://anonscm.debian.org/cgit/users/vagrant/jenkins.debian.net.git/commit/?h=new-armhf-jtk1b-jtx1b

Hopefully it's everything needed to deploy to the new machines.

Thanks for working on Debian's jenkins infrastructure!

live well,
  vagrant

On 2017-07-13, Vagrant Cascadian wrote:
> Two new machines ready for incorporation into the armhf build network.
>
> Thanks to Nvidia, Debian and Freegeek for the donations that
> made this expansion phase possible!
>
> jtk1b-armhf-rb.debian.net:
> Jetson-TK1, nvidia tegra-k1 (cortex-a15) quad-core, 2GB ram
> ssh port: 2252
> ssh fingerprints:
> 256 MD5:67:0f:2e:8c:c7:4e:00:cd:70:67:5b:0e:b9:cd:6a:03 root@jtk1b (ECDSA)
> 256 SHA256:kbhhukXo+8XvSE6UXX84aoz8Ho1UkHjl8cD1TZEQWPk root@jtk1b (ECDSA)
> 256 SHA256:4KN325sMN5Xqs7xrZsMtsa4/fNJ/QLp0bGAGJ7a+gPI root@jtk1b (ED25519)
> 256 MD5:1f:26:49:7d:64:f5:21:ff:61:0f:96:74:10:20:09:88 root@jtk1b (ED25519)
> 2048 MD5:eb:f9:e2:18:d3:fe:5d:2f:eb:dd:56:5d:f3:ba:8c:d1 root@jtk1b (RSA)
> 2048 SHA256:j+qyOQOqKUhTsmmPxPqBAyvIKnlSfLgzj8h0hi7jVo8 root@jtk1b (RSA)
>
> jtx1b-armhf-rb.debian.net:
> Jetson-tx1, quad-core (big.LITTLE Cortex-A53/A57), ~3.5GB ram,
>   native sata ~500GB disk
> ssh port: 2253
> ssh fingerprints:
> 256 MD5:72:92:55:e9:81:b4:be:fa:f8:94:3a:f6:80:1c:e2:0e root@jtx1b (ECDSA)
> 256 SHA256:1Z99Rvm4USICiWUWy75tIVbV02eIMyNRW7gkZS5BE3Y root@jtx1b (ECDSA)
> 256 MD5:85:90:14:70:6b:d3:5c:ab:9a:b2:23:c0:c2:fc:6d:95 root@jtx1b (ED25519) 
> 256 SHA256:4sKcNcBtgHaVCP/BDN3Ke60JD7SuVglEvdRI2IKLg3o root@jtx1b (ED25519)
> 2048 MD5:64:e5:d0:dd:fc:24:10:54:b4:54:4c:55:ae:86:08:cf root@jtx1b (RSA)
> 2048 SHA256:uih3N0O1BOaRNdayWyTTb7iXFTV24vj7zG6Eunmu1Ak root@jtx1b (RSA)
>
>
> live well,
>   vagrant



New armhf/arm64 nodes (Jetson-tx1, Jetson-tk1)

2017-07-13 Thread Vagrant Cascadian
Two new machines ready for incorporation into the armhf build network.

Thanks to Nvidia, Debian and Freegeek for the donations that
made this expansion phase possible!

jtk1b-armhf-rb.debian.net:
Jetson-TK1, nvidia tegra-k1 (cortex-a15) quad-core, 2GB ram
ssh port: 2252
ssh fingerprints:
256 MD5:67:0f:2e:8c:c7:4e:00:cd:70:67:5b:0e:b9:cd:6a:03 root@jtk1b (ECDSA)
256 SHA256:kbhhukXo+8XvSE6UXX84aoz8Ho1UkHjl8cD1TZEQWPk root@jtk1b (ECDSA)
256 SHA256:4KN325sMN5Xqs7xrZsMtsa4/fNJ/QLp0bGAGJ7a+gPI root@jtk1b (ED25519)
256 MD5:1f:26:49:7d:64:f5:21:ff:61:0f:96:74:10:20:09:88 root@jtk1b (ED25519)
2048 MD5:eb:f9:e2:18:d3:fe:5d:2f:eb:dd:56:5d:f3:ba:8c:d1 root@jtk1b (RSA)
2048 SHA256:j+qyOQOqKUhTsmmPxPqBAyvIKnlSfLgzj8h0hi7jVo8 root@jtk1b (RSA)

jtx1b-armhf-rb.debian.net:
Jetson-tx1, quad-core (big.LITTLE Cortex-A53/A57), ~3.5GB ram,
  native sata ~500GB disk
ssh port: 2253
ssh fingerprints:
256 MD5:72:92:55:e9:81:b4:be:fa:f8:94:3a:f6:80:1c:e2:0e root@jtx1b (ECDSA)
256 SHA256:1Z99Rvm4USICiWUWy75tIVbV02eIMyNRW7gkZS5BE3Y root@jtx1b (ECDSA)
256 MD5:85:90:14:70:6b:d3:5c:ab:9a:b2:23:c0:c2:fc:6d:95 root@jtx1b (ED25519) 
256 SHA256:4sKcNcBtgHaVCP/BDN3Ke60JD7SuVglEvdRI2IKLg3o root@jtx1b (ED25519)
2048 MD5:64:e5:d0:dd:fc:24:10:54:b4:54:4c:55:ae:86:08:cf root@jtx1b (RSA)
2048 SHA256:uih3N0O1BOaRNdayWyTTb7iXFTV24vj7zG6Eunmu1Ak root@jtx1b (RSA)


live well,
  vagrant



Re: source-only builds and .buildinfo

2017-06-21 Thread Vagrant Cascadian
On 2017-06-21, Ian Jackson wrote:
> Daniel Kahn Gillmor writes ("Re: source-only builds and .buildinfo"):
>> On Tue 2017-06-20 18:10:49 +0100, Ian Jackson wrote:
>> > A .buildinfo file is not useful for a source-only upload which is
>> > verified to be identical to the intended source as present in the
>> > uploader's version control (eg, by the use of dgit).
>> >
>> > Therefore, dgit should not include .buildinfos in source-only uploads
>> > it performs.  If dgit sees that a lower-layer tool like
>> > dpkg-buildpackage provided a .buildinfo for a source-only upload, dgit
>> > should strip it out of .changes.
>> 
>> I often do source-only uploads which include the .buildinfo.
>> 
>> I do source-only uploads because i don't want the binaries built on my
>> own personal infrastructure to reach the public.  But i want to upload
>> the .buildinfo because i want to provide a corroboration of what i
>> *expect* the buildds to produce.
>
> This is an interesting use case which dgit should support.

Agreed!


> But I think this is not what dgit push-source should do.  Sean's
> proposed dgit push-source does not do any kind of binary package
> build.  I think this is correct.  But this means there are no binaries
> and nothing for the .buildinfo to talk about.

Yes, this makes sense for the most part.


> Do the "source-only uploads" that you are talking about mention the
> hashes of these locally-built .debs in their .buildinfo, then ?

That's the goal, sure.

I've done this with all my recent source-only uploads, and then gone
back and verified that the buildd machines produced (in most cases) the
same hashes for the .deb files.

For example, this references the .buildinfo of simple-cdd 0.6.5, which
I uploaded with a source-only changes file:

  https://buildinfo.debian.net/30f7000b0025b570c7ae2202fc6fd79e4ca27798/simple-cdd_0.6.5_all

And this is a buildinfo produced over a month later on the reproducible
builds build network, on a different architecture (i386), with a
different build environment, that produced the same hashes:

  https://buildinfo.debian.net/1d300b71445ac7d756e93546a7e6b36d3c1882c7/simple-cdd_0.6.5_all

And you can check that the .buildinfo in the build logs on the buildd
produced the same sha1 hashes:

  https://buildd.debian.org/status/fetch.php?pkg=simple-cdd=all=0.6.5=1494884527=0

And then you can verify that the hashes of the simple-cdd packages in
the archive are the same hashes listed.

Given that at least three machines, of differing architectures, with
over a month of changes to the build toolchain between builds, produced
the same binary packages... I have *some* confidence that this package
is reproducible.

It's not the most complicated package, but it demonstrates that it is
now possible, for a reasonable portion of the archive, to at least
manually verify many of the builds. Some of this could be automated...
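A sketch of what automating one step of that could look like, comparing the Checksums-Sha256 fields of two .buildinfo files (hypothetical helper; real tooling should parse the deb822 format with python-debian rather than this simplified line-based approach):

```python
def sha256_checksums(buildinfo_text):
    """Extract {filename: sha256} from a .buildinfo Checksums-Sha256
    field. Simplified line-based parsing, for illustration only."""
    hashes, in_field = {}, False
    for line in buildinfo_text.splitlines():
        if line.startswith("Checksums-Sha256:"):
            in_field = True
        elif in_field and line.startswith(" "):
            checksum, _size, name = line.split()
            hashes[name] = checksum
        else:
            in_field = False
    return hashes

# Two made-up .buildinfo fragments from independent rebuilds:
first = "Checksums-Sha256:\n abc123 10 simple_1.0_all.deb\n"
second = "Checksums-Sha256:\n abc123 10 simple_1.0_all.deb\n"

# Matching hashes are evidence that the build reproduced.
print(sha256_checksums(first) == sha256_checksums(second))  # -> True
```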


> Certainly `dgit push' will not do anything to any .buildinfo you may
> have.  I think maybe that your use case should be supported by having
> a version of dgit push which drops the .debs from the .changes, but
> leaves the .buildinfo ?  Is that how you construct these uploads now ?

I use sbuild's --source-only-changes option, which creates two .changes
files, one with the debs (ARCH.changes) and one without
(source.changes). In both cases, the .buildinfo referenced in .changes
includes hashes of the .deb files.


> (Also: is there anything right now that verifies your assertions about
> the .debs?  Not that the lack of such a thing would make the
> .buildinfos useless, but my experience is that without closing that
> loop it is likely that the arrangements for generating the .buildinfo
> are wrong somehow in a way we haven't spotted.)

There's nothing corroborating the .deb files in the archive against
tests.reproducible-builds.org build results, but that infrastructure
does rebuild all packages in the archive with permutations of the build
environment, and logs when they aren't reproducible.

The archive is keeping the .buildinfo files uploaded with packages,
though they aren't, to my knowledge, exposed yet. But this would allow
for retroactive verification of said packages once the .buildinfo files
are available. A few relevant bugs on ftp.debian.org regarding this:

  https://bugs.debian.org/763822
  https://bugs.debian.org/862073
  https://bugs.debian.org/862538
  https://bugs.debian.org/863470


live well,
  vagrant



Re: Old and New armhf/arm64 nodes (Firefly-rk3399, Jetson-tx1, Odroid-C2, Raspberry PI2)

2017-06-03 Thread Vagrant Cascadian
On 2017-06-03, Holger Levsen wrote:
> On Sat, Jun 03, 2017 at 09:27:38AM -0700, Vagrant Cascadian wrote:
>> On a related note, I haven't noticed any builds on ff64a or jtx1a... did
>> all their partner machines happen to be marked as disabled?
>
> there was a problem with the (new) build service with commented out "jobs"
> which I fixed today, thus those builds should pour in now…

Hrm. Several hours later, jtx1a and ff64a still appear to be idle,
according to munin. *sigh*

We'll get them working eventually. :)


live well,
  vagrant



Re: Old and New armhf/arm64 nodes (Firefly-rk3399, Jetson-tx1, Odroid-C2, Raspberry PI2)

2017-06-02 Thread Vagrant Cascadian
On 2017-06-02, Holger Levsen wrote:
> On Fri, May 26, 2017 at 01:56:54PM -0700, Vagrant Cascadian wrote:
>> Got one old machine freshly reinstalled after a disk failure (rpi2c),
>> and three newly readied machines (ff64a, jtx1a, odc2a) that are arm64
>> capable, but configured with armhf userspace to be armhf build nodes.
>
> those are now all set up for tests.reproducible-builds.org, but…
>
> <  h01ger> | ff64a seems quite slow
> <  h01ger> | it is, now that i can compare to jtx1a and odc2a
>  * | h01ger assumes disk io

Yes, disk i/o seems likely, although it shouldn't be hugely worse than
any of the other systems... We're waiting on native sata hardware to
become available, so it's currently using a usb-sata adapter. Hopefully
that will resolve the issue.


> <  h01ger> | odc2a is faster than jtx1a (also io wise, but both are 
> rather fine…)
> <  h01ger> | odc2a takes ages to generate the gpg key for the jenkins 
> user, somehow haveged fails to start…
> <  h01ger> | root@odc2a:~# /usr/sbin/haveged --Foreground --verbose=1 
> --write=1024
> <  h01ger> | haveged starting up
> <  h01ger> | Segmentation fault
>  * | h01ger sighs
> <  h01ger> | root@odc2a:~# uname -a
> <  h01ger> | Linux odc2a 4.12.0-rc2+ #4 SMP Wed May 24 23:18:20 UTC 2017 
> aarch64 GNU/Linux
> <  h01ger> | hui ui ui :)

Hrm, that's a new one! I didn't see it myself during the bring-up
testing:

odc2a login: [ 8759.086506] haveged[8523]: unhandled level 3 translation
fault (11) at 0xabf1147c, esr 0x9207
...
[ 8779.711346] haveged[8703]: unhandled level 2 translation fault (11)
at 0x93c6374c, esr 0x9206

Will experiment with combinations of stretch, kernel versions, etc. and
see if it behaves the same.


> so while I have added 10 new builder jobs, I've also disabled 5 of
> them again for now:

Ah well... I'll do more troubleshooting and make those numbers
better. :)


live well,
  vagrant



Re: Old and New armhf/arm64 nodes (Firefly-rk3399, Jetson-tx1, Odroid-C2, Raspberry PI2)

2017-05-26 Thread Vagrant Cascadian
On 2017-05-26, Chris Lamb wrote:
>> jtx1a-armhf-rb.debian.net:
>> Jetson-tx1, quad-core (big.LITTLE Cortex-A53/A57), ~3.5GB ram,
>
> Hm? _Approximately_ 3.5GB ram? :)

I think the kernel is overly aggressive in reserving some ram (there
was a recent thread about it on one of the kernel lists), and with
rounding errors and so on, I figured it was safer to use
approximations. :)


live well,
  vagrant



Bug#861109: diffoscope: Please add support for .dtb (device tree blob) files.

2017-04-24 Thread Vagrant Cascadian
Control: tag 861109 pending

On 2017-04-24, Chris Lamb wrote:
> Thanks for implementing this. :)  I'd just check two things:
>
>   a) Unused subprocess import in test_dtb.py

Tested that it works without; committed and pushed.


>   b) Whether the tests pass with jessie's device-tree-compiler
>  (1.4.0) and add a conditional if so.

I was unable to build diffoscope on jessie. It appears that fdtdump
from jessie *does* produce different output, so I guess I'll also add a
versioned dependency...


> After that. please just go-ahead and push these commits to the
> experimental branch :)

Thanks!

live well,
  vagrant



Bug#861109: diffoscope: Please add support for .dtb (device tree blob) files.

2017-04-24 Thread Vagrant Cascadian
Package: diffoscope
Version: 81
Severity: wishlist
Tags: patch

I've worked on support for diffing .dtb files, a format used to
describe hardware on systems such as powerpc, arm and mips.

Attached are two patches, the first implementing support, and the
second adding tests.

They're also available as a git branch:

  https://anonscm.debian.org/cgit/reproducible/diffoscope.git/log/?h=vagrant/dtb


I basically copied the openssh pub key diffing support and modified it
to support .dtb files instead.  Any comments, suggestions or
improvements are welcome!


live well,
  vagrant

From c55c67da65ed37bd4268005fbdede27767b1331b Mon Sep 17 00:00:00 2001
From: Vagrant Cascadian <vagr...@debian.org>
Date: Mon, 24 Apr 2017 10:21:21 -0700
Subject: [PATCH 1/2] Add support for .dtb (device tree blob) files.

---
 diffoscope/comparators/__init__.py |  1 +
 diffoscope/comparators/dtb.py  | 39 ++
 diffoscope/external_tools.py   |  3 +++
 3 files changed, 43 insertions(+)
 create mode 100644 diffoscope/comparators/dtb.py

diff --git a/diffoscope/comparators/__init__.py b/diffoscope/comparators/__init__.py
index 81f6d16..6527b6d 100644
--- a/diffoscope/comparators/__init__.py
+++ b/diffoscope/comparators/__init__.py
@@ -84,6 +84,7 @@ class ComparatorManager(object):
 ('gif.GifFile',),
 ('pcap.PcapFile',),
 ('pgp.PgpFile',),
+('dtb.DeviceTreeFile',),
 )
 
 _singleton = {}
diff --git a/diffoscope/comparators/dtb.py b/diffoscope/comparators/dtb.py
new file mode 100644
index 000..12dbf39
--- /dev/null
+++ b/diffoscope/comparators/dtb.py
@@ -0,0 +1,39 @@
+# -*- coding: utf-8 -*-
+#
+# diffoscope: in-depth comparison of files, archives, and directories
+#
+# Copyright © 2016 Emanuel Bronshtein <e3am...@gmx.com>
+# Copyright © 2016 Vagrant Cascadian <vagr...@debian.org>
+#
+# diffoscope is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# diffoscope is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with diffoscope.  If not, see <https://www.gnu.org/licenses/>.
+
+import re
+
+from diffoscope.tools import tool_required
+from diffoscope.difference import Difference
+
+from .utils.file import File
+from .utils.command import Command
+
+
+class DeviceTreeContents(Command):
+@tool_required('fdtdump')
+def cmdline(self):
+return ['fdtdump', self.path]
+
+class DeviceTreeFile(File):
+RE_FILE_TYPE = re.compile(r'^Device Tree Blob')
+
+def compare_details(self, other, source=None):
+return [Difference.from_command(DeviceTreeContents, self.path, other.path)]
diff --git a/diffoscope/external_tools.py b/diffoscope/external_tools.py
index 3ce2cbd..8788936 100644
--- a/diffoscope/external_tools.py
+++ b/diffoscope/external_tools.py
@@ -57,6 +57,9 @@ EXTERNAL_TOOLS = {
 'debian': 'enjarify',
 'arch': 'enjarify',
 },
+'fdtdump': {
+'debian': 'device-tree-compiler',
+},
 'file': {
 'debian': 'file',
 'arch': 'file',
-- 
2.11.0

From 78d43c516db58a89c23ee41a1eb23e47bf2109c8 Mon Sep 17 00:00:00 2001
From: Vagrant Cascadian <vagr...@debian.org>
Date: Mon, 24 Apr 2017 10:35:52 -0700
Subject: [PATCH 2/2] Add tests for .dtb files.

---
 debian/control  |   1 +
 debian/copyright|  12 ++
 tests/comparators/test_dtb.py   |  57 +
 tests/data/devicetree1.dtb  | Bin 0 -> 68260 bytes
 tests/data/devicetree2.dtb  | Bin 0 -> 68323 bytes
 tests/data/devicetree_expected_diff |  71 
 6 files changed, 141 insertions(+)
 create mode 100644 tests/comparators/test_dtb.py
 create mode 100644 tests/data/devicetree1.dtb
 create mode 100644 tests/data/devicetree2.dtb
 create mode 100644 tests/data/devicetree_expected_diff

diff --git a/debian/control b/debian/control
index d0eef28..e3cbc55 100644
--- a/debian/control
+++ b/debian/control
@@ -19,6 +19,7 @@ Build-Depends:
 dh-python (>= 2.20160818~),
 docx2txt <!nocheck>,
 dpkg-dev (>= 1.17.14),
+ device-tree-compiler <!nocheck>,
 enjarify <!nocheck>,
 fontforge-extras <!nocheck>,
 fp-utils <!nocheck>,
diff --git a/debian/copyright b/debian/copyright
index ac52ac0..8c0e230 100644
--- a/debian/copyright
+++ b/debian/copyright
@@ -93,6 +93,18 @@ Copyright: 2007-2014 OpenWrt.org
2010 Vertical Communications
 License: GPL-2
 
+Files: tests/data/devicetree*.dtb
+Copyright:
+	2015 Nikolaus Schaller <h...@goldelico.com>
+	2012 Texas Instruments Incorporated - http:www.ti.com/
+	2011 Texas In

Bug#856248: trydiffoscope: traceback with "500 Server Error"

2017-02-26 Thread Vagrant Cascadian
Package: trydiffoscope
Version: 64
Severity: important

I haven't been able to use trydiffoscope for some weeks now, getting
tracebacks like the following:

  $ date > a
  $ # wait a bit...
  $ date > b
  $ trydiffoscope a b
  Traceback (most recent call last):
    File "/usr/bin/trydiffoscope", line 142, in <module>
      sys.exit(TryDiffoscope(args).main())
    File "/usr/bin/trydiffoscope", line 45, in main
      response.raise_for_status()
    File "/usr/lib/python3/dist-packages/requests/models.py", line 893, in raise_for_status
      raise HTTPError(http_error_msg, response=self)
  requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://try.diffoscope.org/api/v3/comparison/xhdzpgzfphhf


Presumably this is an issue server-side, or an API incompatibility, or... ?


live well,
  vagrant


-- System Information:
Debian Release: 9.0
  APT prefers testing
  APT policy: (500, 'testing'), (120, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: armhf

Kernel: Linux 4.9.0-1-amd64 (SMP w/4 CPU cores)
Locale: LANG=C.UTF-8, LC_CTYPE=C.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages trydiffoscope depends on:
ii  python3-requests  2.12.4-1
pn  python3:any   

trydiffoscope recommends no packages.

trydiffoscope suggests no packages.

-- no debconf information



Re: Moving towards a deb-buildinfo(5) Format 1.0

2017-02-19 Thread Vagrant Cascadian
On 2017-02-19, Guillem Jover wrote:
>> * .buildinfo files are not generated when creating source-only uploads
>
> Fixed. Now always generated.

On a related note, is it currently possible to create a .buildinfo with
both the source and binary, but a corresponding .changes with only the
source?

This would really facilitate source-only uploads, but with binaries
listed in the .buildinfo produced by the uploader.

Something similar is possible with sbuild's --source-only-changes
option, where it creates both a *_source.changes and appropriate
*_ARCH.changes. It would be nice to have a similar feature for
.buildinfo.


Thanks everyone for getting dpkg support for .buildinfo this far!


live well,
  vagrant



Re: New armhf node (Pine64+)

2017-02-16 Thread Vagrant Cascadian
On 2017-02-16, Holger Levsen wrote:
> On Mon, Feb 06, 2017 at 01:39:47PM -0800, Vagrant Cascadian wrote:
> Setting up linux-image-4.10.0-rc6-arm64-unsigned (4.10~rc6-1~exp1) ...
> /etc/kernel/postinst.d/initramfs-tools:
> update-initramfs: Generating /boot/initrd.img-4.10.0-rc6-arm64
> DTB: sun50i-a64-pine64-plus.dtb
> Couldn't find 
...
> Can you fix this up ("somehow" on the host…), please?!

Just purged the unused kernel packages which didn't have support for
these boards. I guess you must have installed something that triggered
an "update-initramfs" call on the older kernel versions...

Removed your workaround, and re-ran update-initramfs. Should be working
now.


>> This one is interesting in that it's running an arm64 kernel with armhf
>> userland (like the i386 builders that run amd64 kernels).
>
> nice! is this the same for p64c too?

Yup.


Thanks for getting them into production!


live well,
  vagrant



Another new armhf node (Pine64+)

2017-02-14 Thread Vagrant Cascadian
Yet Another arm board ready to be configured for the build farm!

p64c-armhf-rb.debian.net:
Pine64+, Allwinner A64 (cortex-a53) quad-core, 2GB ram
ssh port: 2248
ssh fingerprints:
2048 7e:11:62:84:b2:e8:cd:2b:52:f5:41:c1:98:bf:7a:d2 (RSA)
256 83:48:b1:c2:11:45:5c:51:9d:67:d1:58:0a:95:3d:33 (ECDSA)
256 2a:5a:c1:f3:90:14:dd:25:c3:6a:92:72:f4:23:e9:7e (ED25519)

Another arm64 system running armhf userland. This one has just a plain
USB stick; we'll see how that goes.


live well,
  vagrant



Re: diffoscope 77 in stretch or not?

2017-02-13 Thread Vagrant Cascadian
On 2017-02-13, Mattia Rizzolo wrote:
> On Mon, Feb 13, 2017 at 01:32:55PM -0800, Vagrant Cascadian wrote:
>> The other obvious option is to not ship a version in stretch and rely on
>> stretch-backports, if diffoscope development hasn't yet settled down
>> enough (will it ever) for a Debian stable release cycle...
>
> THAT'S NOT POSSIBLE.
>
> backports master have *always* be against such method.
> If you upload something to backports you're committing to ship that in
> the next stable release.

They've always been against backporting versions not present in testing,
but I haven't seen a response like that to what I proposed, and don't
see it listed anywhere on:

  https://backports.debian.org/Contribute/


> (capitalized, as it happens too often that somebody wants to do it, and
> then it causes a lot of noise in debian-backports (either IRC or ML),
> everybody gets more annoyed, etc.)

I've seen that plenty with a version not in testing, or not in unstable,
etc. but not noticed any firestorms around packages not in
stable...


>> How does trydiffoscope fit into the picture? Is it sufficiently isolated
>> that the heavy lifting offloaded to a network service is more
>> maintainable for a stable release?
>
> It's a difference source package, with a difference schedule, etc.  How
> does it fit in this topic?  :)

It provides some of the same functionality, but might reduce the need
for a full-blown diffoscope in stable, or the need to continue to
support an older, buggier, less-maintainable version of diffoscope in
stable. That was my line of thinking anyways.


If, for whatever reason, it's reasonable to continue to maintain an
older version of diffoscope in stable, the above two points are pretty
much moot. Just wanted to spell out all the options regarding diffoscope
and stable, but I'm not attached to any particular strategy.


live well,
  vagrant



Re: diffoscope 77 in stretch or not?

2017-02-13 Thread Vagrant Cascadian
On 2017-02-13, Chris Lamb wrote:
> Mattia Rizzolo wrote:
>> Then, if the unblock is accepted, I'd say we should either freeze
>> diffoscope development […]
>
> Strongly against this. :)
>
>> or stop uploading to unstable
>
> We can always upload to experimental to keep release momentum. Or just
> keep uploading to unstable and t-p-u any further critical issues. *g*

The other obvious option is to not ship a version in stretch and rely on
stretch-backports, if diffoscope development hasn't yet settled down
enough (will it ever) for a Debian stable release cycle...

That said, it's useful even with some of the bugs.

How does trydiffoscope fit into the picture? Is it sufficiently isolated
that the heavy lifting offloaded to a network service is more
maintainable for a stable release?


live well,
  vagrant



New armhf node (Pine64+)

2017-02-06 Thread Vagrant Cascadian
Another arm board ready to be configured for the build farm!

p64b-armhf-rb.debian.net:
Pine64+, Allwinner A64 (cortex-a53) quad-core, 2GB ram
ssh port: 2247
ssh fingerprints:
2048 19:26:31:fa:7b:ff:96:ae:14:20:b8:25:36:59:37:df 
/etc/ssh/ssh_host_rsa_key.pub (RSA)
256 74:1d:59:57:c7:3b:c9:ad:a7:09:30:03:a4:95:2f:12 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
256 a6:41:94:be:eb:3f:b0:f5:c0:84:58:dd:16:a3:fb:ac 
/etc/ssh/ssh_host_ed25519_key.pub (ED25519)

Running a non-Debian kernel, but built from the linux-next tree, so it
should be possible to switch to experimental and/or stretch-backports
when the time comes.

This one is interesting in that it's running an arm64 kernel with armhf
userland (like the i386 builders that run amd64 kernels). We may not
have enough of these to do this systematically yet unless we divert some
of the other arm64 builders, though I'll likely get a few more in this
configuration set up "soon" regardless.


Space is getting a little tight, so if this one
performs well, I'll probably want to decommission one of the slower
boards. I've got another Pine64+ that should be ready soon, and *maybe*
an odroid-c2 as well, and likely some additional board donations
coming... maybe I should get a bigger UPS and another network switch to
support another 8 boards...


I think it is only configured with ssh keys for holger, but if someone
else is able to configure it and has the time I can add them as well.


live well,
  vagrant



Re: What do we really mean by "reproducible"?

2017-01-16 Thread Vagrant Cascadian
On 2017-01-16, Santiago Vila wrote:
> Before I use this rationale more times in some discussions out there, I'd
> like to be sure that there is a consensus.
>
> What's the definition of reproducible? It is more like A or more like B?

I don't know if you're aware of the recently created:

  https://reproducible-builds.org/docs/definition/
  

> A. Every time the package is attempted to build, the build succeeds,
> and the same .deb are always created.
>
> B. Every time the build is attempted and the builds succeeds, the
> same .deb are always created.
>
> In other words: It is ok to consider "always build ok" as a prerequisite
> to consider a source package "reproducible"?

If it reproducibly FTBFS, well, I guess that's a form of
reproducibility... but I tend to think you need to actually have
meaningfully produced binaries, packages, objects, etc. as a result of
the build process to compare before considering it reproducible.

If there's randomness or variability inherent in the build process that
causes the build to fail sometimes, I'd say that's not
reproducible... so I'd be inclined to say "A".


live well,
  vagrant



Re: [Reproducible-builds] arm64 reproducible build network

2016-12-08 Thread Vagrant Cascadian
On 2016-12-06, Axel Beckert wrote:
> Holger Levsen wrote:
>> On Tue, Dec 06, 2016 at 09:24:50AM -0800, Martin Michlmayr wrote:
>> > I heard from a Linaro contact that LeMaker wasn't able to fix the PCIE
>> > issue and was going to produce the board without, but that update is
>> > from October and there has been nothing since. :(
> […]
>> on the plus side we now got access to some moonshot-arm64 hardware, so
>> at least we'll be testing on arm64 soon. though for diversity reasons,
>> we absolutly still like to get access to LeMaker boards too!
>
> Not that I want to undermine Martin's efforts, but I wonder if we
> should start by taking some more boards into account for arm64, too:
>
> * Raspberry Pi 3 now works also with arm64:
>   https://wiki.debian.org/RaspberryPi3
>   https://people.debian.org/~stapelberg/raspberrypi3/
>   (Serial console only as of now, but that should be ok-ish for a
>   reproducible builds node.)

I just don't think 1GB of ram is worth it. I'd like to decommission the
1GB ram systems we have running in the armhf network...


> * Odroid C2 works fine with arm64, too. Not sure what's needed to get
>   all kernel modifications upstreamed:
>   https://www.armbian.com/odroid-c2/

I've got an odroid-c2 as part of the funded boards for
reproducible-builds, but upstream support for u-boot and the kernel is
quite a ways off, last I looked.


> I'm aware of the connectivity requirements, but wrt. Raspberry Pi 3
> and Odroid C2 I wonder about disk space and speed requirements. Would
> e.g. a Class 10 UHS-1 8 GB microSD card suffice?

microSD is surely too slow.


> http://www.pollin.de/shop/dt/OTQ2NzcyOTk-/Computer_Informationstechnik/Speichermedien/microSD_SDHC_Speicherkarten/MicroSDHC_Card_VERBATIM_44004_8_GB.html
>
> Or are the probably faster but also more expensive eMMC devices
> preferred?

eMMC might be fast enough, not sure how easy it is to find a 64-128GB
eMMC, though.


> http://www.pollin.de/shop/dt/NTk0OTgxOTk-/Bauelemente_Bauteile/Entwicklerboards/Odroid/ODROID_C2_eMMC_Modul_8_GB_mit_Linux.html
>
> Or is even some real hard disk or SSD needed?

Many of the boards for the armhf build network use USB-to-SATA adapters
over USB2 or USB3 and a 128GB SSD, although I typically leave 20-50%
unpartitioned to give extra room for wear levelling, since these are
doing almost constant writes 24/7.


I have three pine64+ boards that I'd consider using, and mainline or
near-mainline support is coming along. I'm not sure if the lemaker hikey
will ever be good enough to use, but I have one and keep an eye on
upstream commits.


Although, any of the boards I have access to I might want to use as part
of a "test armhf chroots on arm64 hardware" installation, to get similar
kernel variation to i386 on amd64 hardware (and also not to tax my
bandwidth limits by downloading packages for another architecture).


live well,
  vagrant



Re: Buildinfo in the Debian archive, updates

2016-12-06 Thread Vagrant Cascadian
On 2016-12-07, Jonathan McDowell wrote:
> On Tue, Dec 06, 2016 at 09:24:20PM +, Holger Levsen wrote:
>> On Mon, Nov 14, 2016 at 02:57:00PM +, Ximin Luo wrote:
>> > This email is a summary of some discussions that happened after the
>> > last post to bug #763822, plus some more of my own thoughts and
>> > reasoning on the topic.
>> 
>> I think that given our last mail on this bug was >4 weeks ago, it's
>> mostly important we reply to the bug at all now…
>>  
>> > I think having the Debian FTP archive distribute unsigned buildinfo
>> > files is an OK intermediate solution, with a few tweaks:
>> > 
>> > 1. the hashes of the *signed* buildinfo files must be referred-to
>> > for each binary package, in Packages.gz
>> 
>> I actually think thats too much to ask for right now. we should
>> *propose* this now as a 2nd step, but right now the first step should
>> be that those .buildinfo files are stored *at all*, for later
>> consumption.
>
> The storage of the hashes of the signed buildinfo files in Packages.gz
> seems to be in order to deal with the fact that the signature is not
> available elsewhere.

Well, the storage of the hashes of buildinfo files in Packages is a way
for the archive to attest "this is the exact buildinfo file used to
create this exact .deb package version on architecture X", in the same
file it distributes information about the .deb Packages. Having a
separate Buildinfos.xz file *might* at some point be out of sync with
the Packages file (even if just due to out-of-sync mirrors).

It seems (perhaps naively) like it should be cheap to add a single-line
checksum along with the Packages file, which allows users to look for
the corresponding .buildinfo (perhaps in-archive, perhaps through a
third-party service) without having to download additional
Buildinfos.xz.
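
For concreteness, a single-line checksum might look like this in a
Packages stanza; `Buildinfo-Sha256` is a purely hypothetical field name
(not part of any current spec), and the values are placeholders:

```
Package: diffoscope
Version: 77
Architecture: all
SHA256: <checksum of the .deb, as today>
Buildinfo-Sha256: <checksum of the .buildinfo used for this build>
```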

I think supporting multiple buildinfo files in-archive, while nice,
shouldn't block getting the .buildinfo files used to generate the .deb
(and related) files in-archive. It's more important to distribute the
.buildinfo files in-archive that were used to build the package included
in-archive, than require an implementation supporting multiple signers.


Overall, I'm in favor of whatever incremental progress moves in the
right general direction, even if not the "perfect" direction. :)


live well,
  vagrant



Use lazy unmount for disorderfs on jenkins.debian.net

2016-11-15 Thread Vagrant Cascadian
Retrying without the diff, since that got moderated...

Using lazy unmount *might* allow the disorderfs filesystems to unmount
if a file descriptor is held open but later released... worst case, it
shouldn't make anything worse.

available in "lazy-unmount" branch at:

  https://anonscm.debian.org/git/users/vagrant/jenkins.debian.net.git

live well,
  vagrant



Help with SCALE 15x submission *today*

2016-11-15 Thread Vagrant Cascadian
Hi, I just heard about the SCALE CfP this weekend, and it's due *today*,
so I am working on re-doing the talk I did at SeaGL, but could use help
on the long description. I've put up a pad for this with links to my
SeaGL slides:

  https://pad.riseup.net/p/1Q5KsaDJnnis

If anyone could help extend the summary into a longer description, it'd
be appreciated! Make sure to put in enough extra detail, but not too
much... if you've got some time today, ping me on IRC and we can start
editing...

Thanks!


The short summary I have so far:

Introduction to Reproducible builds

The Reproducible Builds project creates infrastructure and fixes
upstream code so that binaries can be independently verified as the
result of compiling source code. Without verifying the connection
between source code and binary software, toolchains become a tempting
target to inject exploits.

This talk will demonstrate why reproducibility matters, common issues
and fixes, and tools used to identify and troubleshoot issues, moving
towards reproducibility as a set of best practices when developing and
improving software.

https://reproducible-builds.org


live well,
  vagrant



opi2a back! init system variations?

2016-10-05 Thread Vagrant Cascadian
opi2a is back! Not sure how to rearrange build jobs off the top of my
head, but we've got several cores just sitting idle again. :)

For some reason, systemd wasn't starting consistently, hanging on
systemd-logind. Oddly, it worked well enough to still run build jobs,
but restarting daemons and upgrading some packages would hang.

With some effort, I switched to sysvinit from a debug shell and finished
some pending upgrades, several of which were systemd related. After the
upgrade, it's been working consistently using systemd ever since...  So
I don't *really* know why it's working now... but it is. Best guess is
it crashed in the middle of an upgrade (maybe disk full?) at some point
and corrupted some relevant file(s)?


The whole ordeal did trigger a thought about adding init system
variations to the builds... we could at least do systemd and sysvinit at
the moment without a *huge* amount of trouble, unless we're relying on
some init-specific features for the builders?


live well,
  vagrant



New armhf node (Jetson-TK1)

2016-09-07 Thread Vagrant Cascadian
New armhf board up and ready for configuration.

Thanks to Nvidia for the donation, and to Martin Michlmayr and Eric
Brower for making the arrangements!

jtk1a-armhf-rb.debian.net:
Jetson-TK1, nvidia tegra-k1 (cortex-a15) quad-core, 2GB ram
ssh port: 2246
ssh fingerprints:
256 ec:8e:dc:22:dd:69:d0:4e:01:bc:60:66:89:e6:8b:52 (ECDSA)
2048 ff:a6:b6:34:0c:24:c7:a6:71:7a:cb:e3:0e:3d:ea:07 (RSA)
256 4b:9a:a8:e2:b9:57:6c:5b:54:93:63:ef:1f:ce:b5:c2 (ED25519)

Once it's fully configured and ready for builds, please reallocate
opi2a's jobs to jtk1a, so I can take opi2a offline to troubleshoot some
ongoing issues. Might still be a net gain in performance, as the
cortex-a15 CPUs and native SATA *should* be faster.


Nvidia also donated another Jetson-TK1 board, but I'll be using that for
linux, u-boot and debian-installer testing/troubleshooting for the near
future to try and make sure it's well supported for Debian Stretch.


live well,
  vagrant



Re: [Reproducible-builds] Bug#834016: ddd: please make the build reproducible

2016-08-11 Thread Vagrant Cascadian
On 2016-08-11, Chris Lamb wrote:
> +--- ddd-3.3.12.orig/ddd/config-info
> ++++ ddd-3.3.12/ddd/config-info
> +@@ -59,6 +59,10 @@ esac
> + month=`date '+%m'`
> + day=`date '+%d'`
> + date=${year}-${month}-${day}
> ++if [ -n "${SOURCE_DATE_EPOCH}" ]
> ++then
> ++date=`date --utc --date="@${SOURCE_DATE_EPOCH}" '+%Y-%m-%d'`
> ++fi
> + (
> + echo "@(#)Built $date by $userinfo"
> + if $features; then

Wouldn't this also require the "month" and "day" variables to be set
using SOURCE_DATE_EPOCH?
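
For illustration, here's a sketch (not the submitted patch) of how all
three variables could be derived from the same clamped timestamp, with
variable names following the config-info snippet quoted above:

```shell
# Sketch only: when SOURCE_DATE_EPOCH is set, derive year, month and
# day from it so every field agrees with the clamped date; otherwise
# keep the current-time behaviour.
if [ -n "${SOURCE_DATE_EPOCH:-}" ]; then
    year=$(date --utc --date="@${SOURCE_DATE_EPOCH}" '+%Y')
    month=$(date --utc --date="@${SOURCE_DATE_EPOCH}" '+%m')
    day=$(date --utc --date="@${SOURCE_DATE_EPOCH}" '+%d')
else
    year=$(date '+%Y')
    month=$(date '+%m')
    day=$(date '+%d')
fi
date=${year}-${month}-${day}
echo "$date"
```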


live well,
  vagrant



Re: [Reproducible-builds] Moving towards buildinfo on the archive network

2016-08-02 Thread Vagrant Cascadian
On 2016-07-25, Jonathan McDowell wrote:
> I propose instead a Buildinfo.xz (or gz or whatever) file, which is
> single text file with containing all of the buildinfo information that
> corresponds to the Packages list. What is lost by this approach are the
> OpenPGP signatures that .buildinfo files can have on them. I appreciate
> this is an important part of the reproducible builds aim, but I believe
> one of its strengths is the ability for multiple separate package builds
> to attest that they have used that buildinfo information to build the
> exact same set of binary artefacts. This is not something that easily
> scales on the archive network and I think it is better served by a
> separate service; it would be possible to take the package snippet from
> the buildinfo file and sign that alone, uploading the signature to the
> attestation service. For "normal" Debian operation the usual archive
> signatures would provide a basic level of attestation of chain of build
> information.
>
> The rest of this mail continues on the above assumptions. If you do not
> agree with the above the below is probably null and void, so ignore it
> and instead educate me about what the requirements are and I'll try and
> adjust my ideas based on that.
>
> So. If a single Buildinfo.xz file is acceptable, with the attestation
> being elsewhere, I think this is doable without too much hackery in dak.
> There are some trade-offs to make though, and I need to check which are
> acceptable and which are viewed as too much.

I just wanted to give a huge thanks for taking a good look at this, even
if it isn't exactly what has been specced out by earlier
reproducible-builds discussions. Evaluating a somewhat different
approach, especially if it turns out to be more feasible (at least from
some angles), is really valuable in my eyes.

FWIW, I wasn't involved in the discussions spelling out what the
reproducible builds project wanted in the archive, so I don't have much
concrete to say, but you've clearly given some serious thought and
effort to this, so I didn't want it to slip through the cracks!

I tried to read through some of the documentation I could find:

  https://wiki.debian.org/ReproducibleBuilds/BuildinfoSpecification
  https://reproducible-builds.org/events/athens2015/debian-buildinfo-review/
  https://reproducible-builds.org/events/athens2015/buildinfo-content/

Having reviewed the above, there doesn't seem to be a huge conflict that
you haven't at least considered already.

Hopefully, someone with more history and context with the .buildinfo
file discussions can chime in soonish...


live well,
  vagrant



Re: [Reproducible-builds] Remaining reprotest variations

2016-07-27 Thread Vagrant Cascadian
On 2016-07-27, Ceridwen wrote:
> For most of the variations I've done so far, I've been either
> depending on external utilities or had POSIX-compliant ways to execute
> them.  The rest of the variations pose more problems.
...
> 5. kernel: While `uname` is in the POSIX standard, mechanisms for
> altering its output aren't.  `setarch`, what prebuilder uses and what
> reprotest uses at the moment, is Linux-specific.
> - What methods of changing `uname` will work on other OSes?

Using "setarch uname26" on non-x86 architectures may cause issues with
recent versions of glibc, too:

  
https://lists.alioth.debian.org/pipermail/reproducible-builds/Week-of-Mon-20151130/004040.html
  https://bugs.debian.org/806911
  https://sourceware.org/ml/libc-alpha/2015-12/msg00028.html

So it certainly should be optional, at best.
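
For reference, the Linux-specific mechanism in question looks like the
following; passing the host's own architecture makes it a no-op
personality change, shown purely for illustration:

```shell
# setarch re-executes a command under an altered Linux personality,
# which changes what uname(2) reports to the child process. With the
# host architecture as the argument, the output is unchanged.
setarch "$(uname -m)" uname -m
```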


live well,
  vagrant



Re: [Reproducible-builds] Notes to mariadb-10.0

2016-07-11 Thread Vagrant Cascadian
On 2016-07-10, Otto Kekäläinen wrote:
> Diffs reveal that the issue is expressed in products that stem from
> storage/tokudb/*. We are not able to pinpoint what it is exactly, and
> we are not even able to reproduce the unreproducible build.
>
> Here's what we've done in a sid amd64 VM:
>
> ===
> sudo apt-get update
> sudo apt-get install dh-exec libpcre3-dev git-buildpackage
> mkdir orig
> cd orig
> gbp clone --pristine-tar git://git.debian.org/git/pkg-mysql/mariadb-10.0.git
> cd ..
> export DEB_BUILD_OPTIONS="nocheck"
> cp -r orig xxx
> cd xxx/mariadb-10.0
> gbp buildpackage
> cd ../..
> mv xxx yyy
> cp -r orig xxx
> cd xxx/mariadb-10.0
> gbp buildpackage
> ===

There are numerous other things such as locale, date, timezone,
username, and uid (just for a few examples) that are varied in the
reproducible builds infrastructure that you would also need to change
between builds:

  https://tests.reproducible-builds.org/debian/index_variations.html
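
As a minimal illustration of why these variations matter: the same
command run under two environments produces different output. Here
`date '+%Z'` stands in for a build step that embeds local information;
the POSIX-style TZ values avoid any dependency on installed tzdata.

```shell
# Two "builds" differing only in timezone; any difference in what ends
# up embedded in the output is a reproducibility issue.
first=$(env TZ=UTC0 date '+%Z')
second=$(env TZ=ABC-9 date '+%Z')
echo "first=$first second=$second"   # first=UTC second=ABC
```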


There is a new tool in development, reprotest, to help set up builds
with variations:

  https://packages.debian.org/unstable/main/reprotest

I'm not sure how far along it is or if it would be useful to you yet,
perhaps others could chime in about that?


live well,
  vagrant



[Reproducible-builds] [PATCH] Respect SOURCE_DATE_EPOCH when building FIT images.

2016-06-16 Thread Vagrant Cascadian
Embedding timestamps in FIT images results in unreproducible builds
for targets that generate a fit image, such as dra7xx_evm.

This patch uses the SOURCE_DATE_EPOCH environment variable, when set,
to use the specified value for the date.

Thanks to HW42 for debugging the issue and providing the patch:

  
https://lists.alioth.debian.org/pipermail/reproducible-builds/Week-of-Mon-20160606/005722.html

For more information about reproducible builds and the
SOURCE_DATE_EPOCH specification:

  https://reproducible-builds.org/specs/source-date-epoch/
  https://reproducible-builds.org/

Signed-off-by: Vagrant Cascadian <vagr...@debian.org>
---

 tools/default_image.c | 14 +-
 tools/fit_image.c |  6 --
 tools/imagetool.c | 20 
 tools/imagetool.h | 16 
 4 files changed, 41 insertions(+), 15 deletions(-)

diff --git a/tools/default_image.c b/tools/default_image.c
index 3ed7014..6e4ae14 100644
--- a/tools/default_image.c
+++ b/tools/default_image.c
@@ -88,7 +88,6 @@ static void image_set_header(void *ptr, struct stat *sbuf, 
int ifd,
struct image_tool_params *params)
 {
uint32_t checksum;
-   char *source_date_epoch;
time_t time;
 
image_header_t * hdr = (image_header_t *)ptr;
@@ -98,18 +97,7 @@ static void image_set_header(void *ptr, struct stat *sbuf, 
int ifd,
sizeof(image_header_t)),
sbuf->st_size - sizeof(image_header_t));
 
-   source_date_epoch = getenv("SOURCE_DATE_EPOCH");
-   if (source_date_epoch != NULL) {
-   time = (time_t) strtol(source_date_epoch, NULL, 10);
-
-   if (gmtime(&time) == NULL) {
-   fprintf(stderr, "%s: SOURCE_DATE_EPOCH is not valid\n",
-   __func__);
-   time = 0;
-   }
-   } else {
-   time = sbuf->st_mtime;
-   }
+   time = imagetool_get_source_date(params, sbuf->st_mtime);
 
/* Build new header */
image_set_magic(hdr, IH_MAGIC);
diff --git a/tools/fit_image.c b/tools/fit_image.c
index 0551572..23dbce4 100644
--- a/tools/fit_image.c
+++ b/tools/fit_image.c
@@ -51,8 +51,10 @@ static int fit_add_file_data(struct image_tool_params 
*params, size_t size_inc,
}
 
/* for first image creation, add a timestamp at offset 0 i.e., root  */
-   if (params->datafile)
-   ret = fit_set_timestamp(ptr, 0, sbuf.st_mtime);
+   if (params->datafile) {
+   time_t time = imagetool_get_source_date(params, sbuf.st_mtime);
+   ret = fit_set_timestamp(ptr, 0, time);
+   }
 
if (!ret) {
ret = fit_add_verification_data(params->keydir, dest_blob, ptr,
diff --git a/tools/imagetool.c b/tools/imagetool.c
index 08d191d..855a096 100644
--- a/tools/imagetool.c
+++ b/tools/imagetool.c
@@ -115,3 +115,23 @@ int imagetool_get_filesize(struct image_tool_params 
*params, const char *fname)
 
return sbuf.st_size;
 }
+
+time_t imagetool_get_source_date(
+struct image_tool_params *params,
+time_t fallback)
+{
+   char *source_date_epoch = getenv("SOURCE_DATE_EPOCH");
+
+   if (source_date_epoch == NULL)
+   return fallback;
+
+   time_t time = (time_t) strtol(source_date_epoch, NULL, 10);
+
+   if (gmtime(&time) == NULL) {
+   fprintf(stderr, "%s: SOURCE_DATE_EPOCH is not valid\n",
+   params->cmdname);
+   time = 0;
+   }
+
+   return time;
+}
diff --git a/tools/imagetool.h b/tools/imagetool.h
index a3ed0f4..b422df2 100644
--- a/tools/imagetool.h
+++ b/tools/imagetool.h
@@ -205,6 +205,22 @@ int imagetool_save_subimage(
  */
 int imagetool_get_filesize(struct image_tool_params *params, const char *fname);
 
+/**
+ * imagetool_get_source_date() - Get timestamp for build output.
+ *
+ * Gets a timestamp for embedding it in a build output. If set
+ * SOURCE_DATE_EPOCH is used. Else the given fallback value is returned. Prints
+ * an error message if SOURCE_DATE_EPOCH contains an invalid value and returns
+ * 0.
+ *
+ * @params:mkimage parameters
+ * @fallback:  timestamp to use if SOURCE_DATE_EPOCH isn't set
+ * @return timestamp based on SOURCE_DATE_EPOCH
+ */
+time_t imagetool_get_source_date(
+   struct image_tool_params *params,
+   time_t fallback);
+
 /*
  * There is a c file associated with supported image type low level code
  * for ex. default_image.c, fit_image.c
-- 
2.1.4
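The fallback behaviour that imagetool_get_source_date() implements can be sketched in shell. This is only an illustration of the logic; get_source_date is a hypothetical helper, not part of mkimage:

```shell
# Prefer an externally supplied SOURCE_DATE_EPOCH for reproducibility,
# otherwise fall back to the caller-provided timestamp (the file mtime
# in the C code above).
get_source_date() {
    fallback="$1"
    if [ -n "${SOURCE_DATE_EPOCH:-}" ]; then
        echo "$SOURCE_DATE_EPOCH"
    else
        echo "$fallback"
    fi
}

SOURCE_DATE_EPOCH=1466000000
with_env=$(get_source_date 1234)
unset SOURCE_DATE_EPOCH
without_env=$(get_source_date 1234)
echo "$with_env $without_env"
```

With SOURCE_DATE_EPOCH set, the build timestamp is pinned regardless of when or where the build runs; without it, the image keeps the old mtime-based behaviour.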


___
Reproducible-builds mailing list
Reproducible-builds@lists.alioth.debian.org
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/reproducible-builds


[Reproducible-builds] [PATCH] Use C locale when setting CC_VERSION_STRING and LD_VERSION_STRING.

2016-06-12 Thread Vagrant Cascadian
The output reported may be locale-dependent, which results in
unreproducible builds.

  $ LANG=C ld --version | head -n 1
GNU ld (GNU Binutils for Debian) 2.26

  $ LANG=it_CH.UTF-8 ld --version | head -n 1
ld di GNU (GNU Binutils for Debian) 2.26

Forcing LC_ALL=C ensures the output is consistent regardless of the
build environment.
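A quick way to see that LC_ALL takes precedence over LANG — here using GNU date as a stand-in for $(CC) --version, since its month names are localized the same way:

```shell
# LC_ALL overrides LANG, so the output is pinned to the C locale no
# matter what locale the build environment sets (GNU date assumed).
month_pinned=$(LANG=it_CH.UTF-8 LC_ALL=C date -u -d @0 +%B)
month_c=$(LC_ALL=C date -u -d @0 +%B)
echo "$month_pinned"   # January in both cases
```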

Thanks to HW42 for debugging the issue:

  
https://lists.alioth.debian.org/pipermail/reproducible-builds/Week-of-Mon-20160606/005722.html

For more information about reproducible builds:

  https://reproducible-builds.org/

Signed-off-by: Vagrant Cascadian <vagr...@debian.org>
---

 Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Makefile b/Makefile
index 0f7d6f3..14e09d4 100644
--- a/Makefile
+++ b/Makefile
@@ -1269,8 +1269,8 @@ prepare: prepare0
 define filechk_version.h
(echo \#define PLAIN_VERSION \"$(UBOOTRELEASE)\"; \
echo \#define U_BOOT_VERSION \"U-Boot \" PLAIN_VERSION; \
-   echo \#define CC_VERSION_STRING \"$$($(CC) --version | head -n 1)\"; \
-   echo \#define LD_VERSION_STRING \"$$($(LD) --version | head -n 1)\"; )
+   echo \#define CC_VERSION_STRING \"$$(LC_ALL=C $(CC) --version | head -n 1)\"; \
+   echo \#define LD_VERSION_STRING \"$$(LC_ALL=C $(LD) --version | head -n 1)\"; )
 endef
 
 # The SOURCE_DATE_EPOCH mechanism requires a date that behaves like GNU date.
-- 
2.1.4




[Reproducible-builds] ff2b and opi2c back online

2016-06-11 Thread Vagrant Cascadian
On 2016-06-11, Holger Levsen wrote:
> This shall give us some more time+ease while deciding if/when to remove
> all ff2b and opi2c related jobs…

Both should be back online now! So no need to rearrange the jobs
anymore.

Whew.


live well,
  vagrant


signature.asc
Description: PGP signature

Re: [Reproducible-builds] [PATCH] Reassign jobs for ff2b-armhf-rb to other nodes

2016-06-11 Thread Vagrant Cascadian
On 2016-06-11, Holger Levsen wrote:
> On Fri, Jun 10, 2016 at 09:18:40AM -0700, Vagrant Cascadian wrote:
>> > thanks for the patch, before I apply something similar I have a
>> > question: does that mean ff2b is gone *forever*?
>> 
>> Not 100% sure just yet; it still turns on, but doesn't boot. The
>> recovery process doesn't work, and *sometimes* I can get it into MaskRom
>> mode, but not always...
>> 
>>   http://wiki.t-firefly.com/index.php/Firefly-RK3288/Boot_mode/en
>>   http://wiki.t-firefly.com/index.php/Firefly-RK3288/MaskRom/en
>> 
>> So, it's not quite bricked... but close.
>
> given the recent issues with opi2c I just implemented code, to check
> whether both build nodes are up, before starting a build. That way we
> can keep jobs enabled, without wasting builds where the 2nd build will
> fail anyway…

Thanks, that sounds like a great improvement!
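A minimal sketch of such a pre-flight check, with reachable() stubbed out so the example is self-contained (the real jenkins code would probe the nodes over ssh):

```shell
# Only start a build when both assigned nodes respond; the stub treats
# any name starting with "up-" as reachable.
reachable() {
    case "$1" in
        up-*) return 0 ;;
        *)    return 1 ;;
    esac
}

both_up() {
    reachable "$1" && reachable "$2"
}

if both_up up-ff2a up-opi2a; then
    echo "schedule build"
else
    echo "skip build"
fi
```

This avoids wasting a first build whose second build is doomed to fail because the paired node is down.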


> This shall give us some more time+ease while deciding if/when to remove
> all ff2b and opi2c related jobs…

opi2c is back now, running the same kernel as the other orangepi plus2
boards... will see if it is a little more stable now... it may be power
supply issues... not 100% sure yet.


live well,
  vagrant



[Reproducible-builds] [PATCH] Reassign jobs for ff2b-armhf-rb to other nodes

2016-06-09 Thread Vagrant Cascadian
Reassign jobs for ff2b-armhf-rb to other nodes, as it is down for the
foreseeable future.

Available in the ff2b-reassign branch:

  https://anonscm.debian.org/git/users/vagrant/jenkins.debian.net.git

---
 job-cfg/reproducible.yaml | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/job-cfg/reproducible.yaml b/job-cfg/reproducible.yaml
index 641bb5d..33245a5 100644
--- a/job-cfg/reproducible.yaml
+++ b/job-cfg/reproducible.yaml
@@ -586,17 +586,14 @@
 - '22': { my_node1: 'ff2a-armhf-rb',  my_node2: 'rpi2c-armhf-rb' }
 - '23': { my_node1: 'rpi2c-armhf-rb', my_node2: 'odxu4b-armhf-rb'}
 - '24': { my_node1: 'rpi2c-armhf-rb', my_node2: 'odxu4c-armhf-rb'}
-- '25': { my_node1: 'odxu4b-armhf-rb',my_node2: 'ff2b-armhf-rb'  }
+- '25': { my_node1: 'odxu4b-armhf-rb',my_node2: 'opi2a-armhf-rb' }
 - '26': { my_node1: 'opi2a-armhf-rb', my_node2: 'ff2a-armhf-rb'  }
 - '27': { my_node1: 'odxu4c-armhf-rb',my_node2: 'cbxi4a-armhf-rb'}
-- '28': { my_node1: 'opi2a-armhf-rb', my_node2: 'ff2b-armhf-rb'  }
-- '29': { my_node1: 'ff2b-armhf-rb',  my_node2: 'opi2a-armhf-rb' }
-- '30': { my_node1: 'ff2b-armhf-rb',  my_node2: 'cbxi4b-armhf-rb'}
-- '31': { my_node1: 'ff2b-armhf-rb',  my_node2: 'opi2b-armhf-rb' }
+- '28': { my_node1: 'opi2a-armhf-rb', my_node2: 'cbxi4b-armhf-rb'}
 - '32': { my_node1: 'opi2a-armhf-rb', my_node2: 'cbxi4b-armhf-rb'}
 - '33': { my_node1: 'ff2a-armhf-rb',  my_node2: 'opi2b-armhf-rb' }
 - '34': { my_node1: 'cbxi4a-armhf-rb',my_node2: 'opi2b-armhf-rb' }
-- '35': { my_node1: 'cbxi4a-armhf-rb',my_node2: 'ff2b-armhf-rb'  }
+- '35': { my_node1: 'cbxi4a-armhf-rb',my_node2: 'opi2b-armhf-rb' }
 - '36': { my_node1: 'opi2a-armhf-rb', my_node2: 'cbxi4a-armhf-rb'}
 - '37': { my_node1: 'cbxi4b-armhf-rb',my_node2: 'wbq0-armhf-rb'  }
 - '38': { my_node1: 'cbxi4b-armhf-rb',my_node2: 'opi2a-armhf-rb' }
-- 
2.1.4




[Reproducible-builds] u-boot locale variations with it_*.UTF-8

2016-06-08 Thread Vagrant Cascadian
Well, I've been debugging why u-boot stopped building reproducibly, and
it appears to be that the recent fix that really switches armhf to build
with it_CH.UTF-8 locale actually breaks u-boot reproducibility.

So, just for comparison, I also tested with other locales, including
some with non-latin "alphabets" and other countries just to vary as much
as possible, and the following all built reproducibly with each other:

  C
  el_GR.UTF-8
  ru_RU.UTF-8
  ar_EG.UTF-8
  zh_CN.UTF-8
  es_ES.UTF-8
  fr_CH.UTF-8
  et_EE.UTF-8
  ja_JP.UTF-8
  ko_KR.UTF-8
  en_US.UTF-8

And these locales built reproducibly with each other:

  it_CH.UTF-8
  it_IT.UTF-8


Haven't really figured out *what* the actual issue is, but figured I'd
share my confusion.


With this as evidence, there appear to be two kinds of locales, Italian,
and not-Italian. At least when it comes to u-boot. :)


live well,
  vagrant



Re: [Reproducible-builds] diskspace probs on opi3c (was Re: New armhf build nodes (Odroid-U3, Cubietruck, OrangePI Plus2)

2016-06-02 Thread Vagrant Cascadian
On 2016-06-02, Holger Levsen  wrote:
> opi2c already has too little diskspace, from
>
> https://jenkins.debian.net/view/reproducible/view/Debian_setup_armhf/job/reproducible_setup_schroot_experimental_armhf_opi2c/1/console
>
> tee: /tmp/schroot-create-8VxOVXtl: No space left on device

Sorry about that; forgot to set up a separate /srv partition.

Should be fixed now!


live well,
  vagrant



Re: [Reproducible-builds] New armhf build nodes (Odroid-U3, Cubietruck, OrangePI Plus2)

2016-06-02 Thread Vagrant Cascadian
On 2016-06-02, Holger Levsen wrote:
> Hi Vagrant,
>
> On Sun, May 29, 2016 at 09:13:20AM -0700, Vagrant Cascadian wrote:
>> odu3a-armhf-rb.debian.net:
>> cb3a-armhf-rb.debian.net:
>
> those I can reach nicely…

Yay!

>> opi2c-armhf-rb.debian.net:
>> ssh port: 2245
>
> but not this one:
>
> debug1: Connecting to opi2c-armhf-rb.debian.net [71.214.90.150] port
> 2245.
> debug1: connect to address 71.214.90.150 port 2245: No route to host
> ssh: connect to host opi2c-armhf-rb.debian.net port 2245: No route to
> host

Apparently had crashed and rebooted to the on-board OS. Hrmpf. Thought I
disabled that, but I'll need to disable it harder, apparently...

Should be up now!


live well,
  vagrant



[Reproducible-builds] New armhf build nodes (Odroid-U3, Cubietruck, OrangePI Plus2)

2016-05-29 Thread Vagrant Cascadian
Three new build nodes ready to be added to the build network:

odu3a-armhf-rb.debian.net:
Odroid-U3, exynos4122 quad-core (cortex-a9), 2GB ram, ~60GB USB2 SSD
ssh port: 2243
ssh fingerprints:
256 96:13:3b:ec:45:72:45:bc:c7:78:29:73:56:81:4e:ae 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 e3:08:75:07:7d:63:94:ab:d9:a1:60:85:04:6c:fd:93 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

cb3a-armhf-rb.debian.net:
Cubietruck/Cubieboard3, Allwinner A20 (cortex-a7), dual-core, 2GB ram, 
  ~60GB USB2 SATA SSD
ssh port: 2244
ssh fingerprints:
256 f6:c0:48:5e:51:1f:ff:57:88:de:a5:db:60:8f:ff:f6 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 52:55:08:00:f4:2c:d7:d8:9f:f0:f9:be:aa:ba:c5:1a 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

opi2c-armhf-rb.debian.net:
OrangePi Plus2, Allwinner H3 (cortex-a7) quad-core, 2GB ram, ~60GB USB2 SATA SSD
ssh port: 2245
ssh fingerprints:
256 68:9a:b6:7a:f5:cb:af:07:3d:74:1a:b0:1c:ca:76:fb 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 10:69:e7:39:c7:b5:20:38:68:43:85:83:05:5d:c4:72 
/etc/ssh/ssh_host_rsa_key.pub (RSA)


Configure away!


live well,
  vagrant



Re: [Reproducible-builds] bpi0 armhf node disk failure

2016-05-09 Thread Vagrant Cascadian
On 2016-05-04, Holger Levsen wrote:
> On Tue, May 03, 2016 at 11:35:07PM -0700, Vagrant Cascadian wrote:
>> I think bpi0's SSD may be in a very bad state. It's one of the oldest
>> nodes, so it's no huge surprise to see disk failures there...

Yup, disk seemed to be in a pretty bad state. It did boot once again and
then promptly failed hard...


>> I do have an extra SSD on hand to replace it with, but it will mean a
>> clean re-install and then jenkins setup code run, so probably best to
>> suspend scheduling jobs related to bpi0 for the near future. Not sure
>> exactly when I can get to it.
>
> ok. Just do it when you have the time and ping me once it's done. Feel
> free to keep or not keep the ssh host keys…

Reinstalled with a new SSD (which has considerably more space for
wear-levelling).

Ready for the jenkins setup scripts to be run on it!

ssh key fingerprints for the new bpi0:

256 12:47:f1:47:2c:fd:a6:af:84:bf:6b:f2:ab:5f:2a:ab 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 0f:5a:72:49:a6:8e:29:8f:2a:40:d6:59:3c:c6:48:18 
/etc/ssh/ssh_host_rsa_key.pub (RSA)


live well,
  vagrant



[Reproducible-builds] bpi0 armhf node disk failure

2016-05-04 Thread Vagrant Cascadian
I think bpi0's SSD may be in a very bad state. It's one of the oldest
nodes, so it's no huge surprise to see disk failures there...

I do have an extra SSD on hand to replace it with, but it will mean a
clean re-install and then jenkins setup code run, so probably best to
suspend scheduling jobs related to bpi0 for the near future. Not sure
exactly when I can get to it.


Does make me think a bit about the idea Steven Chamberlain had mentioned
at one point of using networked disks... but then finding a good network
disk server might be too much effort.


live well,
  vagrant



Re: [Reproducible-builds] Performance of armhf boards

2016-04-17 Thread Vagrant Cascadian
On 2016-04-17, Steven Chamberlain wrote:
> I was wondering what is the performance of various armhf boards, for
> package building.  Of course, the reproducible-builds team have a lot of
> stats already.  Below I'm sharing the query I used and the results in
> case anyone else is interested in this.

Thanks for crunching some numbers and sharing!

Somewhat similar numbers are calculated daily:

  
https://jenkins.debian.net/view/reproducible/job/reproducible_nodes_info/lastBuild/console


> Figures will vary based on which packages were assigned to which
> node, as some are easier to build than others, but I hope over 21
> days that variance is smoothed out.

I wonder if 21 days is long enough to average things out, some builds
normally take nearly 12 hours or more, while some take only a few
minutes.


> Assuming the nodes had no downtime, we can compare pkgs_built over
> the 21-day period to assess performance.


There has definitely been downtime... particularly odxu4c, ff4a and
bbx15. And many of the systems occasionally get rebooted for testing and
upgrading u-boot or the linux kernel.

FWIW, 15 of 18 nodes are running kernels coming from the official
debian.org linux packages from jessie, jessie-backports, sid or
experimental! (only 9/18 for u-boot)


> Also avg_duration is meaningful, but will increase where the
> reproducible-builds network scheduled more concurrent build jobs on a
> node.  (Low avg_duration does not always mean high package throughput,
> it may just be doing fewer jobs in parallel.)
>
> Finally, the nodes' performance will depend on other factors such as
> storage device used, kernel, etc.

I've often wondered what the impacts are if "fast" nodes are mostly
paired with "slow" nodes for the 1st or 2nd builds, since each build job
is specifically tied to two machines. This was one of the factors
leading me to build pools based on load... but haven't had the time to
implement that.


> I don't know whether to believe these figures yet!
>
>   * wbq0 is impossibly fast for just 4x1GHz cores, 2GB RAM...

My guess is that it is one of the most stable. Only tends to be rebooted
for updates.


>   * odxu4 looks slightly faster than the other two.

That's tricky to track down, as odxu4c has had stability issues, and
odxu4 also to a lesser extent, and odxu4b has been relatively stable.

Many of the machines have different brand/model SSDs, so I was thinking
of comparing that against build stats on all the nodes to see if there's
a pattern. They're all pretty cheap SSDs, so I wouldn't be surprised if
there was significant variation in performance.


>   * cbxi4a/b seem no faster than cbxi4pro0 despite twice the RAM?

That is definitely surprising (although technically they only have
access to 3.8GB, but still!). They seem to be doing better according to
the daily average stats.

195.1 builds/day (13075/67) on cbxi4b-armhf-rb.debian.net
188.2 builds/day (14121/75) on cbxi4a-armhf-rb.debian.net
172.9 builds/day (22658/131) on cbxi4pro0-armhf-rb.debian.net


>   * ff2a/b show USB3 SSD to be no faster than USB2?

All of the Firefly boards are USB2, I think. ff2a was running with only
512MB for a few weeks due to a u-boot I didn't notice until recently.


>   * bbx15 may be able to handle more build jobs (low avg_duration).

That's really impressive, because sometimes it's running 6 concurrent
builds, and only has two cores. It is a higher-performance cortex-a15.


>   * bpi0 may be overloaded (high avg_duration).

That's curious. Not sure what to make of it.


>   * ff4a maybe had downtime, and seems to be under-utilised.

Yeah, it's had some multiple-hour stretches of downtime regularly,
partially due to stability issues, and partially due to kernel/u-boot
testing.


>   * rpi2b maybe had downtime, or has a slower disk than rpi2c.

Those numbers look surprising, especially since rpi2c has been rebooted
more often.

I'm also not sure if the rpi2 processors are running at full speed since
I switched to using the debian.org provided kernels, which don't have
cpufreq support.


>   * wbd0 slowness is likely due to the magnetic hard drive.

The disk was upgraded to an SSD at some point, although I'm suspecting
performance issues due to wear-leveling, as it's a smaller SSD and TRIM
isn't supported over any of the USB-SATA adapters I've found.


> Many thanks to Vagrant for hosting all these armhf nodes!

Thanks for taking a fresh look at it and making some suggestions of
things to look into!


live well,
  vagrant



[Reproducible-builds] Blacklist packages for armhf

2016-04-17 Thread Vagrant Cascadian
Doing some SQL queries and a bit of eyeball heuristics, I've determined
the packages listed below frequently FTBFS due to timeout on armhf.

I was hoping we could drop blacklisting altogether as the build network
grew, but I'm not sure that's realistic with the current infrastructure.

Please blacklist the following packages in all suites for armhf (some
may already be blacklisted on particular suites, but I think it makes
sense to just blacklist them on all suites):

agda
aptdaemon
ceph
chromium-browser
debci
doc-linux-fr
eclipse
firefox
firefox-esr
freedict
gazebo
gcc-5
gcc-6
gcc-mingw-w64
ghc
gnat-mingw-w64
gnucash-docs
gnuchess-book
gradle
iceweasel
kicad
libapache-poi-java
libint2
libitext5-java
lilypond
llvm-toolchain-3.8
lucene2
lucene-solr
madness
mlton
mongodb
nwchem
openjdk-6
openjdk-7
openjdk-9
openms
openturns
pcl
python-2.7
tomcat8
ufoai-maps
witty


My rough process went like so:

Get the names of packages:

sqlite3 reproducible.db \
  'select name from stats_build
   where architecture is "armhf"
   and status is "FTBFS"
   and cast(build_duration as integer) >=43200
   and cast(build_duration as integer) <= 45000
   and build_date >= "2016-03-01 00:00"'

Which I then visually compared against:

printf 'select * from stats_build where architecture is "armhf" and name is "%s" and build_date >= "2016-01-01 00:00";' $name | sqlite3 reproducible.db

I excluded packages from the submitted list that had a recent completed
build (reproducible or unreproducible), or where the FTBFS usually
didn't take more than 12 hours. It maybe wasn't a perfect process, but
hopefully will allow for better coverage of most of the rest of the
archive. Might have been wise for me to figure out how to do more of
that programmatically (my SQL isn't too solid)...
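The timeout query can be exercised against a throwaway in-memory database. This assumes the sqlite3 CLI is available, and the table layout is guessed from the queries above, not from the real reproducible.db schema:

```shell
# Toy stats_build table: one long FTBFS (a likely timeout) and one
# quick FTBFS; only the long one should be reported.
slow=$(sqlite3 :memory: "
  CREATE TABLE stats_build (name, architecture, status, build_duration, build_date);
  INSERT INTO stats_build VALUES ('ghc','armhf','FTBFS',44000,'2016-03-02 12:00');
  INSERT INTO stats_build VALUES ('vim','armhf','FTBFS',600,'2016-03-02 12:00');
  SELECT name FROM stats_build
   WHERE architecture = 'armhf' AND status = 'FTBFS'
     AND CAST(build_duration AS INTEGER) BETWEEN 43200 AND 45000
     AND build_date >= '2016-03-01 00:00';")
echo "$slow"
```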


It would be really helpful if we could mark failures due to timeouts
separately from "normal" FTBFS, and then packages could be rescheduled
differently (e.g. an incremental delay for rescheduling, or not at
all). Alternately or additionally, if the FTBFS rescheduling could
adjust the frequency based on number of times a package has (recently)
FTBFS, that might help too.

I think Holger at one point mentioned increasing the timeouts higher
(currently 12h for 1st build, and 18h for 2nd build?), although
with all the suites tested, some builds are over 45 days old, so I'm not
sure if that would be ideal.


live well,
  vagrant



Re: [Reproducible-builds] jenkins.debian.net: [PATCH] Disable download of package descriptions

2016-04-09 Thread Vagrant Cascadian
So, basically, nevermind, apparently pbuilder already does this
automatically:

< mapreri> I added  acquire::languages="none" 6/7 months ago, i think
< mapreri> vagrantc: since pbuilder 0.216, 26 august 2015

And looks like the builders are using 0.223~bpo8+1, and I checked the
config on one of the builders and it appears to include the option I was
proposing to add... so...

Sorry for the noise!

live well,
  vagrant



[Reproducible-builds] jenkins.debian.net: [PATCH] Disable download of package descriptions

2016-04-09 Thread Vagrant Cascadian
I noticed the jenkins.debian.net reproducible pbuilder chroots download
the full package descriptions, but this isn't necessary for a build
machine, and consumes a bit of bandwidth and disk space. The patch below
configures APT (and anything that respects apt.conf.d) to not download
them.
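What the patch effectively does can be tried out unprivileged by writing the snippet to a scratch directory instead of /etc/apt/apt.conf.d:

```shell
# Write the one-line apt configuration fragment the patch installs;
# apt would pick it up from /etc/apt/apt.conf.d on a real builder.
confdir=$(mktemp -d)
printf 'Acquire::Languages "none";\n' > "$confdir/10no-package-descriptions"
cat "$confdir/10no-package-descriptions"
```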

Also should be available in the "no-package-descriptions" branch on
alioth (once alioth syncs):

  https://anonscm.debian.org/cgit/users/vagrant/jenkins.debian.net.git/

It's untested, so I've looked it over at least three times. I hope it's
actually happening early enough to be useful...

Keep on reproducibilitizing!


live well,
  vagrant


From ff8da4f3ef9674503da93e38be565c3a7837c7d2 Mon Sep 17 00:00:00 2001
From: Vagrant Cascadian <vagr...@debian.org>
Date: Sat, 9 Apr 2016 16:43:31 -0700
Subject: [PATCH] Disable download of package descriptions by setting apt.conf
 languages to none. This should save some disk space and download bandwidth,
 as the full descriptions really shouldn't be needed in a build chroot.

---
 bin/reproducible_setup_pbuilder.sh | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/bin/reproducible_setup_pbuilder.sh b/bin/reproducible_setup_pbuilder.sh
index 57ef3e8..b272d5b 100755
--- a/bin/reproducible_setup_pbuilder.sh
+++ b/bin/reproducible_setup_pbuilder.sh
@@ -61,6 +61,9 @@ echo
 echo "Configuring APT to ignore the Release file expiration"
 echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/398future
 echo
+echo "Configuring APT to not download package descriptions"
+echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/10no-package-descriptions
+echo
 apt-get update
 apt-get -y upgrade
 apt-get install -y $@
-- 
2.1.4



[Reproducible-builds] arm64 reproducible build network

2016-03-23 Thread Vagrant Cascadian
On 2016-03-06, Holger Levsen wrote:
> On Sonntag, 6. März 2016, Axel Beckert wrote:
>> I now wonder if a bunch of Raspberry Pi 3 -- since they have 64-bit
>> CPUs (IIRC an Allwinner A53T) -- would make us able to check
>> reproducible-builds for Debian arm64, too.
>
> as far as I know, yes. Though I would want to use some other arm64 boards as 
> well, to have hardware variety right from the start.
...
> I think we'd need 10-12 of such boards for a start, not sure if we still have 
> Debian funds to buy those (but I almost think so… Vagrant?) - providing we 
> would find someone interested+able to host these. Would you (Axel) be able 
> too?

There's about US$1200 left.

To follow through with number of boards in the original proposal, I
should probably get at least 3-4 more boards... or should we branch out
into an arm64 network with the remaining funds?

If that's the case, I'd probably need someone else to host the boards,
though I could probably host some initial experimentation just to get the
basic infrastructure set up. I could also experiment with using the
arm64 boards but building armhf packages, which seems like an
interesting middle-ground. Curious what folks think.


Could get a few odroid-c2 for ~$60 with usb-sata adapter, power
supply, and case. Mainline linux for similar boards exists... but not
for odroid-c2 explicitly.

The LeMaker Hikey goes for ~$100+accessories. Limited mainline support
exists. I already have one donated by LeMaker for experimenting with.

The Pine64 boards are currently going for ~$30 for the 2GB ram
variants. Not sure what the timeline on delivery is, though I have a few
coming for experimentation.

Seems like raspberry PI 3 doesn't yet really support arm64, and only has
1GB of ram. I don't think it's worth considering.

Cheap 120GB SSD costs around US$40-50 each.

So, you might be able to get 8-10 boards running with the remaining
funds...


The LeMaker Cello is about ~US$300+accessories, with dual native SATA,
and may come with 8GB ram out of the box, upgradable to 32GB or more:

  http://www.lenovator.com/product/103.html

Two to three LeMaker Cello might outperform a network of 8-10 lower spec
machines, and probably be a lot easier to maintain. Mainline linux
support for similar boards exists... that said, the Cello isn't actually
shipping yet (but soon?)... s...


live well,
  vagrant



Re: [Reproducible-builds] Number of CPUs for armhf builds

2016-03-07 Thread Vagrant Cascadian
On 2016-03-07, Holger Levsen wrote:
> On Sonntag, 6. März 2016, Vagrant Cascadian wrote:
>> > I'm also pondering to change it to use CPUs+1 for the first builds and
>> > CPUs for the 2nd ones.
>> That would be interesting, although I was thinking we might want to do a
>> fewer number of CPUs on the first build, to make it more likely the
>> second build doesn't timeout.
>
> I don't think we should make any build slower by design.

Understandable...


>> e.g. If your first build is with 4 CPUs (from one of the quad-cores),
>> and your second builds with 1 CPU (a dual-core), you're more likely to
>> reach the timeout limit on the second build...
>
> you cannot assume such things. Builds are scheduled on arbitrary hosts.

Sure, but I suspect there are *some* builds that will never succeed in
under 12 hours with a single CPU core (on the current armhf build
hardware, anyways).


>> So combining the two ideas, I guess I would propose CPUs for first
>> build, and CPUs+1 for the second build?
>
> I'm not sure, this is what I have been meaning to do, but then I fear that 
> CPUs+1 might _slow down_ things if the machine is overloaded already… So I'm 
> now pondering to just use the number of cores always, and
>
> - on amd64+i386: make sure by node "hw" design, that every build has a 
> different number of cores (either 16 or 17)
> - on armhf: stop systematically varying numbers of CPUs, often they will vary 
> anyway (by scheduling choices) and then… there's not much unreproducibility 
> due to this issue anyway, and we will still notice if there is, on x86 and 
> sometimes here too. (And flipping status is actually even more noticable.)
>
> What do you think? Or should I go with CPUs+1 on armhf?

Overall I like the proposal to stop varying number of CPUs for armhf.
There's some variation with number of cores simply due to having
dual-core, quad-core and (recently) octa-core systems.  That also means
we have no builds running with only CPUs=1, which I think is a good
thing overall.


live well,
  vagrant



[Reproducible-builds] Number of CPUs for armhf builds

2016-03-06 Thread Vagrant Cascadian
On 2016-03-06, Holger Levsen wrote:
> On Sonntag, 6. März 2016, Vagrant Cascadian wrote:
>> Ah, looking at:
>> 
>>   https://tests.reproducible-builds.org/reproducible.html
...
>> I guess it must just use the number of CPUs for first build, and number
>> of CPUs-1 for the second build (at least on armhf)?
>
> yup, that's true on armhf, the variation table there is inaccurate. (Patches 
> welcome…)
>
> I'm also pondering to change it to use CPUs+1 for the first builds and CPUs 
> for the 2nd ones.

That would be interesting, although I was thinking we might want to do a
fewer number of CPUs on the first build, to make it more likely the
second build doesn't timeout.

e.g. If your first build is with 4 CPUs (from one of the quad-cores),
and your second builds with 1 CPU (a dual-core), you're more likely to
reach the timeout limit on the second build...

So combining the two ideas, I guess I would propose CPUs for first
build, and CPUs+1 for the second build?


live well,
  vagrant



Re: [Reproducible-builds] Raspi 3 suitable for arm64?

2016-03-06 Thread Vagrant Cascadian
Cc'ed correct debian-arm address...

On 2016-03-06, Holger Levsen wrote:
> On Sonntag, 6. März 2016, Axel Beckert wrote:
>> I now wonder if a bunch of Raspberry Pi 3 -- since they have 64-bit
>> CPUs (IIRC an Allwinner A53T) -- would make us able to check
>> reproducible-builds for Debian arm64, too.

I got the impression it was another broadcom chip, maybe with a
cortex-a53? Gotta love ARM's confusing namespace.


> as far as I know, yes. Though I would want to use some other arm64 boards as 
> well, to have hardware variety right from the start. for example:
> http://www.heise.de/newsticker/meldung/RasPi-Angreifer-Odroid-C2-Schneller-etwas-teurer-aber-ohne-WLAN-3123182.html also has an arm64 cpu, but twice 
> as much ram (2gb) as the raspi3 and 2 ghz cores instead of 1.2 ghz ones on 
> the 
> raspi3. *buntu also has a new arm64 developer/reference board. There will be 
> more in 3 months too.

I've definitely been thinking about some of these new arm64 capable
boards. The Odroid-C2, Pine64 and LeMaker HiKey have 2GB RAM options, at
least. I do worry a bit about sufficient mainline support, as a few of
the boards I've tried (odroid-c1+, cubieboard4) didn't really work out
with vendor kernels and the mainline support wasn't close enough to be
useable.

That said, I'm not sure how much CPU/board diversity has really resulted
in reproducibility issues at this point, so getting a less diverse
network set up might be fine.

Also of note, the arm64 boards *could* be used to test arm64, armhf or
armel. So they're certainly more flexible. I've considered getting a
small number of the arm64 capable boards and plugging them into the
armhf network with the idea that they could be switched over to arm64 in
the future.


> I think we'd need 10-12 of such boards for a start, not sure if we still have 
> Debian funds to buy those (but I almost think so… Vagrant?) -

I need to do a final tally yet, but it's nearly done. There might be
some funding left for a few boards, but probably not 10 boards once you
factor in the costs of SSDs, power supplies, and other necessary
adapters.

We've certainly met the target of tripling the build network speed, at
least!

  https://tests.reproducible-builds.org/stats_builds_per_day_armhf.png


> providing we would find someone interested+able to host these. Would
> you (Axel) be able too?

One thing to note is that it uses a considerable amount of bandwidth. The
16 boards we had running consumed about 500GB over the last two months,
most of it upload bandwidth, since a caching proxy minimizes the impact
of downloads (~90GB downloaded on the proxy).
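
For reference, the client side of such a caching proxy is typically a single
apt configuration line (the path and hostname below are placeholders, not
taken from these machines; 3142 is apt-cacher-ng's default port):

```
# /etc/apt/apt.conf.d/01proxy (hypothetical path and hostname)
Acquire::http::Proxy "http://proxy.example:3142";
```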

Power consumption is, of course, relatively low... even with 18 boards
and some infrastructure, it uses under 150 watts.


live well,
  vagrant



Re: [Reproducible-builds] new armhf nodes (Firefly-4gb, BeagleBoard-X15)

2016-03-05 Thread Vagrant Cascadian
On 2016-03-05, Holger Levsen wrote:
> On Donnerstag, 3. März 2016, Vagrant Cascadian wrote:
>> This new board recognizes all 4GB of ram, yay!
>> 
>> ff4a-armhf-rb.debian.net:
>> Firefly-RK3288, quad-core rockchip 3288 (A12/A17?), 4GB ram
>
> yay indeed & added to jenkins!
>
>> Well, got eSATA working finally, so decided to run with it!
>> 
>> bbx15-armhf-rb.debian.net:
>> BeagleBoard-X15, dual-core TI AM57xx (a15), 2GB ram
>
> more yay & also added :)
>
> I've also added 8 new builder jobs for armhf to make use of them!

Thanks! Anticipating spikes in the graphs...


>> Also, all the Odroid-XU4 (odxu4*) boards now recognize all 8 cores!
>> I
>> don't know if that requires any manual configuration to take advantage
>> of them,
>
> yes, pbuilder uses all available cores for each build job.

Ah, looking at:

  https://tests.reproducible-builds.org/reproducible.html

  env DEB_BUILD_OPTIONS:
    first build:  DEB_BUILD_OPTIONS="parallel=XXX"    second build: DEB_BUILD_OPTIONS="parallel=YYY"
    XXX for amd64: 18 or 17                           YYY for amd64: 17 or 18 (!= the first build)
    XXX for armhf: 4 or 2                             YYY for armhf: 1 or 3

I guess it must just use the number of CPUs for first build, and number
of CPUs-1 for the second build (at least on armhf)?
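
If that guess is right, the job-side logic would amount to something like
this (a sketch of the guessed scheme only; the actual jenkins configuration
may derive the values differently):

```shell
# Guessed derivation of the two DEB_BUILD_OPTIONS values (assumption,
# not the actual jenkins code): first build uses all CPUs, the second
# uses one fewer, so the parallelism differs between the two builds.
NUM_CPU=$(nproc)
[ "$NUM_CPU" -gt 1 ] || NUM_CPU=2          # keep the second value >= 1
OPTS_BUILD1="parallel=${NUM_CPU}"
OPTS_BUILD2="parallel=$((NUM_CPU - 1))"
echo "$OPTS_BUILD1 / $OPTS_BUILD2"
```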


>> or if they should take on a few more build jobs...
>
> As I see it, diskspace and RAM are the limiting factors for defining the 
> number of concurrent build jobs, so I came up with this:
>
> #   8 jobs for quad-cores with 4 gb ram
> #   6 jobs for octo-cores with 2 gb ram
> #   6 jobs for quad-cores with 2 gb ram
> #   3 jobs for dual-cores with 1 gb ram
> #   3 jobs for quad-cores with 1 gb ram
>
> So no more jobs for odxu4 boards ;)

Just added bbx15, a dual-core with 2GB of ram (and relatively fast
cpus), not sure where to fit that in the picture. Maybe 4 or 5 jobs
would actually be appropriate.


> Thanks for maintaining all these nodes! It's totally awesome to get so much 
> build power and usefulness out of such a zoo of hardware and cables! 8-)

And thanks for maintaining the other side of the infrastructure! :)


live well,
  vagrant



Re: [Reproducible-builds] new armhf node (BeagleBoard-X15)

2016-03-03 Thread Vagrant Cascadian
On 2016-03-03, Vagrant Cascadian wrote:
> I need to do some more testing with the BeagleBoard-X15 board to get
> support in the kernel and debian-installer, and try to enable eSATA
> support, but should be ready real soon now...

Well, got eSATA working finally, so decided to run with it!

bbx15-armhf-rb.debian.net:
BeagleBoard-X15, dual-core TI AM57xx (a15), 2GB ram
ssh port: 2242
ssh fingerprints:
256 0c:9b:bc:0a:cd:cf:19:e5:1d:94:4c:bd:76:d6:15:98 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 10:5a:ad:5c:44:1f:0c:c4:3e:d7:32:7a:70:a7:e4:9b 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

First system with a processor from TI, too.

Big thanks to Beagleboard.org for donating the board.


live well,
  vagrant



[Reproducible-builds] new armhf node (Firefly-4gb)

2016-03-03 Thread Vagrant Cascadian
This new board recognizes all 4GB of ram, yay!

ff4a-armhf-rb.debian.net:
Firefly-RK3288, quad-core rockchip 3288 (A12/A17?), 4GB ram
ssh port: 2241
ssh fingerprints:
2048 87:3e:d4:f7:00:65:bd:39:ea:47:81:16:55:d6:cf:66 
/etc/ssh/ssh_host_rsa_key.pub (RSA)
256 b8:a5:af:69:6a:76:03:9a:e3:77:82:a3:d9:7e:17:6d 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)


Also, all the Odroid-XU4 (odxu4*) boards now recognize all 8 cores! I
don't know if that requires any manual configuration to take advantage
of them, or if they should take on a few more build jobs...


I need to do some more testing with the BeagleBoard-X15 board to get
support in the kernel and debian-installer, and try to enable eSATA
support, but should be ready real soon now...


live well,
  vagrant



Re: [Reproducible-builds] licencing reproducible/presentations.git

2016-02-18 Thread Vagrant Cascadian
On 2016-02-04, Holger Levsen wrote:
> I'd like to add proper licencing to the presentations in 
> git.debian.org/git/reproducible/presentations.git and you in the to: headers 
> of the mail are a git commiter to this repository, so I would like you to 
> (re-)licence your contributions under the following licence:
...
> but in any case the question is: are you fine to licence your work under 
> Creative Commons Attribution-Share Alike 3.0 License or (at the choice of the 
> user of the works) under GNU Free Documentation License, Version 1.3 or 
> later, 
> with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts?

I think my only contribution was those awful pictures of the armhf build
infrastructure, but I'm perfectly fine with those being licensed under
anything DFSG-free, and the above proposal sounds fine to me.

live well,
  vagrant



Re: [Reproducible-builds] new armhf node (Cubox-i4x4), part 2

2016-02-11 Thread Vagrant Cascadian
On 2016-02-01, Vagrant Cascadian wrote:
> cbxi4a-armhf-rb.debian.net:
> Cubox-i4x4, imx6 quad-core, 2GB ram, ~60GB SATA SSD

Upgraded cbxi4a to 3.8GB of ram!

And added another node to join it:

cbxi4b-armhf-rb.debian.net:
Cubox-i4x4, imx6 quad-core, 3.8GB ram, ~60GB SATA SSD
ssh port: 2240
ssh fingerprints:
256 fb:dc:95:4f:01:f9:f6:09:b9:ca:30:ee:3f:8c:6d:17 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 20:7c:5d:4a:2e:23:2a:2b:0d:b2:6c:46:63:d3:65:95 
/etc/ssh/ssh_host_rsa_key.pub (RSA)


Happy reproducible building!


live well,
  vagrant



Re: [Reproducible-builds] new armhf node (Cubox-i4x4)

2016-02-02 Thread Vagrant Cascadian
On 2016-02-02, Holger Levsen wrote:
> thanks for setting it up, but:
>
> jenkins@jenkins:~$ ssh -v -p 2239 cbxi4a-armhf-rb.debian.net
> OpenSSH_6.7p1 Debian-5+deb8u1, OpenSSL 1.0.1k 8 Jan 2015
> debug1: Reading configuration data /etc/ssh/ssh_config
> debug1: /etc/ssh/ssh_config line 19: Applying options for *
> debug1: Connecting to cbxi4a-armhf-rb.debian.net [71.236.158.44] port 2239.
> [hangs]

Been having network problems all day... lot of orphaned builds today as
a result... meh.


live well,
  vagrant



[Reproducible-builds] new armhf node (Cubox-i4x4)

2016-02-01 Thread Vagrant Cascadian
Here's another board, in theory with more ram, but in practice...

cbxi4a-armhf-rb.debian.net:
Cubox-i4x4, imx6 quad-core, 2GB ram, ~60GB SATA SSD
ssh port: 2239
ssh fingerprints:
256 dc:9a:85:0a:d5:85:72:26:9b:5b:b8:89:24:8a:fb:46 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 c1:dd:a6:c8:bf:26:a5:bc:1b:cc:91:8b:e3:e5:0c:66 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

Technically, this has 4GB of ram, but it's not recognizing all 4GB. Will
work on that with the second Cubox-i4x4; figured may as well put the
first to work while I sort out the ram issues.

It's got real sata, and TRIM should work, so yay.

This is the first board in a while that worked with debian-installer
from jessie! So it's also running a 3.16 kernel from jessie.


live well,
  vagrant



Re: [Reproducible-builds] new armhf node (OrangePi Plus2)

2016-01-14 Thread Vagrant Cascadian
On 2016-01-14, Holger Levsen wrote:
> On Mittwoch, 13. Januar 2016, Vagrant Cascadian wrote:
>> opi2b-armhf-rb.debian.net:
>> OrangePi Plus2, Allwinner H3 (cortex-a7) quad-core, 2GB ram, ~60GB USB2
>> SATA SSD ssh port: 2238
>
> thanks, added to the jenkins setup. Once pbuilder and schroots jobs have been 
> run I'll add builder jobs making use of it…

Thanks, looks like it's busy building packages now!


> oh, by chance I just found /etc/apt/sources.list.d/sid.list and disabled it. 
> Please try to tell me if such "strange configurations" are needed…

I've been using that to download and test new u-boot versions (and
sometimes kernels), but have pinned sid to a priority of 1 so that it
wouldn't upgrade packages... tried pinning u-boot specifically, but it
doesn't work :/ There are several other machines with a similar
configuration.
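
The pin described above boils down to a few lines of apt preferences (a
sketch only; the file path and exact stanza on those machines may differ):

```
# /etc/apt/preferences.d/sid (hypothetical path)
Package: *
Pin: release a=unstable
Pin-Priority: 1
```

With a priority below 100, apt will never upgrade to the sid versions on
its own, which matches the behavior described above.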

If you'd rather it be done differently, let me know!

live well,
  vagrant



Re: [Reproducible-builds] new armhf node (OrangePi Plus2)

2016-01-13 Thread Vagrant Cascadian
This one has been rather finicky, but appears to be working now...

opi2b-armhf-rb.debian.net:
OrangePi Plus2, Allwinner H3 (cortex-a7) quad-core, 2GB ram, ~60GB USB2 SATA SSD
ssh port: 2238
ssh fingerprints:
256 3e:b4:a4:f3:3a:c1:ba:75:3a:4b:4f:c6:80:e6:04:5b 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 23:91:3d:cc:cc:d2:d9:0c:e5:0a:e1:02:5a:9b:11:fa 
/etc/ssh/ssh_host_rsa_key.pub (RSA)


live well,
  vagrant



Re: [Reproducible-builds] new armhf nodes (Firefly/OrangePi Plus2)

2016-01-06 Thread Vagrant Cascadian
On 2016-01-05, Holger Levsen wrote:
> On Dienstag, 5. Januar 2016, Vagrant Cascadian wrote:
>> Two more quad-core systems ready to join the fun!
>
> awesome, thanks a lot!
>  
> I'm setting them up right now, pbuilder/schroot/maintenance jobs are 
> basically 
> there and then, once the schroots/pbuilder base.tgz's are set up (and after I 
> went doing some errands…) I'll setup some more build jobs using them.

Thanks!


>> The two Cubieboard4 are still blocked by a working u-boot... but should
>> have decent mainline linux support, in theory. Need to reach out to the
>> u-boot and linux-sunxi communities to get that sorted out...
>
> great! out of curiosity: other there more boards planned to come besides 
> those 
> two?

As mentioned in IRC, struggling with the second OrangePI Plus2, seems to
be bricked. Basically plan on getting six to ten more boards configured
over the next month or so, although may retire some of the slower boards
as part of the process.


live well,
  vagrant



Re: [Reproducible-builds] new armhf nodes (Firefly/OrangePi Plus2)

2016-01-04 Thread Vagrant Cascadian
Two more quad-core systems ready to join the fun!


ff2b-armhf-rb.debian.net:
Firefly, rockchip rk3288 (cortex-a12(?)) quad-core, 2GB ram, ~60GB USB2 SATA SSD
ssh port: 2237
ssh fingerprints:
256 ca:65:9c:9c:df:6e:8c:b8:00:10:dc:f0:71:b0:57:ab 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 cd:d8:b6:ea:67:2e:60:78:e4:09:66:8e:12:6e:bc:55 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

The Firefly boards should have a kernel waiting for them in debian
experimental, although it's currently stuck in NEW processing. I
uploaded some u-boot fixes for firefly to experimental.


opi2a-armhf-rb.debian.net:
OrangePi Plus2, Allwinner H3 (cortex-a7) quad-core, 2GB ram, ~60GB USB2 SATA SSD
ssh port: 2236
ssh fingerprints:
256 00:7d:85:87:bc:21:e2:02:bd:c7:ec:db:9f:50:6f:6b 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 76:aa:5e:c9:e0:64:89:a2:91:20:c5:a0:1d:fd:76:2e 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

The OrangePi Plus2 doesn't yet have working serial monitoring, but I
figured it could be put to work rather than waiting for the serial
adapters to arrive... It's also using a USB network adapter I had laying
around; the built-in networking needs further driver development, which
will hopefully reach mainline in linux 4.5.x. U-boot support is
mainlined, and will be included in the next upload to Debian.


The two Cubieboard4 are still blocked by a working u-boot... but should
have decent mainline linux support, in theory. Need to reach out to the
u-boot and linux-sunxi communities to get that sorted out...


live well,
  vagrant



Re: [Reproducible-builds] new armhf node (Raspberry PI 2B)

2016-01-01 Thread Vagrant Cascadian
On 2016-01-01, Holger Levsen wrote:
> On Mittwoch, 30. Dezember 2015, Vagrant Cascadian wrote:
>> Got a second Raspberry PI 2 set up, awaiting configuration...
>> rpi2c-armhf-rb.debian.net:
>
> I've set it up now and am in the process of integrating it into the jenkins 
> build setup now. Should be ready within the next 30min…

Yay, thanks!

> In related news, we are now also testing testing/armhf since today.

Given the spikes in builds per day, figured this was coming soon... :)


live well,
  vagrant



Re: [Reproducible-builds] new armhf node (Raspberry PI 2B)

2015-12-30 Thread Vagrant Cascadian
Got a second Raspberry PI 2 set up, awaiting configuration...

rpi2c-armhf-rb.debian.net:
Raspberry PI 2B, broadcom(?) quad-core, 1GB ram, ~60GB USB3 SATA SSD
ssh port: 2235
ssh fingerprints:
256 85:c9:4c:6a:99:6a:d0:61:26:4b:00:2d:19:0c:67:a2 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 39:da:ca:f0:1d:58:b6:fe:97:4b:b4:6a:de:4d:57:b0 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

live well,
  vagrant



Re: [Reproducible-builds] new armhf node (Firefly)

2015-12-26 Thread Vagrant Cascadian
On 2015-12-25, Vagrant Cascadian wrote:
> Have been working a bit on two firefly boards, but need to
> sort out issues with MMC and USB, but at least u-boot is able to boot a
> debian kernel (built with a few config options added)...

Well, today was productive! Figured out the kernel options to enable:

  https://bugs.debian.org/809083

ff2a-armhf-rb.debian.net:
Firefly, rockchip rk3288 (cortex-a12(?)) quad-core, 2GB ram, ~60GB USB3 SATA SSD
ssh port: 2234
ssh fingerprints:
256 63:2a:5a:90:ec:89:42:06:6a:e1:32:e2:c7:bd:b2:f1 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 c1:00:67:e9:25:11:5c:24:89:ee:70:90:40:25:6d:a2 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

Ready for configuration.

Will work on the other one soon.


live well,
  vagrant



Re: [Reproducible-builds] new armhf node (Odroid-XU4)

2015-12-25 Thread Vagrant Cascadian
Ok, got the first of the armhf build network upgrade plan machines up
and running, awaiting integration into the network...

odxu4b-armhf-rb.debian.net:
Odroid-XU4, exynos5422 quad-core, 2GB ram, ~60GB USB3 SATA SSD
ssh port: 2232
ssh fingerprints:
256 71:38:c6:43:9c:85:aa:08:1f:e1:1c:8f:d0:a3:77:d1 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 06:7a:98:0b:5b:84:00:1f:be:0e:7a:80:54:dc:de:70 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

odxu4c-armhf-rb.debian.net:
Odroid-XU4, exynos5422 quad-core, 2GB ram, ~60GB USB3 SATA SSD
ssh port: 2233
ssh fingerprints:
256 33:ae:c6:04:f4:9d:83:94:ce:95:28:69:e8:7c:e7:9a 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 da:3b:c1:28:7c:77:56:60:ec:25:a0:16:00:69:06:0b 
/etc/ssh/ssh_host_rsa_key.pub (RSA)


They currently require manual power cycling if it comes to that, as the
remote power switch hasn't arrived yet.

Should get three more boards by the end of the year, and a couple more
in January. Have been working a bit on two firefly boards, but need to
sort out issues with MMC and USB, but at least u-boot is able to boot a
debian kernel (built with a few config options added)...


live well,
  vagrant



Re: [Reproducible-builds] Second build on failures

2015-12-21 Thread Vagrant Cascadian
On 2015-12-21, Holger Levsen wrote:
> On Donnerstag, 3. Dezember 2015, Vagrant Cascadian wrote:
>> Filed against libc-bin:
>>   https://bugs.debian.org/806911
>> Aurelian Jarno filed a patch upstream to support using the uname26
>> personality:
>>   https://sourceware.org/ml/libc-alpha/2015-12/msg00028.html
>
> ok, cool, thanks!
>  
>> So it might get fixed in future versions ...although we'd need to run
>> from within sid (or backport util-linux) to run on jenkins any time
>> soon...
>
> yeah :/
>  
>> For now, relying on the fact that there are different actual kernels on
>> various builds (4.x vs. 3.x) will hopefully be good enough to detect the
>> issue that using "linux64 --uname-2.6" was trying to solve.
>
> yeah. what I don't like about this is that it forces us to do that. I liked 
> the flexibility using --uname-2.6 gave us…

The impression I got was the patch implementation was rejected upstream,
but in theory a better patch could be written. Aurelian wasn't planning
on working on it.

So if it's wanted for reproducible builds purposes, probably need to
come up with a patch that would get accepted upstream...

live well,
  vagrant



Re: [Reproducible-builds] Patch V2 for build nodes pools

2015-12-21 Thread Vagrant Cascadian
On 2015-12-21, Holger Levsen wrote:
> On Samstag, 19. Dezember 2015, Vagrant Cascadian wrote:
>> I didn't spend any time really figuring out which nodes to add to the
>> example 16th build job, so that might need some adjusting.
>
> put some 4cores in one pool, and 2cores in another?

It could be done any number of ways, I merely added it to show how it
would work with the code I was proposing. I was hoping the pool code
could be ready enough to use with the new nodes that should be coming by
the end of the year, and they'd be reasonable first tests...


>> - Split load estimating into it's own script, and add support for
>> available memory.
>
> I'd still suggest to measure the load constantly by a job outside the build 
> script… (then it's also easy to read "not updated node load since $time" as 
> "node is to busy to be scheduled on…)
>
>> - Call timeout so that the ssh processes don't take too long to complete.
>
> see above, don't ssh from the build script please.

Implementing that outside of the build script would make this much more
complicated...

The second build needs to check for load when it is about to be run, as
it doesn't make sense to check when build_rebuild is run (unless you run
both build1 and build2 in parallel... but that's a whole different
proposal), as the load of the machines is likely to change between the
first and second build.

I'm not sure how to do all that outside the build script and keep the
code reasonably simple.

What's the primary concern with ssh from within the build script? Taking
too long to get a response?


>> diff --git a/bin/reproducible_build.sh b/bin/reproducible_build.sh
>
> I'll only comment on the most "pressing" issues now.
>
>>  build_rebuild() {
>>  FTBFS=1
>>  mkdir b1 b2
>> +local selected_node
>> +selected_node=$(select_least_loaded_node $NODE1_POOL)
>
> please make this somehow conditional so that this code path is not used for 
> "normal operation" (=without this new pooling), so we can test this easily on 
> one builder job, but not on all.

It basically is conditional in that the select_least_loaded_node
function simply returns the node if only one argument is passed.


> so for builder_armhf_16…:
>
>> +++ b/job-cfg/reproducible.yaml
>> +- '16': { my_node1: 'wbd0-armhf-rb:2223
>> wbq0-armhf-r:2225', my_node2: 'bpi0-armhf-rb: odxu4-armhf-rb:2229' } +
>>my_shell: '/srv/jenkins/bin/reproducible_build.sh "{my_node1}"
>
> …reproducible_build.sh should probably be called with "experimental-pooling" 
> as first param, which is then shifted away…

That shouldn't be too hard, sure.

Could alternately use something like:

   - '16': { my_node1: 'pool,wbd0-armhf-rb:2223,wbq0-armhf-rb:2225',
 my_node2: 'pool,bpi0-armhf-rb:,odxu4-armhf-rb:2229' }


Maybe this should be written in two stages, first implementing a simpler
patch just providing failover, and then adding the load checks later.


live well,
  vagrant



[Reproducible-builds] Patch for build nodes pools

2015-12-18 Thread Vagrant Cascadian
I've had this idea that we could make more efficient use of the nodes by
grouping them into pools...

This would hopefully balance out the load on the nodes (most of the
armhf nodes CPUs are idle roughly 25% of the time) a little better.

Perhaps more importantly, it should be much more resilient if one node
is down, as a given build job can use one of several build machines for
the second build.

You can still group pools into categories to ensure diversity in kernel
version, cpu type, operating system, etc.

The load check isn't perfect, but it's better than nothing, maybe good
enough as is. Adding a check for available ram should be simple enough.

Another option is to only use pools for the second build, which still
gets most of the benefits, but perhaps is a little simpler
configuration-wise.

Patch below! No idea if it works, given that I don't have a spare
jenkins.debian.net or build network to test on, but hopefully it
demonstrates the idea, and is mostly there.

My biggest concern with the code is not knowing if setting the NODE1,
PORT1, NODE2 and PORT2 within the function will work correctly and be
available outside of the function for the remainder of the process, or
other functions that run outside of reproducible_build.sh that need to
know those variables.

live well,
  vagrant

commit 200c45bbb5768dce5649b05ad599c85c6bb14b50
Author: Vagrant Cascadian <vagr...@debian.org>
Date:   Fri Dec 18 15:02:32 2015 -0800

Implement support for build pools, and add an example pool.
---
 bin/reproducible_build.sh | 55 +--
 job-cfg/reproducible.yaml |  3 ++-
 2 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/bin/reproducible_build.sh b/bin/reproducible_build.sh
index 338c207..c9d725f 100755
--- a/bin/reproducible_build.sh
+++ b/bin/reproducible_build.sh
@@ -688,9 +688,53 @@ check_buildinfo() {
rm -f $TMPFILE1 $TMPFILE2
 }
 
+select_least_loaded_node() {
+local pool_nodes
+local node
+local port
+local load
+local best_load
+local selected
+# default to the first node
+selected="$1"
+pool_nodes="$@"
+if [ "$selected" = "$pool_nodes" ]; then
+   echo $selected
+   return 0
+fi
+load=0
+best_load=0
+for this_node in $pool_nodes ; do
+   node=$(echo $this_node | cut -d : -f 1)
+   port=$(echo $this_node | cut -d : -f 2)
+   # Compare the number of processors against the load, and add
+   # 1000 so we don't need to bother comparing negative numbers.
+   # 
+   # TODO: account for available memory.
+   #
+   # TODO: this could be improved upon and simplified by calling
+   # a shell script on the remote end.
+   load=$(echo \
+  $(ssh $node -p $port \
+"grep ^processor /proc/cpuinfo | wc -l ; echo ' X 100 + 1000 - 100 X ' ; cut -d ' ' -f 1 /proc/loadavg") | \
+ tr 'X' '*' | \
+ bc | \
+ cut -d . -f 1)
+   if [ "$load" -gt "$best_load" ]; then
+   selected="$this_node"
+   best_load="$load"
+   fi
+done
+echo $selected
+}
+
 build_rebuild() {
FTBFS=1
mkdir b1 b2
+   local selected_node
+   selected_node=$(select_least_loaded_node $NODE1_POOL)
+   NODE1=$(echo $selected_node | cut -d : -f 1)
+   PORT1=$(echo $selected_node | cut -d : -f 2)
remote_build 1 $NODE1 $PORT1
if [ ! -f b1/${SRCPACKAGE}_${EVERSION}_${ARCH}.changes ] && [ -f b1/${SRCPACKAGE}_*_${ARCH}.changes ] ; then
echo "Version mismatch between main node (${SRCPACKAGE}_${EVERSION}_${ARCH}.dsc expected) and first build node ($(ls b1/*dsc)) for $SUITE/$ARCH, aborting. Please upgrade the schroots..." | tee -a ${RBUILDLOG}
@@ -700,6 +744,9 @@ build_rebuild() {
exit 0
elif [ -f b1/${SRCPACKAGE}_${EVERSION}_${ARCH}.changes ] ; then
# the first build did not FTBFS, try rebuild it.
+   selected_node=$(select_least_loaded_node $NODE2_POOL)
+   NODE2=$(echo $selected_node | cut -d : -f 1)
+   PORT2=$(echo $selected_node | cut -d : -f 2)
remote_build 2 $NODE2 $PORT2
if [ -f b2/${SRCPACKAGE}_${EVERSION}_${ARCH}.changes ] ; then
# both builds were fine, i.e., they did not FTBFS.
@@ -750,10 +797,14 @@ elif [ "$1" = "1" ] || [ "$1" = "2" ] ; then
exit 0
 elif [ "$2" != "" ] ; then
MODE="master"
+   NODE1_POOL="$1"
+   NODE2_POOL="$2"
+   # FIXME: postpone setting NODE1/PORT1 and NODE2/PORT2 until the builds
+   # run
NODE1="$(echo $1 | cut -d ':' -f1).debian.net"
NODE2="$(echo $2 | cut -d
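
The TODO in the patch suggests replacing the inline ssh quoting tricks with
a small script on each remote node. A minimal sketch of such a helper
(hypothetical — no such script exists in the repository yet), using the same
CPUs*100 + 1000 - load*100 scoring as the patch:

```shell
#!/bin/sh
# Hypothetical node-side helper: print a single capacity score so the
# master only needs one short ssh call. Higher score = more headroom.

# capacity_score CPUS LOADAVG -> CPUS*100 + 1000 - LOADAVG*100, truncated
capacity_score() {
	awk -v c="$1" -v l="$2" 'BEGIN { printf "%d\n", c * 100 + 1000 - l * 100 }'
}

cpus=$(grep -c ^processor /proc/cpuinfo)
load=$(cut -d ' ' -f 1 /proc/loadavg)
# TODO (as in the patch): also factor in MemAvailable from /proc/meminfo
capacity_score "$cpus" "$load"
```

The master side would then reduce to one `ssh $node -p $port` invocation of
the helper (script name and location are up for grabs) plus a numeric
comparison, instead of the tr/bc pipeline in the inline version.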

Re: [Reproducible-builds] new armhf node (Odroid-C1+)

2015-12-06 Thread Vagrant Cascadian
On 2015-12-05, Vagrant Cascadian wrote:
> It might not come online for real until Sunday evening (just need to
> move it into place with all the others), but figured I'd announce it
> now:
>
> odc1-armhf-rb.debian.net:
...
> This is the first one running an older version of linux, 3.10, and using
> the kernel package from odrobian

Which seems to be having disk issues, meh. Not ready after all...


live well,
  vagrant



[Reproducible-builds] new armhf node (Odroid-C1+)

2015-12-05 Thread Vagrant Cascadian
It might not come online for real until Sunday evening (just need to
move it into place with all the others), but figured I'd announce it
now:

odc1-armhf-rb.debian.net:
Odroid-C1+, quad-core Amlogic (meson8b? cortex-a5), 1GB ram, ~60GB USB3 SATA SSD
ssh port: 2231
ssh fingerprints:
256 9e:09:96:a8:98:b6:a2:3b:36:16:eb:a3:28:5f:df:e9 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 82:ee:b4:5f:00:36:05:f5:2d:ed:9e:96:2b:d1:52:a2 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

This is the first one running an older version of linux, 3.10, and using
the kernel package from odrobian, rather than a kernel from debian, or a
lightly/moderately patched kernel based on debian's kernel packaging.

The u-boot is terribly old...

This one has been a bit finicky to set up, but hopefully it'll add a few
more CPU cycles to the build network...


live well,
  vagrant



Re: [Reproducible-builds] blacklist armhf: vxl qtbase-opensource-src

2015-12-04 Thread Vagrant Cascadian
On 2015-12-03, Holger Levsen wrote:
> On Donnerstag, 3. Dezember 2015, Vagrant Cascadian wrote:
>> Both hit the 12 hour timeout at least once, and armhf is slow enough,
>> and hasn't neared 100% enough to keep building these...
>
> no need to justify things, I assume you know what you're doing :)

Since this is the first time I've emailed a blacklist request for
armhf to the list, I figured I'd at least document a reason, given that
nearly all of the armhf blacklisted packages are for the same reason. :)


live well,
  vagrant



[Reproducible-builds] blacklist armhf: vxl qtbase-opensource-src

2015-12-03 Thread Vagrant Cascadian
Please (add to the growing) blacklist on armhf: vxl qtbase-opensource-src

Both hit the 12 hour timeout at least once, and armhf is slow enough,
and hasn't neared 100% enough to keep building these...

live well,
  vagrant



Re: [Reproducible-builds] Second build on failures

2015-12-02 Thread Vagrant Cascadian
On 2015-12-01, Reiner Herrmann <rei...@reiner-h.de> wrote:
> On Tue, Dec 01, 2015 at 02:13:07PM -0800, Vagrant Cascadian wrote:
>> Hey, I think all of the second builds on armhf are failing to set up the
>> build environment:
>> 
>>   
>> https://reproducible.debian.net/logs/unstable/armhf/gb_0.3.2-1.build2.log.gz
>> 
>>   I: Installing the build-deps
>>   I: user script 
>> /srv/workspace/pbuilder/5651/tmp/hooks/D01_modify_environment starting
>>   FATAL: kernel too old
>
> Interesting... According to codesearch this comes from glibc [1].
> It could be related to "linux64 --uname-2.6", which we use to fake a
> different kernel version.
>
> [1]: 
> https://sources.debian.net/src/glibc/2.19-18/sysdeps/unix/sysv/linux/dl-osinfo.h/?hl=45#L45

Indeed, the "linux64 --uname-2.6" seems to be the culprit.

I first tested removing the linux64 call on two of the nodes, and they
were the only ones successfully running second builds for the last
several hours... so I've just now removed the linux64 calls on all the
armhf nodes.

Would be best to investigate the issue further...

On a related note, it might be worth trying "setarch uname26" instead of
linux64 (which is just a symlink to setarch). But I doubt if this will
change anything with the above issue.

live well,
  vagrant



[Reproducible-builds] Bug#806911: libc-bin: ldconfig segfaults when run using "setarch uname26"

2015-12-02 Thread Vagrant Cascadian
Package: libc-bin
Version: 2.21-1
Severity: normal
X-Debbugs-Cc: reproducible-builds@lists.alioth.debian.org

Apparently, when run with "setarch uname26" or "linux64 --uname-2.6",
ldconfig segfaults.

  setarch uname26 ldconfig
  FATAL: kernel too old
  Segmentation fault

libc-bin version 2.19-22 in stretch does not segfault when run this
way.

I haven't tried, but this may also fail similarly when run on an old
kernel as well.

At the very least, maybe it shouldn't segfault with old kernels.

The reproducible builds project use "linux64 --uname-2.6" to set a
different kernel version for the second build to find bugs in packages
that build differently depending on the running kernel version, and it
would be nice if this would continue to work.


live well,
  vagrant



[Reproducible-builds] Second build on failures

2015-12-01 Thread Vagrant Cascadian
Hey, I think all of the second builds on armhf are failing to set up the
build environment:

  https://reproducible.debian.net/logs/unstable/armhf/gb_0.3.2-1.build2.log.gz

  I: Installing the build-deps
  I: user script /srv/workspace/pbuilder/5651/tmp/hooks/D01_modify_environment 
starting
  FATAL: kernel too old


Also saw this message on an amd64/experimental build, although much
later in the build environment setup:

  
https://reproducible.debian.net/logs/experimental/amd64/python-letsencrypt_0.0.0.dev20151123-1.build2.log.gz

  Traitement des actions différées (« triggers ») pour libc-bin (2.21-1)...
  FATAL: kernel too old
  Segmentation fault
  FATAL: kernel too old
  Segmentation fault
  dpkg: erreur de traitement du paquet libc-bin (--unpack)


Hrm.


live well,
  vagrant



[Reproducible-builds] new armhf node (Raspberry PI 2B)

2015-11-19 Thread Vagrant Cascadian
And another!

rpi2b-armhf-rb.debian.net:
Raspberry PI 2B, broadcom(?) quad-core, 1GB ram, ~60GB USB3 SATA SSD
ssh port: 2230
ssh fingerprints:
256 42:3a:92:86:e8:a9:db:2f:1e:86:9b:f2:07:fa:87:97 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 d1:a8:26:2a:0a:c9:1b:67:0d:c9:48:95:23:4b:9f:31 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

Even though this is a quad-core, since it only has 1GB of memory, might
be worth configuring it with only as many jobs as the other machines
with 1GB of memory (but maybe with parallel=4 ?)...

In related news, upgraded odxu4 to an SSD.


live well,
  vagrant



[Reproducible-builds] new armhf node (Wandboard Dual)

2015-11-10 Thread Vagrant Cascadian
And another one!

wbd0-armhf-rb.debian.net:
Wandboard-Dual, imx6 dual-core, 1GB ram, ~60GB USB2 SATA spinning disk
ssh port: 2223
ssh fingerprints:
256 d9:50:c6:3b:f8:1d:38:ac:97:ea:62:ea:37:cd:98:ed 
/etc/ssh/ssh_host_ecdsa_key.pub (ECDSA)
2048 48:4c:9c:b4:b7:41:76:6f:36:0d:eb:b5:83:bf:66:4c 
/etc/ssh/ssh_host_rsa_key.pub (RSA)

Disk is a bit slow, SSD on the way, though USB2 might be a limiting
factor...


live well,
  vagrant



Re: [Reproducible-builds] new armhf node (Odroid-XU4)

2015-11-09 Thread Vagrant Cascadian
On 2015-11-09, Holger Levsen wrote:
> thanks a lot for this new node. I've set it up and included in the machinery
> now

That was fast... yay!


> btw, how will you attach the ssd? 
> http://archlinuxarm.org/platforms/armv7/samsung/odroid-xu4 only speaks about
> sd-cards as far as I could see.

USB3 to SATA adapter, same way the HD is attached now. Could alternately
get a 64GB eMMC for around $80, but not sure how fast they are by
comparison:

  http://ameridroid.com/products/64gb-emmc-50-module-xu3-linux

FWIW the current setup build times for...

u-boot:

real    98m55.905s
user    163m25.932s
sys     24m50.184s

linux:

real    430m0.811s
user    1197m29.396s
sys     94m45.264s

For at least 60 of those minutes it was building both concurrently, with
DEB_BUILD_OPTS=parallel=5, and didn't seem to fall over.


> please reboot to make sure all modifications are sane.

I did think about trying to make a quick hack to get a newer (and saner)
u-boot version running, but I might wait a bit for that.


> last and least: odxu4 breaks your own naming scheme - not sure what else 
> though ;)

Heh. The naming scheme isn't exactly consistent, true. It should
probably be odxu40 or odxu4-0, but figured I'd keep it shorter. :)


live well,
  vagrant



Re: [Reproducible-builds] Reproducible U-Boot build support, using SOURCE_DATE_EPOCH

2015-09-30 Thread Vagrant Cascadian
On 2015-09-28, Paul Kocialkowski wrote:
> Le jeudi 24 septembre 2015 à 09:05 -0700, Vagrant Cascadian a écrit :
>> I think the use of "time = mktime(time_universal);" is where the problem
>> lies:
>
> […]
>
>> According to the mktime manpage:
>> 
>>The  mktime()  function converts a broken-down time structure,
>>expressed as local time, to calendar time representation.  
>> 
>> So my interpretation is that it's taking the UTC time and converting it
>> into local time using the configured timezone... not sure what would be
>> a viable alternative to mktime.
>
> That seems to make sense. Come to think of it, it probably was not
> necessary to call gmtime in the first place: if SOURCE_DATE_EPOCH is
> always in UTC, we should be able to stick that as-is in the time
> variable. At best, gmtime + mktime (assuming mktime working in UTC)
> would give us back the same timestamp.
>
> What do you think? Please let me know if I'm wrong.

This patch on top of 2015.10-rc4 seems to resolve the issue for me:

Index: u-boot/tools/default_image.c
===
--- u-boot.orig/tools/default_image.c
+++ u-boot/tools/default_image.c
@@ -108,8 +108,6 @@ static void image_set_header(void *ptr,
fprintf(stderr, "%s: SOURCE_DATE_EPOCH is not valid\n",
__func__);
time = 0;
-   } else {
-   time = mktime(time_universal);
}
} else {
time = sbuf->st_mtime;


It still checks for the validity of SOURCE_DATE_EPOCH using gmtime, but
doesn't call mktime at all, just re-uses the value set from
SOURCE_DATE_EPOCH.


live well,
  vagrant



Re: [Reproducible-builds] Reproducible U-Boot build support, using SOURCE_DATE_EPOCH

2015-09-24 Thread Vagrant Cascadian
On 2015-07-26, Paul Kocialkowski wrote:
> In order to achieve reproducible builds in U-Boot, timestamps that are defined
> at build-time have to be somewhat eliminated. The SOURCE_DATE_EPOCH 
> environment
> variable allows setting a fixed value for those timestamps.
...
> However, some other devices might need some more tweaks, especially regarding
> the image generation tools.

With this patch, there is still variation based on timezone in any of
the u-boot.img and u-boot-sunxi-with-spl.bin produced in the Debian
packages:

  https://reproducible.debian.net/rb-pkg/unstable/armhf/u-boot.html

The good news is that all the u-boot.bin targets are produced
reproducibly, so here's to progress!


I think the use of "time = mktime(time_universal);" is where the problem
lies:

> diff --git a/tools/default_image.c b/tools/default_image.c
> index cf5c0d4..18940af 100644
> --- a/tools/default_image.c
> +++ b/tools/default_image.c
> @@ -96,9 +99,25 @@ static void image_set_header(void *ptr, struct stat *sbuf, 
> int ifd,
>   sizeof(image_header_t)),
>   sbuf->st_size - sizeof(image_header_t));
>  
> + source_date_epoch = getenv("SOURCE_DATE_EPOCH");
> + if (source_date_epoch != NULL) {
> + time = (time_t) strtol(source_date_epoch, NULL, 10);
> +
> + time_universal = gmtime(&time);
> + if (time_universal == NULL) {
> + fprintf(stderr, "%s: SOURCE_DATE_EPOCH is not valid\n",
> + __func__);
> + time = 0;
> + } else {
> + time = mktime(time_universal);
> + }
> + } else {
> + time = sbuf->st_mtime;
> + }
> +
>   /* Build new header */
>   image_set_magic(hdr, IH_MAGIC);
> - image_set_time(hdr, sbuf->st_mtime);
> + image_set_time(hdr, time);
>   image_set_size(hdr, sbuf->st_size - sizeof(image_header_t));
>   image_set_load(hdr, params->addr);
>   image_set_ep(hdr, params->ep);
> -- 
> 1.9.1

According to the mktime manpage:

   The  mktime()  function converts a broken-down time structure,
   expressed as local time, to calendar time representation.  

So my interpretation is that it's taking the UTC time and converting it
into local time using the configured timezone... not sure what would be
a viable alternative to mktime.

Running with the TZ=UTC environment variable exported works around the
problem; not sure if it would be appropriate to always run with TZ=UTC
when SOURCE_DATE_EPOCH is set...


live well,
  vagrant



Re: [Reproducible-builds] [U-Boot] [U-Boot, v2] Reproducible U-Boot build support, using SOURCE_DATE_EPOCH

2015-08-25 Thread Vagrant Cascadian
On 2015-08-25, Andreas Bießmann wrote:
> On 07/28/2015 05:00 PM, Tom Rini wrote:
>> On Sun, Jul 26, 2015 at 06:48:15PM +0200, Paul Kocialkowski wrote:
>>> In order to achieve reproducible builds in U-Boot, timestamps that are
>>> defined at build-time have to be somewhat eliminated. The SOURCE_DATE_EPOCH
>>> environment variable allows setting a fixed value for those timestamps.
>>>
>>> Simply by setting SOURCE_DATE_EPOCH to a fixed value, a number of targets
>>> can be built reproducibly. This is the case for e.g. sunxi devices.
>>>
>>> However, some other devices might need some more tweaks, especially
>>> regarding the image generation tools.
>>>
>>> Signed-off-by: Paul Kocialkowski cont...@paulk.fr
>>
>> Applied to u-boot/master, thanks!
>
> This commit breaks the build on non-GNU hosts (like OS X and presumably
> other *BSD hosts). Before, those hosts were supported, so for me this
> has to be fixed for 2015.10.
>
> We need a) some mechanism to search for the GNU date variant or b) some
> wrapper to provide the correct output on those host machines.
>
> I vote for a): it is acceptable to have GNU date available, but we
> should error on 'no GNU date available'. Furthermore we need to have the
> date command exchangeable by e.g. gdate, gnudate, ... maybe with full path.

There was a proposed patch which only uses the GNU date extensions if the
SOURCE_DATE_EPOCH environment variable is set; would this sufficiently
address your concerns, at least for the short term?

  Message-Id: 1438337042-30762-1-git-send-email-judge.pack...@gmail.com
  http://lists.denx.de/pipermail/u-boot/2015-August/221429.html


live well,
  vagrant



Re: [Reproducible-builds] dpkg_1.18.1.0~reproducible5 ftbfs

2015-08-04 Thread Vagrant Cascadian
On 2015-07-31, Guillem Jover wrote:
> On Fri, 2015-07-31 at 16:49:13 +0200, Holger Levsen wrote:
>> so yesterday I tried to build
>> http://reproducible.alioth.debian.org/debian/dpkg_1.18.1.0~reproducible5.dsc
>> with pbuilder on sid/armhf and that failed _exactly_ like
>> https://reproducible.debian.net/rbuild/unstable/amd64/dpkg_1.18.1.0~reproducible5.rbuild.log
>>
>> I then tried to build dpkg_1.18.1.dsc from sid proper and that built
>> flawlessly on sid/armhf.
>>
>> Thus I conclude our reproducible patches cause this, whatever this is. Yet I
>> lack time atm to debug this, so this mail is merely a note to Guillem and a
>> call for help to the reproducible folks.
>
> Right, I noticed this quite some time ago, but forgot to bring it up.
> W/o having checked anything, it might be that whoever prepared the
> release perhaps forgot to «autoreconf -f -i» the sources and prepared
> it from a previous .dsc instead of a git tree?

I *think* what's happening is the reproducible builds git tree doesn't
include tags for the version, and .dist-version isn't present, so
./get-version is returning an empty string (it can't get it from git or
from .dist-version), so autoreconf -f -i fails... running:

  echo 1.18.1.0~reproducible5  .dist-version
  autoreconf -f -i
  debuild -us -uc # or your build of choice

Seems to fix/workaround the issue.


live well,
  vagrant



Re: [Reproducible-builds] ARM build machines

2015-08-03 Thread Vagrant Cascadian
On 2015-08-03, Holger Levsen wrote:
> On Samstag, 1. August 2015, Vagrant Cascadian wrote:
>> wbq0-armhf-rb.debian.net:
>
> $ host wbq0-armhf-rb.debian.net
> Host wbq0-armhf-rb.debian.net not found: 3(NXDOMAIN)

Oops. Accidentally dropped it on one of the debian.net updates. Re-added
just now, got the ack that it was added, now just need to wait for DNS
propagation...

(as you may notice, they're all CNAMEs with a consistent pattern, so you
could try accessing them directly in a pinch...)


live well,
  vagrant



Re: [Reproducible-builds] ARM build machines

2015-07-31 Thread Vagrant Cascadian
On 2015-07-27, Vagrant Cascadian wrote:
> The current specs:
>
> bpi0-armhf-rb.debian.net:
> Banana PI, dual-core, 1GB ram, ~20GB mSATA.
> ssh port: 
> ssh fingerprints:
> 2048 8f:0a:d0:77:a8:59:c0:bb:d0:76:de:14:13:5b:a6:56 (RSA)
> 256 af:b4:13:04:21:30:46:b3:e8:79:ff:7d:99:20:86:f0 (ECDSA)
>
> hb0-armhf-rb.debian.net:
> HummingBoard i2ex, dual-core, 1GB ram, ~20GB mSATA.
> ssh port: 2224
> ssh fingerprints:
> - 2048 04:af:b4:e8:f0:13:13:66:25:7b:e3:d6:ee:b3:0d:0a (RSA)
> - 256 2f:1b:3a:fb:55:cf:27:3f:f6:de:e4:3d:e1:4c:59:c8 (ECDSA)

Added new machine...

wbq0-armhf-rb.debian.net:
Wandboard Quad, quad-core, 2GB ram, ~50GB SATA SSD
ssh port: 2225
ssh fingerprints:
- 2048 ca:04:3e:d6:92:7c:32:20:1e:2f:d7:41:df:29:19:15 (RSA)
- 256 4f:57:be:2e:aa:a4:7d:b2:6d:18:8a:d3:35:5e:df:a6 (ECDSA)


Holger, I've added access for you with the same key, so tear it up!


live well,
  vagrant



Re: [Reproducible-builds] ARM build machines

2015-07-28 Thread Vagrant Cascadian
On 2015-07-28, Holger Levsen wrote:
> do you need to be cc:ed?

I am subscribed to the list as of a few weeks ago.


> On Montag, 27. Juli 2015, Vagrant Cascadian wrote:
>> So I set about trying to patch u-boot and test reproducibility myself,
>> only to discover that the reproducible toolchain didn't include arm
>> packages...
>
> yes, we either need to make sure to manually build there in future too, or
> setup some autobuilders...

Would it be feasible to use the same machines for that?


>> I've currently got two up and running, but it sounds like there's more
>> work to do on the backend to integrate with jenkins and other
>> infrastructure...
>
> yes, we need to make some changes to our infrastructure as well:
>
> - add scheduling for armhf
> - have jenkins jobs to prepare pbuilder chroots for armhf on these machines
> - add support for building on remote machines
> - integrate the armhf build results into the web ui

Regarding scheduling, I think we'll need a *lot* more nodes to get real
archive coverage, or maybe selectively schedule packages for builds that are
known or suspected to have architecture-specific divergence (e.g. builds
different packages on different architectures).


> how is the network connectivity? I assume we want to use a proxy, though I
> have no idea whether one for both machines or one on each machine (though
> with 20gb diskspace there is not really space for a proxy...)

Network isn't amazingly fast, but there's a squid-deb-proxy on the local
network for package caching with ~40GB for the cache, and it could be
expanded if needed.


>> Let me know if you need access to these machines to configure them.
>
> yes, we need access. Preferably root/sudo, else user ssh access (account name
> holger for me please) and we need a jenkins user with a quite liberal
> sudoers.d...

Will give you access and free reign to set up as you wish. :)


live well,
  vagrant



Re: [Reproducible-builds] [PATCH] Reproducible U-Boot build support, using SOURCE_DATE_EPOCH

2015-07-20 Thread Vagrant Cascadian
On 2015-07-20, Paul Kocialkowski wrote:
> In order to achieve reproducible builds in U-Boot, timestamps that are defined
> at build-time have to be somewhat eliminated. The SOURCE_DATE_EPOCH environment
> variable allows setting a fixed value for those timestamps.
...
> diff --git a/Makefile b/Makefile
> index 37cc4c3..71aeac7 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -1231,9 +1231,10 @@ define filechk_version.h
>  endef
>  
>  define filechk_timestamp.h
> -	(LC_ALL=C date +'#define U_BOOT_DATE "%b %d %C%y"'; \
> -	LC_ALL=C date +'#define U_BOOT_TIME "%T"'; \
> -	LC_ALL=C date +'#define U_BOOT_TZ "%z"')
> +	(SOURCE_DATE=$${SOURCE_DATE_EPOCH:+@$$SOURCE_DATE_EPOCH}; \
> +	LC_ALL=C date -u -d "$${SOURCE_DATE:-now}" +'#define U_BOOT_DATE "%b %d %C%y"'; \
> +	LC_ALL=C date -u -d "$${SOURCE_DATE:-now}" +'#define U_BOOT_TIME "%T"'; \
> +	LC_ALL=C date -u -d "$${SOURCE_DATE:-now}" +'#define U_BOOT_TZ "%z"')
>  endef
>  
>  $(version_h): include/config/uboot.release FORCE

This does effectively hard-code U_BOOT_TZ to UTC; may as well not call
date for setting U_BOOT_TZ. Or conditionally set it to UTC only when
SOURCE_DATE_EPOCH is set?

Any reason not to use the longhand options for date, e.g. --utc and
--date? They're more readable; are they less portable?
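
For illustration, here is a sketch of the recipe above spelled with the
longhand options (GNU date assumed; the epoch value is an arbitrary
example, not from the patch):

```shell
# Mimic the filechk_timestamp.h recipe with longhand GNU date options.
SOURCE_DATE_EPOCH=1437350400
SOURCE_DATE=${SOURCE_DATE_EPOCH:+@$SOURCE_DATE_EPOCH}
LC_ALL=C date --utc --date "${SOURCE_DATE:-now}" '+#define U_BOOT_DATE "%b %d %C%y"'
LC_ALL=C date --utc --date "${SOURCE_DATE:-now}" '+#define U_BOOT_TIME "%T"'
LC_ALL=C date --utc --date "${SOURCE_DATE:-now}" '+#define U_BOOT_TZ "%z"'
# prints:
#   #define U_BOOT_DATE "Jul 20 2015"
#   #define U_BOOT_TIME "00:00:00"
#   #define U_BOOT_TZ "+0000"
```

Note that with --utc in effect, %z always expands to +0000, so U_BOOT_TZ
ends up effectively hard-coded, which is the point raised above.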


live well,
  vagrant



Re: [Reproducible-builds] [U-Boot] [PATCH] build: create time and date independent binary

2015-07-19 Thread Vagrant Cascadian
On 2015-07-19, Holger Levsen wrote:
>> All this said, if you send me patches, I will probably deploy them as I'm
>> very curious and more reproducibility efforts are good :-) We can
>> always decide to remove or move them later.
>
> I wish to make all contributions upstream. What would really help at
> first would be to have all targets built regularly to see where work is
> needed. This is where I think the Debian infrastructure could help, in a
> similar way as what was started for Coreboot.

FWIW, I was planning on including this patch to u-boot in the next
upload to Debian:

  
https://anonscm.debian.org/cgit/collab-maint/u-boot.git/tree/debian/patches/use-date-from-debian-changelog.patch?h=experimental-2015.07

I *think* that actually makes u-boot build reproducibly with Debian's
reproducible builds toolchain when SOURCE_DATE_EPOCH is set, but I
haven't tested it fully. I might have missed some other sources of
non-determinism...


Hoping to get some armhf buildd nodes up an running soonish... although
it should also be buildable with the cross-toolchains, if the
reproducible buildds coulld be made to support that.


live well,
  vagrant



[Reproducible-builds] [U-Boot] [PATCH] build: create time and date independent binary

2015-06-17 Thread Vagrant Cascadian
Thanks for sharing the patches to set U_BOOT_DATE/U_BOOT_TIME/U_BOOT_TZ.

I happened to independently work on a similar patch in the last few
days:

Index: u-boot/Makefile
===
--- u-boot.orig/Makefile
+++ u-boot/Makefile
@@ -1231,10 +1231,14 @@ define filechk_version.h
echo \#define LD_VERSION_STRING \$$($(LD) --version | head -n 1)\; )
 endef
 
+ifeq ($(BUILD_DATE),)
+  BUILD_DATE = $(shell date)
+endif
+
 define filechk_timestamp.h
-	(LC_ALL=C date +'#define U_BOOT_DATE "%b %d %C%y"'; \
-	LC_ALL=C date +'#define U_BOOT_TIME "%T"'; \
-	LC_ALL=C date +'#define U_BOOT_TZ "%z"')
+	(LC_ALL=C date --utc --date "$(BUILD_DATE)" +'#define U_BOOT_DATE "%b %d %C%y"'; \
+	LC_ALL=C date --utc --date "$(BUILD_DATE)" +'#define U_BOOT_TIME "%T"'; \
+	LC_ALL=C date --utc --date "$(BUILD_DATE)" +'#define U_BOOT_TZ "%z"')
 endef
 
 $(version_h): include/config/uboot.release FORCE


This allows us to set a specific timestamp by passing BUILD_DATE (or
whatever standardized variable name is agreed upon) in debian/rules
taken from the debian/changelog entry.

I do see some value in there being a time-based string there, as it's
displayed to the user at boot and can be useful for troubleshooting what
build they're booting...

I'll likely include this patch or something like it in the next u-boot
uploads to Debian; though it'd be great to get it merged upstream.


Interestingly, u-boot kind of flies under the reproducible builds radar
as the amd64 build doesn't include any binaries impacted by the
datestrings. I've been making mumblings on irc of setting up some armhf
nodes for testing reproducible builds on armhf to catch some of these
packages...


live well,
  vagrant



[Reproducible-builds] Bug#778571: sbuild: predictible build location for reproducible builds

2015-02-16 Thread Vagrant Cascadian
Package: sbuild
Version: 0.65.0-1
Severity: wishlist
Tags: patch

Thanks for maintaining sbuild!

In order to use sbuild for reproducible builds, the build dir needs to
be consistent across rebuilds, but sbuild currently generates a
randomly named build dir.


The following proof-of-concept patch does this by setting the build
dir to /build/PACKAGE-VERSION rather than /build/PACKAGE-XX:

From 15b77405a67faaea7bc3974a4e7a3862620d0b42 Mon Sep 17 00:00:00 2001
From: Vagrant Cascadian vagr...@debian.org
Date: Fri, 13 Feb 2015 23:18:23 -0800
Subject: [PATCH 1/2] Make predictible build dir location based on the package
 version.

---
 lib/Sbuild/Build.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/Sbuild/Build.pm b/lib/Sbuild/Build.pm
index 155e4fc..5149a8a 100644
--- a/lib/Sbuild/Build.pm
+++ b/lib/Sbuild/Build.pm
@@ -396,8 +396,8 @@ sub run_chroot_session {
 	# TODO: Don't hack the build location in; add a means to customise
 	# the chroot directly.  i.e. allow changing of /build location.
 	$self->set('Chroot Build Dir',
-		   tempdir($self->get('Package') . '-XXXXXX',
-			   DIR => $session->get('Location') . "/build"));
+		   $session->get('Location') . "/build/" . $self->get('Package') . "-" . $self->get('Version'));
+	mkdir $self->get('Chroot Build Dir');
 
 	$self->set('Build Dir', $session->strip_chroot_path($self->get('Chroot Build Dir')));
 
-- 
2.1.4


This patch is really only a small step forward, making the build dir
consistent only when building with sbuild. Ideally, the build dir
should be set to /usr/src/debian/PACKAGE-VERSION (or some other widely
agreed upon dir) so that builds would be reproducible if built with
other tools such as pbuilder using the same build dir.

Making the build dir configurable might also help...


live well,
  vagrant

-- System Information:
Debian Release: 8.0
  APT prefers testing
  APT policy: (500, 'testing'), (120, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386, armhf

Kernel: Linux 3.16.0-4-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages sbuild depends on:
ii  adduser 3.113+nmu3
ii  apt-utils   1.0.9.6
ii  libsbuild-perl  0.65.0-1
ii  perl5.20.1-5
ii  perl-modules5.20.1-5

Versions of packages sbuild recommends:
ii  debootstrap  1.0.66
ii  fakeroot 1.20.2-1

Versions of packages sbuild suggests:
pn  deborphan  <none>
ii  wget   1.16-1

-- no debconf information



[Reproducible-builds] Bug#778570: sbuild: ignore .buildinfo files in .changes

2015-02-16 Thread Vagrant Cascadian
Package: sbuild
Version: 0.65.0-1
Severity: wishlist
Tags: patch
X-Debbugs-Cc: reproducible-builds@lists.alioth.debian.org

Thanks for maintaining sbuild!

When using dpkg from the reproducible builds toolchain, it generates a
.buildinfo file in the .changes file:

  https://wiki.debian.org/ReproducibleBuilds/ExperimentalToolchain#dpkg


When .buildinfo files are present in the .changes, sbuild treats it as
an attempted build, rather than a successful build; it appears to be
treating the .buildinfo file as a .deb and tries to unpack it:

  ltsp_5.5.4-4~20150213~1_amd64.buildinfo
  ───

  dpkg-deb: error: 
`/«CHROOT»/«BUILDDIR»/ltsp_5.5.4-4~20150213~1_amd64.buildinfo' is not a debian 
format archive

  dpkg-deb: error: 
`/«CHROOT»/«BUILDDIR»/ltsp_5.5.4-4~20150213~1_amd64.buildinfo' is not a debian 
format archive


The following patch should fix/workaround this:

From 8468411099b8ec28641df015742784b63b98b573 Mon Sep 17 00:00:00 2001
From: Vagrant Cascadian vagr...@debian.org
Date: Fri, 13 Feb 2015 23:51:11 -0800
Subject: [PATCH 2/2] Ignore .buildinfo files produced by reproducible builds.

---
 lib/Sbuild/Build.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/Sbuild/Build.pm b/lib/Sbuild/Build.pm
index 5149a8a..f15e94a 100644
--- a/lib/Sbuild/Build.pm
+++ b/lib/Sbuild/Build.pm
@@ -1768,6 +1768,8 @@ sub build {
 	foreach (@debcfiles) {
 	    my $deb = "$build_dir/$_";
 	    next if $deb !~ /(\Q$host_arch\E|all)\.[\w\d.-]*$/;
+	    # ignore .buildinfo files produced by reproducible builds.
+	    next if $deb =~ /\.*buildinfo$/;
 
 	    $self->log_subsubsection($_);
 	    if (!open( PIPE, "dpkg --info $deb 2>&1 |" )) {
-- 
2.1.4


live well,
  vagrant


-- System Information:
Debian Release: 8.0
  APT prefers testing
  APT policy: (500, 'testing'), (120, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Foreign Architectures: i386, armhf

Kernel: Linux 3.16.0-4-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_US.utf8, LC_CTYPE=en_US.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages sbuild depends on:
ii  adduser 3.113+nmu3
ii  apt-utils   1.0.9.6
ii  libsbuild-perl  0.65.0-1
ii  perl5.20.1-5
ii  perl-modules5.20.1-5

Versions of packages sbuild recommends:
ii  debootstrap  1.0.66
ii  fakeroot 1.20.2-1

Versions of packages sbuild suggests:
pn  deborphan  <none>
ii  wget   1.16-1

-- no debconf information



Re: [Reproducible-builds] Patches for sbuild reproducible builds

2015-02-16 Thread Vagrant Cascadian
On 2015-02-16, Holger Levsen wrote:
> On Samstag, 14. Februar 2015, Vagrant Cascadian wrote:
>
>> One patch sets the build dir to a static location based on the package
>> version, rather than a tempdir with a random string.
>>
>> The other patch ignores .buildinfo files in the .changes when displaying
>> package contents. Without it, the build status gets set to attempted
>> and is treated as a failed build:
>
> cool! I think it would be good if you could file a wishlist bug against
> sbuild (please add support for reproducible building of packages with
> sbuild) and attach the patches there, so they do get lost.

Ok, I'll submit bugs about it so that they do *not* get lost. :)


> the current setup on jenkins.d.n just uses pbuilder twice. Maybe we should
> use sbuild once and pbuilder once? Could you help us with patches for that
> too? I at least have no clue about sbuild whatsoever ;-) We'd need to use
> our custom repo anyway, so using an sbuild rebuild plus your two patches is
> rather trivial...

The patch that configures the build dir would have to be rewritten for
that to work; sbuild seems to hard-code /build/ in a number of places,
rather than the proposed /usr/src/debian/PACKAGE-VERSION; I'm not sure
what directory pbuilder is configured to use.

Also, pbuilder and sbuild would need to install the exact same set of
build dependencies; there are various different dependency resolvers,
which might result in differences that could cause unreproducibility...
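One rough way to spot such resolver-induced differences (a sketch with invented package lists, not something from the jenkins setup) is to diff the installed package sets of the two build environments:

```shell
# Invented "dpkg-query -W"-style output from two build chroots, pre-sorted.
printf '%s\n' 'gcc-4.9 4.9.2-10' 'perl 5.20.1-5' > pkgs-pbuilder.txt
printf '%s\n' 'gcc-4.9 4.9.1-19' 'perl 5.20.1-5' > pkgs-sbuild.txt

# comm -3 prints only the lines unique to either file; empty output
# would mean both resolvers installed identical package versions.
comm -3 pkgs-pbuilder.txt pkgs-sbuild.txt
```

In a real setup the lists would come from running `dpkg-query -W | sort` inside each chroot after dependency installation.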

Could also add a third (and/or fourth) build with sbuild, to catch
issues where a package is rebuildable with pbuilder or sbuild, but
doesn't produce consistent results when built once with pbuilder and
once with sbuild...


> git clone ssh://git.debian.org/git/qa/jenkins.debian.net.git and then check
> bin/reproducible_build.sh line 193 and 207 to see how pbuilder is used.
>
> Using sbuild once instead should be trivial, no? ;-)

With the above two issues (and perhaps other undiscovered issues) fixed,
it should be as simple as replacing:

  pbuilder --build ... --distribution sid ${SRCPACKAGE}_*.dsc

with:

  sbuild --chroot sid ${SRCPACKAGE}_*.dsc

With appropriately configured sbuild/schroot chroots.
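A sketch of the comparison step that would follow such a double build (the sbuild invocations themselves are elided here; the two build trees are faked with identical dummy files purely for illustration):

```shell
# Pretend these are the artifacts of two independent builds of the same
# source package; a real run would populate them via sbuild/pbuilder.
mkdir -p build1 build2
echo "dummy payload" > build1/foo_1.0-1_amd64.deb
echo "dummy payload" > build2/foo_1.0-1_amd64.deb

# Record checksums from the first build, then verify the second against them.
( cd build1 && sha256sum *.deb ) > first.sum
if ( cd build2 && sha256sum -c ../first.sum >/dev/null 2>&1 ); then
  result=reproducible
else
  result=unreproducible
fi
echo "$result"
```

A mismatching checksum on any artifact would flag the package as unreproducible across the two builders.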


live well,
  vagrant

p.s. not (yet) subscribed to list, so if you need me to respond, please
CC me.

