Re: [Reproducible-builds] Performance of armhf boards

2016-04-17 Thread Vagrant Cascadian
On 2016-04-17, Steven Chamberlain wrote:
> I was wondering about the performance of various armhf boards for
> package building.  Of course, the reproducible-builds team have a lot
> of stats already.  Below I'm sharing the query I used and the results,
> in case anyone else is interested.

Thanks for crunching some numbers and sharing!

Somewhat similar numbers are calculated daily:

  https://jenkins.debian.net/view/reproducible/job/reproducible_nodes_info/lastBuild/console


> Figures will vary based on which packages were assigned to which
> node, as some are easier to build than others, but I hope over 21
> days that variance is smoothed out.

I wonder if 21 days is long enough to average things out; some builds
take 12 hours or more, while others take only a few minutes.
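
One way to eyeball that (a rough sketch, untested, assuming the same
reproducible.db schema as the query in your mail below) would be to
look at the per-node spread of build durations:

$ sqlite3 -column -header reproducible.db \
   "SELECT node1,
           MIN(CAST(build_duration AS INTEGER)) AS min_s,
           MAX(CAST(build_duration AS INTEGER)) AS max_s,
           CAST(AVG(build_duration) AS INTEGER) AS avg_s
    FROM results
    WHERE build_duration != '' AND build_duration != '0'
    GROUP BY node1"

If max_s dwarfs avg_s everywhere, a handful of huge builds could still
skew a 21-day window.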


> Assuming the nodes had no downtime, we can compare pkgs_built over
> the 21-day period to assess performance.


There has definitely been downtime... particularly odxu4c, ff4a and
bbx15. And many of the systems occasionally get rebooted for testing and
upgrading u-boot or the linux kernel.

FWIW, 15 of 18 nodes are running kernels from the official debian.org
linux packages in jessie, jessie-backports, sid or experimental! (only
9/18 for u-boot)


> Also avg_duration is meaningful, but will increase where the
> reproducible-builds network scheduled more concurrent build jobs on a
> node.  (Low avg_duration does not always mean high package throughput,
> it may just be doing fewer jobs in parallel.)
>
> Finally, the nodes' performance will depend on other factors such as
> storage device used, kernel, etc.

I've often wondered what the impact is if "fast" nodes are mostly
paired with "slow" nodes for the 1st or 2nd builds, since each build job
is specifically tied to two machines. This was one of the factors
leading me to consider building pools based on load... but I haven't
had the time to implement that.
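
As a rough cross-check on the concurrency question: pkgs_built *
avg_duration / elapsed seconds approximates the average number of
builds running at once. Using the figures from your table below for
wbq0:

$ echo '2170 * 2359 / (21 * 86400)' | bc -l
2.82133487654320987654

i.e. wbq0 averaged nearly three concurrent builds over the 21 days,
which might help explain why it looks impossibly fast.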


> I don't know whether to believe these figures yet!
>
>   * wbq0 is impossibly fast for just 4x1GHz cores, 2GB RAM...

My guess is that it is one of the most stable; it only tends to be
rebooted for updates.


>   * odxu4 looks slightly faster than the other two.

That's tricky to track down, as odxu4c has had stability issues, and
odxu4 also to a lesser extent, and odxu4b has been relatively stable.

Many of the machines have different brand/model SSDs, so I was thinking
of comparing that against build stats on all the nodes to see if there's
a pattern. They're all pretty cheap SSDs, so I wouldn't be surprised if
there was significant variation in performance.
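
If it'd help, something like this could collect the models for that
comparison (a sketch; it assumes ssh access to the nodes, and the host
list is only illustrative):

$ for node in odxu4-armhf-rb.debian.net odxu4b-armhf-rb.debian.net; do
    echo "== $node =="
    ssh "$node" lsblk -d -o NAME,MODEL
  done

lsblk -d limits the output to whole disks, and the MODEL column
usually identifies the SSD brand.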


>   * cbxi4a/b seem no faster than cbxi4pro0 despite twice the RAM?

That is definitely surprising (although technically they only have
access to 3.8GB, but still!). They seem to be doing better according to
the daily average stats.

195.1 builds/day (13075/67) on cbxi4b-armhf-rb.debian.net
188.2 builds/day (14121/75) on cbxi4a-armhf-rb.debian.net
172.9 builds/day (22658/131) on cbxi4pro0-armhf-rb.debian.net


>   * ff2a/b show USB3 SSD to be no faster than USB2?

All of the Firefly boards are USB2, I think. ff2a was running with only
512MB for a few weeks due to a u-boot issue I didn't notice until
recently.


>   * bbx15 may be able to handle more build jobs (low avg_duration).

That's really impressive, because sometimes it's running 6 concurrent
builds and only has two cores. It is a higher-performance Cortex-A15.


>   * bpi0 may be overloaded (high avg_duration).

That's curious. Not sure what to make of it.


>   * ff4a maybe had downtime, and seems to be under-utilised.

Yeah, it's had some multiple-hour stretches of downtime regularly,
partially due to stability issues, and partially due to kernel/u-boot
testing.


>   * rpi2b maybe had downtime, or has a slower disk than rpi2c.

Those numbers look surprising, especially since rpi2c has been rebooted
more often.

I'm also not sure if the rpi2 processors are running at full speed since
I switched to using the debian.org provided kernels, which don't have
cpufreq support.
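
For anyone who wants to check: when the kernel has a cpufreq driver
for the board, sysfs exposes the current and maximum frequencies (in
kHz); if the cpufreq directory is missing entirely, the CPU is
presumably left at whatever speed the firmware/u-boot set. A sketch,
run on the node itself:

$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq \
      /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq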


>   * wbd0 slowness is likely due to the magnetic hard drive.

The disk was upgraded to an SSD at some point, although I suspect
performance issues due to wear-leveling, as it's a smaller SSD and TRIM
isn't supported over any of the USB-SATA adapters I've found.
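
For reference, whether discard/TRIM makes it through can be checked
with lsblk (a sketch; /dev/sda is just an example device name):

$ lsblk -D /dev/sda

If the DISC-GRAN and DISC-MAX columns show 0, the device (or the
adapter in front of it) doesn't support TRIM.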


> Many thanks to Vagrant for hosting all these armhf nodes!

Thanks for taking a fresh look at it and making some suggestions of
things to look into!


live well,
  vagrant



[Reproducible-builds] Performance of armhf boards

2016-04-17 Thread Steven Chamberlain
Hi!

I was wondering about the performance of various armhf boards for
package building.  Of course, the reproducible-builds team have a lot
of stats already.  Below I'm sharing the query I used and the results,
in case anyone else is interested.

Using https://tests.reproducible-builds.org/reproducible.db
(warning: >300MiB)

$ DATE=$(date -u -d "1 day ago" '+%Y-%m-%d')
$ TIMESPAN_RAW=21
$ sqlite3 -column -header reproducible.db \
   "SELECT r.node1 AS buildd,
           COUNT(r.id) AS pkgs_built,
           CAST(AVG(r.build_duration) AS INTEGER) AS avg_duration
    FROM results AS r
    WHERE r.build_duration != ''
      AND r.build_duration != '0'
      AND r.build_date > datetime('$DATE', '-$TIMESPAN_RAW days')
    GROUP BY r.node1
    ORDER BY pkgs_built DESC"

Figures will vary based on which packages were assigned to which
node, as some are easier to build than others, but I hope over 21
days that variance is smoothed out.

Assuming the nodes had no downtime, we can compare pkgs_built over
the 21-day period to assess performance.

Also avg_duration is meaningful, but will increase where the
reproducible-builds network scheduled more concurrent build jobs on a
node.  (Low avg_duration does not always mean high package throughput,
it may just be doing fewer jobs in parallel.)

Finally, the nodes' performance will depend on other factors such as
storage device used, kernel, etc.

Rows are annotated with number of cores, amount of RAM, and board.

buildd                                pkgs_built  avg_duration
------------------------------------  ----------  ------------
profitbricks-build5-amd64.debian.net  17415       514   # 18x,48G
profitbricks-build1-amd64.debian.net  16720       531   # 17x,48G
profitbricks-build6-i386.debian.net   15348       727   # 18x,48G
profitbricks-build2-i386.debian.net   15214       739   # 17x,48G
wbq0-armhf-rb.debian.net              2170        2359  # 4x,2G; Wandboard-Quad?
cbxi4b-armhf-rb.debian.net            2077        2582  # 4x,4G; CuBox-i4x4
odxu4-armhf-rb.debian.net             2007        2255  # 8x,2G; Odroid-XU4 (USB3 SATA SSD)
cbxi4a-armhf-rb.debian.net            1996        2365  # 4x,4G; CuBox-i4x4
cbxi4pro0-armhf-rb.debian.net         1973        2743  # 4x,2G; CuBox-i4Pro
opi2a-armhf-rb.debian.net             1767        2922  # 4x,2G; OrangePi Plus2 (USB2 SATA SSD)
odxu4c-armhf-rb.debian.net            1742        2180  # 8x,2G; Odroid-XU4
odxu4b-armhf-rb.debian.net            1627        2295  # 8x,2G; Odroid-XU4
ff2b-armhf-rb.debian.net              1529        2745  # 4x,2G; Firefly-RK3288 (USB2 SATA SSD)
opi2b-armhf-rb.debian.net             1460        2738  # 4x,2G; OrangePi Plus2 (USB2 SATA SSD)
ff2a-armhf-rb.debian.net              1435        2570  # 4x,2G; Firefly-RK3288 (USB3 SATA SSD)
bbx15-armhf-rb.debian.net             1151        1827  # 2x,2G; BeagleBoard X15 - cool!
rpi2c-armhf-rb.debian.net             1137        1986  # 4x,1G; Raspberry PI 2
hb0-armhf-rb.debian.net               1134        2143  # 2x,1G; HummingBoard Pro i2?
bpi0-armhf-rb.debian.net              773         3433  # 2x,1G; Banana Pi?
ff4a-armhf-rb.debian.net              630         1728  # 4x,4G; Firefly-RK3288
rpi2b-armhf-rb.debian.net             626         1972  # 4x,1G; Raspberry PI 2 Model B
wbd0-armhf-rb.debian.net              403         3176  # 2x,1G; Wandboard-Dual (USB2 SATA HDD)

I don't know whether to believe these figures yet!

  * wbq0 is impossibly fast for just 4x1GHz cores, 2GB RAM...
  * odxu4 looks slightly faster than the other two.
  * cbxi4a/b seem no faster than cbxi4pro0 despite twice the RAM?
  * ff2a/b show USB3 SSD to be no faster than USB2?
  * bbx15 may be able to handle more build jobs (low avg_duration).
  * bpi0 may be overloaded (high avg_duration).
  * ff4a maybe had downtime, and seems to be under-utilised.
  * rpi2b maybe had downtime, or has a slower disk than rpi2c.
  * wbd0 slowness is likely due to the magnetic hard drive.

Corrections/suggestions are welcome.

Many thanks to Vagrant for hosting all these armhf nodes!

Regards,
-- 
Steven Chamberlain
ste...@pyro.eu.org


___
Reproducible-builds mailing list
Reproducible-builds@lists.alioth.debian.org
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/reproducible-builds