On 26/03/16 18:42, Bjarne Saltbæk wrote:
_____________________________
From: Gordan Bobic <[email protected]>
Sent: Saturday, March 26, 2016 3:30 PM
Subject: Re: [RedSleeve-Users] Raspberry Pi 3
To: <[email protected]>
> £600 here in UK from here:
https://www.xcase.co.uk/gigabyte-server-boards/gigabyte-mp30-ar0-with-appliedmicror-x-gene1r-processor.html
Ouch, that is some money, after all.
It's not cheap by any stretch of the imagination, but it does provide more
processing power and memory than pretty much all of the rest of my ARM
machines combined.
>Which brings me to the next point. One of the big advantages of the
>MP30-AR0 is that it takes ECC RAM. That means we can have more
>confidence in the packages built on it. What I was hoping we could
>arrange is having a reproducible Koji docker container build (well, a
>method of making a rootfs with all the required configuration, making a
>docker image from a tarball is trivial). Then I could just put your
>docker Koji container on the new server, and make a few builder
>containers to do the actual building. This would still be a lot faster
>than using older machines with a single core and 512MB of RAM.
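(Making a docker image from a rootfs tarball really is a one-liner; a
sketch, with the tarball path, image name, and container name all invented
for illustration:)

```shell
# Sketch: import a prepared rootfs tarball as a docker image, then
# start a builder container from it. Names and paths are illustrative;
# kojid --fg keeps the daemon in the foreground so docker can track it.
docker import rootfs-armv5tel.tar.gz redsleeve/koji-builder
docker run -d --name builder1 redsleeve/koji-builder \
    /usr/sbin/kojid --fg --force-lock
```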
Sounds like a good plan. Note that only the builders need to run on the
ARM architecture. The main controller and the web frontend can run on
any architecture (read: cheaper x86_64).
Sure - but it's much more convenient if it's armv5tel (or armv7hl or
aarch64 if you want to build it on CentOS) if I'm intending to run it on
the MP30-AR0. :)
>As an upshot, you wouldn't have to run anything - once you have put
>together a src.rpm, you could just throw the package at the koji
>container running here (my server will be running 24/7 anyway), and
>it'll take care of it.
True. I use an even better solution - the same one that the Fedora
Project, CentOS etc. use. I have stored all the .spec files in git and
just pull the spec file from git. So your koji and my koji can pull from
the same git server. We could store it all on GitHub. I just prefer to
have a private git repository, since sensitive data could end up stored
there after all (NSA go.....)
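(For what it's worth, koji can pull straight from such a git repo at
build time; a hypothetical submission, where the build target name and
repository URL are invented and the #fragment pins the revision:)

```shell
# Sketch: submit a koji build whose spec and sources live in git.
# Target name and repository URL are illustrative.
koji build rsel6-candidate \
    'git+https://git.example.org/rpms/bash.git#refs/heads/master'
```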
I would rather like to think that there is nothing sensitive in the spec
files, considering we are also making the src.rpm files (which contain
the spec files) available. :p
>If you have instructions I could follow to create reproducible results
>running koji on an armv5tel machine so that I could containerize it on
>the new server, that would be quite awesome.
Sure, the documentation is (still) located at
http://www.saltbaek.dk/dokuwiki/
There are still some bits and pieces missing - for example, I am lazy
enough to want a web GUI for creating SSL user certificates instead of
having to ssh into the machine for that.
And I am missing a git hook script that starts rebuilding a package when
the git repo is updated.
That's fairly easy to configure with github.
Also missing is a verification script that checks that all packages in
git have a build profile in koji.
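(Such a check can be a short script; a sketch, assuming a checkout laid
out as one <name>/<name>.spec per package and an invented koji tag name:)

```shell
#!/bin/sh
# Sketch: list packages that have a spec file in git but no package
# entry in koji. Repo layout and tag name ("rsel6") are illustrative.
set -e

# Package names according to git: one <name>.spec per package
git ls-files '*.spec' | xargs -n1 basename | sed 's/\.spec$//' \
    | sort -u > /tmp/git-pkgs

# Package names according to koji, for the tag in question
koji list-pkgs --tag rsel6 --quiet | awk '{print $1}' \
    | sort -u > /tmp/koji-pkgs

# Lines only in the first file = in git but missing from koji
comm -23 /tmp/git-pkgs /tmp/koji-pkgs
```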
Other than that, it has been running 24/7 for 3-4 weeks now, except when
the builders crash - the Banana Pi M3 is the worst (due to poor design).
I'm rather hoping to reduce the entire build farm to a koji docker
container and a builder docker container (or 8, just to make sure that
all the CPU cores can be kept busy even during single-threaded parts
of builds).
>Note that I don't mean to discourage you from running your own, I just
>think having it as a centrally available resource for registered package
>maintainers would be beneficial. Having extra available builders won't
>hurt (even if an extra SheevaPlug or similar probably won't make a
>noticeable dent in the workload compared to the MP30).
Sure, I don't get discouraged at all - I am not an old-fashioned system
administrator who lives in his own little kingdom ;)
But maybe I should offer to run the sigul server. It would be good for
security to have the signing server separate from the build system -
again, this can run on any type of hardware/OS.
I very much prefer to have signing happen on a separate server, offline,
and out of band. When the build pass is complete, we take a snapshot
(gotta love ZFS...), send it over to the signing server, snapshot that
and send it to the FTP server. And except when signing, the signing
server is physically switched off. I don't like the idea of having the
private signing keys on anything that is accessible in any way from the
internet (or can access the internet, except when sending/receiving the
snapshots).
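(The snapshot-and-send step is only a couple of commands; a sketch, with
the pool, dataset, snapshot, and host names all invented:)

```shell
# Sketch: snapshot the finished, unsigned repo and ship it to the
# signing host. All names here are illustrative.
zfs snapshot tank/repo@build-20160326
zfs send tank/repo@build-20160326 | ssh signer zfs receive -F tank/unsigned
# After signing: snapshot again on the signing host and send the result
# onward to the FTP server the same way.
```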
>I am not sure how that works, but we have had network TFTP boot
>capability in u-boot since forever so I am not entirely sure what PXE
>would bring to the table that we don't already have...
OK, I have not tried that, but it must be much the same as PXE. AFAIK
the PXE code does nothing other than load the network stack and boot
from tftp - same as the Anaconda installer can, and I guess that feature
is built into u-boot.
The most effective setup in a classroom would be booting from tftp
instead of the teacher having to maintain many SD cards.
If you look at my ancient *Plug wiki article here, that explains the
basics of how to boot it off the network:
http://redsleeve.wikia.com/wiki/Install_on_Sheeva/Guru/Dream_Plug
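(For reference, a bare-bones u-boot network boot looks something like
the following, typed at the u-boot prompt. The IP addresses, load
addresses, and filenames are board-specific and purely illustrative:)

```shell
setenv serverip 192.168.1.10
setenv ipaddr 192.168.1.20
tftpboot 0x00800000 uImage
tftpboot 0x01100000 uInitrd
setenv bootargs console=ttyS0,115200
bootm 0x00800000 0x01100000
```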
>Anyway, which of the above machines would you like? GuruPlug has been
>claimed, the others are available.
I am currently on Easter vacation, but I am back tomorrow evening and
then I will have an answer for you. But anything easy to configure with
1GB RAM, built-in wifi/ethernet and, if possible, a SATA interface :)
LOL! You don't want much, do you! :-p
Most of the mentioned hardware only has 512MB of RAM. The Compulab A510
has 1GB, the Arndale Octa has 4-ish GB, and the Cornfed Conserver board
has 4GB. The Compulab and Conserver are both in *TX form factor, which
means it's easier to tidy them away into standard cases.
All of them will require some research (easiest way may be reverse
engineering instructions for getting debian or ubuntu up and running on
the same machines), cobbling together recent u-boot (ideally mainline if
possible), getting a reasonable LTS kernel to build, rpm it up, producing
a publicly consumable (i.e. sanitized) image, and documenting the
process on the wiki, as per the terms of the deal.
Btw - can I get [email protected] as an alias/forward to
[email protected]?
Done. I CC-ed this email to it for testing, so you should get it twice.
Gordan
_______________________________________________
users mailing list
[email protected]
https://lists.redsleeve.org/mailman/listinfo/users