Re: smokeqt in utopic
Matthias Klose wrote:
> On 22.08.2014 at 18:25, Jonathan Riddell wrote:
>> Rex from Fedora says he has no problems with GCC 4.9.
> Fedora doesn't use FSF GCC, but the redhat branch. Please state the branch and the revision used.

It would appear the fedora builds used gcc svn revision 214009 (from 20140815) from redhat/gcc-4_9-branch

-- Rex

--
kubuntu-devel mailing list
kubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
armhf builders on Kubuntu Next
Hi everyone

One of our community members got Project Neon running on his ARM board. In order to aid his work we have now enabled armhf packages on the Kubuntu Next and Kubuntu Next staging PPAs.

Cheers
Rohan Garg
CP, jenkins, and all that madness
hola,

(another tldr mail \o/)

the last couple of days I have been looking into jenkins while staging a first proof of concept using the existing neon5 tech. I'd like to see if anyone has thoughts on whether or not we should use jenkins. I personally am as yet undecided because as a matter of fact we have pretty much all the tooling jenkins would provide floating around in standalone tools (the various status scripts, all the neon stuff doing automatic build orchestration, retry management, and whatnot).

# what's jenkins?

jenkins is a CI orchestration system with a web UI used for most CI setups (kde uses it, canonical uses various setups for different CI concepts). it schedules and manages builds and tracks their current as well as over-time status.

# why jenkins?

for our purposes jenkins would be a glorified schedule manager and status dashboard. effectively there would be very little difference between it and something homemade, as what it does is not exactly rocket science. it does however have a thriving user base, a nice web status dashboard, logging, status tracking etc. since it is used by a lot of other projects there certainly wouldn't be any harm in using the same thing, in particular since in the distant future this could also allow for resource and experience sharing and whatnot.

a general jenkins job would do the following:
- poll whatever SCM we use for packaging and *automatically* build when a change arrives
- at least once a day trigger by time and fetch a new tarball (I am as yet unsure how exactly that would work but oh well..)
- update packaging and the relevant upstream clone and merge the two into a debian source package
- hurl the source off to launchpad
- poll launchpad for status
- fail depending on whether BOTH i386 and amd64 built successfully (for now I'd not do arm builds because we have no actual production quality products for arm hardware)
- fetch the build log
- extract data from the log (cmake deps met, lintian clean, symbol fails, install fails... pretty much what ppa status does currently)
- fail if a thing we require did not work out (e.g. missing optional cmake dep)

on top of that we *could* have jobs reflect the actual dependencies of a build, either through jenkins itself (which would formally block a job from building until its deps are built) or less formally as part of the actual build, where the build would wait in progress until its deps are built (and then fail depending on that).

# why not jenkins?

as mentioned, jenkins would be doing what we already have, so from an effort POV it probably makes little difference whether we use jenkins or write a similar orchestration system from scratch, because most of the heavy lifting logic is already present in various other scripts and tools and only needs to be refactored to allow for more atomic usage. I have not looked particularly into the extensibility of jenkins, but it being java I am not too keen on writing our own jenkins plugins :P

to ensure jenkins is as dynamic as possible we will need to write additional tooling that manages jenkins jobs. namely we'd at the very least need a script that gets a list of all packages we want built and then automatically creates/updates/deletes jenkins jobs accordingly. a brief look at the REST api suggests that we can automate this entirely. nevertheless it is a bit of software that will need writing and probably wouldn't be needed (or at least not in such a formal manner) if we wrote our own orchestration.
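for the record, the job-management script could be sketched roughly like this (python, using the stock createItem endpoint of the jenkins REST api; the jenkins URL, the naming scheme, and the config.xml contents are placeholders for illustration, not the real setup):

```python
# Sketch of a script reconciling jenkins jobs with the list of packages
# we want built. The jenkins URL, job naming scheme, and config.xml body
# are assumptions; createItem is the standard jenkins REST endpoint.
import urllib.request
from xml.sax.saxutils import escape

JENKINS = "http://localhost:8080"  # assumed local jenkins instance


def job_name(series, stability, source):
    """Flat job names, since jenkins has no job grouping."""
    return f"{series}_{stability}_{source}"


def job_config_xml(source):
    """Minimal free-style project config; the real one would carry SCM
    polling triggers and the build script invocation."""
    return (
        "<project>"
        f"<description>Automated packaging build of {escape(source)}</description>"
        "<builders/>"
        "</project>"
    )


def sync_jobs(wanted_sources, existing_names, series="utopic", stability="unstable"):
    """Return (create, delete) job-name lists needed to reconcile
    jenkins with the package list."""
    wanted = {job_name(series, stability, s) for s in wanted_sources}
    create = sorted(wanted - set(existing_names))
    delete = sorted(set(existing_names) - wanted)
    return create, delete


def create_job(name, config_xml):
    # POST the config to jenkins' createItem endpoint to register the job.
    req = urllib.request.Request(
        f"{JENKINS}/createItem?name={name}",
        data=config_xml.encode(),
        headers={"Content-Type": "application/xml"},
    )
    return urllib.request.urlopen(req)
```

the pure helpers (naming, diffing) are the interesting part; the HTTP bit is just plumbing and would additionally need auth against a real instance.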
# random notes

jenkins apparently has no concept of job grouping (or directorying) such that jenkins job names will have to have fun ball names like utopic_unstable_plasma-workspace. on the dashboard one can still have dedicated views showing all utopic builds, or all unstable builds.

the builds themselves can be done by any old script, jenkins for the most part would just be the trigger for the script. the script itself would probably do a schroot overlayfs (or lxc overlayfs) and do most of the package business internally.

Thoughts?

HS
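a rough sketch of what such a trigger script might look like, assuming an overlayfs-backed schroot named utopic-amd64 already exists (the chroot name and the /build path are made up; dry_run only records the commands instead of executing them):

```python
# Hedged sketch of the build wrapper jenkins would merely trigger.
# Assumes an overlayfs-backed schroot named "utopic-amd64" exists and
# that sources live under /build inside it; both are illustrative.
import subprocess


def build_in_chroot(source, chroot="utopic-amd64", dry_run=False):
    """Build a source package inside an ephemeral schroot session and
    return the list of commands that were issued."""
    issued = []

    def run(*cmd, capture=False):
        issued.append(list(cmd))
        if dry_run:
            return ""
        result = subprocess.run(cmd, check=True, capture_output=capture, text=True)
        return result.stdout.strip() if capture else ""

    # begin an ephemeral session; all changes land in the overlay
    session = run("schroot", "--begin-session", "--chroot", chroot, capture=True) or "SESSION"
    try:
        run("schroot", "--run-session", "--chroot", session, "--", "apt-get", "update")
        run("schroot", "--run-session", "--chroot", session, "--",
            "sh", "-c", f"cd /build/{source} && dpkg-buildpackage -us -uc")
    finally:
        # end the session, discarding the overlay again
        run("schroot", "--end-session", "--chroot", session)
    return issued
```

the ephemeral-session dance (begin, run, end) is what makes the overlay throwaway: every build starts from the pristine chroot.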
Re: armhf builders on Kubuntu Next
On Wed, Aug 27, 2014 at 3:46 PM, Rohan Garg rohang...@kubuntu.org wrote:
> Hi everyone
> One of our community members got Project Neon running on his ARM board. In order to aid his work we have now enabled armhf packages on the Kubuntu Next and Kubuntu Next staging PPAs.

Do we intend to go anywhere production-wise with this or is it just playing around with arm?

HS
Re: armhf builders on Kubuntu Next
> Do we intend to go anywhere production-wise with this or is it just playing around with arm?

It's mostly intended for people who have access to ARM boards and are interested in making Plasma 5 work on them. Since I do not have access to such boards, I'll probably only end up making sure things build, but runtime adjustments are up to whoever has access to hardware.

--
Regards
Rohan Garg
Re: Changing the default IO scheduler in Kubuntu
On Wed, Aug 27, 2014 at 6:42 PM, Rohan Garg rohang...@ubuntu.com wrote:
> Feel free to air any concerns you might have over here.

I still think there should be a formal bug report for the foundations team to investigate why we are not using CFQ by default anyway. At any rate, +1 to not having shit performance on !ssds.

HS
Re: CP, jenkins, and all that madness
On Wed, Aug 27, 2014 at 5:53 PM, Harald Sitter apachelog...@ubuntu.com wrote:
> [jenkins proposal snipped -- see the original mail above]
It's what we use in KDE; it seems like a good idea to use it further.

Actually, maybe it could be interesting if you could just host the machines doing the tasks but extend the build.kde.org instance so the information could be shown together with each project. Or maybe it doesn't make sense, just a thought.

Good luck!
Aleix
Re: CP, jenkins, and all that madness
On Wed, Aug 27, 2014 at 7:37 PM, Aleix Pol aleix...@kde.org wrote:
> Actually, maybe it could be interesting if you could just host the machines doing the tasks but extend the build.kde.org instance so the information could be shown together with each project. Or maybe it doesn't make sense, just a thought.

absolutely. it is certainly a thing to consider in the future. adding a kubuntu specific build slave to build.kde that does the heavy lifting and having the projects orchestrated on build.kde proper certainly should be possible. it would neatly aggregate all the relevant information in one place.

to be fair though, from what I understand this does not actually require the builds to be orchestrated by build.kde: jenkins has a concept of an external project which gets status information fed from the outside and is in essence just reflecting something going on elsewhere (either through the client.jar or the REST API apparently), so propagating kubuntu integration status into build.kde wouldn't necessarily require us to use jenkins directly.

HS
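for illustration, feeding such an external project could look roughly like this; the payload follows the external-monitor-job XML convention as I understand it (hexBinary-encoded log, result 0 meaning success), and the jenkins host and job name are made up:

```python
# Sketch of pushing an externally-run build's outcome into a jenkins
# external project. The XML shape (<run><log/><result/><duration/></run>)
# is the external-monitor-job format as I understand it; host and job
# names below are purely illustrative.
import urllib.request


def external_run_xml(log_text, result, duration_ms):
    """Build the <run> payload an external jenkins job expects.
    result 0 means success, non-zero means failure."""
    hex_log = log_text.encode("utf-8").hex()
    return (
        "<run>"
        f'<log encoding="hexBinary">{hex_log}</log>'
        f"<result>{result}</result>"
        f"<duration>{duration_ms}</duration>"
        "</run>"
    )


def post_build_result(jenkins_url, job, xml):
    # POST the run record to the external job's postBuildResult endpoint.
    req = urllib.request.Request(
        f"{jenkins_url}/job/{job}/postBuildResult",
        data=xml.encode(),
        headers={"Content-Type": "application/xml"},
    )
    return urllib.request.urlopen(req)
```

against a real build.kde.org-style instance this would additionally need credentials, but the point is only that the status feed is a plain HTTP call, no jenkins on our side required.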
Re: Changing the default IO scheduler in Kubuntu
On Wednesday, August 27, 2014 18:49:24 Harald Sitter wrote:
> I still think there should be a formal bug report for the foundations team to investigate why we are not using CFQ by default anyway. At any rate, +1 to not having shit performance on !ssds.

Additionally, this is the kind of thing that needs an FFe now, so once it's sorted with foundations, then the release team needs a go at it.

Scott K